Enabling TLS 1.3 in Apache >=2.4.38

TLSv1.3 is now available on 85% of web clients, according to caniuse.com. Since I don’t have to support either Internet Explorer or the six microscopic mobile web browsers that don’t support it at all, I have gone ahead and migrated my servers straight over to TLSv1.3.

Below is a sample configuration that enables TLSv1.3 with the currently recommended ciphers in a reasonable order. TLSv1.3 must be enabled globally for the entire server; I made my adjustments in the /etc/apache2/mods-enabled/ssl.conf file. I gave ChaCha preference over AES because many mobile devices running modern browsers lack AES hardware acceleration.

        #   SSL Cipher Suite:
        #   List the ciphers that the client is permitted to negotiate. See the
        #   ciphers(1) man page from the openssl package for list of all available
        #   options.
        #   Enable only secure ciphers:
        SSLCipherSuite TLSv1.3 TLS_CHACHA20_POLY1305_SHA256:TLS_AES_256_GCM_SHA384:TLS_AES_128_GCM_SHA256

        # SSL server cipher order preference:
        # Use server priorities for cipher algorithm choice.
        # Clients may prefer lower grade encryption.  You should enable this
        # option if you want to enforce stronger encryption, and can afford
        # the CPU cost, and did not override SSLCipherSuite in a way that puts
        # insecure ciphers first.
        # Default: Off
        SSLHonorCipherOrder on

        #   The protocols to enable.
        #   Available values: all, SSLv3, TLSv1, TLSv1.1, TLSv1.2
        #   SSL v2  is no longer supported
        SSLProtocol -all +TLSv1.3

ZFS Backup Tool Part 6

Now that I can read and write a snapshot, how do I process a list of snapshots in a useful manner? First, let me define what I mean by useful. I want the tool to keep a copy of every automatic snapshot on the source ZFS tree on the destination tree; as an automatic snapshot is aged off of the source, it needs to be aged off of the destination as well. The tool will transfer snapshots one at a time instead of transferring all of the intermediate snapshots at once (the zfs send ‘-i’ option versus the ‘-I’ option).

The best data structure for this is a tree or graph. The tree starts with a list of yearly snapshots. Every snapshot has two slices of children: one for the child-frequency snapshots older than it, and one for those younger than it. The younger slice is populated only if the current snapshot is the youngest at its frequency stratum. A picture demonstrating my idea follows this paragraph. I will delve into implementation details in the next part of the ZFS Backup Tool series.

[Diagram: a snapshot tree]
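To make the structure concrete, here is a minimal sketch in Go of what such a tree node could look like. The `Snapshot` type, its `Older` and `Younger` slices, and the `countSnapshots` helper are illustrative names of my own, not the tool's actual types.

```go
package main

import "fmt"

// Snapshot is a node in the snapshot tree. Older holds the
// child-frequency snapshots older than this one; Younger is
// populated only on the youngest snapshot of its stratum.
type Snapshot struct {
	Name    string
	Older   []*Snapshot
	Younger []*Snapshot
}

// countSnapshots walks the tree and counts every node.
func countSnapshots(s *Snapshot) int {
	n := 1
	for _, c := range s.Older {
		n += countSnapshots(c)
	}
	for _, c := range s.Younger {
		n += countSnapshots(c)
	}
	return n
}

func main() {
	// A yearly snapshot with one monthly child, as in the diagram.
	year := &Snapshot{Name: "zfs-auto-snap_yearly-2020-01-01-0000"}
	month := &Snapshot{Name: "zfs-auto-snap_monthly-2020-02-01-0000"}
	year.Older = append(year.Older, month)
	fmt.Println(countSnapshots(year)) // 2
}
```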

ZFS Backup Tool Part 5

Now that I can read a list of snapshots, I need to read a snapshot and transfer it to the destination. The three functions that allow me to do that are exec.StdinPipe(), exec.StdoutPipe(), and io.CopyBuffer().

The process consists of the following steps:

  1. Create an exec.Cmd representing the zfs send command
  2. Use exec.StdoutPipe() to connect a pipe to the output of the command created in step 1.
  3. Create an exec.Cmd representing the zfs receive command
  4. Use exec.StdinPipe() to connect a pipe to the input of the command created in step 3.
  5. Start both commands
  6. Use io.CopyBuffer() to read from the snapshot to the receiver.

You can view the code here.

Self Hosting a Git Server

Which software to use?

With the ZFS backup tool, I want to host its code here on my website instead of on GitHub. What options are available? If I only want to host the bare repo, I can use ssh for write access and add an Apache virtual host for read access. If I want a nice web interface, though, I need a different setup.

A bit of online searching shows four major self-hosted Git web frontends. They are GitLab, Gitea, GitBucket, and Gogs. GitLab and GitBucket are out because they require a lot of extra software to support the service. GitLab could almost qualify as its own Linux distro with a bit more work. GitBucket is nearly as bad. That leaves the two clones, Gogs and Gitea. Gitea is a fork of Gogs with more maintainers. The increase in maintainers gives Gitea a faster issue resolution, so I chose it.

System requirements

Gitea has very moderate system requirements: Golang, about 256MB of RAM, and optionally MariaDB, MySQL, or PostgreSQL. An external database is recommended for large sites. I will use MariaDB because I am already using it and have a working scheduled backup of my entire database server.

Installation

Since Ubuntu doesn’t have a current package for Gitea, I followed the From binary instructions on docs.gitea.io. I followed the MySQL portion of the Database preparation page to create the needed MariaDB database. I followed the Using Apache HTTPD as a reverse proxy section of the Reverse Proxies page to finish the setup.

The manual setup was quicker than the Docker setup I played with on my lab network.

You can explore my repositories by clicking the My Git Repositories link in the header menu on desktop or the dropdown menu on mobile.

Mustie1: Good Small Engine Channel

Mustie1 makes videos about small engine repair. Most of his videos start with something simple that someone overlooked on the “dead” engine. He fixes that and usually cleans the engine as well.

Here are three videos where he fixed a forklift that someone abandoned because two previous mechanics wouldn’t follow their troubleshooting workflow to the end.

ZFS Backup Tool Part 4

Welcome to Part 4 of my series on my tool for backing up ZFS Snapshots to an external device. In this part, I am discussing how to exec a command and read its output.

To deal with external commands in Go, you use the os/exec package. The primary pieces of the package that I need for now are exec.Command() and CombinedOutput(). exec.Command() sets up the Command structure with the command and any arguments that I am passing to it.

var listCommand = exec.Command("zfs", "list", "-Hrt", "snapshot", "dpool")

That code creates a variable called listCommand, which is ready to run the command zfs with list, -Hrt, snapshot, and dpool as individual arguments.

var snapList, err = listCommand.CombinedOutput()

That line of code runs the command I previously prepared and puts both its Standard Output and Standard Error in a slice of bytes. If the command exits with a code other than 0, CombinedOutput sets err to a non-nil value. Because snapList also contains the command’s Standard Error, printing snapList’s contents is useful for debugging.

	var snapScanner = bufio.NewScanner(bytes.NewReader(snapList))
	if err != nil {
		fmt.Println(listCommand)
		fmt.Println("Error trying to list snapshots:", err.Error())
		for snapScanner.Scan() {
			fmt.Println(snapScanner.Text())
		}
	}

I will need to use the more complicated IO redirection tools provided in the os/exec package for the zfs send and zfs receive commands. However, for a test run today, I can use a modification of the loop I used to print the output from zfs if it errored.

	for snapScanner.Scan() {
		if snapshotLineRegex.MatchString(snapScanner.Text()) {
			var temp = strings.SplitN(snapScanner.Text(), "\t", 2)
			var snapshot = ParseSnapshot(temp[0])
			if snapshot != nil {
				fmt.Println("I found snapshot", snapshot.Name(), "at", snapshot.Path())
			}
		}
	}

ZFS Backup Tool Part 1

I haven’t seen a lot of tools designed to back up ZFS snapshots to removable media. So, I am writing my own, and I am going to document the process here.

The basic loop for a backup tool is

  1. Read a list of snapshots on the source
  2. Read a list of snapshots on the destination
  3. Find the list of snapshots on the source that are not on the destination. These are the non-backed-up snapshots.
  4. Find the list of snapshots on the destination that are not on the source. These are the aged out snapshots.
  5. Copy all non-backed-up snapshots to the destination, preferably one at a time to make recovery from IO failure easier.
  6. Remove the aged out snapshots.
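Steps 3 and 4 are both set differences, just with the arguments swapped. Here is a minimal Go sketch, assuming snapshots are compared by name; `diff` is a hypothetical helper, not the tool's actual code.

```go
package main

import "fmt"

// diff returns the names in a that are not in b, preserving a's order.
// Applied both ways, it yields the non-backed-up list (step 3) and the
// aged-out list (step 4).
func diff(a, b []string) []string {
	seen := make(map[string]bool, len(b))
	for _, s := range b {
		seen[s] = true
	}
	var out []string
	for _, s := range a {
		if !seen[s] {
			out = append(out, s)
		}
	}
	return out
}

func main() {
	source := []string{"snap1", "snap2", "snap3"}
	dest := []string{"snap0", "snap1"}
	fmt.Println(diff(source, dest)) // [snap2 snap3] -- not yet backed up
	fmt.Println(diff(dest, source)) // [snap0] -- aged out
}
```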

I am designing this tool to only back up snapshots taken by zfs-auto-snapshot. These are named <pool|filesystem>@zfs-auto-snap_<time interval>-<year>-<month>-<day>-<hour><minute>. The command zfs list -Hrt snapshot <source poolname> will generate a list of all snapshots in a pool in a machine-parseable format.
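That naming pattern can be recognized with a regular expression. Here is a sketch of one way to do it, assuming the pattern above; `autoSnapRegex` and `parseAutoSnap` are illustrative names of my own.

```go
package main

import (
	"fmt"
	"regexp"
)

// autoSnapRegex matches the zfs-auto-snapshot naming pattern:
// <pool|filesystem>@zfs-auto-snap_<interval>-<year>-<month>-<day>-<hour><minute>
var autoSnapRegex = regexp.MustCompile(
	`^([^@]+)@(zfs-auto-snap_([a-z]+)-\d{4}-\d{2}-\d{2}-\d{4})$`)

// parseAutoSnap returns the filesystem path, the full snapshot name,
// and the time interval, or ok=false if the name is not an
// automatic snapshot.
func parseAutoSnap(name string) (path, snap, interval string, ok bool) {
	m := autoSnapRegex.FindStringSubmatch(name)
	if m == nil {
		return "", "", "", false
	}
	return m[1], m[2], m[3], true
}

func main() {
	path, snap, interval, ok := parseAutoSnap("dpool/home@zfs-auto-snap_daily-2020-03-15-0425")
	fmt.Println(path, snap, interval, ok)
	// dpool/home zfs-auto-snap_daily-2020-03-15-0425 daily true
}
```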

Issuing the command zfs send -ci <old snapshot> <pool|filesystem>@<new snapshot> will send an incremental snapshot from old to new to the command’s standard output. I can estimate the amount of data to be transferred by replacing -ci with -cpvni in the zfs send command.

Issuing the command zfs receive -u <backup location> will store a snapshot from its standard input to the backup location.

Snapshots are removed by zfs destroy -d <pool|filesystem>@<snapshot name>. The snapshot name is the portion of the snapshot pattern mentioned above after the @ symbol.