ZFS Backup Tool Part 6

Now that I can read and write a snapshot, how do I process a list of snapshots in a useful manner? First, let me define what I mean by a useful manner. I want the tool to keep a copy of every automatic snapshot on the source ZFS tree on the destination tree. As an automatic snapshot is aged off of the source, it needs to be aged off of the destination as well. The tool will transfer snapshots one at a time instead of transferring all of the intermediate snapshots at once; that is, the zfs send ‘-i’ option rather than the ‘-I’ option.

The best data structure for this is a tree or graph. The tree starts with a list of yearly snapshots. Every snapshot has two slices of children: one for the child-frequency snapshots older than it, and one for those younger than it. The younger slice will be populated only if the current snapshot is the youngest at its frequency stratum. A picture demonstrating my idea follows this paragraph. I will delve into implementation details in the next part of the ZFS Backup Tool series.

A diagram of a snapshot tree
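
In the meantime, here is a minimal sketch of what a node in that tree might look like in Go. The type and field names are placeholders of my own, not the final implementation:

// Frequency is the stratum of an automatic snapshot.
type Frequency int

const (
	Frequent Frequency = iota
	Hourly
	Daily
	Weekly
	Monthly
	Yearly
)

// SnapNode is one snapshot in the tree. Older holds the
// child-frequency snapshots older than this one; Younger is
// populated only on the youngest snapshot of each stratum.
type SnapNode struct {
	Name    string
	Freq    Frequency
	Older   []*SnapNode
	Younger []*SnapNode
}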

ZFS Backup Tool Part 5

Now that I can read a list of snapshots, I need to read a snapshot and transfer it to the destination. The three functions that allow me to do that are the exec.Cmd methods StdinPipe() and StdoutPipe(), and io.CopyBuffer().

The process consists of the following steps:

  1. Create an exec.Cmd representing the zfs send command.
  2. Use exec.StdoutPipe() to connect a pipe to the output of the command created in step 1.
  3. Create an exec.Cmd representing the zfs receive command.
  4. Use exec.StdinPipe() to connect a pipe to the input of the command created in step 3.
  5. Start both commands.
  6. Use io.CopyBuffer() to copy the snapshot stream from the sender to the receiver.

You can view the code here.
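
To illustrate, a minimal sketch of those six steps might look like the following. The dataset and snapshot names are placeholders, not the ones my tool uses:

package main

import (
	"fmt"
	"io"
	"os"
	"os/exec"
)

func main() {
	// Steps 1 and 3: the send and receive commands.
	var send = exec.Command("zfs", "send", "dpool@zfs-auto-snap_daily-2020-01-01-0000")
	var recv = exec.Command("zfs", "receive", "-u", "backup/dpool")

	// Step 2: a pipe connected to zfs send's standard output.
	var sendOut, err = send.StdoutPipe()
	if err != nil {
		fmt.Println("Error creating send pipe:", err)
		os.Exit(1)
	}
	// Step 4: a pipe connected to zfs receive's standard input.
	recvIn, err := recv.StdinPipe()
	if err != nil {
		fmt.Println("Error creating receive pipe:", err)
		os.Exit(1)
	}

	// Step 5: start both commands.
	if err = send.Start(); err != nil {
		fmt.Println("Error starting send:", err)
		os.Exit(1)
	}
	if err = recv.Start(); err != nil {
		fmt.Println("Error starting receive:", err)
		os.Exit(1)
	}

	// Step 6: copy the snapshot stream with a reusable buffer.
	var buf = make([]byte, 1<<20)
	if _, err = io.CopyBuffer(recvIn, sendOut, buf); err != nil {
		fmt.Println("Error copying snapshot:", err)
	}
	recvIn.Close() // signal end of stream to zfs receive

	send.Wait()
	recv.Wait()
}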

Self Hosting a Git Server

Which software to use?

With the ZFS backup tool, I want to host the code for it here on my website instead of GitHub. What options are available? If I want to host the bare repo, I can use SSH for write access and add a virtual host to Apache for read access. If I want a nice web interface, though, I need a different setup.

A bit of online searching shows four major self-hosted Git web frontends: GitLab, Gitea, GitBucket, and Gogs. GitLab and GitBucket are out because they require a lot of extra software to support the service. GitLab could almost qualify as its own Linux distro with a bit more work, and GitBucket is nearly as bad. That leaves the two related projects, Gogs and Gitea. Gitea is a fork of Gogs with more maintainers, and the larger maintainer pool gives Gitea faster issue resolution, so I chose it.

System requirements

Gitea has very moderate system requirements: Go, about 256MB of RAM, and optionally MariaDB, MySQL, or PostgreSQL. An external database is recommended for large sites. I will use MariaDB because I am already using it and have a working scheduled backup of my entire database server.

Installation

Since Ubuntu doesn’t have a current package for Gitea, I followed the From binary instructions on docs.gitea.io. I followed the MySQL portion of the Database preparation page to create the needed MariaDB database. I followed the Using Apache HTTPD as a reverse proxy section of the Reverse Proxies page to finish the setup.
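
For reference, a minimal Apache virtual host along the lines of that section might look like the sketch below. The hostname is a placeholder, and it assumes Gitea is listening on its default port of 3000 with mod_proxy and mod_proxy_http enabled:

<VirtualHost *:80>
	# git.example.com is a placeholder hostname.
	ServerName git.example.com
	ProxyPreserveHost On
	# Forward everything to the Gitea instance on localhost.
	ProxyPass / http://localhost:3000/
	ProxyPassReverse / http://localhost:3000/
</VirtualHost>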

The manual setup was quicker than the Docker setup I played with on my lab network.

You can explore my repositories by clicking the My Git Repositories link in the header menu on desktop or the dropdown menu on mobile.

Mustie1: Good Small Engine Channel

Mustie1 does videos of small engine repair. Most of his videos start with something simple that someone overlooked on a “dead” engine. He fixes that and usually cleans the engine as well.

Here are three videos where he fixed a forklift that someone abandoned because two previous mechanics wouldn’t follow their troubleshooting workflow to the end.

ZFS Backup Tool Part 4

Welcome to Part 4 of my series on my tool for backing up ZFS Snapshots to an external device. In this part, I am discussing how to exec a command and read its output.

To deal with external commands in Go, you use the os/exec package. The primary pieces of the package that I need for now are exec.Command() and the Cmd.CombinedOutput() method. exec.Command() sets up the Cmd structure with the command and any arguments that I am passing to it.

var listCommand = exec.Command("zfs", "list", "-Hrt", "snapshot", "dpool")

That code creates a variable called listCommand, which is ready to run the command zfs with list, -Hrt, snapshot, and dpool passed as individual arguments.

var snapList, err = listCommand.CombinedOutput()

That line of code runs the command I previously prepared and puts both its Standard Output and Standard Error in a slice of bytes. If the command exits with a code other than 0, CombinedOutput sets err to a non-nil value. Because snapList also contains the Standard Error of the executed command, printing snapList’s contents is useful for debugging.

var snapScanner = bufio.NewScanner(bytes.NewReader(snapList))
if err != nil {
	fmt.Println(listCommand)
	fmt.Println("Error trying to list snapshots:", err.Error())
	for snapScanner.Scan() {
		fmt.Println(snapScanner.Text())
	}
}

I will need to use the more complicated IO redirection tools provided by the os/exec package for the zfs send and zfs receive commands. However, for a test run today, I can use a modification of the loop I used to print the output from zfs when it errored.

for snapScanner.Scan() {
	if snapshotLineRegex.MatchString(snapScanner.Text()) {
		var temp = strings.SplitN(snapScanner.Text(), "\t", 2)
		var snapshot = ParseSnapshot(temp[0])
		if snapshot != nil {
			fmt.Println("I found snapshot", snapshot.Name(), "at", snapshot.Path())
		}
	}
}
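
snapshotLineRegex and ParseSnapshot are defined elsewhere in my code. A rough sketch of what they could look like, assuming the zfs-auto-snapshot naming scheme from Part 1, follows; the real definitions may differ:

// Assumes imports of "regexp" and "strings".
var snapshotLineRegex = regexp.MustCompile(`@zfs-auto-snap_\w+-\d{4}-\d{2}-\d{2}-\d{4}`)

// Snapshot holds the two halves of a <path>@<name> pair.
type Snapshot struct {
	path string
	name string
}

func (s *Snapshot) Name() string { return s.name }
func (s *Snapshot) Path() string { return s.path }

// ParseSnapshot splits a snapshot identifier at the @ symbol.
func ParseSnapshot(full string) *Snapshot {
	var parts = strings.SplitN(full, "@", 2)
	if len(parts) != 2 {
		return nil
	}
	return &Snapshot{path: parts[0], name: parts[1]}
}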

ZFS Backup Tool Part 1

I haven’t seen a lot of tools designed to back up ZFS snapshots to removable media, so I am writing my own. I am going to document the process here.

The basic loop for a backup tool is:

  1. Read a list of snapshots on the source
  2. Read a list of snapshots on the destination
  3. Find the list of snapshots on the source that are not on the destination. These are the snapshots that have not been backed up yet. (A sketch of this step and the next follows the list.)
  4. Find the list of snapshots on the destination that are not on the source. These are the aged-out snapshots.
  5. Copy all of the snapshots that have not been backed up to the destination, preferably one at a time to make recovery from IO failure easier.
  6. Remove the aged-out snapshots.
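
Steps 3 and 4 are two set differences. A minimal sketch in Go, assuming each snapshot is identified by its name, could look like this:

// diffSnapshots returns the snapshots present only on the source
// (not yet backed up) and those present only on the destination
// (aged out).
func diffSnapshots(source, destination []string) (notBackedUp, agedOut []string) {
	var onDestination = make(map[string]bool, len(destination))
	for _, name := range destination {
		onDestination[name] = true
	}
	var onSource = make(map[string]bool, len(source))
	for _, name := range source {
		onSource[name] = true
		if !onDestination[name] {
			notBackedUp = append(notBackedUp, name)
		}
	}
	for _, name := range destination {
		if !onSource[name] {
			agedOut = append(agedOut, name)
		}
	}
	return notBackedUp, agedOut
}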

I am designing this tool to back up only snapshots taken by zfs-auto-snapshot. These are named <pool|filesystem>@zfs-auto-snap_<time interval>-<year>-<month>-<day>-<hour><minute>. The command zfs list -Hrt snapshot <source poolname> will generate a list of all snapshots in a pool in a machine-parseable format.

Issuing the command zfs send -ci <old snapshot> <pool|filesystem>@<new snapshot> will send an incremental snapshot from old to new to the command’s standard output. I can estimate the amount of data to be transferred by replacing -ci with -cpvni in the zfs send command.

Issuing the command zfs receive -u <backup location> will store a snapshot from its standard input to the backup location.

Snapshots are removed by zfs destroy -d <pool|filesystem>@<snapshot name>. The snapshot name is the portion of the snapshot pattern mentioned above after the @ symbol.

Current Opinions on Web Design

I have been designing websites as a side gig for about a year now. Most of that design work has been CSS modifications to existing themes. Since January, I have needed to do more extensive design changes. That work culminated in a scratch-made WordPress theme designed for people using WordPress as a CMS, not a blog platform.

During that time, I have begun preferring Gutenberg’s design philosophy over Elementor’s. Gutenberg does a better job than Elementor of separating content and content-specific layout from the general theme layout. Gutenberg also seems less opinionated about its block styling than Elementor.

The downside to flexibility is a lack of capability to micromanage the layout. I don’t see the appeal of complicated website designs that demand pixel alignment from individual paragraphs or, worse, individual letters. I tend towards a utility-first approach. That utility-first approach doesn’t mean that I do not appreciate any artistry; I just cannot entirely agree with form over function.

I like the mobile-first approach to design. However, I do get frustrated at the limitations of mobile devices because they can complicate the implementation of proper form and function. A navigable mobile interface for tabular data is one example.

OpenWrt on x86-64

Backstory and why

I switched to using OpenWrt as my preferred firmware for SOHO routers 5 or 6 years ago. During that time, I used it on a Linksys WRT 1900AC. However, over the last few months, I have had to use another WRT 1900AC running Linksys’ provided firmware. The Linksys firmware works for basic routing. Its troubleshooting and firewall management tools leave a lot to be desired, though.

Due to constraints on the current home/lab network, I can’t reassociate a few WiFi devices, so I can neither flash OpenWrt onto the current WRT 1900AC nor replace it entirely with another WRT 1900AC running OpenWrt.

So I am limited to restricting the current router to just the WiFi access point role. The next question is: what should replace it in the role of network router? My three options appear to be:

  • The other WRT 1900AC running OpenWrt
  • An old computer running pfSense®
  • Or the same old computer running OpenWrt

Onto the testing

Right now, I am testing the old computer running OpenWrt option. So far, the installation has been simple. I downloaded the latest version from the OpenWrt Releases page. For a 64-bit x86 device, the required target is x86/64. I went with the combined-ext4 image. After downloading the device image, I wrote it directly to the hard drive I am dedicating to the new router. I installed the hard drive in the computer and booted it up.

Contrary to the OpenWrt documentation, I could use a keyboard to log in to the local console on the test rig. My biggest complaint so far is the lack of software installed by default. Most x86 hardware that you would run OpenWrt on is much more potent than the other devices the project supports. I expected that Samba and ReadyMedia (formerly MiniDLNA) would be installed, though disabled, by default. Most SOHO routers you can buy have the option of acting as a NAS, so OpenWrt not including that software by default in the install image for the most spacious target it supports is a bit odd from my perspective.

Interesting stuff learned during the process

While I prefer Rufus for writing ISO or disk images to USB media, the OpenWrt documentation recommended a tool called balenaEtcher, so I decided to try it out. I do like that it validates that the flashing process completed. I also like that it can handle compressed images directly, saving the step of uncompressing the image before writing it.

What do I not like about balenaEtcher? File size. Rufus will fit on a single 3.5″ floppy disk; balenaEtcher would need 89 of them. For my younger audience who don’t know what a 3.5″ floppy disk is, you can read about them on Wikipedia. They have a standard storage capacity of 1.44 megabytes.