ZFS Backup Tool Part 2

Recognizing a snapshot made by zfs-auto-snapshot.

First, what does a list of these snapshots look like?

user@host:~/src/go/zfs_backup$ zfs list -Hrt snapshot dpool
dpool@zfs-auto-snap_monthly-2020-05-12-1245     96K     -       148K    -
dpool@zfs-auto-snap_monthly-2020-06-11-1248     8K      -       23.3G   -
dpool@zfs-auto-snap_monthly-2020-07-11-1245     0B      -       23.3G   -
dpool@zfs-auto-snap_weekly-2020-07-26-1242      0B      -       30.5G   -
dpool@zfs-auto-snap_daily-2020-07-27-1238       4.74G   -       31.3G   -
dpool@zfs-auto-snap_daily-2020-08-02-1235       0B      -       143G    -
dpool@zfs-auto-snap_weekly-2020-08-02-1240      0B      -       143G    -
dpool@zfs-auto-snap_hourly-2020-08-03-1117      0B      -       143G    -
dpool@zfs-auto-snap_daily-2020-08-03-1236       0B      -       143G    -
dpool@zfs-auto-snap_frequent-2020-08-04-2030    0B      -       143G    -
dpool@zfs-auto-snap_frequent-2020-08-04-2045    0B      -       143G    -
dpool@zfs-auto-snap_frequent-2020-08-04-2100    0B      -       143G    -
dpool@zfs-auto-snap_frequent-2020-08-04-2115    0B      -       143G    -
dpool@zfs-auto-snap_hourly-2020-08-04-2117      0B      -       143G    -
dpool/<dataset>@zfs-auto-snap_hourly-2020-08-04-1717    0B      -       96K     -
dpool/<dataset>@zfs-auto-snap_hourly-2020-08-04-1817    0B      -       96K     -
dpool/<dataset>@zfs-auto-snap_hourly-2020-08-04-1917    0B      -       96K     -
dpool/<dataset>@zfs-auto-snap_hourly-2020-08-04-2017    0B      -       96K     -
dpool/<dataset>@zfs-auto-snap_frequent-2020-08-04-2030  0B      -       96K     -
dpool/<dataset>@zfs-auto-snap_frequent-2020-08-04-2045  0B      -       96K     -
dpool/<dataset>@zfs-auto-snap_frequent-2020-08-04-2100  0B      -       96K     -
dpool/<dataset>@zfs-auto-snap_frequent-2020-08-04-2115  0B      -       96K     -
dpool/<dataset>@zfs-auto-snap_hourly-2020-08-04-2117    0B      -       96K     -
dpool/home/<dataset>@zfs-auto-snap_frequent-2020-08-04-2115     0B      -       69.1G   -
dpool/home/<dataset>@zfs-auto-snap_hourly-2020-08-04-2117       0B      -       69.1G   -
dpool/<dataset>@<snapshot>      116G    -       442G    -
dpool/<dataset>@zfs-auto-snap_monthly-2020-05-12-1245   8K      -       344G    -

I trimmed the previous list down a bit. So what regular expression will match this? The first question is which regular expression library I am using. I am writing this tool in Go, so I will use Go’s regexp package, which is based on Google’s RE2 library. Its syntax is documented in the regexp/syntax package documentation.

I will start with the snapshot names: the part after the @. Those start with zfs-auto-snap, so "zfs-auto-snap" will match that prefix.

The next section is which timer made the snapshot. This section can also be called the increment. The valid timers are yearly, monthly, weekly, daily, hourly, and frequent for a default install of zfs-auto-snapshot. The regex "yearly|monthly|weekly|daily|hourly|frequent" will match these timers. However, I would like to get which timer created the snapshot without further parsing. That is the perfect job for a capturing submatch. After adding the capturing submatch, the regex looks like "(?P<increment>yearly|monthly|weekly|daily|hourly|frequent)".

The final section is the timestamp of the snapshot. As with the timer section, it is useful not to have to parse this data a second time. With capturing submatches, "(?P<year>[[:digit:]]{4})-(?P<month>[[:digit:]]{2})-(?P<day>[[:digit:]]{2})-(?P<hour>[[:digit:]]{2})(?P<minute>[[:digit:]]{2})" will work.

With the snapshot names completed, I need to match the ZFS tree structure before the @ symbol. I haven’t found a reliable regular expression that will capture every valid ZFS tree, but "(?:[[:word:]-.]+)+(?:/?[[:word:]-.]+)*" will recognize a subset of all valid ZFS trees. Avoid using names it won’t recognize, or you may end up with inaccessible files.

Including some test code, the tool’s source code looks like this so far.

package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
)

const zfsRegexStart string = "zfs-auto-snap"
const zfsRegexIncrement string = "(?P<increment>yearly|monthly|weekly|daily|hourly|frequent)"
const zfsRegexDateStamp string = "(?P<year>[[:digit:]]{4})-(?P<month>[[:digit:]]{2})-(?P<day>[[:digit:]]{2})-(?P<hour>[[:digit:]]{2})(?P<minute>[[:digit:]]{2})"

var zfsRegex = regexp.MustCompile(zfsRegexStart + "_" + zfsRegexIncrement + "-" + zfsRegexDateStamp)

// testSnapshot reports whether possible contains a zfs-auto-snapshot name,
// and whether that name was created by the given increment's timer.
func testSnapshot(possible string, increment string) (bool, bool) {
	var matches = zfsRegex.FindStringSubmatch(possible)
	if matches == nil {
		return false, false
	}
	var isASnapshot = true
	if matches[1] == increment {
		return isASnapshot, true
	}
	return isASnapshot, false
}

func isAYearlySnapshot(possible string) bool {
	_, isYearly := testSnapshot(possible, "yearly")
	return isYearly
}

func isAMonthlySnapshot(possible string) bool {
	_, isMonthly := testSnapshot(possible, "monthly")
	return isMonthly
}

func isAWeeklySnapshot(possible string) bool {
	_, isWeekly := testSnapshot(possible, "weekly")
	return isWeekly
}

func isADailySnapshot(possible string) bool {
	_, isDaily := testSnapshot(possible, "daily")
	return isDaily
}

func isAnHourlySnapshot(possible string) bool {
	_, isHourly := testSnapshot(possible, "hourly")
	return isHourly
}

func isAFrequentSnapshot(possible string) bool {
	_, isFrequent := testSnapshot(possible, "frequent")
	return isFrequent
}

const poolNameRegex string = "(?:[[:word:]-.]+)+(?:/?[[:word:]-.]+)*"

var snapshotLineRegex = regexp.MustCompile("^" + poolNameRegex + "@" + zfsRegex.String() + ".*$")

func main() {
	//fmt.Println(snapshotLineRegex.MatchString("dpool/<dataset>@zfs-auto-snap_frequent-2020-08-04-1830\t0B\t-\t201M\t-"))
	input := bufio.NewScanner(os.Stdin)
	for input.Scan() {
		if !snapshotLineRegex.MatchString(input.Text()) {
			fmt.Printf("%s\t%s\n", input.Text(), "Is not a snapshot.")
		}
	}
	if err := input.Err(); err != nil {
		fmt.Fprintln(os.Stderr, "reading Standard Input:", err)
	}
}

I will continue this tomorrow. See you then!

ZFS Backup Tool Part 1

I haven’t seen a lot of tools that are designed to back up ZFS snapshots to removable media. So, I am writing my own. I am going to document the process here.

The basic loop for a backup tool is

  1. Read a list of snapshots on the source
  2. Read a list of snapshots on the destination
  3. Find the list of snapshots on the source that are not on the destination. These are the non backed up snapshots.
  4. Find the list of snapshots on the destination that are not on the source. These are the aged out snapshots.
  5. Copy all non backed up snapshots to the destination, preferably one at a time to make recovery from IO failure easier.
  6. Remove the aged out snapshots.

I am designing this tool to only back up snapshots taken by zfs-auto-snapshot. These are named <pool|filesystem>@zfs-auto-snap_<time interval>-<year>-<month>-<day>-<hour><minute>. The command zfs list -Hrt snapshot <source poolname> will generate a list of all snapshots in a pool in a machine-parseable format.

Issuing the command zfs send -ci <old snapshot> <pool|filesystem>@<new snapshot> will send an incremental snapshot from old to new to the command’s standard output. I can estimate the amount of data to be transferred by replacing -ci with -cpvni in the zfs send command.

Issuing the command zfs receive -u <backup location> will store a snapshot from its standard input to the backup location.

Snapshots are removed by zfs destroy -d <pool|filesystem>@<snapshot name>. The snapshot name is the portion of the snapshot pattern mentioned above after the @ symbol.


If you use ZFS and you don’t have an auto snapshot tool installed, you need to install one. This tool will make backups a lot easier.


Current Opinions on Web Design

I have been designing websites as a side gig for about a year now. Most of that design work has been CSS modifications to existing themes. Since January, I have needed to do more extensive design changes. That work culminated in a scratch-made WordPress theme designed for people using WordPress as a CMS, not a blog platform.
During that time, I have begun preferring Gutenberg’s design philosophy over Elementor’s. Gutenberg does a better job of separating content and content-specific layout from the general theme layout than Elementor does. Gutenberg also seems less opinionated about its block styling than Elementor.
The downside to flexibility is a lack of capability to micromanage the layout. I don’t see the appeal of complicated website designs that demand pixel alignment from individual paragraphs or, worse, individual letters. I tend towards a utility-first approach. That utility-first approach doesn’t mean that I do not appreciate any artistry; I just cannot entirely agree with form over function.
I like the mobile-first approach to design. However, I do get frustrated at the limitations of mobile devices because they can complicate the implementation of proper form and function. A navigable mobile interface for tabular data is one example.

OpenWrt on x86-64

Backstory and why

I switched to using OpenWrt as my preferred firmware for SOHO routers 5 or 6 years ago. During that time, I used it on a Linksys WRT 1900AC. However, over the last few months, I have had to use another WRT 1900AC that is using Linksys’ provided firmware. The Linksys firmware works for necessary routing. Its troubleshooting and firewall management tools leave a lot to be desired, though.

Due to constraints on the current home/lab network, I can’t reassociate a few WiFi devices, so I can neither switch the current WRT 1900AC to OpenWrt nor replace it entirely with another WRT 1900AC running OpenWrt.

So I am limited to restricting the current router to just the WiFi access point role. The next question is what should replace it in the role of network router. My three options appear to be:

  • The other WRT 1900AC running OpenWrt
  • An old computer running pfSense®
  • Or the same old computer running OpenWrt

Onto the testing

Right now, I am testing the old computer running OpenWrt option. So far, the installation was simple. I downloaded the latest version from the OpenWrt Releases page. For a 64-bit x86 device, the required target is x86/64. I went with the combined-ext4 image. After downloading the device image, I wrote it directly to the hard drive I am dedicating to the new router. I installed the hard drive in the computer and booted it up.

Contrary to the OpenWrt documentation, I could use a keyboard to log in to the local console on the test rig. My biggest complaint so far is the lack of software installed by default. Most x86 hardware that you would run OpenWrt on is much more potent than the other devices OpenWrt supports. I expected that Samba and ReadyMedia (formerly MiniDLNA) would be installed, though disabled, by default. Most SOHO routers you can buy have the option of being a NAS, so OpenWrt not including that software by default in the install image for the most spacious target it supports is a bit odd from my perspective.

Interesting stuff learned during the process

While I prefer Rufus for writing ISO or disk images to USB media, the OpenWrt documentation recommended a tool called balenaEtcher, so I decided to try it out. I do like that it validates that the flashing process completed. I also like that it can handle compressed images directly, saving the step of uncompressing the image before writing it.

What do I not like about balenaEtcher? File size. Rufus will fit on a single 3.5″ floppy disk. balenaEtcher would need 89 3.5″ floppy disks. For my younger audience who don’t know what a 3.5″ floppy disk is, you can read about them on Wikipedia. They have a standard storage capacity of 1.44 Megabytes.