I use a lot of software tools for my job - and personally, for that matter. Some live in the forefront of my brain, like emacs. Others live in the background, like my terminal (alacritty). Some of these background tools do their jobs so well and so reliably that I can sometimes forget that they’re humming away for me every day, without any hassle to fix or maintain them.
This is an informal list of those tools that I love to never have to think about.
Syncthing
Remember when Dropbox felt like a novelty? When you finally didn’t have to lug around thumb drives as often? Syncthing keeps the best parts of that experience (a synchronized folder wherever you are) without all the… other stuff Dropbox seems to have added over the years.
Beyond being open source, Syncthing doesn’t constrain you to a third party’s storage limits: drop Syncthing on your server and the only limit is how much storage you have. Moreover, linking instances of Syncthing together is a surprisingly hassle-free experience - no worrying about sharing IP addresses or other details. Once you add a device by ID, the rest sort of just works. The Syncthing developers have added many nice quality-of-life improvements over the years (QR codes, the “introducer” paradigm, and more) without jumping the feature shark.
Syncing git repos has its place, but when you need something super user-friendly that is all about actively synchronizing folders across machines, Syncthing reigns supreme. Since setting it up on my servers, the Syncthing daemon has been essentially hands-off, maintenance-wise, and bringing my synced folder to a new host is equally easy - no thought required after initial setup.
OpenZFS on Linux
I set up my four-disk ZFS array years ago using FreeNAS on an old HP ProLiant MicroServer N40L. Since then, that same RAIDZ pool has been through many upgrades, a few OS moves (to NAS4Free and then Arch Linux), and more than one failed disk (some in dramatic fashion). Through it all, my data has been reliably safe and easy to maintain.
I’ve heard the same thing from multiple people: the zfs command-line tooling is some of the most ergonomic you’ll ever use. Performing a scrub, swapping out a disk, replicating snapshots - all of these tasks are so simple that it’s hard not to script many of them to make maintenance happen without any manual intervention. The entire system feels like it was written with operators in mind - aspects like zed (the event system) make sending notifications about pool events stupidly easy (I send a notification to myself when an automated weekly scrub finishes, and the notification is a few lines of bash).
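To give a sense of how little bash that takes, here’s a rough sketch of the sort of zedlet I mean - zed runs scripts out of /etc/zfs/zed.d/ named for the event they handle and passes event details in environment variables. The filename, mail command, and address here are just placeholders for whatever notification channel you prefer:

```bash
#!/usr/bin/env bash
# Sketch of a zedlet, e.g. /etc/zfs/zed.d/scrub_finish-notify.sh:
# zed invokes it when a scrub completes and sets ZEVENT_POOL to the
# affected pool. The recipient address is a stand-in for your own.
zpool status "${ZEVENT_POOL}" |
  mail -s "weekly scrub finished on ${ZEVENT_POOL}" tyler@example.com
```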
The offsite backup story is sort of like unix zen - the fact that you can move a snapshot across the internet securely with a zfs send | ssh zfs receive and interject arbitrary commands into the pipeline to fit your needs (like gzip or pv) is beautiful. I once ran into a situation in which buffering over the network would help smooth remote replication, and added the equivalent of an | mbuffer to my command. Buffered snapshot network transfer, and all with a standard unix pipe.
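Spelled out, the kind of pipeline I’m describing looks something like this - pool, snapshot, and host names are just illustrative stand-ins:

```bash
# Incremental replication of a dataset to an offsite box, with a
# progress meter and a sending-side buffer to smooth out the network.
zfs send -i tank/home@last-week tank/home@today \
  | pv \
  | mbuffer -m 1G \
  | ssh backup-host zfs receive tank/home
```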
Of course, the core of what OpenZFS does is solid. Scrubbing has been a reliable tool, and overall, I just… don’t have to think about it. It checks my array integrity regularly, reports problems, and I get the latest features via dkms.
There are still OpenZFS detractors out there - CDDL licensing, RAM requirements (huh?), and others. The only thing I’ll say is that the tremendous quality of ZFS has outshone any concerns I might’ve had with it initially, and the work is still moving steadily ahead (native encryption is a recent addition).
SSH
At first blush, ssh sort of seems like a tool an operations engineer would actively think about quite a lot, right? That would probably be the case for any other featureful remote access tool, but ssh is so elegantly designed that it feels like a muscle now: it responds affirmatively whenever I need it, without surprise, and gains strength with new features as time goes on. More than that, becoming more familiar with it over time has yielded numerous ways to leverage it for a variety of uses.

Yes: ssh gets the remote access job done. I no longer have any Windows machines in my home (huzzah!), so if I have a host on my home network, ssh can reach it.
With some light config management on my ~20 personal machines (via ansible) to get the right authorized_keys defined, and my user id tylerjl consistent everywhere, getting where I need to go takes as much effort as a reflex.
In the past few years, as I’ve taken more control of my personal data and self-hosted my personal information storage, ssh has made secure offsite backups easy. With the aforementioned ZFS setup, I tunnel between storage hosts that live behind ISP NATs with ssh. In the old days, this was with ProxyCommand.
However, one great thing about ssh is that it’s still under active development.
Reaching hosts behind a NAT or bastion used to require a somewhat arcane configuration line like this:
ProxyCommand ssh -q -W %h:%p bastion
That same hop is now even easier to express:
ProxyJump bastion
This is also exposed as the -J flag.
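The same jump works ad hoc from the command line via -J, and because it composes with everything else ssh does, the ZFS replication pipeline above can ride through the bastion too. Host and dataset names here are, again, placeholders:

```bash
# One-off shell on a host that's only reachable through the bastion
ssh -J bastion nas

# The same hop, reused to replicate a snapshot to a NAT-ed host
zfs send tank/photos@today | ssh -J bastion nas zfs receive tank/backup/photos
```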
I lean on ssh in many different ways, and it is consistently the layer I never have to worry about failing. Whether I’m editing remotely via tramp, transferring files over scp, or scripting between hosts for backups or automation, ssh works as expected and only improves over time, without surprises.
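A couple of the everyday shapes that takes, with paths and host aliases standing in for my real ones:

```bash
# Push a file to a host defined in ~/.ssh/config
scp notes.org nas:/tank/documents/

# Quick ad-hoc automation across a handful of hosts
for host in nas web git; do
  ssh "$host" uptime
done

# (and the same hosts are a tramp path away in emacs: /ssh:nas:/tank/documents/)
```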
Coda
Is there a tool you use that’s exceptional in its utility and quality? Continue the discussion in the comments - I’d love to hear about them.