These days, I almost exclusively run Arch Linux in my homelab and on my personal machines. Had I the brain cells to spare, I’d try and get NixOS running on ARMv7, but in the meantime, a mix of vanilla x86_64 Arch and Arch Linux ARM is my chosen flavor just to keep things consistent. I’ve run Arch as my primary server OS for almost a decade now, and although I’m sure some will balk at the idea, I’ve found that the distribution has performed wonderfully for me, even in contrast to traditional “server” distributions like CentOS. It sounds counterintuitive, but the simple model of Arch Linux has, overall, helped mitigate some maintenance burdens.
Here are some of the best practices I’ve accumulated over the years of personally administering ~20 Arch Linux machines in my homelab.
Prelude: Why Arch?
I don’t want to get into a distribution holy war here; there are endless forums for that debate. However, I will get into the reasons why I now default to Arch Linux when provisioning hosts, as many of those reasons naturally flow into why I include certain best practices.
- A “rolling release” distribution is a nice methodology for server maintenance. See this other blog post enumerating why rolling releases make sense within the context of openSUSE. Want a “version”? Use the Arch Linux Archive to pinpoint a moment in time, as shown in the snippet after this list. (I still begrudge that CentOS 6’s upgrade path to 7 discourages in-place upgrades.)
- In contrast to “kitchen sink” distributions, when I do need to bolt on additions to my install, I know exactly how I’ve done it, so fixing problems and building systems isn’t a game of guesswork about how the distribution chose to configure those services. I’m the one that did it, so I can better maintain it.
- The catalog of packages, when you factor in the AUR, is almost complete for most people. When I can’t find an Arch package, which is very seldom these days, I package it on the AUR and move on - `PKGBUILD`s aren’t terribly difficult to write.
- The aforementioned simplicity of Arch systems is also reflected in the Arch-specific software projects like `pacman`. Setting up a private pacman repo within my LAN was so easy I thought I was doing something wrong.
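On the Arch Linux Archive point: pinning a machine to a moment in time is just a matter of pointing the mirrorlist at a dated snapshot. A minimal sketch, using an arbitrary example date:

```
# /etc/pacman.d/mirrorlist - pin every repo to the package state of 2020-01-01
Server = https://archive.archlinux.org/repos/2020/01/01/$repo/os/$arch
```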
Pacman Hygiene
One of the trickiest aspects of Linux distribution maintenance, and one I didn’t understand for a long time, was keeping configuration files up to date.
By that I mean catching and updating files somewhere like `/etc/` so that when package `foobar` upgrades from 1.0 to 2.0 and the default configuration file `/etc/foobar.conf` adds the setting `telemetry = true`, you can merge in the new configuration files and optionally set them to desired values.
The Arch Wiki has a good page about this, but the most important thing to remember is that, in the vast majority of cases, an interactive process (via a human at a terminal) is needed to judge which parts of your existing configuration file need to be edited.
I suggest using pacmatic for this.
The pacmatic web site outlines how to use the utility, but in short: `pacmatic` will a) alert you to any announcements from the Arch Linux project in case there’s news about needed intervention in the course of normal system operations and b) automatically prompt and help update configuration files.
I like this method because it’s closely linked to `pacman` operations, so you can’t forget to update these files, and the `pacmatic` command is a drop-in replacement for `pacman`.
`pacmatic` is starting to age a little, but you can still get the same effect by using the `pacdiff` command from the `pacman-contrib` package.
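Day to day, either approach looks roughly like the following sketch (nothing here is specific to my setup beyond having `pacman-contrib` installed):

```
# pacmatic wraps pacman: it surfaces Arch news and walks you through config merges.
pacmatic -Syu

# Or, with plain pacman plus pacman-contrib: upgrade, then review any leftover
# .pacnew/.pacsave files interactively.
pacman -Syu
pacdiff
```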
AUR Automation
Using Arch for non-trivial use-cases is almost certainly going to lead you towards relying on some AUR packages.
Unlike official packages, which are curated by Arch Linux Trusted Users, the AUR has fewer controls around package submission.
Validating the legitimacy of a `PKGBUILD` is always a good idea, and for first-time users, building and installing an AUR package typically consists of a) downloading the package snapshot, b) checking out the `PKGBUILD`, and c) building it with `makepkg`.
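In practice that manual flow looks something like this (the package name is a placeholder):

```
# Fetch the build files from the AUR and inspect them before doing anything else.
git clone https://aur.archlinux.org/some-package.git
cd some-package
less PKGBUILD    # confirm you're comfortable with what the build script does

# Build the package, pulling in build dependencies (-s), and install it (-i).
makepkg -si
```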
That’s all well and good, but if you use a non-trivial number of AUR packages, this process is way too manual.
You really want some automation in place around this, but I’m not talking about a tool like `yay` that automates fetching and building a package’s source.
You need a tool that handles all update steps on a schedule so that installing updated AUR packages is inlined as part of the normal `-Syu` process.
In my opinion, the most reliable option here is aurto (which, in turn, uses aurutils under the hood).
Why should you use something like `aurto`?
- Scheduled updates. `aurto` comes bundled with systemd timers to check for package updates, so you don’t burn time checking for updates and building packages manually.
- Custom repositories. Following the Arch paradigm of building and indexing packages into proper repositories means that you don’t think about pulling in updates; they just happen for you. You can also easily share built packages across a network (which I rely on heavily).
- `aurutils` follows lots of good operational hygiene (for example, building in clean chroots and explicitly trusting maintainers as a build step).
Remember that, as with any sort of AUR automation, you’re trusting the maintainers of the packages you track not to bundle malicious code, but that’s a risk you take with any sort of interaction with the AUR.
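To make the workflow concrete, here’s a minimal sketch assuming `aurto`’s `add`/`remove` subcommands and its default local repository named `aurto` (the package name is a placeholder):

```
# Start tracking an AUR package: aurto builds it and adds it to the local 'aurto' repo.
aurto add some-aur-package

# From here on, the bundled systemd timer rebuilds updates in the background,
# so they install along with everything else during a normal upgrade.
pacman -Syu

# Stop tracking the package when it's no longer needed.
aurto remove some-aur-package
```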
Snapshotting
If you’re using Linux in 2020, there’s no reason why you can’t be using a modern, advanced, copy-on-write filesystem. Are you comfortable relying on dkms? Then use ZFS on Linux. Honestly, it’s fantastic, and you get copy-on-write, snapshots, compression, and more. It’s the current best-of-breed filesystem for Linux, in my opinion. If you want, btrfs can fit a similar role, but I trust ZFS more.
Providing a point-in-time backup before system upgrades becomes extremely easy on these filesystems. Run a snapshot as a pacman hook before you make changes, and you can roll back to a known good state easily. There are some ready-made packages out there that stitch together ZFS/btrfs with pacman hooks so you don’t even need to write the automation yourself.
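To illustrate the idea, here’s a minimal sketch of a pre-transaction hook that snapshots a ZFS root dataset; the hook file name, dataset name (`zroot/ROOT/default`), and snapshot naming are assumptions, and the ready-made packages handle naming and pruning more carefully than this:

```
# /etc/pacman.d/hooks/00-zfs-pre-snapshot.hook (hypothetical file name)
[Trigger]
Operation = Install
Operation = Upgrade
Operation = Remove
Type = Package
Target = *

[Action]
Description = Snapshotting the root dataset before the pacman transaction...
When = PreTransaction
Exec = /bin/sh -c '/usr/bin/zfs snapshot zroot/ROOT/default@pacman-$(date +%s)'
AbortOnFail
```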
If you’re already snapshotting a filesystem, cheap (as in effort) backups are just a small extra step if you want to replicate the snapshot images somewhere else.
My ZFS and btrfs filesystems get duplicated offsite over simple `ssh` commands as added disaster recovery insurance, which means that not only do you have system backups saved outside your network, but also your whole history of backups if you need to restore from a point in time.
There are actually a wide variety of projects that do this, but personally, I use zfs-snap-manager for my zpools and snazzer for my btrfs volumes.
Both automate retention and incremental backups, which are the tricky parts.
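Under the hood, tools like these boil down to incremental send/receive streams piped over `ssh`; a minimal ZFS sketch with placeholder dataset, snapshot, and host names:

```
# Initial full replication of a snapshot to a dataset on a remote pool.
zfs send tank/data@2020-01-01 | ssh backup-host zfs receive -u backup/data

# Later runs only send the delta between the last replicated snapshot and the newest one.
zfs send -i tank/data@2020-01-01 tank/data@2020-01-08 \
  | ssh backup-host zfs receive -u backup/data
```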
Note: Some of these suggestions eat up disk space. I’d definitely suggest using filesystem compression to help with that.
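On ZFS that’s a single property; btrfs has an equivalent mount option (pool and device names below are placeholders):

```
# ZFS: enable lz4 compression for a dataset and everything created beneath it.
zfs set compression=lz4 tank

# btrfs: mount with transparent compression (e.g., as an option in /etc/fstab).
mount -o compress=zstd /dev/sdb1 /mnt/data
```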
Grab Bag
The preceding three points have had the greatest impact on my Arch Linux system maintainability, but there are a few minor mentions just for the sake of completeness:
- Remember to run `pacman -Sc` every once in a while. Those old packages take up space.
- A periodic reflector script can be handy in keeping your mirrorlist healthy.
- Using `aurto` and `aurutils` to host a repository of local AUR packages? Share it on your network easily:
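A minimal sketch, assuming `aurto`’s default repository location under `/var/cache/pacman/aurto` (any static file server works here):

```
# Serve the local package repository over HTTP on port 8080.
cd /var/cache/pacman/aurto
python -m http.server 8080
```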
Then just add the following section to your other machines’ `pacman.conf`:
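Something along these lines, with a placeholder hostname and port; `SigLevel` here simply trusts the packages you built yourself, so tune it to taste:

```
[aurto]
SigLevel = Optional TrustAll
Server = http://build-host.lan:8080
```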
Edits: rhysperry111 caught a `systemctl` typo, and AladW noted I conflated `aurto` and `aurutils` in a few places.