
The Home Lab

Every once in a while I need to summarize what I’ve built and am running in my homelab. I’ll try and keep this up-to-date, although things tend to change rapidly.

My home lab is extensive and built over a number of years. I understand those who might criticize the time spent maintaining this type of infrastructure, but I genuinely enjoy this hobby. Consider this the garden I tend to when I want to relax.

Note: Please be cautioned that, due to a lack of scientific rigor and credentials, this homelab does not meet the stringent requirements for that label set forth by Hacker News.

[Diagram: Really big network hardware diagram]

Hardware

NAS

I bought an HP MicroServer N40L a very long time ago and initially used Nas4Free as the NAS software. I began with Nas4Free loaded onto a USB stick, then moved on to CentOS installed on a solid-state drive mounted in the optical drive bay. The storage array is four 1TB disks in a RAIDZ2 ZFS configuration for redundancy (I’m aware this isn’t an ideal configuration for performance).

I upgraded the RAM to a total of ~10GiB to overcome the meager stock memory, and I’ve only had to replace one part over the years: the power supply, which died in late 2021. Shockingly, despite its age, this system continues to serve in a pretty critical capacity (the “durable” network storage, et cetera).
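
For reference, a four-disk RAIDZ2 pool like the one above can be created with a single zpool invocation. This is only an illustrative sketch - the pool name and device paths are placeholders, not my actual layout:

# Hypothetical example: four disks in RAIDZ2 (two disks' worth of parity).
# "tank" and the by-id paths are placeholders.
zpool create tank raidz2 \
  /dev/disk/by-id/ata-disk1 \
  /dev/disk/by-id/ata-disk2 \
  /dev/disk/by-id/ata-disk3 \
  /dev/disk/by-id/ata-disk4

# Confirm the layout and redundancy:
zpool status tank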

HP MicroServer N40L (n40l)
$ neofetch
                   -`                    tylerjl@n40l
                  .o+`                   ------------
                 `ooo/                   OS: Arch Linux x86_64
                `+oooo:                  Host: ProLiant MicroServer
               `+oooooo:                 Kernel: 5.14.6-arch1-1
               -+oooooo+:                Uptime: 63 days, 6 hours, 57 mins
             `/:-:++oooo+:               Packages: 917 (pacman)
            `/++++/+++++++:              Shell: zsh 5.8
           `/++++++++++++++:             Terminal: /dev/pts/0
          `/+++ooooooooooooo/`           CPU: AMD Turion II Neo N40L (2) @ 1.500GHz
         ./ooosssso++osssssso+`          GPU: AMD ATI Mobility Radeon HD 4225/4250
        .oossssso-````/ossssss+`         Memory: 7604MiB / 9829MiB
       -osssssso.      :ssssssso.
      :osssssss/        osssso+++.
     /ossssssss/        +ssssooo/-
   `/ossssso+/:-        -:/+osssso+-
  `+sso+:-`                 `.-/+oso:
 `++:.                           `-/+/
 .`                                 `/
Raspberry Pis

I have a problem with accruing these. There are several throughout my homelab and house serving in various capacities.

Media

Three Kodi installations are in my house; two are on Raspberry Pis. These run the “headless” Kodi variant that Arch Linux ARM repositories provide. I’ve found that heat-dissipating cases are important for these.

Living room Kodi Pi (crick)
$ neofetch
                   -`                    tylerjl@crick
                  .o+`                   -------------
                 `ooo/                   OS: Arch Linux ARM armv7l
                `+oooo:                  Host: Raspberry Pi 3 Model B Rev 1.2
               `+oooooo:                 Kernel: 5.10.63-8-ARCH
               -+oooooo+:                Uptime: 4 days, 20 hours, 15 mins
             `/:-:++oooo+:               Packages: 411 (pacman)
            `/++++/+++++++:              Shell: zsh 5.8
           `/++++++++++++++:             Resolution: 720x480
          `/+++ooooooooooooo/`           Terminal: /dev/pts/0
         ./ooosssso++osssssso+`          CPU: BCM2835 (4) @ 1.200GHz
        .oossssso-````/ossssss+`         Memory: 456MiB / 676MiB
       -osssssso.      :ssssssso.
      :osssssss/        osssso+++.
     /ossssssss/        +ssssooo/-
   `/ossssso+/:-        -:/+osssso+-
  `+sso+:-`                 `.-/+oso:
 `++:.                           `-/+/
 .`                                 `/

Home Theater Kodi Pi (ptolemy)
$ neofetch
                   -`                    tylerjl@ptolemy
                  .o+`                   ---------------
                 `ooo/                   OS: Arch Linux ARM armv7l
                `+oooo:                  Host: Raspberry Pi 4 Model B Rev 1.1
               `+oooooo:                 Kernel: 5.4.64-1-ARCH
               -+oooooo+:                Uptime: 39 days, 21 hours, 33 mins
             `/:-:++oooo+:               Packages: 336 (pacman)
            `/++++/+++++++:              Shell: zsh 5.8
           `/++++++++++++++:             Resolution: 1920x1080
          `/+++ooooooooooooo/`           Terminal: /dev/pts/0
         ./ooosssso++osssssso+`          CPU: BCM2711 (4) @ 1.500GHz
        .oossssso-````/ossssss+`         Memory: 721MiB / 3584MiB
       -osssssso.      :ssssssso.
      :osssssss/        osssso+++.
     /ossssssss/        +ssssooo/-
   `/ossssso+/:-        -:/+osssso+-
  `+sso+:-`                 `.-/+oso:
 `++:.                           `-/+/
 .`                                 `/
Voice Assistants

I eschew the Amazon Echo and build my own using Rhasspy. These sit in common areas and publish intents to an MQTT broker that I run, which my own automation handler listens to.

In addition to the base Pi hardware, I’m using the AIY kit from Google in order to have a simple, cheap solution for audio input and output. The revision 1 AIY kits rely on the Pi 3, which I actually prefer - the latest revision 2 AIY kits are smaller but require a Pi Zero, and the performance is noticeably slower. If you want a similar setup, buy the old revision 1 kits from somewhere like eBay.
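
As a rough illustration of the plumbing (not my exact handler), Rhasspy speaks the Hermes protocol over MQTT, so a handler only needs to subscribe to the intent topics on the broker. The broker hostname here is a placeholder:

# Hypothetical sketch: watch recognized intents as Rhasspy publishes them.
# "mqtt.lan" stands in for the real broker.
mosquitto_sub -h mqtt.lan -v -t 'hermes/intent/#'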

Voice Assistant (bethe)
$ neofetch
                   -`                    tylerjl@bethe
                  .o+`                   -------------
                 `ooo/                   OS: Arch Linux ARM armv7l
                `+oooo:                  Host: Raspberry Pi 3 Model B Plus Rev 1.3
               `+oooooo:                 Kernel: 5.10.63-8-ARCH
               -+oooooo+:                Uptime: 4 days, 5 hours, 53 mins
             `/:-:++oooo+:               Packages: 254 (pacman)
            `/++++/+++++++:              Shell: zsh 5.8
           `/++++++++++++++:             Resolution: 720x480
          `/+++ooooooooooooo/`           Terminal: /dev/pts/0
         ./ooosssso++osssssso+`          CPU: BCM2835 (4) @ 1.400GHz
        .oossssso-````/ossssss+`         Memory: 299MiB / 918MiB
       -osssssso.      :ssssssso.
      :osssssss/        osssso+++.
     /ossssssss/        +ssssooo/-
   `/ossssso+/:-        -:/+osssso+-
  `+sso+:-`                 `.-/+oso:
 `++:.                           `-/+/
 .`                                 `/
Voice Assistant (oppenheimer)
$ neofetch
                   -`                    tylerjl@oppenheimer
                  .o+`                   -------------------
                 `ooo/                   OS: Arch Linux ARM armv7l
                `+oooo:                  Host: Raspberry Pi 3 Model B Rev 1.2
               `+oooooo:                 Kernel: 5.10.63-8-ARCH
               -+oooooo+:                Uptime: 114 days, 2 hours, 50 mins
             `/:-:++oooo+:               Packages: 197 (pacman)
            `/++++/+++++++:              Shell: zsh 5.8
           `/++++++++++++++:             Resolution: 720x480
          `/+++ooooooooooooo/`           Terminal: /dev/pts/0
         ./ooosssso++osssssso+`          CPU: BCM2835 (4) @ 1.200GHz
        .oossssso-````/ossssss+`         Memory: 278MiB / 918MiB
       -osssssso.      :ssssssso.
      :osssssss/        osssso+++.
     /ossssssss/        +ssssooo/-
   `/ossssso+/:-        -:/+osssso+-
  `+sso+:-`                 `.-/+oso:
 `++:.                           `-/+/
 .`                                 `/

This one is actually a Pi Zero running the aforementioned AIY revision 2 hardware. It works, albeit a bit more slowly.

Voice Assistant (salk)
Audio

My DIY Sonos-like setup relies on another Pi plugged into Sonos hardware but running a separate stack based on mopidy and snapcast.
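
A common way to wire these two together - shown here as a generic sketch rather than my exact configuration - is to have mopidy write audio into a FIFO that snapserver distributes, with each playback endpoint running snapclient:

# Illustrative only; the FIFO path and server hostname are placeholders.
mkfifo /tmp/snapfifo

# mopidy.conf points its [audio] output at the FIFO, along these lines:
#   output = audioresample ! audioconvert ! audio/x-raw,rate=48000,channels=2,format=S16LE ! wavenc ! filesink location=/tmp/snapfifo

# snapserver reads that FIFO as a stream source; each playback device then
# runs a client pointed at the server:
snapclient --host kepler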

Snapcast (kepler)
$ neofetch
                   -`                    tylerjl@kepler
                  .o+`                   --------------
                 `ooo/                   OS: Arch Linux ARM armv7l
                `+oooo:                  Host: Raspberry Pi 3 Model B Plus Rev 1.3
               `+oooooo:                 Kernel: 5.10.83-1-rpi-legacy-ARCH
               -+oooooo+:                Uptime: 30 days, 37 mins
             `/:-:++oooo+:               Packages: 383 (pacman)
            `/++++/+++++++:              Shell: zsh 5.8
           `/++++++++++++++:             Terminal: /dev/pts/0
          `/+++ooooooooooooo/`           CPU: BCM2835 (4) @ 1.400GHz
         ./ooosssso++osssssso+`          Memory: 142MiB / 918MiB
        .oossssso-````/ossssss+`
       -osssssso.      :ssssssso.
      :osssssss/        osssso+++.
     /ossssssss/        +ssssooo/-
   `/ossssso+/:-        -:/+osssso+-
  `+sso+:-`                 `.-/+oso:
 `++:.                           `-/+/
 .`                                 `/
ODroids

ODroids are some of my favorite pieces of hardware: like Raspberry Pis, most are ARM-based (cheap to operate, cheap to buy) and often purpose-built for their use case (like storage).

Storage

The aforementioned N40L RAIDZ2 array is meant for durable storage. My cluster-based storage is intended for really big volumes that I can expand very easily.

I run six HC2s (the older, 32-bit models) and two HC4s with their associated HDD devices for mass storage in a GlusterFS cluster. Clustering across six 32-bit ARM nodes and two 64-bit ARM nodes is dancing on the knife’s edge - I don’t think it’s supported upstream - but we live on the edge in this homelab.
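
Forming the trusted pool itself works the same way regardless of the mixed architectures; a generic sketch of bootstrapping it (using illustrative hostnames) looks like:

# From one node, probe the others into the trusted storage pool.
gluster peer probe codex02
gluster peer probe codex07

# Verify that every node shows up as connected:
gluster pool list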

Sample HC2 node (codex01)
$ neofetch
                   -`                    root@codex01
                  .o+`                   ------------
                 `ooo/                   OS: Arch Linux armv7l
                `+oooo:                  Host: Hardkernel Odroid HC1
               `+oooooo:                 Kernel: 4.14.180-3-ARCH
               -+oooooo+:                Uptime: 94 days, 2 hours, 46 mins
             `/:-:++oooo+:               Packages: 218 (pacman)
            `/++++/+++++++:              Shell: bash 5.1.8
           `/++++++++++++++:             Terminal: /dev/pts/0
          `/+++ooooooooooooo/`           CPU: SAMSUNG EXYNOS (Flattened Device Tree) (8) @ 1.500GHz
         ./ooosssso++osssssso+`          Memory: 536MiB / 1993MiB
        .oossssso-````/ossssss+`
       -osssssso.      :ssssssso.
      :osssssss/        osssso+++.
     /ossssssss/        +ssssooo/-
   `/ossssso+/:-        -:/+osssso+-
  `+sso+:-`                 `.-/+oso:
 `++:.                           `-/+/
 .`                                 `/
Sample HC4 node (codex07)
$ neofetch
                   -`                    root@codex07
                  .o+`                   ------------
                 `ooo/                   OS: Arch Linux aarch64
                `+oooo:                  Host: Hardkernel ODROID-HC4
               `+oooooo:                 Kernel: 5.10.2-1-ARCH
               -+oooooo+:                Uptime: 2 days, 2 hours, 39 mins
             `/:-:++oooo+:               Packages: 265 (pacman)
            `/++++/+++++++:              Shell: bash 5.1.8
           `/++++++++++++++:             Terminal: /dev/pts/0
          `/+++ooooooooooooo/`           CPU: (4) @ 2.100GHz
         ./ooosssso++osssssso+`          Memory: 738MiB / 3635MiB
        .oossssso-````/ossssss+`
       -osssssso.      :ssssssso.
      :osssssss/        osssso+++.
     /ossssssss/        +ssssooo/-
   `/ossssso+/:-        -:/+osssso+-
  `+sso+:-`                 `.-/+oso:
 `++:.                           `-/+/
 .`                                 `/
Compute

My initial “pool of compute resources” has been an MC1. It’s a good option for many cheap nodes, but they’re also 32-bit, so I don’t intend to stack any more MC1 Solos on top.

Sample MC1 node (cygnus01)
$ neofetch
                   -`                    root@cygnus01
                  .o+`                   -------------
                 `ooo/                   OS: Arch Linux ARM armv7l
                `+oooo:                  Host: Hardkernel Odroid HC1
               `+oooooo:                 Kernel: 4.14.180-3-ARCH
               -+oooooo+:                Uptime: 94 days, 2 hours, 4 mins
             `/:-:++oooo+:               Packages: 241 (pacman)
            `/++++/+++++++:              Shell: bash 5.1.8
           `/++++++++++++++:             Terminal: /dev/pts/0
          `/+++ooooooooooooo/`           CPU: SAMSUNG EXYNOS (Flattened Device Tree) (8) @ 1.500GHz
         ./ooosssso++osssssso+`          Memory: 402MiB / 1993MiB
        .oossssso-````/ossssss+`
       -osssssso.      :ssssssso.
      :osssssss/        osssso+++.
     /ossssssss/        +ssssooo/-
   `/ossssso+/:-        -:/+osssso+-
  `+sso+:-`                 `.-/+oso:
 `++:.                           `-/+/
 .`                                 `/

I deployed an N2+ as an experiment to evaluate it as a board for future aarch64-based compute nodes. It works well, and I’ll probably replace the failing MC1 nodes with these.

N2+ Compute (neumann)
$ neofetch
                   -`                    tylerjl@neumann
                  .o+`                   ---------------
                 `ooo/                   OS: Arch Linux ARM aarch64
                `+oooo:                  Host: Hardkernel ODROID-N2
               `+oooooo:                 Kernel: 4.9.219-1-ARCH
               -+oooooo+:                Uptime: 94 days, 2 hours, 48 mins
             `/:-:++oooo+:               Packages: 409 (pacman)
            `/++++/+++++++:              Shell: zsh 5.8
           `/++++++++++++++:             Terminal: /dev/pts/0
          `/+++ooooooooooooo/`           CPU: Hardkernel ODROID-N2 (6) @ 1.896GHz
         ./ooosssso++osssssso+`          Memory: 1318MiB / 3710MiB
        .oossssso-````/ossssss+`
       -osssssso.      :ssssssso.
      :osssssss/        osssso+++.
     /ossssssss/        +ssssooo/-
   `/ossssso+/:-        -:/+osssso+-
  `+sso+:-`                 `.-/+oso:
 `++:.                           `-/+/
 .`                                 `/

I do have an x86-64 node in the compute pool in the form of an H2+. It’s reasonably performant, and pretty flexible (I run the OS root on an NVMe disk and it accepts a few disks over SATA). I found out while writing this that ODroid has discontinued the H2+ due to chip shortages, which is a real bummer.

I also use an H2+ as my router, which replaced an old espressobin build.

H2+ compute node (pythagoras)
$ neofetch
                   -`                    tylerjl@pythagoras
                  .o+`                   ------------------
                 `ooo/                   OS: Arch Linux x86_64
                `+oooo:                  Host: ODROID-H2 1.0
               `+oooooo:                 Kernel: 5.14.6-arch1-1
               -+oooooo+:                Uptime: 15 days, 8 hours, 42 mins
             `/:-:++oooo+:               Packages: 624 (pacman)
            `/++++/+++++++:              Shell: zsh 5.8
           `/++++++++++++++:             Resolution: 1920x1080i
          `/+++ooooooooooooo/`           Terminal: /dev/pts/1
         ./ooosssso++osssssso+`          CPU: Intel Celeron J4115 (4) @ 2.500GHz
        .oossssso-````/ossssss+`         GPU: Intel GeminiLake [UHD Graphics 600]
       -osssssso.      :ssssssso.        Memory: 4759MiB / 15824MiB
      :osssssss/        osssso+++.
     /ossssssss/        +ssssooo/-
   `/ossssso+/:-        -:/+osssso+-
  `+sso+:-`                 `.-/+oso:
 `++:.                           `-/+/
 .`                                 `/

I use an XU4 as a printserver hooked up to an ancient HP P1005 printer.

Printserver (farnsworth)
$ neofetch
                   -`                    tylerjl@farnsworth
                  .o+`                   ------------------
                 `ooo/                   OS: Arch Linux ARM armv7l
                `+oooo:                  Host: Hardkernel Odroid XU4
               `+oooooo:                 Kernel: 4.14.180-3-ARCH
               -+oooooo+:                Uptime: 94 days, 3 hours, 15 mins
             `/:-:++oooo+:               Packages: 511 (pacman)
            `/++++/+++++++:              Shell: zsh 5.8
           `/++++++++++++++:             Terminal: /dev/pts/0
          `/+++ooooooooooooo/`           CPU: ODROID-XU4 (8) @ 1.500GHz
         ./ooosssso++osssssso+`          Memory: 400MiB / 1993MiB
        .oossssso-````/ossssss+`
       -osssssso.      :ssssssso.
      :osssssss/        osssso+++.
     /ossssssss/        +ssssooo/-
   `/ossssso+/:-        -:/+osssso+-
  `+sso+:-`                 `.-/+oso:
 `++:.                           `-/+/
 .`                                 `/
Laptops

My daily driver is a custom Framework running NixOS. I’ve been very happy with both the hardware and software.

Framework Laptop (diesel)
tylerjl @ diesel in blog on  master [?] via 💎 via ❄️  impure
 100% ❯ nix-shell -p neofetch --run neofetch
          ▗▄▄▄       ▗▄▄▄▄    ▄▄▄▖
          ▜███▙       ▜███▙  ▟███▛
           ▜███▙       ▜███▙▟███▛
            ▜███▙       ▜██████▛
     ▟█████████████████▙ ▜████▛     ▟▙        tylerjl@diesel
    ▟███████████████████▙ ▜███▙    ▟██▙       --------------
           ▄▄▄▄▖           ▜███▙  ▟███▛       OS: NixOS 21.11 (Porcupine) x86_64
          ▟███▛             ▜██▛ ▟███▛        Host: Laptop A8
         ▟███▛               ▜▛ ▟███▛         Kernel: 5.15.16
▟███████████▛                  ▟██████████▙   Uptime: 1 day, 3 hours, 32 mins
▜██████████▛                  ▟███████████▛   Packages: 739 (nix-system), 1197 (nix-user)
      ▟███▛ ▟▙               ▟███▛            Shell: zsh 5.8
     ▟███▛ ▟██▙             ▟███▛             Resolution: 2256x1504, 3840x2160, 1920x1080
    ▟███▛  ▜███▙           ▝▀▀▀▀              DE: none+i3
    ▜██▛    ▜███▙ ▜██████████████████▛        WM: i3
     ▜▛     ▟████▙ ▜████████████████▛         Theme: Nordic-darker [GTK2/3]
           ▟██████▙       ▜███▙               Icons: Adwaita [GTK2/3]
          ▟███▛▜███▙       ▜███▙              Terminal: tmux
         ▟███▛  ▜███▙       ▜███▙             CPU: 11th Gen Intel i7-1185G7 (8) @ 4.800GHz
         ▝▀▀▀    ▀▀▀▀▘       ▀▀▀▘             GPU: Intel TigerLake-LP GT2 [Iris Xe Graphics]
                                              Memory: 29232MiB / 31899MiB

Prior to my Framework, and after my employer enacted tighter restrictions on non-work use of company laptops, I bought two ThinkPad X220 laptops for my personal use. They’re obviously very old, but I used them successfully for a significant period of time. If I could (reasonably) upgrade their guts I’d probably keep using them: the form factor is good and they still have the best keyboards.

x220 Thinkpad (lorentz)
$ neofetch
                   -`                    tylerjl@lorentz
                  .o+`                   ---------------
                 `ooo/                   OS: Arch Linux x86_64
                `+oooo:                  Host: 429046U ThinkPad X220
               `+oooooo:                 Kernel: 5.14.5-arch1-1
               -+oooooo+:                Uptime: 4 hours, 32 mins
             `/:-:++oooo+:               Packages: 1492 (pacman)
            `/++++/+++++++:              Shell: zsh 5.8
           `/++++++++++++++:             Resolution: 1366x768
          `/+++ooooooooooooo/`           Terminal: /dev/pts/0
         ./ooosssso++osssssso+`          CPU: Intel i5-2520M (4) @ 3.200GHz
        .oossssso-````/ossssss+`         GPU: Intel 2nd Generation Core Processor Family
       -osssssso.      :ssssssso.        Memory: 732MiB / 15925MiB
      :osssssss/        osssso+++.
     /ossssssss/        +ssssooo/-
   `/ossssso+/:-        -:/+osssso+-
  `+sso+:-`                 `.-/+oso:
 `++:.                           `-/+/
 .`                                 `/

Services

Buckle up, there’s a lot of these.

Philosophy

From my lab’s meager beginnings with a sole N40L, I’ve branched out to more than 20 machines with a specific “architecture” in mind. While I’ve worked in the traditional “rack 2U servers in the datacenter” type of scenario, I wanted to use my homelab as an excuse to experiment and try different things, so the overall setup is not a traditional farm of secondhand Dell machines.

Supporting Infrastructure
Principles

My lab is built on Arch Linux. I use Arch because:

DevOps

Arch Linux has an extensive package database, but I’ve needed to use packages from the AUR frequently. They’re great to have around, but there’s no way I can manage my fleet of machines by building packages ad-hoc with something like yay.

Instead I use aurto as a build system for AUR packages. I define the set of packages I track, hosts on my network regularly build updated packages, and these hosts are listed as repositories on each machine in my network. It looks like this in Ansible:

- name: Configure pacman repository
  blockinfile:
    path: /etc/pacman.conf
    block: |
     [aurto]
     Server = http://pythagoras/aurto
     SigLevel = Optional
  register: pacman_custom_repo
  notify: refresh pacman
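
On the client side nothing else is needed; once pacman syncs, packages that aurto built install like anything else. A quick sanity check (reusing the hostname from the snippet above) might look like:

# Refresh databases and list what the custom repository provides.
pacman -Sy
pacman -Sl aurto

# Installing an AUR-built package is then an ordinary pacman operation:
pacman -S telegraf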

One of these packages is telegraf, which I use to aggregate metrics across all my hosts. I install it globally with Ansible, and then define the following systemd template unit (consul-registrar@.service) on each machine:

[Unit]
Description=consul autoregistry for %i
Requires=consul.service
After=consul.service

[Service]
Type=oneshot
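# The template instance name is "<service>-<port>" (e.g. telegraf-9273): split it
# apart, then retry the consul registration (via httpie) a few times with backoff.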
ExecStart=/usr/bin/env sh -c "\
                       instance=%i ;\
                       service=$(echo $instance | sed -r 's/-.+$//') ;\
                       port=$(echo $instance | sed -r 's/^[^-]+-//') ;\
                       attempt=0 ;\
                       until [[ $attempt -gt 5 ]] || http --ignore-stdin PUT :8500/v1/agent/service/register Name=$service Port:=$port ; do attempt=$(( $attempt + 1)) ; sleep $attempt ; done"
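# On stop, deregister the service from the local consul agent with the same retry loop.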
ExecStop=/usr/bin/env sh -c "\
                       instance=%i ;\
                       service=$(echo $instance | sed -r 's/-.+$//') ;\
                       attempt=0 ;\
                       until [[ $attempt -gt 5 ]] || http --ignore-stdin PUT :8500/v1/agent/service/deregister/service/$service ; do attempt=$(( $attempt + 1)) ; sleep $attempt ; done"
RemainAfterExit=yes

Then I drop a file like this into /etc/systemd/system/consul-registrar@telegraf-9273.service.d/override.conf:

[Unit]
After=telegraf.service
BindsTo=telegraf.service

This means that, once I enable the telegraf service, a sidecar service starts that registers the local telegraf service with consul. This is a convenient way for hosts in my lab to announce the availability of a telegraf endpoint to scrape for my prometheus deployment, and means I can stand up hosts and immediately start collecting metrics without reconfiguring my prometheus monitoring stack. I have been extremely happy with this consul+telegraf+prometheus scheme for automatic metrics registration.
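
Prometheus picks these registrations up through Consul-based service discovery, and the effect is easy to inspect by asking Consul directly which nodes advertise telegraf - a generic query, not a snippet from my own configuration:

# List every registered instance of the "telegraf" service via Consul's HTTP API.
curl -s http://localhost:8500/v1/catalog/service/telegraf | jq '.[].Node'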

Storage

The N40L’s RAIDZ2 setup serves as the backing storage for a number of different services. In addition to the on-host storage for the N40L server, a few datasets are exported via NFS (I’ve been experimenting with securing access over a wireguard interface). The specific uses for these datasets I’ll go into later.
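
Scoping an export to the wireguard network is mostly a matter of restricting /etc/exports to that interface's subnet. The dataset path and subnet below are placeholders rather than my real values:

# /etc/exports - export a dataset only to hosts on the wireguard subnet, e.g.:
#   /tank/media  10.100.0.0/24(rw,no_subtree_check)
# Re-export after editing and confirm what's visible:
exportfs -ra
showmount -e localhost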

The important datasets are snapshotted on a regular basis and replicated via zfs send to an offsite ZFS host. I won’t go super in-depth here; it’s an espressobin running on a LAN in one of my family’s networks, which I reach through a port forward. This has been a pretty reliable backup scheme; I use zfs-snap-manager to drive it (I’ve contributed a few patches for things like compression - it’s a nice project!). A systemd timer performs regular scrubs to confirm data integrity and reports back over Slack; more on that later.
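
zfs-snap-manager handles the scheduling and retention, but the operations underneath are ordinary snapshot, send, and scrub commands. A stripped-down, hypothetical version of the cycle (dataset, snapshot, and host names are made up):

# Take a snapshot and replicate it incrementally to the offsite host.
zfs snapshot tank/important@2022-01-02
zfs send -i tank/important@2022-01-01 tank/important@2022-01-02 | \
  ssh offsite zfs receive -F backup/important

# Periodic integrity check (the scrub my systemd timer kicks off):
zpool scrub tank
zpool status tank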

Aside from the wireguard-accessible RAIDZ2 NFS exports, the GlusterFS cluster runs a few distributed/replicate volumes that I expand with additional disks when necessary. I have tried disperse volumes, but I’ve confirmed by speaking with other homelabbers that there are bad bugs when using disperse volumes on 32-bit ARM. Beware!
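
Volume creation and expansion follow the standard GlusterFS pattern. Here’s a generic distributed/replicate sketch with placeholder volume and brick paths rather than my actual layout:

# Create a distributed/replicate volume across four bricks (pairs of replicas).
gluster volume create bigvol replica 2 \
  codex01:/srv/brick codex02:/srv/brick \
  codex03:/srv/brick codex04:/srv/brick
gluster volume start bigvol

# Growing it later means adding bricks in replica-sized sets, then rebalancing:
gluster volume add-brick bigvol codex05:/srv/brick codex06:/srv/brick
gluster volume rebalance bigvol start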

GlusterFS access is secured via TLS. My Hashicorp Vault deployment makes the certificate management much easier; a small script provisions certs as necessary, so adding new nodes isn’t terribly difficult. These volumes are used in a few different places that’ll come up later.
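
That script essentially asks Vault’s PKI secrets engine for a certificate and drops the resulting files where GlusterFS expects them. This is a hand-wavy sketch with an assumed PKI mount, role, and hostname, not my actual script:

# Issue a certificate for a new node from a hypothetical "pki" mount and "gluster" role.
vault write -format=json pki/issue/gluster common_name="codex09.lan" > issue.json

# GlusterFS reads its TLS material from these paths when SSL is enabled:
jq -r '.data.certificate' issue.json > /etc/ssl/glusterfs.pem
jq -r '.data.private_key' issue.json > /etc/ssl/glusterfs.key
jq -r '.data.issuing_ca'  issue.json > /etc/ssl/glusterfs.ca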

These storage nodes that operate GlusterFS also share space with a distributed minio deployment. I don’t use minio super heavily, but it comes in handy when I need something that speaks object storage rather than a normal filesystem mount. Things like my private docker registry are easier to operate when they can point to the minio URL and not an on-host ZFS/GlusterFS mount.
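
To give a sense of how services consume it, anything S3-compatible just gets pointed at the minio endpoint. The alias, endpoint, bucket, and credentials below are placeholders:

# Register the deployment with the minio client and create a bucket for the registry.
mc alias set homelab http://minio.lan:9000 ACCESS_KEY SECRET_KEY
mc mb homelab/docker-registry
mc ls homelab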

Multimedia

The aforementioned Raspberry Pis (and one H2+) run Kodi. Storage is offloaded to my GlusterFS cluster, which is mounted on each of the machines at a common path (/srv/storage/media), and similar PKI certificates are present on each node to authorize them. Kodi on each machine is configured to point at a MySQL instance running on the N40L for a shared configuration database (so all instances of Kodi share the same watched status, library, et cetera).
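
Concretely, each Kodi box mounts the shared media volume and aims its library database at MySQL via advancedsettings.xml. A rough sketch - the volume name and credentials are assumptions:

# Mount the GlusterFS media volume at the common path on a Kodi machine.
mount -t glusterfs codex01:/media /srv/storage/media

# ~/.kodi/userdata/advancedsettings.xml then points Kodi's library at the shared
# MySQL database on the NAS, along these lines:
#   <advancedsettings>
#     <videodatabase>
#       <type>mysql</type><host>n40l</host><port>3306</port>
#       <user>kodi</user><pass>kodi</pass>
#     </videodatabase>
#   </advancedsettings>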

While the Kodi machines drive our TVs, I also run Jellyfin for media that isn’t consumed via a TV and remote. Jellyfin is one of my “legacy” services: installed from the AUR onto a single machine (the NAS) and run as a simple systemd service, rather than orchestrated by a container scheduler like Nomad.

TO BE CONTINUED…