Posts from 2025
-
On reusing old cases for NAS applications
My home server—the one that acts as my router and NAS, while hosting a multitude of services at home, such as mirror.quantum5.ca—had a problem: it was using a nameless case that was at least 20 years old, and it wasn't doing the job well. The ancient case was from an era when computers were much smaller and emitted a lot less heat. With a modern air cooler, I couldn't even close the side panel.

However, buying a modern case has significant drawbacks. The design philosophy for cases in the 2020s is completely focused on displaying all the internals with as much glass as possible, offering as much cooling as possible for power-hungry components, or both. Given that spinning hard drives (HDDs) have gone completely out of fashion in the PC market, drive bays are sacrificed to improve cooling and aesthetics. Whereas my 20+-year-old case had six 3.5″ HDD bays and four more 5.25″ bays for optical drives that could be repurposed to house more HDDs, most modern cases, if they still have 3.5″ HDD bays at all, offer at most three. This is perfectly fine for building PCs, but far from ideal for building a NAS.
What I really wanted was a full ATX case with good cooling and as many drive bays as possible. There was effectively only one case on the market that fulfilled these requirements—Fractal Design's Meshify 2, or its XL variant—and they came at ~$200 CAD and ~$270 CAD, respectively, which always felt a bit too expensive for this hobby. So instead, I kept using the crappy old case. That was until I found an old Antec 1200, which ticked all my requirements, for free.
This post documents my experience of repurposing the 17-year-old Antec 1200 to fit a modern computer acting as a NAS, and my thoughts on the endeavour after doing it.
-
Building a multi-network ADS-B feeder with a $20 dongle
For a while now, I've wondered what to do with my old Raspberry Pi 3 Model B from 2017, which has basically been doing nothing ever since I replaced it with the Atomic Pi in 2019 and an old PC in 2022. I considered building a stratum 1 NTP server with it, but ultimately built one using a serial port on my server instead.
Recently, I’ve discovered a new and interesting use for a Raspberry Pi—using it to receive Automatic Dependent Surveillance–Broadcast (ADS-B) signals. These signals are used by planes to broadcast positions and information about themselves, and are what websites like Flightradar24 and FlightAware use to track planes. In fact, these websites rely on volunteers around the world running ADS-B receivers and feeding the data to them to track planes worldwide.
Since I love running public services (e.g. mirror.quantum5.ca), I thought I might run one of these receivers myself and feed the data to anyone who wants it. I quickly looked at the requirements for Flightradar24 and found they weren't demanding at all—all you need is a Raspberry Pi, a view of the sky, and a cheap DVB-T TV tuner, such as the popular RTL2832U/R820T dongle, whose software-defined radio (SDR) can be used to receive ADS-B signals.
I have enough open sky out of my window to run a stratum 1 NTP server with a GPS receiver, so I figured that was also sufficient for ADS-B. Since I found an RTL2832U/R820T combo unit with an antenna for around US$20 on AliExpress, I decided on a whim to buy one. Today, it arrived, and I set out to build my own ADS-B receiver.
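To give a flavour of what these broadcasts actually contain: an extended-squitter ADS-B frame is 112 bits, and fields like the aircraft's 24-bit ICAO address and callsign can be pulled out with plain bit arithmetic. Below is a minimal, illustrative Python decoder (CRC verification omitted), run against a sample frame that appears widely in Mode S tutorials—this is a sketch of the message format, not what the actual feeder software runs.

```python
# Minimal ADS-B (Mode S extended squitter) field decoder -- illustrative only.
# Real decoders such as dump1090 also verify the 24-bit CRC, omitted here.

# 6-bit character set used by aircraft identification messages
CHARSET = ("#ABCDEFGHIJKLMNOPQRSTUVWXYZ" + "#" * 5 + " " + "#" * 15
           + "0123456789" + "#" * 6)

def decode(hexmsg: str) -> dict:
    bits = bin(int(hexmsg, 16))[2:].zfill(len(hexmsg) * 4)
    out = {
        "df": int(bits[0:5], 2),      # downlink format (17 = ADS-B)
        "icao": hexmsg[2:8].upper(),  # 24-bit aircraft address
    }
    me = bits[32:88]                  # 56-bit "message, extended" field
    tc = int(me[0:5], 2)              # type code
    out["tc"] = tc
    if 1 <= tc <= 4:                  # aircraft identification message
        # Eight 6-bit characters follow the type code and category bits
        out["callsign"] = "".join(
            CHARSET[int(me[8 + 6 * i : 14 + 6 * i], 2)] for i in range(8)
        ).strip()
    return out

# Sample frame commonly used in Mode S tutorials
print(decode("8D4840D6202CC371C32CE0576098"))
# {'df': 17, 'icao': '4840D6', 'tc': 4, 'callsign': 'KLM1023'}
```

Position and velocity messages (other type codes) use more involved encodings, but the principle is the same: everything the tracking sites show is broadcast in the clear.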
-
A whirlwind tour of systemd-nspawn containers
In the last yearly update, I talked about isolating my self-hosted LLMs running in Ollama, as well as Open WebUI, in `systemd-nspawn` containers and promised a blog post about it. However, while writing that blog post, a footnote on why I am using it instead of Docker accidentally turned into a full blog post of its own. Here's the actual post on `systemd-nspawn`.

Fundamentally, `systemd-nspawn` is a lightweight, Linux namespaces-based container technology, not dissimilar to Docker. The difference is mostly in image management—instead of describing how to build images with `Dockerfile`s and distributing prebuilt, read-only images containing ready-to-run software, `systemd-nspawn` is typically used with a writable root filesystem, functioning more similarly to a virtual machine. For those of you who remember using `chroot` to run software on a different Linux distro, it can also be described as `chroot` on steroids.

I find `systemd-nspawn` especially useful in the following scenarios:
especially useful in the following scenarios:- When you want to run some software with some degree of isolation on a VPS, where you can’t create a full virtual machine due to nested virtualization not being available1;
- When you need to share access to hardware, such as a GPU (which is why I run
LLMs in
systemd-nspawn
); - When you don’t want the overhead of virtualization;
- When you want to directly access some files on the host system without
resorting to
virtiofs
; and - When you would normally use Docker but can’t or don’t want to. For reasons, please see the footnote-turned-blog post.
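To make the GPU-sharing and host-file-access cases concrete, here's a hedged sketch of a per-machine configuration file, which systemd reads from `/etc/systemd/nspawn/<machine>.nspawn`. The machine name `ollama` and the paths are hypothetical, chosen only for illustration:

```ini
# /etc/systemd/nspawn/ollama.nspawn -- machine name and paths are examples
[Exec]
Boot=yes
# Shift the container's UIDs into a private host range (user namespaces)
PrivateUsers=pick

[Files]
# Share the GPU's render nodes with the container
Bind=/dev/dri
# Direct access to host files, no virtiofs needed
Bind=/srv/models:/var/lib/ollama

[Network]
# Give the container its own veth pair instead of the host's network
VirtualEthernet=yes
```

Note that binding `/dev/dri` alone is typically not enough when the container is started via `systemd-nspawn@ollama.service`: the unit's device cgroup policy usually also needs a `DeviceAllow=` drop-in.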
In this post, I'll describe the process of setting up `systemd-nspawn` containers and how to use them in some common scenarios.
-
Docker considered harmful
In the last yearly update, I talked about isolating my self-hosted LLMs running in Ollama, as well as Open WebUI, in `systemd-nspawn` containers. However, as I contemplated writing such a blog post, I realized the inevitable question would be: why not run it in Docker?

After all, Docker is super popular in self-hosting circles for its "convenience" and "security." There's a vast repository of images for almost any software you might want. You can run almost anything with a simple `docker run`, and it'll run securely in a container. What isn't there to like?

This is probably going to be one of my most controversial blog posts, but the truth is that over the past decade, I've run into so many issues with Docker that I've simply had enough of it. I now avoid Docker like the plague. In fact, if some software is only available as a Docker container—or worse, requires Docker Compose—I sigh and create a full VM to lock away the madness.
This may seem extreme, but fundamentally, this boils down to several things:
- The Docker daemon’s complete overreach;
- Docker’s lack of UID isolation by default;
- Docker's lack of `init` by default; and
- The quality of Docker images.
Let’s dive into this.
-
On ECC RAM on AMD Ryzen
Last time, I talked about how a bad stick of RAM drove me to buy ECC RAM for my Ryzen 9 3900X home server build—mostly because ECC would have detected that something was wrong with the RAM and corrected single-bit errors, which would have saved me a ton of headache.
Now that I've received the RAM and run it for a while, I'll write about the entire experience of getting it working and my attempts to cause errors to verify the ECC functionality.
Spoilers: Injecting faults was way harder than it appeared from online research.