For the past two years, I’ve been writing year-end reviews to look back on the year gone by and reflect on what happened. I thought I might as well continue the tradition this year.

However, I’ll try a new format—instead of grouping by month, I’ll group by area. I’ll focus on the following areas:

  1. BGP and operating my own autonomous system;
  2. My homebrew CDN for this blog;
  3. My home server;
  4. My new mechanical keyboard;
  5. My travel router project; and
  6. My music hobby.

Without further ado, let’s begin.

BGP and operating my own AS

My network underwent some dramatic changes this year, mostly due to RIRs making changes to their fee schedules. What can I say? I’d like to save a buck.

Due to ARIN’s fee schedule change taking effect from the start of 2024, requesting an ASN from ARIN came at no extra cost given that I was already a member. So I requested an ASN and was allocated AS54148.

Then, RIPE also changed its fee schedule, slapping a 50 EUR annual fee on ASNs, taking effect at the start of 2025. This meant that keeping my existing ASN, AS200351, with RIPE would cost me at least 50 EUR every year, plus a bit extra to cover administrative fees for any sponsoring LIR1. This proved irritating.

The smart thing to do would have been to abandon AS200351 and switch to the shiny new AS54148, but I decided to do something crazier—transfer AS200351 to ARIN. This resulted in a long tale and a lot of pain, which you can read about in the dedicated blog post.

The silly thing is that, due to the cursed nature of AS200351 after the transfer and my desire to use a 16-bit ASN (which is more compatible with standard BGP communities), I ended up doing the renumber anyway, finally completing it in December of this year. It was a long and painful process, but at least I now know exactly how irritating the whole thing is.

At least in the end, I get to keep my first ASN around for sentimental reasons. Next year, perhaps I’ll explore making a fully IPv6 network with AS200351, with 464XLAT to access any resources only available on IPv4.

Server-wise, I’ve expanded my network quite a bit, adding new points of presence in the following locations:

  • Sydney, Australia;
  • Tokyo, Japan;
  • London, United Kingdom; and
  • Hong Kong, China.

These new nodes should improve my anycast coverage in the Asia-Pacific region significantly, resulting in lower latency to my anycast DNS servers. These nodes also double as nodes on my homebrew CDN, resulting in even better latency to this website.

I also upgraded my node in Toronto from a cheap VM to a fully dedicated server from Xenyth. While this cost quite a bit, I really enjoyed having a dedicated server in a datacentre that’s 3 ms from my home. This opened up a lot of opportunities for self-hosting services with more stable connectivity than my home server, while adding basically no latency. Given the issues that my poor home server suffered this year, as I’ll describe later, this proved rather handy.

Finally, I lost the node in Seattle due to the provider suddenly and unexpectedly shutting down2. Fortunately, since everything important was fully redundant and capable of automatic failover through health checks, I suffered zero downtime as a result. This leads us to my CDN for this blog…

My homebrew CDN

As I’ve alluded to before, this website has backend servers in multiple locations, and my anycast DNS cluster returns the nearest server that’s alive. This ensures that pages load as quickly as possible no matter where you are. At the beginning of the year, backend servers existed in:

  • Montréal, Canada;
  • Toronto, Canada;
  • Amsterdam, Netherlands;
  • Kansas City, Missouri, United States;
  • Las Vegas, Nevada, United States;
  • Miami, Florida, United States;
  • Seattle, Washington, United States; and
  • Singapore.

This year, I’ve lost the Seattle node (it’s the same server I was using for BGP), but added four new nodes:

  • Sydney, Australia;
  • Tokyo, Japan;
  • Bangalore, India; and
  • Hong Kong, China.

I’ve also unified my DNS setup. Previously, I was using gdnsd to host the dynamic GeoDNS that resolves the nearest backend, but its only mechanism for propagating zone updates was physically copying zone files around, so I also ran BIND with AXFR for regular DNS. Running two DNS servers proved rather painful, so I decided to replace both with PowerDNS, which can generate dynamic DNS records through Lua code and propagate records across the whole anycast cluster with MariaDB replication.

Overall, PowerDNS was surprisingly easy to set up. There were two slight annoyances though:

  1. PowerDNS’s Lua DNS records have a mechanism to pick the geographically closest IP to the user, but it doesn’t let you specify the coordinates of the server. Instead, it looks up the IP geolocation of the server in the same database it uses to look up the user’s IP. This forced me to submit multiple geolocation corrections to MaxMind before all my servers were located correctly. Apparently, submitting geolocation feeds isn’t something that all server providers do.
  2. Lua is a cursed language and unpleasant to work with. There are various crazy design decisions, such as 1-based indexing and the lack of separate array and mapping types. Instead, all “arrays” are just special mappings from integers to other values. This, along with other pain points, made me write a Python program to generate the Lua code for PowerDNS instead of writing Lua myself.

I am not sure what next year might bring, except perhaps adding more servers in exotic locations. However, due to poor network connectivity in certain regions, adding a server could potentially be harmful to performance. For example, ISPs in different South American or African countries may not connect with each other locally, but route all the traffic out of their country to Miami or Europe, respectively. Given that I already have servers in Miami and Amsterdam, it means that getting a server in either Africa or South America could actually cause traffic in neighbouring countries to go to the US or Europe and back, increasing latency as a result.

At this point, the only logical location to add new nodes would be somewhere in the Middle East, Eastern Europe, or Central Asia. Due to current events in those regions, this may prove futile and my homebrew CDN may have already reached its final form.

My home server

Due to my acquisition of the 3060 Ti for my Windows VM in 2022, I moved the old 1060 into my home server. This year, I decided to experiment with large language models (LLMs) by running some smaller models at home.

So I now run Open WebUI at home, acting as the interface to a collection of self-hosted LLMs running through Ollama and some more in the cloud. I can use cloud models like OpenAI’s GPT-4o, Anthropic’s Claude, and Google’s Gemini, as well as self-hosted models like Mistral, Gemma, Phi 3, and Qwen—all through a single, unified web interface. This is pretty convenient3.

Better yet, since I am using the API for OpenAI and Anthropic, I only pay for what I use and my input will not be harvested for training4. Since I don’t use LLMs that heavily, this is actually way cheaper than paying for a monthly subscription5.
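As a rough back-of-the-envelope check (the prices and usage figures here are purely illustrative, not actual quotes from any provider):

```python
def monthly_api_cost(input_tokens, output_tokens, in_price, out_price):
    """Cost in dollars for a month of API usage.

    Prices are per million tokens, as API providers typically quote them.
    """
    return input_tokens / 1e6 * in_price + output_tokens / 1e6 * out_price

# Hypothetical light usage: 200k input + 50k output tokens a month,
# at illustrative prices of $2.50/M input and $10/M output:
cost = monthly_api_cost(200_000, 50_000, 2.50, 10.00)
print(f"${cost:.2f} vs a $20 subscription")  # $1.00 vs a $20 subscription
```

At light usage, the API route comes out an order of magnitude cheaper, modulo the minimum deposit issue in the footnote.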

To add some degree of isolation for the AI stuff, I decided to run them in a systemd-nspawn container, passing through the device files for the Nvidia GPU. This was actually pretty easy: after ensuring that the container has the same GPU driver version as the host machine, it was simply a matter of putting the following lines into the nspawn config:

[Files]
Bind=/dev/nvidia0
Bind=/dev/nvidiactl
Bind=/dev/nvidia-modeset
Bind=/dev/nvidia-uvm
Bind=/dev/nvidia-uvm-tools

And now I have an isolated container that shares the GPU with the host machine, which also uses it for media transcoding and related tasks. I’ll probably go into more detail on systemd-nspawn in a separate post next year. It really is a convenient way to run a small, semi-isolated system that integrates well with modern systemd-based Linux distros when full virtualization is either impossible or overkill. It also has better security than Docker6 and doesn’t randomly mangle firewall rules.
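One caveat worth mentioning: depending on the host’s device cgroup policy, bind-mounting the device nodes alone may not be enough, and the container’s unit may also need DeviceAllow= entries along these lines (a hypothetical drop-in for the systemd-nspawn@.service unit, mirroring the bind mounts above):

[Service]
DeviceAllow=/dev/nvidia0 rw
DeviceAllow=/dev/nvidiactl rw
DeviceAllow=/dev/nvidia-modeset rw
DeviceAllow=/dev/nvidia-uvm rw
DeviceAllow=/dev/nvidia-uvm-tools rw

Without this, the devices appear inside the container but opening them fails with a permission error.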

Despite the successes, there were also problems. A week or so ago, my poor server suffered memory corruption, one of the worst things that could have happened to it, and the full impact is still being determined. For more details, read the post about the incident7. The main takeaway: ECC is very good to have, and if you can’t get ECC, avoid ADATA/XPG RAM (or SSDs) at all costs.

Fortunately, some of my more important systems, like my self-hosted notebook, are on my Toronto dedicated server and not at home. Imagine the amount of fun I’d have had if I were left without any of my documentation while trying to fix the home server…

My ECC RAM should arrive next year. Once it does, I’ll probably write a separate blog post about running ECC on Ryzen.

My mechanical keyboard

I also built a custom mechanical keyboard this year and wrote some custom RGB lighting code for the QMK firmware on it. It was a nice adventure, which is documented in its own post.

Unfortunately, this project has proved to be cursed on multiple fronts:

  1. The “south-facing” LEDs that the keyboard came with are one of the worst things ever invented—they solve a tiny problem by creating a massive one, leaving my keys completely invisible in the dark, which was irritating even though I could touch type just fine;
  2. The keyboard proved to be unergonomic in some unknown way—I managed to acquire de Quervain’s tenosynovitis not long after starting to use it. This really wasn’t fun. At its worst, my right wrist hurt so badly that I was completely unable to type (or do anything else involving that hand) and had to resort to using voice control at work to avoid being completely useless; and
  3. Somehow, the s and c keys would sometimes fail to register. This proved super annoying, as those letters are quite common in English and are also frequently used in keyboard shortcuts. Given that the problem persisted after replacing the switches multiple times, I was forced to conclude that something was wrong with the circuit board.

Given all these problems, I was forced to revert to my old Corsair K70 MK.2 with Cherry MX brown switches. After using the Akko Lavender Purple switches for a while, the Cherry MX browns felt way too heavy and mushy, which wasn’t something I ever imagined I’d say about Cherry MX switches. Still, it’s better than dealing with endless headaches and handaches.

At least, after switching back to my old keyboard and doing some wrist exercises, my hand is fully functional again and I can use it painlessly… De Quervain’s tenosynovitis sucks big time. It remains to be seen what I’ll do about the keyboard situation next year. A lot of money and effort was certainly wasted. I guess not all projects succeed.

My travel router project

For Black Friday, I managed to get myself a travel router—a GL.iNet GL-MT3000, a.k.a. Beryl AX. This is a tiny, pocket-sized router with a decently powerful CPU, which lets your devices connect to a separate WiFi network free of potential bad actors or tunnel all traffic through a VPN. Its firmware is OpenWrt-based with a custom UI that’s more user-friendly, though you can easily drop to the real OpenWrt interface for anything more advanced, which is quite nice.

Of course, there’s the way such a travel router is supposed to be used, and then there’s how I am actually using it. Instead of connecting it to a single, trusted VPN server like a normal person, I got it to connect to multiple servers via WireGuard, then run BGP and OSPF on top. This allows me to safely connect to AS54148 on the go, automagically routing my traffic out of the nearest point of presence, no matter how sketchy the hotel or airport WiFi I am using. It really is quite convenient when travelling, though perhaps also slightly insane.
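For the curious, the OSPF half of that setup looks roughly like the following BIRD 2 fragment (the protocol name, interface pattern, and cost are illustrative, not my actual config):

protocol ospf v3 wg_mesh {
    ipv6 {
        import all;
        export all;
    };
    area 0 {
        interface "wg*" {
            type ptp;
            cost 20;
        };
    };
}

Each WireGuard tunnel appears as a point-to-point interface, so OSPF discovers neighbours over every tunnel and automatically picks the lowest-cost path to each point of presence.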

I might write about this more next year, but a prerequisite for such a post would be one about my “global WireGuard backbone”, which, as the name suggests, connects all my locations through WireGuard tunnels and uses OSPF to determine the optimal path to any destination inside my network.

My music hobby

And finally, my music hobby. As I mentioned last year, I was learning to play Debussy’s Rêverie, which I finally succeeded in doing this year. I managed to create this recording last night:

While there were definitely some mistakes, I was still happy to be able to play the whole piece after learning it in my spare time, especially given that two years ago, I couldn’t play anything at all.

After three Debussy pieces, I think I’ll go for La fille aux cheveux de lin (“the girl with the flaxen hair”) next, in 2025. We’ll see if I manage to learn it. After that, I might try a different composer.

Conclusion

That’s about it for 2024. I am not sure what 2025 will bring, but I suppose we’ll find out.

If you like my content and would like to support me, you can do so via GitHub Sponsors, Ko-fi, or Stripe (CAD)—though of course, this is strictly optional.

Alternatively, you can also check out my referral links for services that I use. Most of them will pay me some sort of commission if you sign up with my link.

Notes

  1. For more details on this, see the post on what I wish I knew when I got my ASN.

  2. The provider, LimeWave, just vanished from the Internet one day with no explanation. This is what happens when you use sketchy providers. The only reason I dare to use these providers is because the services I am running on them are sufficiently redundant and can automatically failover if the provider vanishes. 

  3. In fact, I used GPT-4o through Open WebUI as a proofreader for this post instead of bugging one of my friends to read the whole thing right away. 

  4. Or so they say, though I imagine no business would trust them with their data if these companies were discovered to be harvesting API inputs… They’d basically be completely trashing their brand for a tiny, short-term gain. 

  5. Unfortunately, there is a minimum API credit deposit requirement on both OpenAI and Anthropic, and credits also expire if you don’t use them, so there’s effectively a minimum annual cost to using LLMs this way. 

  6. By default, Docker doesn’t use a separate user ID space, and a lot of containers just run their stuff as root. This is super irritating and insecure if malware manages to break through the other security barriers. It is also annoying when the container runs with a common UID such as 1000 instead of root. The first user on the system is often UID 1000 and ps or htop will show the docker process running as that user, not to mention that it may compromise all the user’s files if malware breaks out of the container’s filesystem… Basically, a poorly built Docker container is a security nightmare waiting to happen. 

  7. I am weirdly proud of the “screenshots” I have for memtest86+. It’s some CSS magic with a web font made to look like the default VGA font. Try selecting and copying from the “screenshots”!