Building highly available services: global anycast PowerDNS cluster
As I’ve written about before, this blog has multiple geographically distributed backend servers serving the content, with my anycast PowerDNS cluster selecting the geographically closest backend server that’s up and returning it to the user.
Due to various outages I’ve experienced recently, I’ve been thinking a lot more about making my self-hosted services highly available (HA), staying up even if a few servers go down. This is mostly for the sake of my sanity, so that I could just shrug if a server goes down and wait for the provider to bring it back up, instead of panicking. Of course, the added availability also helps, but it’s probably a bigger concern in the enterprise space than it is for hobbyists. As a bonus, if you have nodes spread out across multiple locations, you can also route the user to the geographically closest one for lower latency and faster response times.
Either way, I thought it was time to start a series about building highly available services. We begin with the most important building block—DNS, which is basically required to make any other service highly available.
The stack I’ve chosen for this is MariaDB and PowerDNS, mostly because these are fairly easy to set up and I already have experience with them. Many alternative tech stacks are probably equally viable, but those are left as an exercise for the reader; the general idea should apply anyway. Note that anycast isn’t strictly required, since you can still follow along and deploy two unicast DNS servers for redundancy.
Without further ado, let’s dive in.
Table of contents
- Why make DNS highly available?
- Choosing a tech stack
- MariaDB replication
- Choosing a replication topology
- Setting up your first MariaDB node
- Setting up PowerDNS
- Setting up poweradmin
- Setting up replication
- Regular unicast DNS
- Deploying anycast
- Using Lua records to select backends
Why make DNS highly available?
Perhaps the first question you might have is why a DNS server is necessary for HA, when I am using anycast to make the DNS highly available and could theoretically just use anycast for everything else too. There are three main reasons:
- You need DNS anyway if you want your domain to resolve, so DNS itself has to be highly available before anything else can be. I suppose you could just use any free or cheap DNS hosting provider with anycast instead of hosting your own if that were the only reason, though.
- Most services (including HTTP) use TCP. With anycast, multiple devices share the same IP address, with the traffic routed to the “closest” device (as BGP understands it). Notably, this happens on OSI layer 3—the network layer—which is one layer below TCP on the transport layer (layer 4). This means that anycast doesn’t understand anything about TCP connections. Routers will happily send traffic to whichever device they think is the closest at any given time, even if you have a TCP connection with some other device. The new device will get very confused and send a “connection reset” packet back, forcing the connection to be re-established. While this may not matter so much for HTTP due to the very short-lived connections, for other services, it would prove disastrous.
- BGP’s idea of “closest” may be very wrong. As mentioned in the post on anycast, BGP prefers the shortest AS path instead of the shortest round-trip time, and ASes prefer routes from their own customers over routes from peers. As such, it’s quite easy for it to get into pathological situations where it chooses to route traffic to another continent. I have to regularly run traces on my anycast from around the world to fix the pathological routing, which is very annoying. While IP geolocation is not an exact science either1, it’s less likely to have such pathological behaviour2.
Choosing a tech stack
For the DNS server, I chose PowerDNS since it’s popular, in the Debian repository3, and extensible with Lua, enabling complex logic to generate DNS records, such as based on server availability and geolocation. It also supports letting the database take care of the replication, instead of building something bespoke with AXFRs, so I can reuse the underlying database replication mechanism for other services, which will become important later.
Other contenders included:
- bind9, which is immediately ruled out because it can’t do health checks. It also has a very cursed way of doing other things. For example, instead of automatically finding which server is the closest based on geographical distance, you are supposed to use ACLs to match against individual countries, and then use different views serving different zone files based on that. That just doesn’t seem very ergonomic… The main way of defining zones is also through plain text zone files, which are sort of a pain to work with too, especially with automation.
- gdnsd, which can do the availability and geolocation thing very well, but only those. I used to use it, and then the built-in HTTP health check turned out to be burning a ton of CPU cycles due to a bug, so I had to replace it with a curl script… It also relies on cursed bind-style zone files to define records, which is annoying, and it doesn’t support DNSSEC4 either. For replication, it requires copying the zone files around.
- CoreDNS, which is a DNS server written in Go that makes everything a plugin. On the surface, there is a geoip plugin, but there’s no way to use the data it generates short of writing my own plugins in Go. There is another external plugin, gslb, which has availability checks and geolocation, but requires manually declaring locations instead of finding the nearest one automatically. I’d also need to deploy my own Go application, since it’s not packaged.
For the database storing zone data in PowerDNS, I chose to use MariaDB because I am familiar with it, it’s easy to work with, it has really nice built-in replication, and there’s the Galera cluster extension that supports building highly available multi-master clusters. We will not be using Galera in this post, but the next one in the series will use it to address some of the shortcomings in this setup.
I know the PostgreSQL crowd is going to inevitably say that PostgreSQL is the only correct choice for a database (just like how the ZFS crowd insists that I am doing it wrong with LVM and btrfs), but I have PostgreSQL deployed for applications that insist on it, and it always seemed rather painful to deal with for various reasons:
- The need to deploy a connection pooler for good performance;
- The pain of major version upgrades, since physical replication doesn’t work across major versions, and logical replication doesn’t replicate any schema changes, so I can’t just use that; and
- The lack of a simple, built-in multi-master solution like Galera. There are various third-party solutions that handle failover, but they are all quite complex compared to Galera, which has first-party support and just works out of the box.
Furthermore, none of the advanced PostgreSQL features matter for this use case. When replicating to 512 MiB VMs5 on the other side of the world, MariaDB gets the job done and is just a lot easier to work with.
MariaDB replication
Before we go any further, we should understand how MariaDB replication works.
The form of replication we are interested in is called standard replication, which is the replication from a master (also called primary or source) node to a “slave” (also called replica, secondary, or standby) node.6
With standard replication, only master nodes can handle write requests, since any writes to a slave would not be replicated to any other node. Slave nodes can only handle read requests as a result. On the other hand, if a slave is disconnected from the master, it can and will keep serving reads with potentially stale data while it tries to reconnect.
Furthermore, any given table can only be replicated from a single master, even with multi-source replication. This means that the master node is a single point of failure by default. A failover solution can be used to promote a slave to master and reconfigure all other slaves to replicate from it to ensure availability, but this is quite annoying to do in practice. There are products, such as MariaDB MaxScale, that handle failover by reconfiguring replication.
Choosing a replication topology
I have 10+ anycast nodes around the world. All these nodes will need read-only access to the PowerDNS database. For simplicity’s sake, I decided to use standard replication, running a replication slave on each PowerDNS node, since that’s the most reliable setup and tolerates network disconnection events.
I first started out with a single master node, figuring it was enough because the anycast nodes would still stay up even if the master node is down. While this is true, it does render me completely unable to update the DNS while the master node is down, which is annoying. I’ll cover how to eliminate this single point of failure next time.
This is what the topology looks like, with the arrow showing the flow of data when a change is made on the master node:
Note that if you don’t intend to use anycast, you can just start with one master node for your DNS without any slaves. The next post in the series will discuss how to create multiple masters, at which point you can just use the master nodes as your nameservers.
Setting up your first MariaDB node
To start, we need to deploy the master node for the database. We’ll use Debian as an example here, but the procedure is similar on other distros.
First, we set up MariaDB’s repositories. You can follow the instructions on the MariaDB documentation, or use the packages provided by your distro.
Alternatively, just run the following script as root to install MariaDB 11.4, which was the latest version when I started deploying this cluster (the latest version at the time of writing is 11.8):
apt install apt-transport-https curl
mkdir -p /etc/apt/keyrings
curl -o /etc/apt/keyrings/mariadb-keyring.pgp 'https://mariadb.org/mariadb_release_signing_key.pgp'
cat > /etc/apt/sources.list.d/mariadb.sources <<'EOF'
# MariaDB 11.4 repository list
# https://mariadb.org/download/
X-Repolib-Name: MariaDB
Types: deb
URIs: https://deb.mariadb.org/11.4/debian
Suites: bookworm
Components: main
Signed-By: /etc/apt/keyrings/mariadb-keyring.pgp
EOF
apt update
apt install -y mariadb-server mariadb-backup
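If you want to confirm that the server installed correctly and is running before moving on, a quick check looks like this (the unit is named mariadb.service, with mysql.service as an alias):
systemctl status mariadb.service
mariadb --version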
Note that mariadb-backup
will become very important later.
The default settings are insecure, so you should run sudo mariadb-secure-installation. You are highly advised to use unix_socket authentication for root, disable anonymous users, disallow remote root logins, and remove the test database.
Now, your master node is ready.
Setting up PowerDNS
For PowerDNS, we will be using the generic MySQL/MariaDB backend (gmysql). Since this is a generic backend that supports various different schemas, PowerDNS will not create a database or any tables for you. Instead, you’ll need to do this yourself.
Run sudo mariadb
to enter the MariaDB shell, then run the following SQL
queries:
CREATE DATABASE powerdns;
CREATE USER 'powerdns-reader'@'%' IDENTIFIED BY 'hunter2';
CREATE USER 'powerdns-writer'@'%' IDENTIFIED BY 'hunter2';
GRANT SELECT ON powerdns.* TO 'powerdns-reader'@'%';
GRANT SELECT, INSERT, UPDATE, DELETE ON powerdns.* TO 'powerdns-writer'@'%';
You can replace @'%' to restrict which hostname or IP address can connect. For example, if you only expect PowerDNS to connect from localhost, you could restrict it to @'localhost'. Since I am running this exclusively on a VPN, it doesn’t really matter. Also, for obvious reasons, do not use hunter2 as the password, and use a different password for each user.
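If you want to sanity-check the grants before moving on, you can list them for the users created above:
sudo mariadb -e "SHOW GRANTS FOR 'powerdns-reader'@'%'; SHOW GRANTS FOR 'powerdns-writer'@'%';"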
Next, you want to create the schema. Run USE powerdns
to switch to the newly
created database, and paste in the schema from the PowerDNS
documentation.
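Alternatively, if you save the schema from the documentation to a local file (schema.sql here is just a placeholder name), you can load it non-interactively:
sudo mariadb powerdns < schema.sql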
Then, you want to install PowerDNS Authoritative Server:
sudo apt install pdns-server pdns-backend-mysql pdns-backend-lua2 pdns-backend-geoip pdns-backend-bind-
This command will install the MySQL/MariaDB backend, the GeoIP and Lua plugins,
and avoid installing the useless bind
zone file backend that we aren’t going
to use.
We’ll create the following config file /etc/powerdns/pdns.d/common.conf
for
configuration shared between writable nodes and read-only nodes in the cluster:
# MariaDB
launch += gmysql
gmysql-dbname = powerdns
gmysql-user = powerdns-reader
gmysql-password = hunter2
# Lua records
enable-lua-records = yes
# Enable geolocation capability
launch += geoip
geoip-database-files = /var/lib/GeoIP/GeoLite2-City.mmdb
edns-subnet-processing = yes
You can obtain the GeoLite2-City.mmdb
file through various means, such as the
geoipupdate
tool from MaxMind, which you can install on Debian
with sudo apt install geoipupdate
if the contrib
repository is enabled.
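For reference, geoipupdate reads its settings from /etc/GeoIP.conf. A minimal configuration looks something like this, where the account ID and license key are placeholders for the credentials from your MaxMind account:
cat > /etc/GeoIP.conf <<'EOF'
AccountID 123456
LicenseKey 0123456789abcdef
EditionIDs GeoLite2-City
EOF
geoipupdate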
We then create the following config file /etc/powerdns/pdns.d/master.conf
specifically for the master nodes:
gmysql-user = powerdns-writer
gmysql-password = hunter2
# Enable API
api = yes
api-key = hunter2
# Remember to change the API key!
Now, we can restart PowerDNS with sudo systemctl restart pdns.service, and it should be querying from MariaDB. There currently is nothing to serve, but we’ll rectify that soon enough.
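To double-check that PowerDNS is up and can reach the database, you can poke it directly. The API call below assumes the api-key from master.conf; with api = yes, PowerDNS also enables its built-in webserver on 127.0.0.1:8081 by default:
pdns_control ping
curl -H 'X-API-Key: hunter2' http://127.0.0.1:8081/api/v1/servers/localhost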
Setting up poweradmin
Poweradmin is the most popular web frontend for PowerDNS, and the only one that’s actively being maintained at the time of writing. It’s somewhat unfortunate that it’s written in PHP, but oh well.
There are many ways to deploy PHP applications. Since I use nginx everywhere, I
decided to deploy it with nginx + php-fpm. Basically, we’ll deploy poweradmin’s
code into /srv/poweradmin
and set up a php-fpm pool for it, and run
poweradmin’s installer.
First, we install nginx. There are many ways of doing it, but here’s one way:
sudo apt -y install curl gnupg2 ca-certificates lsb-release debian-archive-keyring
curl https://nginx.org/keys/nginx_signing.key | gpg --dearmor | sudo tee /usr/share/keyrings/nginx-archive-keyring.gpg > /dev/null
echo "deb [signed-by=/usr/share/keyrings/nginx-archive-keyring.gpg] http://nginx.org/packages/debian $(lsb_release -cs) nginx" | sudo tee /etc/apt/sources.list.d/nginx.list > /dev/null
sudo apt update
sudo apt -y install nginx
We then install php-fpm and the required PHP extensions:
apt install -y php-fpm php-intl php-mysql php-mcrypt
Now, we create a new php-fpm pool specifically for poweradmin to encourage
isolation in /etc/php/8.2/fpm/pool.d/poweradmin.conf
with the following
contents:
[poweradmin]
user = poweradmin
group = poweradmin
listen = /run/php/poweradmin.sock
listen.owner = poweradmin
listen.group = nginx
listen.mode = 660
pm = dynamic
pm.max_children = 3
pm.start_servers = 1
pm.min_spare_servers = 1
pm.max_spare_servers = 2
pm.max_requests = 1000
env[PATH] = /usr/local/bin:/usr/bin:/bin
catch_workers_output = yes
php_admin_flag[log_errors] = on
You may need to change the PHP version in the path to whatever you have installed. Debian bookworm uses 8.2. You may also need to change listen.group to whatever group nginx is running under, which is either nginx or www-data, depending on whose package you installed.
You will naturally need to create the poweradmin
user and group for php-fpm:
adduser --system --group poweradmin
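Since the pool runs as the poweradmin user, the user has to exist before php-fpm will start the pool. You can then test and apply the php-fpm configuration (adjust the version to match your install):
php-fpm8.2 -t
systemctl restart php8.2-fpm.service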
You’ll also need to create a poweradmin user in MariaDB by running this in sudo mariadb:
CREATE USER 'poweradmin'@'%' IDENTIFIED BY 'hunter2';
GRANT ALL ON powerdns.* TO 'poweradmin'@'%';
Finally, we are going to install poweradmin 3.9.5, which is the latest version at the time of writing:
mkdir -p /srv/poweradmin
cd /srv/poweradmin
curl -L https://github.com/poweradmin/poweradmin/archive/refs/tags/v3.9.5.tar.gz | tar -xz --strip-components=1
We now configure nginx to serve poweradmin by passing it to php-fpm. Create /etc/nginx/conf.d/poweradmin.conf:
server {
    listen 80;
    listen [::]:80;
    server_name poweradmin.example.net;
    root /srv/poweradmin;

    location / {
        try_files $uri $uri/ /index.php?$query_string;
        index index.php;
    }

    location /inc/ { return 404; }

    location ~ [^/]\.php(/|$) {
        fastcgi_pass unix:/run/php/poweradmin.sock;
        fastcgi_index index.php;
        fastcgi_split_path_info ^(.+?\.php)(/.*)$;
        if (!-f $document_root$fastcgi_script_name) {
            return 404;
        }
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        include fastcgi_params;
    }
}
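Before reloading, it’s worth letting nginx validate the new configuration:
sudo nginx -t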
Setting up HTTPS is left as an exercise for the reader, but it is highly recommended, especially if you are allowing access outside of your LAN without a VPN.
Reload your configuration with sudo systemctl reload nginx.service php8.2-fpm.service (you may need to systemctl start nginx.service if it’s not already running), and you should now be able to install poweradmin at https://poweradmin.example.net/install/.
Simply follow the instructions there, installing into the same database with the
specifically created poweradmin
database user, and then once you are done,
delete the installer:
rm -r /srv/poweradmin/install
You should now be able to go to https://poweradmin.example.net, log in, and create your zones.
Note that if you plan to use DNSSEC, you should edit
/srv/poweradmin/inc/config.inc.php
and append the following snippet to allow
poweradmin to invoke the PowerDNS API to rectify the zone:
$pdnssec_use = true;
$pdns_api_url = 'http://localhost:8081';
$pdns_api_key = 'hunter2'; // change to whatever you set api-key to in PowerDNS
Once you have created your zone, you should be able to use dig and hit your local PowerDNS instance. For example, if you created an A record for example.com, you should be able to see it with dig A example.com @localhost on the master node.
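If you just want the record data without the rest of dig’s output, the +short flag helps:
dig +short A example.com @localhost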
Setting up replication
Now for the more interesting part: setting up MariaDB replication. We’ll assume all servers are on the VPN 192.0.2.0/24, with the master node at 192.0.2.1 and an example slave node at 192.0.2.10. Setting up this VPN is out of scope for this post and will be left as an exercise for the reader.
Additionally, we assume they have public IPv4 addresses 198.51.100.1 and 198.51.100.10, respectively, and IPv6 addresses 2001:db8::1 and 2001:db8::10, respectively. Note that by default, MariaDB replication is unencrypted, so you should run it on a LAN or VPN. There are ways to encrypt the traffic, and setting those up when replicating over the public Internet is left as an exercise for the reader.
First, install MariaDB on every slave node.
On every node, create /etc/mysql/mariadb.conf.d/70-powerdns.cnf
(we avoid
modifying existing configuration files to avoid conflicts when upgrading):
[mariadbd]
skip_name_resolve = 1
performance_schema = 1
read_only = 1
Then, on the master node, create /etc/mysql/mariadb.conf.d/72-master.cnf (the file name needs to end in .cnf for MariaDB to pick it up):
[mariadbd]
server_id = 1
bind_address = 127.0.0.1,192.0.2.1
log_bin = powerdns
binlog_format = mixed
read-only = 0
On each slave node, create /etc/mysql/mariadb.conf.d/72-slave.cnf:
[mariadbd]
server_id = 10
Remember to set the server_id
to something different on each node! Also, since
we are running PowerDNS on the same node as MariaDB, we don’t need to adjust
bind_address
on the slaves. The default localhost
binds are sufficient.
We also need to create a replication user on the master node. Run sudo mariadb
there and run the following queries:
CREATE USER 'replication'@'%' IDENTIFIED BY 'hunter2';
GRANT REPLICATION SLAVE ON *.* TO 'replication'@'%';
Feel free to change the username and password, along with adding any host-based restrictions. Note that if you have a firewall, you need to allow TCP port 3306 on the master from any IPs that the slaves might use to connect.
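For example, if you happen to use ufw on the master, a rule scoped to the VPN subnet could look like this (adapt it to whatever firewall you actually run):
ufw allow proto tcp from 192.0.2.0/24 to any port 3306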
We then use mariadb-backup
to copy the database over to the slaves. The
easiest way to use mariadb-backup
here is to export as xbstream
and pipe it
over SSH. For example:
ssh root@slave mkdir /root/snapshot
ssh root@master 'mariadb-backup --backup --stream=xbstream | zstd' | ssh root@slave 'unzstd | mbstream -x -C /root/snapshot'
Then, as root
on the slave, you can run the following command to load the
snapshot into MariaDB:
systemctl stop mysql.service
rm -rf /var/lib/mysql
mariadb-backup --prepare --target-dir=/root/snapshot
mariadb-backup --move-back --target-dir=/root/snapshot
chown -R mysql:mysql /var/lib/mysql/
systemctl start mysql.service
We now need to tell MariaDB to replicate from the master node. First, we read /root/snapshot/mariadb_backup_binlog_info, which records exactly where in the master’s binlog the backup was taken. It should look something like:
powerdns.000000 1234 0-1-5678
We are interested in the third whitespace-separated value, e.g. 0-1-5678, which is the GTID (Global Transaction ID). GTID-based replication is the newer and recommended way of setting up replication at the time of writing.
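If you want to grab that value programmatically, for example when scripting the restore, something like this works:
awk '{print $3}' /root/snapshot/mariadb_backup_binlog_info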
We then run sudo mariadb and execute the following queries:
STOP SLAVE;
SET GLOBAL gtid_slave_pos = "[the GTID from mariadb_backup_binlog_info]";
CHANGE MASTER TO
MASTER_HOST="192.0.2.1",
MASTER_PORT=3306,
MASTER_USER="replication",
MASTER_PASSWORD="hunter2",
MASTER_USE_GTID=slave_pos;
START SLAVE;
Wait a bit, then run SHOW SLAVE STATUS\G. If all goes well, you should see lines like:
Slave_IO_State: Waiting for master to send event
Slave_IO_Running: Yes
Slave_SQL_Running: Yes
This means it’s working. You should probably monitor Slave_IO_Running and Slave_SQL_Running and generate a notification if either value ever becomes No, for example with mysqld_exporter and Prometheus, but doing so is left as an exercise for the reader.
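That said, a minimal cron-able sketch of such a check could look like the following, assuming it runs as root with unix_socket authentication; how you turn the non-zero exit into an alert is up to you:
#!/bin/sh
# Exit non-zero if either replication thread is not running.
running=$(mariadb -e 'SHOW SLAVE STATUS\G' | grep -c 'Running: Yes')
if [ "$running" -ne 2 ]; then
    echo "MariaDB replication is unhealthy on $(hostname)" >&2
    exit 1
fi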
You can follow the same procedure as above to set up PowerDNS on the slave node, but only create /etc/powerdns/pdns.d/common.conf and not /etc/powerdns/pdns.d/master.conf. Once you make a change to your zones on poweradmin, you should be able to see it by running dig against 198.51.100.10.
Repeat the same procedure for any other slave node you have.
Regular unicast DNS
At this point, you should have two instances of PowerDNS. That’s enough for a traditional unicast DNS setup. You could just use them to serve your zone, say example.com. There are two ways of doing this.
If you own a domain example.net
and have the DNS hosted elsewhere, you can
create the following records:
ns1.example.net. 86400 IN A 198.51.100.1
ns1.example.net. 86400 IN AAAA 2001:db8::1
ns2.example.net. 86400 IN A 198.51.100.10
ns2.example.net. 86400 IN AAAA 2001:db8::10
(This is in bind-style zone file format. The fields are as follows: domain name, TTL, class7, record type, value.)
Then you can set the DNS servers as ns1.example.net and ns2.example.net.
Alternatively, if your registrar supports glue records, you can make the DNS
resolution faster by directly storing the nameserver IPs for example.com
and
eliminating the dependency on example.net
. To do this, first create the
following records in poweradmin for example.com
:
ns1.example.com. 86400 IN A 198.51.100.1
ns1.example.com. 86400 IN AAAA 2001:db8::1
ns2.example.com. 86400 IN A 198.51.100.10
ns2.example.com. 86400 IN AAAA 2001:db8::10
Then, at the registrar, create glue records for ns1.example.com and ns2.example.com, specifying the same IP addresses for each nameserver, and set them as the DNS servers for example.com.
Deploying anycast
Let’s assume you are using 203.0.113.0/24 and 2001:db8:1000::/48 for anycast. You can add the IPs 203.0.113.1, 203.0.113.2, 2001:db8:1000::1, and 2001:db8:1000::2 on every DNS server you want participating in anycast. In my case, I did this on all the slave nodes, since I am reserving the master node for writes.
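One way to do this is to put the anycast addresses on the loopback interface, shown here non-persistently; use your distro’s network configuration to make them survive reboots:
ip addr add 203.0.113.1/32 dev lo
ip addr add 203.0.113.2/32 dev lo
ip addr add 2001:db8:1000::1/128 dev lo
ip addr add 2001:db8:1000::2/128 dev lo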
Then, announce the prefixes 203.0.113.0/24
and 2001:db8:1000::/48
from every
server to your upstream, following instructions in the previous post to
tune your announcements to avoid pathological routing. Since this strongly
depends on your upstreams and the type of equipment you have, the details are
left as an exercise for the reader.
You can then follow a procedure very similar to the unicast one, except using 203.0.113.1, 203.0.113.2, 2001:db8:1000::1, and 2001:db8:1000::2 instead.
Once this is done, DNS requests for example.com
should go to the “closest”
server as determined by BGP.
Using Lua records to select backends
Once you have the PowerDNS master node, you can use Lua records to perform uptime checks and select nodes via geolocation.
For example, say you have two backend instances serving example.com:
- node 1 has addresses 198.51.100.101 and 2001:db8:2000::1; and
- node 2 has addresses 198.51.100.102 and 2001:db8:2000::2.
You can check if the backend returns 200 when https://example.com
is
requested, and always prefer node 1 if it’s up, by creating the following Lua
records:
example.com. 300 IN LUA A "ifurlup('https://example.com', { {'198.51.100.101'}, {'198.51.100.102'} })"
example.com. 300 IN LUA AAAA "ifurlup('https://example.com', { {'2001:db8:2000::1'}, {'2001:db8:2000::2'} })"
Note that in poweradmin, select LUA
as the record type and enter A
"ifurlup(...)"
as the content of the record.
You can also tell PowerDNS to instead choose the nearest node, and also check that the response contains the string Example:
example.com. 300 IN LUA A "ifurlup('https://example.com', { {'198.51.100.101', '198.51.100.102'} }, {stringmatch='Example', selector='pickclosest'})"
example.com. 300 IN LUA AAAA "ifurlup('https://example.com', { {'2001:db8:2000::1', '2001:db8:2000::2'} }, {stringmatch='Example', selector='pickclosest'})"
pickclosest
works by looking up the IP geolocation for each backend server IP
in the list and also the user’s IP, then checking which backend server is the
closest to the user geographically. When doing this, make sure the IP
geolocation for your backend servers is correct!
You can also override the result for certain countries by using more complex Lua scripts. For example, to send all users from China to node 1, you can use the following Lua script:
;if country('CN') then return '198.51.100.101' else return ifurlup('https://example.com', { {'198.51.100.101', '198.51.100.102'} }, {stringmatch='Example', selector='pickclosest'}) end
Note that if the Lua code starts with ;, PowerDNS will treat it as a full script. Otherwise, it would treat it as a simple Lua expression. In this case, we use a script. Like before, put the script content inside A "..." and AAAA "..." records, and place the resulting string into the “content” field in poweradmin.
Also note that on the first request, PowerDNS will not have the availability
information and will randomly return one of the backends. This is somewhat
annoying but unavoidable due to the dynamic nature of Lua records. A hack is
using a cron job to query example.com
for A
and AAAA
every hour to ensure
PowerDNS is always performing health checks.
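A sketch of such a cron job, for example as a file under /etc/cron.d/ (the schedule and file name are just an example), assuming dig is installed:
# Query the Lua records hourly so PowerDNS keeps refreshing its health checks.
0 * * * * root dig +short A example.com @127.0.0.1 >/dev/null
0 * * * * root dig +short AAAA example.com @127.0.0.1 >/dev/null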
For more details, consult the PowerDNS documentation.
Conclusion
Since this post has gone on for long enough, I’ll end it here. At this point, we’ve successfully constructed a highly available PowerDNS cluster that uses availability checks and geolocation to steer users towards nearby, healthy instances of our services. We’ve demonstrated how to do this with a simple service that has multiple backends, an approach you can use to make any static website highly available.
The current setup has one flaw: there is a single master node, which is the single point of failure for writes. When it goes down, it becomes impossible to make changes to the DNS, even though the slave nodes in the anycast will continue to serve the zone, including performing availability checks.
I hope you found this post useful. Next time, we’ll look into how to eliminate the dependency on the single master node, enabling highly available DNS modifications, and also demonstrate how to make dynamic web applications highly available.
Notes
- The thing that always shocks non-networking people about IP geolocation is how much it relies on people submitting corrections, and also how much it relies on random CSV files maintained by networks. I might write a post about how it all works one day. ↩
- Networks are typically incentivized to improve connectivity, which should improve latencies to any given IP. On the other hand, improving connectivity by adding a new upstream has the potential to make downstream anycast worse due to the new upstream always preferring customer routes, even if it originated on the other side of the world and there are other routes that came from closer locations. ↩
- I really like it when software is in the Debian repository, because then security patches are the Debian security team’s problem, not mine. I also trust them to patch stuff on time, unlike random third-party repositories. There are only a few vendors whose Debian repositories I trust, such as mariadb and nginx. The whole point of this exercise is to reduce my stress levels, and unattended-upgrades really helps. ↩
- Yes, I know many people don’t like DNSSEC and the standard has many problems, but I still feel like having DNSSEC is better than letting anyone inject fake records. I refuse to build my infrastructure on something that locks me out of DNSSEC. ↩
- Yes, DNS isn’t some super heavy application, especially given the size of my zones and the amount of queries I am getting. I’d rather have something simple. If my blog somehow becomes super popular, I am sure I can just upgrade the smallest nodes. ↩
- For various reasons, people have been trying to replace the terms used for the different node types in replication. For simplicity and to reduce confusion, we are going to stick with the traditional terminology since they are the only ones that work in every context with MariaDB.
  The newer terms are inconsistently applied at the time of writing, with the MariaDB documentation freely mixing different terminology, making it hard to understand and harder to know which commands to use. Even worse, the new commands they introduced are also internally inconsistent, leading to confusing situations like the following interaction, which doesn’t happen with the old commands:
  MariaDB [(none)]> SHOW REPLICA STATUS\G
  *************************** 1. row ***************************
  Slave_IO_State: Waiting for master to send event
  ...
  Needless to say, this should not be construed as any endorsement of the practice of enslaving humans. ↩
- DNS has the concept of classes for different types of networks, though it’s basically always IN for Internet. The only other class you might see these days is CH for Chaosnet, but it’s really just being used as a way to query information about the DNS server itself, not anything to do with the real Chaosnet. ↩