As mentioned in previous articles, I’ve run a homelab for a few years now. Having just rebuilt most of my personal server infrastructure, I thought it would be worth writing it up.
It should be noted that this network is separate from my home network, which I built and share with my housemate, who has an even more insane personal network than mine.
I’ve stubbornly insisted on running dual-stack for everything I’ve built previously. Partially because I’m terrified of NAT64 breaking and not being able to access anything, and partially because I sometimes have to use IPv4-only services (cough UniFi Controller). However, maintaining two IP stacks is just a lot of effort and pretty much doubles the number of problems I have to deal with, so I cut out the v4… ish. The current rule is that I only allow public addressing: public v4 addresses are fine, as are global unicast v6 addresses.
There is a small issue with using IPv6 though, and that comes from trying to multi-home things. I considered applying for some PI space and an ASN[^1] from RIPE so I can run BGP to multiple providers, but it’s probably a bit overkill[^2] for what I want to do. I’ve sort-of avoided this problem by asking for a v6 allocation from every provider that I am using and running a dynamic routing protocol over a VPN (see below). It means that some things will fall off the internet during an outage, but hey, it’s not like I’m running CNI[^3].
I’ve run a variety of hypervisor / container orchestration solutions over the years and frankly all of them are a bit rubbish. I’ve currently settled on LXD, which is pretty good, albeit absolutely not without hiccups. I have a half-written blog post on getting it set up nicely, so look forward to that if it ever gets further than a draft![^4]
Above is a roughly put-together diagram of my current setup. There are two sites at the moment, and I plan to set up at least one more once I’ve found a suitable provider.
The basic architecture is a number of sites, each with an IPv6 allocation from its provider. The sites are linked using WireGuard tunnels between routers, with OSPF as the dynamic routing protocol. This allows site-to-site communication over backup connections (e.g. the backup 4G connection that we have at home in case the DSL circuit fails), even if the backup circuit is from a different provider or doesn’t support v6.
Each site has a routed IPv6 prefix; I’ve been asking for at least a /56, which is a sensible amount to request and gives me a minimum of 256 /64 prefixes to use for LANs.[^5]
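As a sanity check on that arithmetic, every bit between the delegated prefix length and /64 doubles the number of LANs available:

```python
# Number of /64 LAN prefixes a delegated IPv6 prefix yields
def lan_prefixes(delegated_len: int) -> int:
    return 2 ** (64 - delegated_len)

print(lan_prefixes(56))  # a /56 yields 256 /64s
print(lan_prefixes(48))  # a /48 yields 65536
```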
Each site is linked to at least one other site using a WireGuard site-to-site VPN tunnel. These are point-to-point links only, and I use link-local addressing on them. This means that I can then run OSPF over the top as a routing protocol to ensure that traffic for each site goes the right way.
Example Config for WireGuard
```ini
[Interface]
Address = fe80::1/64
ListenPort = 51820
PrivateKey = <Some Private Key>
Table = off  # Don't add to the routing table as we are using our own routing protocol

[Peer]
Endpoint = <site-a>
PublicKey = <Some Public Key>
AllowedIPs = ::/0
PersistentKeepalive = 25
```
NB: Technically the wg tunnels aren’t needed, and traffic would still get to the right places with the right firewall rules. The difference is that the WireGuard tunnels encrypt any plaintext traffic between sites and also let sites stay online if the primary connection to that site fails.
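For the OSPF side, a sketch of what the routing daemon config might look like, assuming BIRD 2 (the daemon choice and interface names here are my illustrative assumptions, not a statement of what actually runs on these routers):

```
# BIRD 2: OSPFv3 over the WireGuard tunnels
protocol ospf v3 backbone {
    ipv6 {
        import all;
        export all;
    };
    area 0 {
        interface "wg*" {
            type ptp;   # WireGuard links are point-to-point
            cost 10;
        };
        interface "lan0" {
            stub;       # advertise the LAN prefix, but form no adjacencies
        };
    };
}
```

Running OSPFv3 over the link-local addresses on each tunnel means no extra transfer prefixes are needed; the routed site prefixes are what gets advertised.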
Now whilst I’d love to imagine that I can completely strip the legacy Internet away, there are some things that I like to access that are still stuck in the 1990s. The solution here is to use a so-called transition mechanism, the most common of which is NAT64 + DNS64.
DNS64 is pretty easy to configure; I just added a config option to my Unbound resolver configuration.
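For reference, the relevant Unbound options look something like this (assuming the well-known NAT64 prefix 64:ff9b::/96; swap in your own if you translate with a different prefix):

```
server:
    module-config: "dns64 validator iterator"
    dns64-prefix: 64:ff9b::/96
```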
For NAT64, I use Tayga, despite the apparent lack of any NAT64 software that has been updated in the last decade. I also set up static mappings to my HTTP proxies so that v4-only clients can access my websites without me having to put a private v4 address on my proxies.
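A minimal tayga.conf for this kind of setup might look like the following sketch (every address here is an illustrative placeholder, not one of mine):

```
tun-device nat64
ipv4-addr 192.168.255.1
prefix 64:ff9b::/96
dynamic-pool 192.168.255.0/24
# Static mapping: this v4 address always translates to one HTTP proxy's v6 address
map 203.0.113.80 2001:db8::80
```

The `map` directive is what lets a v6-only proxy appear at a stable v4 address.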
The majority of external traffic that comes inbound to my network is HTTPS for a number of services hosted in various places. To direct this traffic to the right place, I’m using a non-terminating TLS proxy. This means that traffic stays encrypted and the proxies don’t hold any TLS certificates. The proxies are also entirely stateless, which is nice.
- v4 and v6 HTTP(S) traffic arrives at the proxy and is forwarded based on the SNI header.
- TLS is not terminated at the proxy, so traffic within the network stays encrypted.
- More proxies can easily be added for redundancy. If a site goes down, only the services hosted exclusively at that site are lost.
- Traffic can be sent over the VPN so that services hosted at other sites remain accessible.
Yes, that mysterious api-sov box does go back into sniproxy. It’s an “Authentication Transit Proxy”; more info in a later blog post / when the code is more stable.[^4]
[^1]: Autonomous System Number (ASN), i.e. becoming an “ISP” and being able to route my traffic over a number of providers as part of the global routing table.
[^2]: As if this isn’t overkill? I might do the ASN thing in the future anyway.
[^3]: Critical National Infrastructure.
[^4]: There are currently at least 5 half-written articles that may eventually get finished. I do not make any guarantees about frequency of posts and your statutory rights are affected.
[^5]: Some providers will give you more than this, which is no issue. The awesome folks at Mythic Beasts routed me a /48 with no questions asked within 12 hours (I justified my use in my original ticket).