For years, my personal server infrastructure has been a testament to a core belief: the future is IPv6. It’s the cleaner, more expansive, and more logical evolution of the internet. In that spirit, I built my digital kingdom as an IPv6-native environment. It was elegant, efficient, and, in my mind, perfectly complete.

Then, the real world came knocking. A friend messaged me, "Hey, I can't seem to load your website."

After a quick back-and-forth, the culprit was clear. My friend's home network, like a surprising portion of the internet, didn't speak IPv6. My server, a purist, refused them by design. I had to go back to the drawing board and figure out how to handle this unexpected new requirement.

The Tollbooth

The obvious solution was to add an IPv4 address to my server. For years, this was effectively free on AWS if the IP was attached to a running instance. But a recent policy change turned this on its head: every public IPv4 address now comes with a small but persistent hourly fee -- $0.005 per hour, or roughly $3.65 a month. Seriously? It's not much, but for my lean setup, it would nevertheless be an unwelcome toll. If you read my blog at all, you know I'm drawn to challenges and driven by principle, and this hits both.

This irked me on two levels. First, there's the principle. Paying for a legacy protocol I was trying to move beyond felt like paying a tax on the past. Surely I was fighting the good fight? Second, and more importantly, this wasn't just about my friend; it was about universal accessibility. If he couldn't connect, neither could I when traveling, nor could anyone else on a similar legacy network.

The goal became clear: find a way to offer an IPv4 on-ramp to my IPv6-only world without paying the toll. Was it even possible?

But How?

My first thought was a dynamic DNS script to update a record whenever my server's non-static IP changed. But a quick check of the fine print revealed the flaw: the new charge applies to any public IPv4 address, not just the static ones. This path was a dead end.
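For the record, that abandoned approach would have been a small script polling the instance's current public IPv4 and upserting a DNS record whenever it changed. Here is a minimal sketch of that idea; the hosted zone ID and record name are placeholders, not my real values, and the live-update function is shown but never called since it needs AWS credentials and EC2 metadata access.

```python
# Sketch of the rejected dynamic-DNS idea: detect the instance's current
# public IPv4 and UPSERT an A record in Route 53 when it changes.
# Zone ID and record name below are hypothetical placeholders.

def build_change_batch(record_name: str, ip: str, ttl: int = 60) -> dict:
    """Build the Route 53 change payload that upserts a single A record."""
    return {
        "Changes": [{
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": record_name,
                "Type": "A",
                "TTL": ttl,
                "ResourceRecords": [{"Value": ip}],
            },
        }]
    }


def update_record(zone_id: str, record_name: str) -> None:
    """Fetch the instance's public IPv4 from EC2 metadata and push it to
    Route 53. Not invoked here: requires boto3, credentials, and an
    instance with IMDSv1-style metadata access."""
    import urllib.request
    import boto3
    ip = urllib.request.urlopen(
        "http://169.254.169.254/latest/meta-data/public-ipv4", timeout=2
    ).read().decode()
    boto3.client("route53").change_resource_record_sets(
        HostedZoneId=zone_id,  # e.g. "Z0HYPOTHETICAL" -- placeholder
        ChangeBatch=build_change_batch(record_name, ip),
    )
```

Workable in principle -- but, as noted, pointless once every public IPv4 address is billed regardless.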

I was stuck. How could I get an IPv4 address without actually having an IPv4 address on my server? The problem, reframed, was that I needed a doorman -- a service that could greet IPv4 visitors and translate for my IPv6-only host.

Doorman... Doorman?

And then, a revelation. I already had a doorman in my toolbelt; I just hadn't recognized it yet: a Content Delivery Network (CDN).

While the primary purpose of a CDN like AWS CloudFront is to cache content closer to users, its most important feature for my purposes is a side effect of its design: CloudFront edge locations are dual-stack (once IPv6 is enabled on the distribution). They speak both IPv4 and IPv6 fluently.

This was the key. I could configure CloudFront not as a cache, but as a clever protocol gateway. The flow would be:

  1. An IPv4-only user tries to visit my site.
  2. DNS points them to the nearest CloudFront edge location's IPv4 address.
  3. CloudFront receives the request and, acting as a translator, forwards it to my origin server using its native IPv6 address.
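In CloudFront terms, the flow above boils down to a distribution whose origin is reached by domain name (which can resolve to the server's IPv6 address) and whose caching is effectively switched off. A heavily trimmed sketch of such a distribution config might look like this -- the origin domain and alias are placeholders, a real DistributionConfig needs more required fields (CallerReference, certificate settings, and so on), and the CachePolicyId shown is my understanding of AWS's managed "CachingDisabled" policy:

```json
{
  "Comment": "IPv4-to-IPv6 gateway, not a cache",
  "Enabled": true,
  "IsIPV6Enabled": true,
  "Aliases": { "Quantity": 1, "Items": ["scottliu.com"] },
  "Origins": {
    "Quantity": 1,
    "Items": [{
      "Id": "ipv6-origin",
      "DomainName": "origin.scottliu.com",
      "CustomOriginConfig": {
        "HTTPPort": 80,
        "HTTPSPort": 443,
        "OriginProtocolPolicy": "https-only"
      }
    }]
  },
  "DefaultCacheBehavior": {
    "TargetOriginId": "ipv6-origin",
    "ViewerProtocolPolicy": "redirect-to-https",
    "CachePolicyId": "4135ea2d-6df8-44a3-9df3-4b5a84be39ad"
  }
}
```

The one quiet prerequisite: the origin domain must publish an AAAA record, so CloudFront has an IPv6 address to forward to.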

My server could continue living in its pristine IPv6-only world, completely unaware that the original request came from a legacy network. And because CloudFront's free tier is incredibly generous (1TB of data and 10 million requests per month), this entire translation service would be completely free.

There's Always a Plot Twist

So I did it.

The initial setup was a success, but it revealed a new, more subtle problem. I have a number of internal services -- metrics collectors, authentication endpoints -- that call each other constantly. Now, all this high-frequency, internal "chatter" was also being routed out to CloudFront and back again.

I ran some back-of-the-envelope calculations: my own automated services could easily burn through the 10 million free CDN requests each month. It's like I had built a public lobby for a hotel but had forgotten to build a private staff entrance.
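To give the estimate some shape, here's the kind of arithmetic I mean. The service counts and polling intervals below are illustrative stand-ins, not my exact setup:

```python
# Back-of-the-envelope: how quickly internal chatter eats into a
# 10-million-request monthly free tier. All figures below are
# illustrative, not my actual workload.

SECONDS_PER_MONTH = 30 * 24 * 3600  # 2,592,000

workloads = {
    # name: (requests per interval, interval in seconds)
    "metrics scrapes (20 targets)": (20, 15),
    "health checks (10 services)": (10, 10),
    "auth token refreshes": (5, 30),
}

total = 0
for name, (per_interval, interval) in workloads.items():
    monthly = per_interval * SECONDS_PER_MONTH // interval
    total += monthly
    print(f"{name}: {monthly:,} requests/month")

print(f"total: {total:,} of 10,000,000 free requests")
# With these made-up numbers, total comes to 6,480,000 -- roughly
# two-thirds of the free tier gone before a single human visits.
```

And that's with modest polling intervals; tighten them even a little and the machines crowd out the humans entirely.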

The solution was to split my DNS into two distinct universes.

  • *.scottliu.com (The Public Lobby): This wildcard record points to CloudFront. It's the front door for all public, user-facing traffic.
  • *.internal.scottliu.com (The Staff Entrance): This new, more specific wildcard points directly to my server's IPv6 address. This re-mapping creates a direct, internal-only path that bypasses the CDN, while authentication and other core functions remain unchanged.
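In zone-file terms, the split looks roughly like this. The CloudFront target and the IPv6 address are placeholders (Route 53 would actually use an alias record rather than a wildcard CNAME for the CloudFront side, but the idea is the same):

```
; Public lobby: wildcard routed through CloudFront (placeholder target)
*.scottliu.com.           300  IN  CNAME  d1234example.cloudfront.net.

; Staff entrance: the more specific wildcard wins for internal names,
; pointing straight at the server's IPv6 address (placeholder address)
*.internal.scottliu.com.  300  IN  AAAA   2001:db8::1
```

Because DNS prefers the more specific match, anything under *.internal resolves directly to the server, while everything else still funnels through the CDN.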

By reconfiguring my internal service chatter to use the *.internal domain, I created a secure, direct bypass route. This high-volume traffic now never touches the CDN, leaving the entire free tier available for actual human visitors. It's a clean separation that makes the architecture more robust and more efficient while preserving the cost-effectiveness of the solution.

Takeaways

So, what did we learn?

The biggest lesson here is that the best solution isn't always a direct one. By reframing the problem from "How do I get an IPv4 address?" to "How do I translate IPv4 requests?", the answer was hiding in a tool I already used. My server now enjoys the best of both worlds: it remains a modern, IPv6-native citizen, while a friendly CDN acts as its perpetual, free-of-charge translator for the user-facing part of the internet.