The IPv8 Proposal and a bit of an opinion on it

I read the IPv8 draft, and there are a lot of thoughts coming out.

The short version:
I like that there is a proposal treating the routing table, addressing, management, authentication and operational complexity as one bigger problem. That framing might be good, but the proposal has already accumulated a lot of complexity, and solving everything in one go is not how the Internet evolves.

I do not think this draft is anywhere near complete enough to describe a useable and production capable Internet protocol. It reads less like a protocol specification and more like a large architectural wish list. Some of the ideas are interesting, but many of the operational details that actually matter to network operators like me are missing.

The draft says that IPv8 is a managed network protocol suite, where devices are authorized via OAuth2 JWT tokens, services are delivered through DHCP8, packets are validated against DNS8 and WHOIS8, and routing is structurally bounded to one global route per ASN. It also says every ASN holder receives 4,294,967,296 host addresses (source).

That sounds simple and structured on paper. In production networks, a lot of it will not work, or simply will not happen.

The 900k routes figure feels outdated

One of the motivations in the draft is routing table growth. It mentions that BGP exceeded 900,000 IPv4 prefixes in 2024 (source).

That number suggests the draft was written a while ago and only published now. Today we are already at or above one million IPv4 routes, depending on the vantage point. APNIC shows that advertised IPv4 prefixes grew from around 300,000 in 2011 to around 1.2 million in early 2026 (source). Another measurement already passed 1,002,006 routes in 2025 (source).
This does not negate the point. But it does show that the draft is not up to date with current numbers. If a new Internet protocol is proposed in 2026, it feels odd if the underlying data is maybe 4 years old. What else that we have developed since then did not make it in?

One route per ASN is not how networks work

The draft says that the global routing table is structurally bounded at one entry per ASN.

This is probably the biggest irritation for me. A prefix is not only a block of addresses. Prefixes are also an operational tool.

Network operators use more-specific prefixes for traffic engineering, DDoS mitigation, regional routing, anycast, passive backup paths and maintenance windows, capacity balancing, selective announcements, customer separation and commercial optimization.

If every ASN only gets one globally visible route, how do I still have that?

Assume an operator is present in Zurich, Frankfurt, Helsinki and Amsterdam. These are not the same market. They do not have the same upstreams, IXPs, customers, latency, capacity or commercial agreements. Sometimes I want traffic for one service to land in Frankfurt. Sometimes I want Finnish eyeball traffic to stay in Finland. Sometimes I want to isolate or drain a POP for maintenance. Sometimes I need to announce a more specific to a DDoS scrubbing provider. Someone might want anycast, but not everywhere.
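The mechanism behind most of these tricks is plain longest-prefix match: announce a more specific somewhere, and traffic for that slice follows it regardless of where the covering prefix points. A minimal sketch (prefixes and next-hop labels are made up for illustration):

```python
import ipaddress

# Toy longest-prefix-match table: the covering /16 is announced everywhere,
# the /24 more specific only towards Frankfurt. Values are illustrative.
routes = {
    ipaddress.ip_network("192.0.2.0/24"): "via Frankfurt (more specific)",
    ipaddress.ip_network("192.0.0.0/16"): "via Zurich (covering prefix)",
}

def lookup(dst):
    """Return the next hop for dst using longest-prefix match."""
    addr = ipaddress.ip_address(dst)
    matches = [net for net in routes if addr in net]
    best = max(matches, key=lambda net: net.prefixlen)  # longest prefix wins
    return routes[best]

print(lookup("192.0.2.10"))   # covered by the /24 -> Frankfurt
print(lookup("192.0.50.10"))  # only the /16 covers it -> Zurich
```

With one global route per ASN, this steering knob simply does not exist anymore: there is nothing more specific to announce.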

With one global ASN route, all of this becomes impossible. The draft introduces CF, a composite forwarding metric, to select paths. But a single metric is not a replacement for routing policy. Routing policy is not only “best path”. It is also intent, traffic engineering and careful tuning.

Does this mean one ASN per location?

If the answer is “use one ASN per location”, then we did not simplify the Internet.

It would just move the bloat elsewhere. Large operators would start using multiple ASNs to keep their routing policies working. Enterprises with distinct sites would need more ASNs. Anycast networks would need to be redesigned. RIR policy and fees would need to change yet again. IRR, PeeringDB, ASPA, peerings, AS-PATH routing decisions – all of it would need to change.

Route validation is not path validation

The draft talks about validating packets against WHOIS8 registered active routes.

But I didn’t read about a path validation model. Unless I missed it?

Validating that an ASN may originate a prefix is not the same as validating that the AS path is legitimate. We already have, and keep improving, exactly this with RPKI: ROAs help with origin validation, but they do not solve AS hijacks or forged-path problems.

ASPA exists exactly to address this. It is being standardised in IETF SIDROPS and describes AS_PATH verification using ASPA objects in RPKI (source). RIPE NCC also describes ASPA as a new RPKI object that lets an AS holder list its legitimate upstream providers (source).
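The core idea of ASPA is easy to show in miniature: each AS publishes the set of its legitimate providers, and a verifier walks the AS path checking each customer-to-provider hop. This is a heavily simplified sketch of that idea, not the actual draft algorithm (which distinguishes upstream and downstream path segments); the ASNs and provider sets are invented:

```python
# AS -> set of authorized providers, as would come from signed ASPA objects.
# All ASNs here are private-range examples, not real registrations.
aspa = {
    64512: {64520},
    64520: {64530},
}

def pair_ok(customer, candidate_provider):
    """True/False if an ASPA object exists for customer, None if unknown."""
    providers = aspa.get(customer)
    if providers is None:
        return None
    return candidate_provider in providers

def upstream_path_valid(as_path):
    """as_path is origin-first; returns 'valid', 'invalid' or 'unknown'."""
    state = "valid"
    for customer, upstream in zip(as_path, as_path[1:]):
        ok = pair_ok(customer, upstream)
        if ok is False:
            return "invalid"   # a hop no ASPA object authorizes -> leak/forgery
        if ok is None:
            state = "unknown"  # no attestation published for this customer
    return state

print(upstream_path_valid([64512, 64520, 64530]))  # matches ASPA data
print(upstream_path_valid([64512, 64530]))         # 64530 is not 64512's provider
```

Even this toy version catches a forged hop that origin validation alone would happily accept, which is the gap the draft leaves open.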

IPv8 appears to plan its own route validation, but does not address path validation. To me this signals a major gap in understanding of operational problems and real-world attacks and leaks.

If you design a new global routing architecture in 2026, you shouldn’t ignore the RPKI and ASPA work and replace it with a WHOIS-like mechanism without an actual explanation why it is better, how it is secured, how trust anchors work, etc. Right now, the proposal just states that it will be replaced.

About the CF Metric

The CF metric is interesting. The draft describes a metric that may include RTT, packet loss, congestion, stability, capacity, economic preference and distance.

But if you specify a metric like this, I can immediately think of inputs that are missing:

  • load on a link (that is neither congestion nor capacity, but the actual utilization of the link, so that load balancing actually balances instead of only preferring another link once one is full)
  • Multiple paths: if a path fills up and the route gains a new link, should all new flows go via the new link until its capacity is reached? As written, this feels like it would lead to constant flapping between links: each time a link fills up, a new one gets loaded, and the old one (no longer the best path) only drains once all its flows expire
  • The draft mentions Path MTU Discovery for the larger header, but I do not see MTU treated as a routing metric
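There is also a more fundamental problem with any composite metric: collapsing several dimensions into one scalar throws away the information operators route on. A small sketch, with invented components and weights since the draft does not specify the formula:

```python
# Sketch of a CF-like composite metric as a weighted sum.
# The components and weights here are invented for illustration;
# the draft does not define how CF is actually computed.
def cf(rtt_ms, loss_pct, utilization, econ_pref, w=(1.0, 50.0, 20.0, 10.0)):
    return w[0]*rtt_ms + w[1]*loss_pct + w[2]*utilization + w[3]*econ_pref

# A long clean path and a short lossy path collapse to the same score,
# so the "best" path says nothing about *why* it won -- which is exactly
# the policy knob operators need.
path_a = cf(rtt_ms=80, loss_pct=0.0, utilization=0.25, econ_pref=1)  # long, clean
path_b = cf(rtt_ms=20, loss_pct=1.0, utilization=0.75, econ_pref=1)  # short, lossy
print(path_a, path_b)  # both score 95.0
```

Two paths with opposite operational characteristics become indistinguishable, and the only lever left is retuning global weights, which affects every route at once.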

Backward compatibility sounds too optimistic

The draft says IPv4 is a proper subset of IPv8, that no existing device, application or network requires modification, and that there is no flag day.

That is both a problem and a blessing. If there is no flag day and full backwards compatibility, there is almost no incentive to move from IPv4 to IPv8: nothing forces action, so everything will just keep “running”.

My issue is that it tries to remove clunk from the routing table, but adds an insane amount of operational complexity.

Today, more-specific routes are wasteful, but they are also useful. BGP communities are difficult, but they are very helpful. De-aggregation is nasty, but it gives operators full control. RPKI is deployed and understood. ASPA, which addresses a missing safety feature, is still emerging.

I miss a few things in this draft – the operational parts:

  • path validation, not just route validation
  • multi-location traffic engineering
  • anycast
  • DDoS mitigation workflows
  • MTU-aware path selection
  • clear CF overflow and balancing handling
  • interaction with RPKI and ASPA – migration will be a process, so these should be integrated, not ignored
  • the future of our current toolkits
  • how to debug when something goes wrong, won’t work, or is out of order (e.g. the mandatory DHCP8 server)
  • routing policy control (e.g. announcing a prefix only in one political area)

OSI Layer Model

Combining JWT-based authentication, DNS naming, DHCP configuration and routing semantics directly into what is supposed to be a new Layer 3 protocol fundamentally disregards the separation of layers that the OSI model is built on:

Layer 3 is meant to provide best-effort packet delivery, not identity, policy, service discovery and lifecycle management. If the intention is to merge these layers into a single network stack, then this is not just a new IP protocol but effectively a replacement for all of Layers 3-7, which would require re-specifying everything from transport to application, plus the combined operational tooling.

If that is not the idea, then the proposal is crossing layer boundaries without clearly defining where and why.

In both cases, the draft feels incomplete: either the authors underestimate the massive scope of replacing the layered model, or they have not yet made the case for why breaking those boundaries leads to a better and safer system.


I am open to correction on my thoughts, but to me, this whole proposal feels like a thought experiment and not a real-world proposal.
