Cloudflare Hits 500 Tbps, Capping 16 Years of Global Network Scaling
Photo by Markus Spiske on Unsplash
Sixteen years ago Cloudflare’s backbone could barely handle a few gigabits; today the company’s blog reports it boasts 500 Tbps of provisioned external capacity across 330+ cities—a stark leap in global scaling.
Key Facts
- Key company: Cloudflare
Cloudflare’s latest engineering sprint has turned a once‑tiny “reverse‑proxy‑by‑changing‑two‑nameservers” startup into a global traffic juggernaut, now boasting 500 terabits per second of provisioned external capacity across more than 330 cities, the company’s own blog explains. That figure isn’t a fleeting peak; it’s the sum of every port facing transit providers, private peers, Internet exchanges and Cloudflare Network Interconnect (CNI) links. In practice, daily utilization sits at a fraction of the total, with the surplus acting as a built‑in DDoS budget that lets the network absorb massive attacks without breaking a sweat.
The road to half a petabit per second began in a modest Palo Alto office above a nail salon in 2010, where Cloudflare’s first transit partner was nLayer Communications (now GTT). From that single line of capacity, the firm embarked on a city‑by‑city rollout—Chicago, Ashburn, San Jose, Amsterdam, Tokyo—each new data center demanding a fresh round of colocation contracts, fiber pulls and peering negotiations. The expansion was anything but smooth; the blog recounts “missing hardware, customs strikes, and even dental floss” as occasional roadblocks. Yet the pace accelerated dramatically: in January 2018, Cloudflare opened 31 cities in just 24 days, stretching from Kathmandu to Reykjavík and Chișinău, a blitz that underscored the company’s ambition to blanket the globe with its edge.
Today’s network underpins roughly 20 % of the web. Cloudflare was already protecting over 7 million Internet properties by the time its 127th data center launched in Macau, and it has since grown to the 330‑plus cities in operation today. The shift from pure caching to a full‑stack security layer has been pivotal. Enterprises that once relied on aging MPLS circuits now tap Cloudflare for secure tunnels to private subnets and BGP‑advertised IP space directly from the edge. This evolution has forced the backbone to handle not just user traffic but also the 31.4 Tbps DDoS attack recorded in 2025—originating from the Aisuru‑Kimwolf botnet of compromised Android TVs—that lasted 35 seconds and was mitigated automatically, according to the blog. “No engineer was paged,” the post notes, highlighting how the network’s intelligence now lives on every server.
The secret sauce behind that lightning‑fast mitigation is a tightly integrated packet‑processing pipeline. Incoming packets hit the network interface card (NIC) and instantly enter an eXpress Data Path (XDP) program chain managed by xdpd in driver mode. Early in the chain, the l4drop program evaluates each packet against mitigation rules compiled in eBPF. Those rules are generated by dosd, the denial‑of‑service daemon that runs on every server. Each dosd instance samples traffic, builds a table of the heaviest hitters, and broadcasts it fleet‑wide, ensuring that every edge node knows exactly which traffic to drop. This distributed, real‑time intelligence is what makes operating at a 500 Tbps scale feasible without a human‑in‑the‑loop.
Looking ahead, Cloudflare’s engineers see the 500 Tbps milestone as a platform rather than a finish line. The blog frames it as “moving the intelligence to every server in our network so the network can defend itself,” a philosophy that will guide future expansions of both capacity and autonomous security. As the internet continues to densify—think more IoT devices, higher‑resolution video, and ever‑larger cloud workloads—the need for a backbone that can absorb traffic spikes and fend off terabit‑scale attacks will only grow. Cloudflare’s 16‑year journey from a single transit line to a 500 Tbps, globally distributed mesh suggests it’s ready to meet that demand, one city at a time.
Sources
Reporting based on verified sources and public filings. Sector HQ editorial standards require multi-source attribution.