On February 20, 2026, Cloudflare, a leading content delivery network and Internet security provider, experienced a significant global service outage lasting over six hours. This disruption rendered numerous websites and applications inaccessible, affecting a substantial portion of the Internet.
Incident Overview
The outage commenced at 17:48 UTC and persisted for six hours and seven minutes. It was triggered by an internal configuration change related to Cloudflare’s Bring Your Own IP (BYOIP) service. This change inadvertently withdrew customer Border Gateway Protocol (BGP) routes from the Internet, making affected services unreachable. Approximately 25% of all BYOIP prefixes globally were impacted, leading to widespread HTTP 403 errors, including on Cloudflare’s 1.1.1.1 public recursive DNS resolver. ([blog.cloudflare.com](https://blog.cloudflare.com/cloudflare-outage-february-20-2026/?utm_source=openai))
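To see what such a withdrawal looks like from the outside, an operator can check whether the global routing table still carries a route for a prefix. Below is a minimal Python sketch using RIPE's public RIPEstat routing-status endpoint; the prefix is a placeholder, and the exact response field names are an assumption based on RIPEstat's published documentation rather than anything from Cloudflare's report.

```python
import requests

# RIPEstat's public routing-status endpoint reports how many BGP
# collectors currently see a route for a given prefix.
RIPESTAT_URL = "https://stat.ripe.net/data/routing-status/data.json"

def prefix_is_visible(prefix: str) -> bool:
    """Return True if any RIS peer still sees a route for the prefix."""
    resp = requests.get(RIPESTAT_URL, params={"resource": prefix}, timeout=10)
    resp.raise_for_status()
    data = resp.json()["data"]
    # Zero peers seeing the route means the prefix has effectively been
    # withdrawn from the Internet (field names per RIPEstat docs).
    return data["visibility"]["v4"]["ris_peers_seeing"] > 0

if __name__ == "__main__":
    prefix = "203.0.113.0/24"  # placeholder: an IPv4 documentation range
    state = "visible" if prefix_is_visible(prefix) else "withdrawn"
    print(f"{prefix} is currently {state} in the global routing table")
```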
Technical Breakdown
The root cause was traced to a bug in Cloudflare’s Addressing API, introduced with the deployment of an automated cleanup task. The task was meant to replace manual removal of BYOIP prefixes as part of the company’s Code Orange: Fail Small resilience initiative. Because the task issued an API query in which the `pending_delete` flag was left unassigned, the server treated the query as unfiltered and queued every returned BYOIP prefix for deletion rather than only those slated for removal. Approximately 1,100 BYOIP prefixes and their dependent service bindings were deleted before the process was manually terminated. ([blog.cloudflare.com](https://blog.cloudflare.com/cloudflare-outage-february-20-2026/?utm_source=openai))
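Cloudflare has not published the offending code, but the failure mode is a classic one: an optional filter that, when left unset, silently widens a query to match everything. The following hypothetical Python sketch (all function and table names invented) illustrates the pattern, alongside a stricter variant that would have caught the mistake at the call site:

```python
from typing import Optional

def find_prefixes(pending_delete: Optional[bool] = None) -> str:
    """Build a (simplified) query for BYOIP prefixes.

    BUG: when pending_delete is None (unset), no filter is applied at
    all, so the query returns EVERY prefix -- not just those already
    flagged for removal.
    """
    query = "SELECT id FROM byoip_prefixes"
    if pending_delete is not None:
        query += f" WHERE pending_delete = {str(pending_delete).upper()}"
    return query

def find_prefixes_fixed(pending_delete: bool) -> str:
    """Safer variant: the flag is required, so an unset value fails
    loudly at the call site instead of becoming an accidental match-all."""
    return ("SELECT id FROM byoip_prefixes"
            f" WHERE pending_delete = {str(pending_delete).upper()}")

# The cleanup task intended find_prefixes(pending_delete=True), but an
# unassigned flag fell through as None and queued everything:
print(find_prefixes())      # SELECT id FROM byoip_prefixes  <- all rows
print(find_prefixes(True))  # ... WHERE pending_delete = TRUE
```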
Impact on Services
The outage affected multiple core products relying on BYOIP configurations:
– Core CDN and Security Services: Traffic failed to route to Cloudflare, causing connection timeouts for websites served from the affected prefixes.
– Spectrum: Applications running on BYOIP addresses failed to proxy traffic entirely.
– Dedicated Egress: Customers using BYOIP or Dedicated IPs could not send outbound traffic to their destinations.
– Magic Transit: End users connecting to protected applications experienced complete connection failures and timeouts.
Recovery Efforts
Recovery was slowed by the uneven impact across customer prefixes, which required intensive data-restoration work before routes could be re-advertised. Some customers could self-remediate by toggling their advertisements back on via the Cloudflare dashboard, but approximately 300 prefixes required manual restoration by engineers, who pushed global configuration updates to reapply settings across every machine on the edge network. ([blog.cloudflare.com](https://blog.cloudflare.com/cloudflare-outage-february-20-2026/?utm_source=openai))
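For customers who could self-remediate, re-advertising a prefix amounts to a single call against Cloudflare’s Addressing API. The sketch below assumes the documented BGP advertisement-status endpoint; the account ID, prefix ID, and token are placeholders read from the environment:

```python
import os
import requests

# Placeholders: substitute your own account ID, prefix ID, and API token.
ACCOUNT_ID = os.environ["CF_ACCOUNT_ID"]
PREFIX_ID = os.environ["CF_PREFIX_ID"]
API_TOKEN = os.environ["CF_API_TOKEN"]

url = (f"https://api.cloudflare.com/client/v4/accounts/{ACCOUNT_ID}"
       f"/addressing/prefixes/{PREFIX_ID}/bgp/status")

resp = requests.patch(
    url,
    headers={"Authorization": f"Bearer {API_TOKEN}"},
    json={"advertised": True},  # toggle the BGP advertisement back on
    timeout=10,
)
resp.raise_for_status()
print(resp.json())  # the response echoes the new advertisement state
```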
Planned Remediation
To prevent future incidents, Cloudflare is implementing several critical architecture changes under its Code Orange mandate:
– API Schema Standardization: Tightening the API schema so that an unset flag can no longer be interpreted as a match-all filter.
– Circuit Breakers: Adding mechanisms that detect, and automatically halt, abnormally rapid BGP prefix deletions (a minimal sketch follows this list).
– Operational State Snapshots: Establishing health-mediated snapshots that decouple customer configuration state from production rollouts.
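Cloudflare has not detailed its circuit-breaker design, so the following is a hypothetical Python sketch: a sliding-window counter that halts a deletion job once removals arrive faster than any legitimate cleanup plausibly would (thresholds invented for illustration):

```python
import time
from collections import deque

class DeletionCircuitBreaker:
    """Trips when more than max_deletions occur within window_seconds.

    Hypothetical sketch: real thresholds would be tuned to how fast
    legitimate BYOIP cleanup jobs actually run.
    """

    def __init__(self, max_deletions: int = 10, window_seconds: float = 60.0):
        self.max_deletions = max_deletions
        self.window_seconds = window_seconds
        self.timestamps: deque[float] = deque()
        self.tripped = False

    def record_deletion(self) -> None:
        if self.tripped:
            raise RuntimeError("circuit breaker open: deletions halted")
        now = time.monotonic()
        self.timestamps.append(now)
        # Drop deletions that fell outside the sliding window.
        while self.timestamps and now - self.timestamps[0] > self.window_seconds:
            self.timestamps.popleft()
        if len(self.timestamps) > self.max_deletions:
            self.tripped = True
            raise RuntimeError(
                f"{len(self.timestamps)} deletions in "
                f"{self.window_seconds}s exceeds limit; halting job"
            )

# Usage: the cleanup job calls record_deletion() before each removal,
# so a runaway match-all query trips the breaker within seconds.
breaker = DeletionCircuitBreaker(max_deletions=10, window_seconds=60.0)
for prefix_id in range(12):  # simulate a runaway deletion loop
    try:
        breaker.record_deletion()
        print(f"deleted prefix {prefix_id}")
    except RuntimeError as err:
        print(f"stopped: {err}")
        break
```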
These measures aim to enhance the resilience and stability of Cloudflare’s services, ensuring a more robust infrastructure for its global user base.