AWS Power Outage in Middle East Disrupts EC2 and Networking Services
On March 1, 2026, Amazon Web Services (AWS) experienced a significant power outage in its Middle East region (me-central-1), leading to substantial disruptions to Amazon Elastic Compute Cloud (EC2) services and networking APIs. The incident was triggered by an unusual physical event: external objects struck a data center, producing sparks and igniting a fire.
Incident Overview
The outage commenced around 4:30 AM PST, with AWS officially acknowledging connectivity and power issues by 4:51 AM PST. By 6:09 AM PST, AWS confirmed a localized power failure in the mec1-az2 Availability Zone. The fire department mandated a complete shutdown of power to the facility, including backup generators, to manage the situation safely. This power loss took down EC2 instances, Amazon Elastic Block Store (EBS) volumes, and Amazon Relational Database Service (RDS) databases within the affected zone.
Impact on Services
The power outage severely impacted EC2 networking APIs, leading to widespread throttling errors and failures in critical networking functions such as AllocateAddress, AssociateAddress, DescribeRouteTables, and DescribeNetworkInterfaces. Customers reported significant challenges in managing their cloud resources due to these API failures.
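During an event like this, callers typically wrap affected API calls in retries with exponential backoff and jitter rather than hammering a throttled endpoint. The sketch below is a generic retry helper, not AWS-published code; the error-code strings it matches (`RequestLimitExceeded`, `Throttling`) are the codes EC2 commonly returns when throttling, and the wrapped callable could be any boto3 client method.

```python
import random
import time

# Error codes that indicate throttling on EC2 APIs (matched by substring here
# for simplicity; real code would inspect botocore's ClientError response).
THROTTLE_ERRORS = ("RequestLimitExceeded", "Throttling")


def call_with_backoff(fn, *args, max_attempts=5, base_delay=0.5, **kwargs):
    """Retry fn with exponential backoff and full jitter on throttling errors.

    fn is any callable that raises an exception whose string form contains a
    throttling error code -- e.g. a boto3 EC2 client method.
    """
    for attempt in range(max_attempts):
        try:
            return fn(*args, **kwargs)
        except Exception as exc:
            if not any(code in str(exc) for code in THROTTLE_ERRORS):
                raise  # not a throttling error; surface it immediately
            if attempt == max_attempts - 1:
                raise  # retries exhausted
            # Full jitter: sleep a random amount up to the exponential cap.
            time.sleep(random.uniform(0, base_delay * (2 ** attempt)))
```

Backoff with jitter spreads retries out over time, which matters when an entire region's customers are retrying the same throttled APIs at once.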
Mitigation Efforts
AWS engineers initiated traffic weighting strategies to route requests away from the damaged facility, shifting loads to unaffected Availability Zones within the region. Throughout the afternoon, multiple configuration changes were deployed to mitigate the API failures. By 2:28 PM PST, the AllocateAddress API began showing signs of recovery. However, the AssociateAddress API remained problematic, leaving customers unable to reassign Elastic IP addresses from downed resources to active ones in healthy zones.
At 6:01 PM PST, AWS confirmed the successful recovery of the AssociateAddress API requests. A critical update was deployed, allowing customers to forcefully disassociate Elastic IP addresses from resources trapped in the powerless data center. This mitigation enabled organizations to restore connectivity by associating their existing IP addresses with newly launched resources in unaffected Availability Zones.
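The recovery path described above can be sketched as a small failover routine. This is an illustrative sketch, not AWS's published remediation code: it assumes a boto3-style EC2 client is passed in, and all identifiers are placeholders. The `AllowReassociation` flag on `associate_address` is what permits taking over an address that is still nominally attached elsewhere.

```python
def fail_over_elastic_ip(ec2, allocation_id, association_id, new_instance_id):
    """Move an Elastic IP from an unreachable resource to a healthy instance.

    ec2 is assumed to be a boto3-style EC2 client (boto3.client("ec2")).
    allocation_id / association_id identify the Elastic IP and its current
    attachment; new_instance_id is a replacement instance in a healthy AZ.
    """
    # Try to detach the address from the resource in the powerless zone. If
    # this fails because the backing host is unreachable, the forced
    # reassociation below may still succeed.
    try:
        ec2.disassociate_address(AssociationId=association_id)
    except Exception:
        pass  # fall through to the forced takeover
    # Reattach to the healthy instance, allowing a forced takeover of the
    # address even if the old association could not be cleanly removed.
    return ec2.associate_address(
        AllocationId=allocation_id,
        InstanceId=new_instance_id,
        AllowReassociation=True,
    )
```

Because the client is passed in as a parameter, the routine can be exercised against a stub in tests and pointed at a real boto3 client in production.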
Ongoing Recovery and Recommendations
Despite progress in restoring API functionality, the underlying physical infrastructure remained offline. AWS stated they were awaiting clearance from local authorities to safely restore power to the damaged facility. The company emphasized the importance of multi-Availability Zone architectures, noting that customers running redundant applications across multiple zones were largely insulated from the outage.
For organizations requiring immediate recovery of affected workloads, AWS recommended launching replacement resources in unaffected zones or in alternative AWS Regions, restoring data from their most recent EBS snapshots or backups. Due to the influx of traffic shifted from the downed zone, customers might experience longer provisioning times or require retries when launching specific instance types in the healthy me-central-1 zones.
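The snapshot-based recovery AWS recommends can be sketched as follows. This is a minimal illustration, assuming a boto3-style EC2 client passed in by the caller; every identifier (`snapshot_id`, `ami_id`, the `/dev/sdf` device name, and so on) is a placeholder for your own resources, and real code would wait for the volume and instance to reach usable states before attaching.

```python
def restore_in_healthy_az(ec2, snapshot_id, target_az, ami_id, instance_type):
    """Launch a replacement instance in an unaffected AZ and attach a volume
    restored from the most recent EBS snapshot.

    ec2 is assumed to be a boto3-style EC2 client (boto3.client("ec2")).
    """
    # Create a new volume from the snapshot in the healthy AZ; EBS volumes
    # are zonal, so the volume must be created where the instance will run.
    volume = ec2.create_volume(SnapshotId=snapshot_id,
                               AvailabilityZone=target_az)
    # Launch the replacement instance in the same AZ. Per AWS's guidance,
    # insufficient-capacity errors here may require retries or a different
    # instance type while traffic is concentrated in the healthy zones.
    reservation = ec2.run_instances(
        ImageId=ami_id,
        InstanceType=instance_type,
        MinCount=1,
        MaxCount=1,
        Placement={"AvailabilityZone": target_az},
    )
    instance_id = reservation["Instances"][0]["InstanceId"]
    # Attach the restored data volume to the new instance.
    ec2.attach_volume(VolumeId=volume["VolumeId"],
                      InstanceId=instance_id,
                      Device="/dev/sdf")
    return instance_id
```

Paired with the Elastic IP reassociation step, this restores both the data and the public endpoint of a workload stranded in the powerless zone.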
As of the final update at 6:01 PM PST, AWS did not have an estimated time for physical power restoration at the mec1-az2 facility. The company continues to advise customers to operate out of alternate Availability Zones or Regions where applicable while recovery efforts are ongoing.