Microsoft has warned that Azure customers may see elevated latency and intermittent packet loss after multiple subsea fiber cables were cut in the Red Sea, disrupting key internet routes between Europe, the Middle East, and Asia. The company said it has rerouted traffic to stabilize performance and continues to rebalance flows across its global backbone to minimize impact.
Monitoring groups have also observed broader regional degradation. NetBlocks reported that concurrent cable outages in the Red Sea coincided with reduced connectivity in several countries, including India and Pakistan. Microsoft did not attribute the cuts to a specific cause or actor and emphasized that undersea repairs can take significant time.

Where Azure felt the impact
Azure traffic traversing the Middle East or terminating in Asia and Europe was most affected, according to Microsoft’s status communications. Customers may have noticed higher round‑trip times for east‑west workloads, content delivery to Indian and Gulf markets, replication between European and Asian regions, and VPN tunnels that rely on Suez‑routed transit.
Rerouting adds physical distance. A typical London–Singapore path via Suez often measures roughly 160–180 milliseconds; sending the same flow around Africa’s Cape of Good Hope can push latency toward 250–300 milliseconds. The difference is enough to slow chatty applications, cross‑region database syncs, and certain trading or media workloads, even if uptime remains intact.
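As a rough illustration of why the detour hurts, the sketch below estimates the propagation floor for each route from assumed path lengths. The distances and the 200 km/ms figure for light in fiber are approximations, not surveyed cable lengths, and measured round‑trip times sit above these floors because of routing, regeneration, and last‑mile hops.

```python
# Back-of-the-envelope RTT floor for the two routes discussed above.
# Path lengths are assumptions for illustration, not surveyed cable distances.

SPEED_IN_FIBER_KM_PER_MS = 200  # light covers roughly 200 km per millisecond in fiber

def propagation_rtt_ms(route_km: float) -> float:
    """Theoretical round-trip propagation delay, ignoring routers,
    regeneration, and last-mile hops."""
    return 2 * route_km / SPEED_IN_FIBER_KM_PER_MS

routes_km = {
    "London-Singapore via Suez (assumed ~15,000 km)": 15_000,
    "London-Singapore via Cape of Good Hope (assumed ~24,000 km)": 24_000,
}

for name, km in routes_km.items():
    print(f"{name}: ~{propagation_rtt_ms(km):.0f} ms RTT floor")
```

With these assumed lengths the floors come out to roughly 150 ms versus 240 ms, which lines up with the observed ranges once real‑world overhead is added.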
Why the Red Sea is a chokepoint
The Red Sea–Suez corridor is one of the world’s densest subsea fiber routes, linking European hubs to the Gulf, India, and onward to East Asia. Industry trackers such as TeleGeography estimate that hundreds of submarine cables crisscross the planet—spanning more than a million kilometers—but only a limited set traverse this narrow passage. That creates a high‑consequence bottleneck for cloud backbones and content providers.
Subsea systems carry the vast majority of international data, well over 95% by most estimates. When several cables in a shared trench are severed, the effects compound: capacity contracts quickly and traffic is pushed onto alternate routes that were never designed to absorb full peak loads, amplifying latency and jitter.
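To make the squeeze concrete, here is a toy calculation with invented capacity and demand figures; the real numbers for the Red Sea corridor are not public, so treat this purely as an illustration of how quickly utilization climbs once parallel systems drop out.

```python
# Toy capacity math for a shared corridor, using made-up numbers.
# The point: losing a couple of parallel systems pushes the survivors far past
# comfortable utilization, which surfaces as queuing delay and jitter.

corridor_cables_tbps = {   # hypothetical usable capacity per cable, in Tbps
    "cable_a": 20.0,
    "cable_b": 16.0,
    "cable_c": 12.0,
    "cable_d": 10.0,
}
offered_peak_tbps = 34.0   # assumed peak demand across the corridor

def utilization(capacity_tbps: float) -> float:
    return offered_peak_tbps / capacity_tbps

full_capacity = sum(corridor_cables_tbps.values())
after_two_cuts = full_capacity - corridor_cables_tbps["cable_a"] - corridor_cables_tbps["cable_b"]

print(f"All cables up:   {utilization(full_capacity):.0%} utilized")
print(f"After two cuts:  {utilization(after_two_cuts):.0%} utilized (over capacity)")
```

With these made‑up figures, utilization jumps from about 59% to over 150%: the surviving cables alone cannot carry peak demand, so the excess must take the long way around or queue.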
Microsoft’s mitigation playbook
Hyperscalers build for this. Microsoft operates a private WAN that interconnects its regions and peers with multiple carrier‑owned systems. During cable faults, the company uses software‑defined traffic engineering—segment routing, fast re‑route, and dynamic BGP policies—to shift flows across surviving Mediterranean, Arabian Sea, and West African paths, as well as trans‑Atlantic and trans‑Pacific links when needed.
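The snippet below is a deliberately simplified model of that kind of rebalancing: pick the lowest‑latency healthy path that still has headroom. The path names, latencies, and capacities are invented, and a real backbone expresses this through segment‑routing policies and BGP rather than a Python loop, but the decision logic is analogous.

```python
# A toy model of latency-aware rebalancing, loosely in the spirit of the
# traffic engineering described above. Names, latencies, and headroom figures
# are invented; real backbones express this through segment routing and BGP
# policy rather than a scheduler like this one.

from dataclasses import dataclass

@dataclass
class BackbonePath:
    name: str
    rtt_ms: float          # measured round-trip time on the path
    headroom_gbps: float   # spare capacity still available
    healthy: bool          # False if the path crosses a cut segment

paths = [
    BackbonePath("Red Sea / Suez", 170, 0, healthy=False),
    BackbonePath("Cape of Good Hope", 260, 400, healthy=True),
    BackbonePath("Trans-Atlantic + Trans-Pacific", 310, 900, healthy=True),
]

def place_flow(demand_gbps: float) -> BackbonePath | None:
    """Pick the lowest-latency healthy path that still has room for the flow."""
    candidates = [p for p in paths if p.healthy and p.headroom_gbps >= demand_gbps]
    if not candidates:
        return None
    best = min(candidates, key=lambda p: p.rtt_ms)
    best.headroom_gbps -= demand_gbps
    return best

chosen = place_flow(100)
print(f"100 Gbps flow placed on: {chosen.name if chosen else 'no viable path'}")
```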
These measures preserve availability but cannot defy physics. Customers running latency‑sensitive services between Europe, the Middle East, and Asia may still experience slower responses until capacity in the Red Sea corridor is restored. To cushion performance, enterprises can pin traffic to the least‑impacted Azure regions, temporarily relax replication SLAs, or use edge caching and read‑replicas closer to end users.
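One way to pin traffic to the least‑impacted region is to measure it directly. The sketch below times a TCP connect to a few hypothetical regional endpoints and picks the fastest; the hostnames are placeholders, and in practice Azure’s performance‑based routing options (Traffic Manager or Front Door) would normally make this steering decision for you.

```python
# A minimal sketch of steering toward the least-impacted region by timing a
# TCP connect to each regional frontend. Hostnames are placeholders.

import socket
import time

REGION_ENDPOINTS = {                  # hypothetical per-region frontends
    "westeurope":    "app-weu.example.com",
    "uaenorth":      "app-uan.example.com",
    "southeastasia": "app-sea.example.com",
}

def connect_time_ms(host: str, port: int = 443, timeout: float = 3.0) -> float:
    """Time a TCP handshake; treat unreachable hosts as infinitely slow."""
    start = time.perf_counter()
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return (time.perf_counter() - start) * 1000
    except OSError:
        return float("inf")

latency_by_region = {r: connect_time_ms(h) for r, h in REGION_ENDPOINTS.items()}
best_region = min(latency_by_region, key=latency_by_region.get)
print(f"Pin traffic to: {best_region} ({latency_by_region[best_region]:.0f} ms connect)")
```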
Attribution is unsettled
Microsoft has not indicated who or what caused the cuts. The Associated Press has reported denials from Yemen’s Houthi movement regarding attacks on fiber infrastructure amid ongoing maritime tensions in the region. Historically, subsea faults are most often linked to anchors, trawling, earthquakes, or landslides, though deliberate interference remains a persistent concern for operators and governments.
Repairing deep‑water cables requires specialized ships, weather windows, and coordination with coastal authorities. In politically sensitive waters, insurance and security constraints can extend timelines. Operators typically splice and test each damaged segment before returning it to service, which means partial restorations may precede full capacity recovery.
Ripple effects beyond Microsoft
Although Azure is the most prominent service to acknowledge impact, the underlying infrastructure is shared across carriers and content networks. Analytics from independent observatories such as NetBlocks and regional IXPs suggest that the broader internet experienced congestion and slower paths as traffic converged on remaining circuits. Other hyperscalers and telcos rely on many of the same conduits, even when they operate diverse routes.
What enterprises should do now
Monitor end‑to‑end latency, not just availability, in affected corridors. If feasible, shift latency‑sensitive workloads to regions with stable paths, and avoid unnecessary cross‑region chatter. Validate failover policies for VPNs and private peers, as some tunnels may prefer suboptimal routes during global reconvergence. For customer‑facing apps, increase cache TTLs and serve static assets from edge locations nearest users.
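A minimal latency watchdog along those lines might look like the following. The URL, baseline, and threshold are placeholders to tune per corridor, and a production setup would feed a time series into existing monitoring rather than print to stdout.

```python
# A minimal latency watchdog for an affected corridor: alert on degraded
# round-trip time even while the endpoint remains "available". The URL,
# baseline, and threshold are placeholders.

import time
import urllib.request

CHECK_URL = "https://app-sea.example.com/healthz"  # hypothetical health endpoint
BASELINE_MS = 180        # assumed pre-incident latency for this corridor
ALERT_FACTOR = 1.5       # flag anything 50% above the baseline

def request_ms(url: str, timeout: float = 5.0) -> float:
    """Time a full HTTPS request; unreachable counts as infinitely slow."""
    start = time.perf_counter()
    try:
        with urllib.request.urlopen(url, timeout=timeout):
            pass
        return (time.perf_counter() - start) * 1000
    except OSError:
        return float("inf")

elapsed = request_ms(CHECK_URL)
if elapsed > BASELINE_MS * ALERT_FACTOR:
    print(f"DEGRADED: {elapsed:.0f} ms vs {BASELINE_MS} ms baseline; inspect rerouted paths")
else:
    print(f"OK: {elapsed:.0f} ms")
```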
The incident underscores a broader reality: cloud resilience depends on subsea resilience. Diversified carriers, multi‑region architectures, and clear incident runbooks are the best defense against chokepoint failures—even when providers like Microsoft move quickly to steady the ship.