“There were reports that Microsoft’s Azure customers faced increased latency as a result of the damage to the cables, which impacted traffic originating from the Middle East, Asia and Europe,” the company said in a statement. It added that it had redirected traffic to limit the impact and described service as normal, with continued monitoring.
The incident reverberated far beyond a single cloud provider. NetBlocks said it observed deteriorating internet connectivity in multiple countries, including India and Pakistan, consistent with a regional capacity squeeze. The source of the cable damage was unclear; according to the Associated Press, Yemen’s Houthi movement has denied targeting communications cables.

Microsoft’s Routing Response and the Customer Impact
In a status note, Microsoft said workloads that transit the Middle East or terminate in Africa, Asia or Europe were the “most impacted,” with users experiencing slower responses rather than outright outages. Azure’s mitigation relied on rebalancing traffic across alternate paths, a common playbook when a high-capacity corridor suddenly loses some of its fibers.
Rerouting keeps capacity available, but it adds tens to hundreds of milliseconds of latency on some paths, particularly Europe–Asia routes forced onto longer detours. Enterprises with latency-sensitive applications, such as trading, real-time collaboration and interactive gaming, typically feel those bumps first, but most workloads can absorb a temporary increase in round-trip time.
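As a rough back-of-the-envelope sketch (the detour distance below is an assumption, not a figure from the reporting), the added delay from a longer physical path can be estimated from the speed of light in optical fiber, roughly 200 kilometers per millisecond:

```python
# Back-of-the-envelope estimate of the round-trip latency added by a longer route.
# The detour distance used below is a hypothetical assumption, not a measured figure.

SPEED_IN_FIBER_KM_PER_MS = 200.0  # light propagates through fiber at roughly 200 km per millisecond

def added_rtt_ms(extra_one_way_km: float) -> float:
    """Extra round-trip time, in milliseconds, from additional one-way fiber distance."""
    return 2 * extra_one_way_km / SPEED_IN_FIBER_KM_PER_MS

if __name__ == "__main__":
    # Suppose a Europe-Asia path rerouted around Africa is ~9,000 km longer one way.
    print(f"Added round-trip latency: {added_rtt_ms(9_000):.0f} ms")  # ~90 ms, before any queuing delay
```

That simple propagation math is why reroutes land in the tens-to-hundreds-of-milliseconds range even before any congestion builds on the substitute paths.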
Why the Red Sea Is a Digital Chokepoint
The Red Sea and the Suez corridor form a chokepoint where dense clusters of subsea cables linking Europe and Asia converge, making the area one of the most strategic segments of internet infrastructure. TeleGeography’s industry maps show more than a dozen systems squeezed through this narrow corridor, including major trunk lines that support cloud and telecom backbones.
With so many systems converging in a small geography, single-point hazards, whether an anchoring incident, seismic activity or deliberate damage, can take out significant capacity quickly. Even when only a fraction of the cables are down, the remaining systems can congest quickly, forcing carriers and cloud providers to reroute traffic over longer, more circuitous paths.
Subsea Cables Take Time to Fix
Undersea fiber repairs are logistical and regulatory marathons. Cable owners have to dispatch a specialized repair ship, obtain permits, locate the fault, hoist the cable, splice in new fiber and test, all in a busy shipping lane. Under normal conditions, fixes take one to three weeks, according to operators and analysts; poor weather, safety concerns and restricted access can stretch repairs to months.

The Red Sea adds further complexity: heavy shipping traffic, regional hostilities, and shallow stretches of seabed where cables lie closer to anchors and fishing gear. For prolonged outages, carriers usually lease temporary capacity on other systems or route around Africa, preserving continuity at the cost of extra latency on Europe–Asia traffic.
Cloud and Telecom Ripples
Azure’s pace of remediation illustrates the path diversity hyperscalers design for, though no provider is immune to chokepoint shocks. Azure is the second-largest public cloud by revenue, with roughly a quarter of the global market according to Synergy Research Group, so even latency localized to one region draws the attention of multinational customers.
Beyond enterprise applications, cable cuts can also squeeze consumer services as ISPs throttle or reprioritize traffic to maintain stability. Those restrictions show up as countrywide slowdowns of the kind that network measurement outfits like NetBlocks frequently spot. Red Sea disruptions earlier in the year produced apparent bottlenecks in East Africa and South Asia, a pattern consistent with the current reports.
What to Watch Next
Key indicators to watch are repair timelines from the cable consortia, capacity auctions for short-term routes and whether more systems report faults. Cloud status pages can stay green even while back-end routing is still changing, so customers on tight latency budgets should monitor their own telemetry and consider region failovers where possible.
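As a minimal sketch of what that client-side telemetry might look like (the hostname and latency budget below are placeholders, not values from the article or from Azure documentation), a simple probe can time TCP connections to a service endpoint and flag when the round trip exceeds an assumed budget:

```python
# Minimal client-side latency probe; the endpoint and threshold are hypothetical placeholders.
import socket
import time

ENDPOINT = ("myservice.westeurope.cloudapp.azure.com", 443)  # placeholder hostname
LATENCY_BUDGET_MS = 150.0                                    # assumed budget for this workload

def tcp_connect_ms(host: str, port: int, timeout: float = 5.0) -> float:
    """Time, in milliseconds, to establish a TCP connection; a rough proxy for network RTT."""
    start = time.perf_counter()
    with socket.create_connection((host, port), timeout=timeout):
        pass
    return (time.perf_counter() - start) * 1000.0

if __name__ == "__main__":
    samples = sorted(tcp_connect_ms(*ENDPOINT) for _ in range(5))
    median_ms = samples[len(samples) // 2]
    print(f"Median connect time: {median_ms:.1f} ms")
    if median_ms > LATENCY_BUDGET_MS:
        print("Latency budget exceeded; consider failing over to another region.")
```

A probe like this can catch path-level degradation that a provider status page may not surface, which is the gap the reporting above describes.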
Longer term, resilience will depend on diversifying away from single corridors. New cable systems landing across Africa and in the Eastern Mediterranean are adding alternative paths, but the Red Sea, locked in by geography, will continue to play an outsized role. So far, Microsoft’s rerouting has limited the immediate damage to Azure, but the episode is a fresh reminder that the cloud ultimately rides on strands of glass on the seafloor.
