A far-reaching internet disruption on Tuesday briefly took down or degraded a wide array of major corporate websites and apps, from FedEx and Delta Air Lines to HSBC and McDonald’s. Early signals indicated the problem originated at Amazon Web Services and then cascaded through apps and networks that rely on its service.
Reports of the outage surged across tools that track internet traffic, and a number of companies confirmed they were experiencing degraded service. AWS said it had found a likely cause and was rolling out a fix, adding that services were recovering even as backlogs cleared and some workloads continued to behave erratically.
- What happened during the widespread internet disruption
- Which companies and services were affected by outages
- Why AWS’ problems send ripples so far and wide
- What companies are saying about the AWS-related outage
- What you can do now to minimize disruption and risk
- The bigger picture and lessons from this major outage

What happened during the widespread internet disruption
Symptoms varied by service: failed logins, stalled video streams, timeouts during checkout and unreliable push notifications. The pattern was consistent with a failure in a core cloud-platform service, such as identity, API gateways or networking. When those underlying services falter, applications built on top of them can break even if their own code is healthy.
The AWS Service Health Dashboard showed progress toward recovery after teams identified and isolated a potential root cause. As with most large cloud incidents, recovery was not instantaneous: queued requests, retries and cache warm-ups can keep things turbulent for a while after the core problem is fixed.
Which companies and services were affected by outages
Consumers reported problems spanning streaming, communication, finance, gaming and smart home devices. Disney+, Hulu, Max, Roku and Prime Video all drew widespread complaints. Workplace and messaging tools such as Slack and Signal had outages for some users. Financial apps such as Coinbase and Venmo reportedly experienced transaction delays and sign-in problems.
Customers with cell service on AT&T, T-Mobile and Verizon reported connection problems, including some on 5G home internet. Gamers reported sign-in and matchmaking issues on PlayStation Network and the Epic Games Store, as well as service interruptions for games including Fortnite, Roblox and Rocket League.
Amazon’s own ecosystem was not spared: Alexa responses were slow, Ring camera feeds timed out, and Amazon Music and Prime Video had errors.
Outage trackers such as Downdetector recorded spikes in the tens of thousands of reports across several brands at the height of the disruption, with report counts trending downward as recovery continued.
Why AWS’ problems send ripples so far and wide
AWS powers a significant portion of the internet’s back end. Consumer-facing apps tend to depend on a familiar stack: CloudFront for content delivery, S3 for storage, DynamoDB or Aurora for data, Cognito for identity management and managed API gateways. An outage in any shared dependency can ripple outward to everything that sits in the same region or blast radius, as the toy dependency map below illustrates.
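To make the blast-radius idea concrete, here is a minimal sketch of a dependency map; the feature and service names are hypothetical, and the function simply reports which user-facing features lose a dependency when one shared service degrades.

```python
# Toy dependency map: which user-facing features rely on which shared services.
# All names here are illustrative, not taken from any real architecture.
FEATURES = {
    "login":         {"identity", "api-gateway"},
    "checkout":      {"api-gateway", "database", "payments"},
    "video-stream":  {"cdn", "object-storage"},
    "notifications": {"api-gateway", "queue"},
}

def blast_radius(failed_service: str) -> list[str]:
    """Return every feature that depends on the failed service."""
    return [name for name, deps in FEATURES.items() if failed_service in deps]

if __name__ == "__main__":
    # A single degraded API gateway knocks out login, checkout and notifications at once.
    print(blast_radius("api-gateway"))  # ['login', 'checkout', 'notifications']
```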

Contemporary microservice deployments increase both resilience and fragility. Features like logins, payments and notifications might touch dozens of services; if one path stalls, the user-facing function can break, as the sketch below shows. That means DNS problems, oversaturated service meshes or busy control planes can all manifest as ‘the internet is down’ even though the problem lies in a relatively small number of bottlenecks.
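As a rough illustration of that failure mode (the service names and timings below are hypothetical), a per-call timeout plus a fallback lets a page render in degraded form instead of hanging on one stalled dependency:

```python
import queue
import threading
import time

def call_with_timeout(fn, timeout, fallback):
    """Run fn in a worker thread; return its result, or fallback if it stalls."""
    results = queue.Queue(maxsize=1)
    threading.Thread(target=lambda: results.put(fn()), daemon=True).start()
    try:
        return results.get(timeout=timeout)
    except queue.Empty:
        return fallback

def fetch_recommendations():
    # Stand-in for a downstream call that hangs during the outage.
    time.sleep(10)
    return ["personalized picks"]

def render_home_page():
    """Compose a page from several sections, degrading gracefully."""
    return {
        "catalog": ["default items"],            # healthy local path
        "recommendations": call_with_timeout(    # stalled dependency gets a fallback
            fetch_recommendations, timeout=0.5, fallback=["popular items"]),
    }

if __name__ == "__main__":
    print(render_home_page())  # returns quickly with the fallback section
```

The same idea applies to logins, payments and notifications: bound every cross-service call and decide in advance what the degraded experience should look like.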
In large incidents, network intelligence tools such as ThousandEyes, Cloudflare Radar and Kentik regularly record the same patterns: sudden surges in errors, rerouting anomalies and partial regional impact, followed by staggered recovery as caches refill and retries taper off.
What companies are saying about the AWS-related outage
AWS said on its website that it was deploying a fix and that most requests were starting to succeed as teams worked through backlogs. Status pages from affected brands carried the same theme: degraded service, spotty logins and ongoing monitoring. Although many services recovered quickly, some, such as certain streaming platforms and device ecosystems, took longer because of downstream dependencies and traffic spikes during restoration.
Outage monitoring sites continued to show elevated but declining incident reports for some of the affected providers, tracing the recovery curve familiar from previous cloud and CDN outages: a sharp upswing, a choppy plateau as queues are worked down, then a gradual descent back to normal.
What you can do now to minimize disruption and risk
If an app is not working, avoid logging in repeatedly or refreshing constantly; retry storms can slow recovery. Monitor trendlines on official status pages or reputable trackers like Downdetector. Power-cycle modems and streaming devices only after confirming with your ISP that the problem is local; otherwise, wait for platform-side fixes. For smart home gear, rely on manual overrides where possible until cloud services stabilize.
Businesses should ensure that clients use exponential backoff, timeouts and circuit breakers, and should test multi-AZ or multi-region failover paths. Dependency mapping and synthetic monitoring can help uncover hidden single points of failure before the next incident.
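As a rough sketch of those client-side patterns (the class names, thresholds and the fetch_profile placeholder are illustrative, not any particular library’s API): retries back off exponentially with jitter so they do not pile up into a retry storm, and a simple circuit breaker stops calling a dependency that keeps failing.

```python
import random
import time

class CircuitBreakerOpen(Exception):
    """Raised when the breaker is open and the call is skipped."""

class CircuitBreaker:
    """Fail fast after repeated errors instead of hammering a struggling dependency."""
    def __init__(self, failure_threshold=5, reset_after=30.0):
        self.failure_threshold = failure_threshold
        self.reset_after = reset_after
        self.failures = 0
        self.opened_at = None

    def call(self, fn, *args, **kwargs):
        if self.opened_at is not None:
            if time.monotonic() - self.opened_at < self.reset_after:
                raise CircuitBreakerOpen("dependency temporarily skipped")
            self.opened_at = None      # half-open: allow a trial request through
            self.failures = 0
        try:
            result = fn(*args, **kwargs)
        except Exception:
            self.failures += 1
            if self.failures >= self.failure_threshold:
                self.opened_at = time.monotonic()
            raise
        self.failures = 0
        return result

def retry_with_backoff(fn, attempts=4, base_delay=0.5, max_delay=8.0):
    """Retry fn with exponential backoff plus full jitter to avoid retry storms."""
    for attempt in range(attempts):
        try:
            return fn()
        except Exception:
            if attempt == attempts - 1:
                raise
            delay = min(max_delay, base_delay * 2 ** attempt)
            time.sleep(random.uniform(0, delay))

# Usage sketch (fetch_profile stands in for a real downstream call):
#   breaker = CircuitBreaker()
#   retry_with_backoff(lambda: breaker.call(fetch_profile))
```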
The bigger picture and lessons from this major outage
This outage is yet another reminder that the consumer internet sits atop a small number of cloud and delivery platforms. Centralization provides scale and efficiency, but it also introduces correlated failure modes. Diversifying critical services, isolating regions and planning for graceful degradation can help limit the impact on customers when a large provider falters.
As services continue to recover, expect performance to normalize in rolling fashion. The episode is certain to reignite scrutiny of cloud dependency strategies among enterprises, regulators and investors, along with sober questions about how to keep everyday digital life running when the backbone wobbles.