Amazon server outage caused issues for Alexa, Ring, Disney Plus, and deliveries


Problems with some Amazon Web Services cloud servers caused large chunks of the internet to load slowly or fail outright. Amazon’s vast network of data centers powers many of the things you interact with online, including this website, so, as we’ve seen in previous AWS outages, any problem there has massive ripple effects. People started noticing issues around 10:45 am ET, and just after 6:00 pm ET, the AWS status page read, “Many services have already recovered, but we are working on a full recovery of all services.”

While some of the affected services that rely on AWS have come back, the internet is still a bit slower and shakier than usual. The most consequential apps affected by the outage may be the ones used by Amazon’s own workers: CNBC points to Reddit posts from Amazon Flex, warehouse, and delivery workers saying that the apps that track packages, tell drivers where to go, and generally keep your orders arriving on time have also gone down.

There have been reports of outages for Disney Plus and Netflix streaming, as well as games like PUBG, League of Legends, and Valorant. We’ve also noticed issues accessing Amazon.com and other Amazon products like the Alexa voice assistant, Kindle ebooks, Amazon Music, and Ring, as well as Wyze security cameras. DownDetector’s list of services with concurrent spikes in outage reports includes almost every recognizable name: Tinder, Roku, Coinbase, Cash App, Venmo, and the list goes on.

Network administrators everywhere reported errors connecting to their Amazon instances and to the AWS Management Console, which controls their access to those servers. After about an hour of problems, Amazon’s official status page added messages confirming the outage.

[11:26 AM PST] We are seeing an impact on several AWS APIs in the US-EAST-1 region. This issue is also affecting some of our monitoring and incident response tools, which is delaying our ability to provide updates. Affected services include EC2, Connect, DynamoDB, Glue, Athena, Timestream, and Chime, as well as other AWS services in US-EAST-1.

The root cause of this issue is an impairment of several network devices in the US-EAST-1 region. We are pursuing multiple mitigation paths in parallel and have seen some signs of recovery, but we do not have an ETA for full recovery at this time. Root logins for consoles in all AWS Regions are affected by this issue, but customers can log in to consoles other than US-EAST-1 by using an IAM role for authentication.
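For readers who manage their own AWS resources, the same idea can be approximated outside the console. Below is a minimal sketch, assuming the boto3 SDK, a role in your own account that you are permitted to assume (the ARN shown is a placeholder), and a region other than US-EAST-1 as the target:

import boto3

# Authenticate by assuming an IAM role rather than relying on root credentials.
# The role ARN is a placeholder; substitute one from your own account.
sts = boto3.client("sts", region_name="us-west-2")
creds = sts.assume_role(
    RoleArn="arn:aws:iam::123456789012:role/OutageReadOnly",
    RoleSessionName="outage-check",
)["Credentials"]

# Build a client pinned to a region other than US-EAST-1.
ec2 = boto3.client(
    "ec2",
    region_name="us-west-2",
    aws_access_key_id=creds["AccessKeyId"],
    aws_secret_access_key=creds["SecretAccessKey"],
    aws_session_token=creds["SessionToken"],
)

# A simple health check: list instances and their states in that region.
for reservation in ec2.describe_instances()["Reservations"]:
    for instance in reservation["Instances"]:
        print(instance["InstanceId"], instance["State"]["Name"])

This is only an illustration of routing requests around the affected region with role-based credentials, not a fix for services that are hosted in US-EAST-1 itself.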

With the issues originating from AWS’s US-EAST-1 region in Virginia, users elsewhere may not have seen as many problems, and even if you were affected, it may have shown up only as a slightly slower load while the network redirected your requests elsewhere.

Update December 7 at 3:41 p.m. ET: Added impact information for warehouse and delivery workers, and the most recent status message.

Update December 7 at 7:20 p.m. ET: Added the most recent status message.

