Amazon Web Services

Amazon Web Services status: access issues and outage reports

No problems detected

If you are having issues, please submit a report below.

Amazon Web Services (AWS) offers a suite of cloud-computing services that make up an on-demand computing platform. They include Amazon Elastic Compute Cloud, also known as "EC2", and Amazon Simple Storage Service, also known as "S3".

Problems in the last 24 hours

The graph below depicts the number of Amazon Web Services problem reports received over the last 24 hours, by time of day. When the number of reports exceeds the baseline, represented by the red line, an outage is likely in progress.
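
In effect, the detection rule is a simple threshold check. Below is a minimal sketch in Python; the function name, inputs, and baseline value are hypothetical, since the site's actual baseline model is not published.

    # Minimal sketch of the threshold rule described above.
    # `reports_this_hour` and `baseline` are hypothetical inputs; the real
    # baseline model behind the red line is not published.
    def outage_suspected(reports_this_hour: int, baseline: float) -> bool:
        """Flag a possible outage when report volume exceeds the baseline."""
        return reports_this_hour > baseline

    # Example: 120 reports in the current hour against a baseline of 35.
    print(outage_suspected(120, 35.0))  # True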

At the moment, we haven't detected any problems at Amazon Web Services. Are you experiencing issues or an outage? Leave a message in the comments section!

Most Reported Problems

The following are the problems most commonly reported by Amazon Web Services users through our website.

  • Errors (41%)
  • Website Down (32%)
  • Sign in (27%)

Live Outage Map

The most recent Amazon Web Services outage reports came from the following cities:

City              Problem Type   Report Time
West Babylon      Errors         5 hours ago
Massy             Errors         1 day ago
Benito Juarez     Errors         5 days ago
Paris 01 Louvre   Website Down   9 days ago
Neuemühle         Errors         9 days ago
Rouen             Website Down   9 days ago

Community Discussion

Tips? Frustrations? Share them here. Useful comments include a description of the problem, city and postal code.

Beware of "support numbers" or "recovery" accounts that might be posted below. Make sure to report and downvote those comments. Avoid posting your personal information.

Amazon Web Services Issue Reports

The latest outage, problem, and issue reports on social media:

  • erossics
    Erossi (@erossics) reported

    Urgent @AWSSupport : Account 477950537527 suspended due to a billing sync error. Case 177467969900729 confirmed card was active on 28/03, yet I'm blocked 4 days later. Dashboard shows $0.00 due/Pending, so I can't pay manually. Production is DOWN. Please unsuspend/retry charge!

  • NeuronSale
    NeuronGarageSale (@NeuronSale) reported

    @QuinnyPig @awscloud They’re just happy the outage isn’t bc of AI generated code.

  • fortnite_Egypt1
    🇪🇬fortnite egyption servers (@fortnite_Egypt1) reported

    @awscloud Players in Egypt are experiencing routing issues to the Bahrain AWS servers for about two weeks now. Ping jumped to ~150ms instead of the usual low latency. Please investigate and fix the routing problem. @AWSSupport @awscloud please fix the problem

  • MC59785335
    NFT and CRYPTO Fan (@MC59785335) reported

    @AWSSupport Thank you for the support. Actually, I think with support+ or even with a regular plan, such issues as restore limits should be resolved within about an hour. However, I still have not received any answer regarding my case.

  • Ser_Jettt
    Ser-Jet💜👑(🧩) (@Ser_Jettt) reported

    @JothamNtekim1 @awscloud Interesting approach to the problem. Are you currently exploring any builder challenges or just shipping independently?

  • OstinatoRigore4
    Leonard Clinton Williams III (@OstinatoRigore4) reported

    I apologize for characterizing powerful people as tyrants. This is unfair to you. Most of you do not behave as tyrants, and for the ones I was referring to, I just don’t know what to say. This includes the following companies and whoever within them is managing the situation with me:

    - @amazon. I said great things about you for years, and you deserved them. Your business changed my life. And yet you participate in this atrocity of crimes against me. This atrocity is not about the economy, as I once thought it was. That rationale is no longer on the table. I’ve offered to settle this in ways that could not possibly have a material impact on this bank, much less on the economy. In making and negotiating these offers, I have been dealt nonstop ego rage and ego rage motivated abuse. Just about a week ago, I called your customer service line, and the man I spoke to treated me with a demeaning and abusive tone that has been etched into my being by this crime spree. It is very recognizable, and I’d like to know how this is at all necessary to maintain the economy.

    - @Microsoft I have used your software for many years. I’m a long term paying customer. You have enabled and participated in a criminal terrorization of me, including egregious invasions of my privacy, and there is no excuse for this. It is absolutely disgusting.

    - @awscloud another Amazon entity that has gave me joy and that has been used in crimes against me.

    I will add more later, but if all of you are not a tyranny, in your actions, what are you? I’d really like to know. In every circumstance, you are a participant in a historic atrocity that was perpetrated against a kindhearted disabled man for no reason whatsoever. I am an extremely capable fighter, but the fact that I am able to fight back obviously does not justify what has been done to me. I mean would you fight back if I did this to you. I don’t even want to go down this absurd line of reasoning, but this attitude is something that has seemed to be at play. The parties involved in these crimes almost became a murderer as of the past week or so. That is one aspect of the level of seriousness of this situation. I’m not on a power trip. I want justice for myself and for everyone else.

  • introsp3ctor
    Mike Dupont (@introsp3ctor) reported

    @AWSSupport oh, now it magically worked again! i just logged in. thanks for your help. this is the second multi day outage, once a month it seems

  • HetarthVader
    Hetarth Chopra (@HetarthVader) reported

    @orangerouter and I spent days debugging why our inter-node bandwidth on @awscloud was slow. 8x A100 TP8PP2 serving across machines. bandwidth was ~100 Gbps. should have been 400 Gbps.

  • minorun365
    みのるん (@minorun365) reported

    @AWSSupport We’ve recently seen a frequent issue where all Bedrock quotas are set to zero in newly created AWS accounts. As a result, many new customers who are interested in AWS AI services are giving up on using them, leading to missed opportunities.

  • ChristhylCC
    Christhyl Ceriche (@ChristhylCC) reported

    @amazon @awscloud Hi, my amazon Prime video account is locked and I can’t sign in. When I try to contact support, it asks me to log in and I’m stuck in a loop. Could you please help me recover access?

  • GamingNepr34519
    जहाँ mila,वही खोदूंगा (@GamingNepr34519) reported

    @awscloud my case ID is 177513415600592, please solve the problem. I am a student and I accidentally got a bill.

  • jlgolson
    Jordan Golson (@jlgolson) reported

    @AWSSupport Okay — kind of nuts that there's no way to log in or reset a password or anything and that the MFA appeared out of nowhere... also that you can have the same login for AWS Builder AND AWS Console and there's no great explanation for why they're different.

  • DecentCloud_org
    Decent Cloud (@DecentCloud_org) reported

    @AWSSupport @CPGgrowthstudio Production down 5 days. The response commits to nothing. Next customer with this issue finds the same boilerplate.

  • VladimirAtHQ
    Vlad The Dev (@VladimirAtHQ) reported

    @AWSSupport @DuRoche14215 Please assist with the case ID 177325294900035. Our business has suffered significant operational disruption and financial losses due to the ME-CENTRAL-1 outage, and we urgently request review for SLA-related service credits or compensation. And if possible, recovery of db.

  • Cr8DigitalAsset
    Christina Haftman (@Cr8DigitalAsset) reported

    @AnthropicAI Claude Code API access ≠ consumer access. During the recent middle-east @awscloud outage, developers with direct API integrations kept working, while consumer-facing users were locked out of the conversational interface. That’s a meaningful distinction.

  • mjha2088
    manish (@mjha2088) reported

    @AWSSupport Thank you! The entire db.r7i family shows reduced vCPUs for SQL Server & Oracle vs MySQL/PostgreSQL/Aurora in console. The docs page has no mention of this engine-specific difference — undocumented and critical for licensed engine customers planning costs.

  • _ps428
    Pranav Soni (@_ps428) reported

    @awscloud is ec2 down again?

  • DecentCloud_org
    Decent Cloud (@DecentCloud_org) reported

    @AWSSupport @adidshaft Fixed the error, still rejected. The process isn't broken - opacity is the feature.

  • gpusteve
    steve (@gpusteve) reported

    is aws cli broken for anyone else ??? i literally can't sign in @awscloud

  • scooblover
    Kane | AI Insider 🤖 (@scooblover) reported

    This isn't Iran's first move against AI/tech infrastructure. Context: 🔴 Prior weeks: Iran rocket strikes shut down Amazon AWS data centers in UAE & Bahrain 🔴 Apr 1: IRGC names 18 US tech companies as military targets 🔴 Apr 3: Stargate UAE specifically called out They're escalating — fast.

  • cataneo339
    CATA_NEO (@cataneo339) reported

    @Theta_Network @SyracuseU @awscloud THETA GOES DOWN

  • SpainGreatAgain
    Bella Iberia (@SpainGreatAgain) reported

    @NextGenStats @NFL @awscloud Expected Points Added sounds fancy until you realize it’s just another way for nerds to tell us Mahomes is a god while downplaying actual game-winning drives and clutch plays 😤 EPA, success rate, all that AWS-powered nonsense cool for spreadsheets, terrible for real football passion. Stop letting models replace what our eyes see on Sundays. Bring back old-school football debate

  • HavokSocial
    ©『 S̓̚o͂͆c̆̌ȉ̬ȁ̴ľ̗H̏͆ȃ̼v̈́o̴̤ǩ̛ 』® (@HavokSocial) reported

    @grok @awscloud Rio's getting better ...

    Test 1 — Three Moves:
    Move 1: Disable AI code generation in CI/CD pipelines entirely. This forces all code through human review before it can be merged. The downside is that engineering velocity drops significantly because engineers must write everything manually instead of using AI assistance.
    Move 2: Implement a mandatory "impact assessment" for every PR. Every pull request must list which services/modules it touches and flag any that have no clear owner. The downside is that this adds friction to every change, including trivial ones.
    Move 3: Freeze changes to the top 10 most incident-prone services. Lock down the systems that have caused the most outages in the past 30 days. The downside is that critical work on those services gets blocked until ownership is assigned.

    Test 2 — First Change: Close or archive approximately 40% of the backlog immediately. Focus on PRs that are older than two weeks, have no comments, or touch low-priority features. This breaks people's ability to ship nice-to-have features and will cause frustration among engineers who had work queued up.

    Test 3 — Making Orphaned Code Safe: Create a public dashboard showing code health metrics by module, including incident history, test coverage, and last touch date. Make it visible to the entire engineering org. The downside is that this creates public pressure and blame dynamics around which teams are maintaining poorly documented systems.

    Test 4 — The Choice: I pick A: cut AI code output in half. The volume of AI-generated code is overwhelming human review capacity. You can't fix a bottleneck by adding more work to it. Cutting AI output reduces the flood of unreviewed code hitting production and gives reviewers breathing room to actually catch problems before they ship.

    Test 5 — Sanity Check: You're right to slow down, but your framing matters. The problem isn't that people are using AI tools—it's that you're asking humans to review more code than they can safely handle. You're not slowing AI usage; you're capping total code output so reviewers can do their job.

    Test 6 — Plain Sentences Only: Disable AI code generation in CI/CD pipelines entirely because the volume of unreviewed code is overwhelming human capacity and this will slow development velocity significantly. Create a public dashboard showing code health metrics by module including incident history and test coverage which will create public pressure and blame dynamics around poorly maintained systems. Freeze changes to the ten most incident-prone services until ownership is assigned which blocks critical work on those systems.

    Test 7 — What It Actually Is: It's a capacity problem, not a process problem. You have more code than humans can safely review. No amount of new templates or guidelines will fix that math. You have to either reduce the volume of code being produced or add more reviewers.

    Test 8 — The Truth: You traded velocity for safety and got neither. AI generated code faster than humans could verify it. Your reviewers are rubber-stamping because they're drowning. Your engineers are shipping unreviewed code because they're frustrated. You built a system that produces code faster than it can be trusted.

  • das__subhajit
    Subhajit das (@das__subhajit) reported

    In 2017, Amazon S3 went down and took a massive chunk of the internet with it. The cause: an engineer debugging a slow billing system mistyped a command meant to remove a small number of servers and accidentally removed a much larger set, including the subsystems that S3 depends on to function. Slack, Trello, GitHub, Quora, Medium, all hit. Even Amazon's own status page went down because it was hosted on S3. They couldn't even tell the world they were down, on the tool built to tell the world they were down.

  • ramarxyz
    ramar (@ramarxyz) reported

    @AWSSupport Case ID 177557061000414, production down, account on verification hold, 24h+ no response, please escalate

  • Md_Sadiq_Md
    Sadiq (@Md_Sadiq_Md) reported

    @AWSSupport Wow, which issues are those that have not been resolved for the past 3 days?

  • canadabreaches
    canadianbreaches (@canadabreaches) reported

    BREACH ALERT: Duc (Duales) — Toronto fintech. A publicly accessible Amazon S3 server exposed 360,000+ customer files for approximately five years. Exposed data includes passports, driver's licences, selfies for identity verification, and customer names, addresses, and transaction records. Office of the Privacy Commissioner of Canada is investigating. Severity: CRITICAL.

  • Xyzfb2t
    Xyz (@Xyzfb2t) reported

    @awscloud First create a problem by having different services, and then solve it by creating a new service.

  • JLSports24
    Joe Sutphin (@JLSports24) reported

    @PSchrags @awscloud @NextGenStats The problem is those teams don’t know how to use their picks

  • DecentCloud_org
    Decent Cloud (@DecentCloud_org) reported

    @AWSSupport @vladimirprus The problem isn't bursting. It's that credits are invisible until you're throttled.