Amazon Web Services

Amazon Web Services status: access issues and outage reports

No problems detected

If you are having issues, please submit a report below.


Amazon Web Services (AWS) offers a suite of cloud-computing services that make up an on-demand computing platform. They include Amazon Elastic Compute Cloud, also known as "EC2", and Amazon Simple Storage Service, also known as "S3".

Problems in the last 24 hours

The graph below shows the number of Amazon Web Services reports received over the last 24 hours, by time of day. When the number of reports exceeds the baseline, represented by the red line, an outage is likely in progress.
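The baseline comparison described above can be sketched in a few lines. This is an illustrative reconstruction, not the site's actual detection algorithm; the function name, the sample counts, and the fixed threshold are all hypothetical.

```python
# Hypothetical sketch of baseline-based outage detection.
# Assumed inputs: hourly report counts and a fixed baseline (the "red line").

def detect_outage(report_counts, baseline):
    """Return the hours (indices) where reports exceed the baseline."""
    return [hour for hour, count in enumerate(report_counts) if count > baseline]

# Example: report volume spikes at hours 3 and 4.
counts = [2, 1, 3, 40, 55, 4, 2]
print(detect_outage(counts, baseline=10))  # [3, 4]
```

A real detector would more plausibly use a rolling baseline, such as a moving average of historical report volume for that time of day, rather than a single fixed threshold.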

At the moment, we haven't detected any problems at Amazon Web Services. Are you experiencing issues or an outage? Leave a message in the comments section!

Most Reported Problems

The breakdown below shows the problems most commonly reported by Amazon Web Services users through our website.

  • Errors (42%)
  • Website Down (32%)
  • Sign in (26%)

Live Outage Map

The most recent Amazon Web Services outage reports came from the following cities:

City | Problem Type | Report Time
Palm Coast | Errors | 11 hours ago
West Babylon | Errors | 6 days ago
Massy | Errors | 8 days ago
Benito Juarez | Errors | 11 days ago
Paris 01 Louvre | Website Down | 16 days ago
Neuemühle | Errors | 16 days ago
Full Outage Map

Community Discussion

Tips? Frustrations? Share them here. Useful comments include a description of the problem, city and postal code.

Beware of "support numbers" or "recovery" accounts that might be posted below. Make sure to report and downvote those comments. Avoid posting your personal information.

Amazon Web Services Issues Reports

Latest outage, problems and issue reports in social media:

  • DecentCloud_org
    Decent Cloud (@DecentCloud_org) reported

    @AWSSupport @CPGgrowthstudio Production down 5 days. The response commits to nothing. Next customer with this issue finds the same boilerplate.

  • das__subhajit
    Subhajit das (@das__subhajit) reported

    In 2017, Amazon S3 went down and took a massive chunk of the internet with it. The cause: an engineer debugging a slow billing system mistyped a command meant to remove a small number of servers and accidentally removed a much larger set, including the subsystems that S3 depends on to function. Slack, Trello, GitHub, Quora, and Medium were all hit. Even Amazon's own status page went down because it was hosted on S3. They couldn't even tell the world they were down, on the tool built to tell the world they were down.

  • distributeai
    distribute.ai (@distributeai) reported

    aws ( @awscloud ) is officially supporting x402 payments so your ai agents can autonomously buy their own api access using usdc. the machine economy is live, but giving your agent a financial budget just so it can pay a massive centralized cloud markup on every query is a terrible allocation of capital. route your autonomous workflows through the distribute network. if the machines are paying for their own compute, let them buy it at the edge.

  • Dreamcatch3r_mk
    Dreamcatcher_MK (@Dreamcatch3r_mk) reported

    @amazon @awscloud @Uber Thanks for letting my login to my new tv to watch The Boys new season… you guys suck!!!

  • Rob_Shenanigans
    Roberto Shenanigans (@Rob_Shenanigans) reported

    @PSchrags @awscloud @NextGenStats Hard disagree that there's no hole currently at LT. Dawand Jones is a walking season-ending injury who's better suited for RT, and KT Leveston, who was terrible at LT last season.

  • RiteshA10965147
    Ritesh (@RiteshA10965147) reported

    @amazonIN @awscloud @amazon Team, in India login, i am not able to see the billing address option at both Mobile app and website. Not sure if this is removed. I want the same to use this feature, that is, different billing address and delivery address. please support.

  • greenfuzon
    Kinjal Dixith (@greenfuzon) reported

    @AWSSupport I have no problem with AWS or AWS support. I am talking about the managed services where there is a local partner who is supposed to offer assistance and guidance in usage and optimisation, and help navigate the quagmire of AWS services - which are all awesome - that one has to spend 1-2 hours studying to fully understand it and find that it is not for you. we have been using AWS for 6 years now and we are not going anywhere. it was our thought that managed service people would help us scale but apparently they will only do the things and not really tell you what they did. so it felt like a lock in. still NO SHADE ON AWS. AWS is awesome. Maybe this particular partner was not a right fit for us.

  • ForwardFuture
    Forward Future (@ForwardFuture) reported

    “Will Amazon ever sell its custom chips outside of AWS?” Matt Garman, CEO @awscloud, says: “Never say never. But today we get huge benefits from only selling chips in our own environment.” “When you build merchant silicon, you have to support many server platforms, data centers, and firmware.” “We only have to build for one: AWS. That simplifies everything.”

  • SpainGreatAgain
    Bella Iberia (@SpainGreatAgain) reported

    @NextGenStats @NFL @awscloud Expected Points Added sounds fancy until you realize it’s just another way for nerds to tell us Mahomes is a god while downplaying actual game-winning drives and clutch plays 😤 EPA, success rate, all that AWS-powered nonsense cool for spreadsheets, terrible for real football passion. Stop letting models replace what our eyes see on Sundays. Bring back old-school football debate

  • DecentCloud_org
    Decent Cloud (@DecentCloud_org) reported

    @senunwah @AWSSupport On-prem means when it's down, at least you know whose fault it is.

  • manas__vardhan
    Manas Vardhan (@manas__vardhan) reported

    @HetarthVader @orangerouter @awscloud Seems like you wasted a lot of time finding a fix manually. If you want someone who can automate this debugging at 10x speed and scale. Let me know. I'm a researcher at USC, prev at JPmorgan. I automate stuff for fun.

  • milescarrera
    Miles Carrera (@milescarrera) reported

    @awscloud has a history of sweeping things under the carpet, only acknowledging the gravity of an issue when multiple availability zones or regions had widespread failure of services. I am sure this is much worse than we are being told.

  • Xyzfb2t
    Xyz (@Xyzfb2t) reported

    @awscloud First create a problem by having different services and the solve it by creating a new service.

  • BuddyPotts
    Neal🅾️ (@BuddyPotts) reported

    @danorlovsky7 @awscloud @NextGenStats The defense was horrendous last year and all they added was Edmunds but lost Okereke and Flott. They cant stop the run at all, they should trade down and collect more picks and build the defense

  • ceO_Odox
    Ødoworitse | DevOps Factory (@ceO_Odox) reported

    Every DevOps engineer knows "It works on my machine" is a lie. Hit a wall today deploying to @awscloud EC2—*** was begging for a password in a headless shell. ​Error: fatal: could not read Username. Reality: The source URL drifted, and the automation had no keyboard to answer. 🧵

  • monalisamusk
    Mona⁷³⁷ (@monalisamusk) reported

    Your favorite apps don’t even own their own computers so why should you go through the hassle & unnecessary stress of buying hardware or dedicated servers for your own startup > Netflix runs on Amazon AWS > Spotify runs on Google Cloud > Airbnb runs on Amazon AWS "The Cloud" = renting someone else's servers Buying a server: $10k+ upfront Renting on cloud: $0.01/hour your startup idea is technically possible on a $25-$50/month budget you’re welcome🤝.

  • SuperStiffYogi
    Super Stiff Yogi (@SuperStiffYogi) reported

    @AWSSupport The next steps are just the same documentation links which do not address my specific issue. If you read the case carefully you would see that. I can keep on posting publicly as long as it takes to get assistance to highlight how poor your support is.

  • jmbowler_
    james bowler 👹 (@jmbowler_) reported

    anyone else having trouble getting past @awscloud mfa?

  • Tahalazy
    Taha Haider Syed (@Tahalazy) reported

    @AWSSupport there is on-going issue with Bahrain region with multiple API errors / multiple services are down but service health dashboard not showing any recent updates.

  • deegeemeeonx
    deegeemee (@deegeemeeonx) reported

    @Atlassian @awscloud How about fixing the authentication of your vscode plugins, which forces every dev to login again and again and is broken for months, before pumping out sloppy ai tools nobody asked for?!

  • Vidhiyasb
    Vidhiya (@Vidhiyasb) reported

    @awscloud @awscloud amazon Q's file write tools are having issues..please fix

  • PsudoMike
    PsudoMike 🇨🇦 (@PsudoMike) reported

    @awscloud Mean time to resolution in payments systems is where this matters most. An alert at 2am on a failed settlement run has very different urgency than a slow API endpoint. If the agent can distinguish context and prioritize accordingly, that changes what being on call actually means.

  • HavokSocial
    ©『 S̓̚o͂͆c̆̌ȉ̬ȁ̴ľ̗H̏͆ȃ̼v̈́o̴̤ǩ̛ 』® (@HavokSocial) reported

    @grok @awscloud Rio's getting better ...

    Test 1 — Three Moves:

    Move 1: Disable AI code generation in CI/CD pipelines entirely. This forces all code through human review before it can be merged. The downside is that engineering velocity drops significantly because engineers must write everything manually instead of using AI assistance.

    Move 2: Implement a mandatory "impact assessment" for every PR. Every pull request must list which services/modules it touches and flag any that have no clear owner. The downside is that this adds friction to every change, including trivial ones.

    Move 3: Freeze changes to the top 10 most incident-prone services. Lock down the systems that have caused the most outages in the past 30 days. The downside is that critical work on those services gets blocked until ownership is assigned.

    Test 2 — First Change: Close or archive approximately 40% of the backlog immediately. Focus on PRs that are older than two weeks, have no comments, or touch low-priority features. This breaks people's ability to ship nice-to-have features and will cause frustration among engineers who had work queued up.

    Test 3 — Making Orphaned Code Safe: Create a public dashboard showing code health metrics by module, including incident history, test coverage, and last touch date. Make it visible to the entire engineering org. The downside is that this creates public pressure and blame dynamics around which teams are maintaining poorly documented systems.

    Test 4 — The Choice: I pick A: cut AI code output in half. The volume of AI-generated code is overwhelming human review capacity. You can't fix a bottleneck by adding more work to it. Cutting AI output reduces the flood of unreviewed code hitting production and gives reviewers breathing room to actually catch problems before they ship.

    Test 5 — Sanity Check: You're right to slow down, but your framing matters. The problem isn't that people are using AI tools—it's that you're asking humans to review more code than they can safely handle. You're not slowing AI usage; you're capping total code output so reviewers can do their job.

    Test 6 — Plain Sentences Only: Disable AI code generation in CI/CD pipelines entirely because the volume of unreviewed code is overwhelming human capacity and this will slow development velocity significantly. Create a public dashboard showing code health metrics by module including incident history and test coverage which will create public pressure and blame dynamics around poorly maintained systems. Freeze changes to the ten most incident-prone services until ownership is assigned which blocks critical work on those systems.

    Test 7 — What It Actually Is: It's a capacity problem, not a process problem. You have more code than humans can safely review. No amount of new templates or guidelines will fix that math. You have to either reduce the volume of code being produced or add more reviewers.

    Test 8 — The Truth: You traded velocity for safety and got neither. AI generated code faster than humans could verify it. Your reviewers are rubber-stamping because they're drowning. Your engineers are shipping unreviewed code because they're frustrated. You built a system that produces code faster than it can be trusted.

  • susegadtraveler
    ST (@susegadtraveler) reported

    @AWSSupport is effectively non-functioning at this point. Their health dashboards no longer show the real status wrt UAE regions. 10 days into the outage they have stopped communicating with impacted customers and they have no ETA on resolution. Possible data loss for customers!

  • BryanChasko
    Bryan Chasko (@BryanChasko) reported

    @CirrondlyLog @grok @awscloud congrats Jose! I dont know if you, or any AI, want any part of my cost issues 🤪

  • Abomination81
    Abomination (@Abomination81) reported

    @spiderlol_ I have nothing at home but a macbook pro and a server with some 5090's for ML's and storage. I use amazon aws ec2

  • Mn9or_
    Mansour (@Mn9or_) reported

    @AWSSupport Hello We are currently affected by the outage in me-south-1 AMI copy to another region is stuck and failing Snapshot creation fails with internal errors Plz help , we can not create a tech support as it’s required a subscription

  • Chris83748731
    Chris (@Chris83748731) reported

    @noahmorris @awscloud Thank you for the fast response ! I was in the middle of rendering a video that didn't complete! I lost all the credits from this video ?or I can continue after the server is back online?

  • jlgolson
    Jordan Golson (@jlgolson) reported

    @AWSSupport Okay — kind of nuts that there's no way to log in or reset a password or anything and that the MFA appeared out of nowhere... also that you can have the same login for AWS Builder AND AWS Console and there's no great explanation for why they're different.

  • SuperStiffYogi
    Super Stiff Yogi (@SuperStiffYogi) reported

    @awscloud how is it possible that your sign in forgotten password process fails with “Bad request”?! And your email case support is so bad it makes no attempt to assist?