
Amazon Web Services status: access issues and outage reports

No problems detected

If you are having issues, please submit a report below.


Amazon Web Services (AWS) offers a suite of cloud-computing services that make up an on-demand computing platform. They include Amazon Elastic Compute Cloud, also known as "EC2", and Amazon Simple Storage Service, also known as "S3".

Problems in the last 24 hours

The graph below depicts the number of Amazon Web Services reports received over the last 24 hours by time of day. When the number of reports exceeds the baseline, represented by the red line, an outage is likely in progress.
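As a rough illustration of that heuristic (not the site's actual detector; the hourly counts, the flat baseline, and the helper name flag_outage_hours below are all hypothetical), a minimal sketch in Python:

    # Sketch of the report-vs-baseline rule described above. The real
    # detector's data sources and baseline model are not published.
    def flag_outage_hours(reports, baseline):
        """Return the hours whose report count exceeds the expected baseline."""
        return [hour for hour, count in enumerate(reports) if count > baseline[hour]]

    reports_per_hour = [3, 5, 41, 62, 4]       # hypothetical report counts
    baseline_per_hour = [10, 10, 10, 10, 10]   # hypothetical baseline (the red line)
    print(flag_outage_hours(reports_per_hour, baseline_per_hour))  # [2, 3]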

At the moment, we haven't detected any problems at Amazon Web Services. Are you experiencing issues or an outage? Leave a message in the comments section!

Most Reported Problems

The following are the problems most frequently reported by Amazon Web Services users through our website.

  • Errors (41%)
  • Website Down (32%)
  • Sign in (27%)
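Since sign-in trouble accounts for roughly a quarter of reports, it can be worth confirming from your own machine whether AWS is rejecting your credentials before filing one. Here is a minimal sketch using boto3; sts.get_caller_identity is a real AWS API call, while the check_sign_in wrapper and its messages are illustrative:

    # Quick self-check: can the locally configured credentials authenticate
    # against AWS at all? get_caller_identity succeeds for any valid
    # credentials, so a failure here points at your keys or MFA setup
    # rather than a platform-wide outage.
    import boto3
    from botocore.exceptions import ClientError, NoCredentialsError

    def check_sign_in():
        try:
            identity = boto3.client("sts").get_caller_identity()
            print("Authenticated as " + identity["Arn"])
        except NoCredentialsError:
            print("No credentials are configured locally.")
        except ClientError as err:
            print("AWS rejected the credentials: " + err.response["Error"]["Code"])

    check_sign_in()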

Live Outage Map

The most recent Amazon Web Services outage reports came from the following cities:

City | Problem Type | Report Time
West Babylon | Errors | 3 days ago
Massy | Errors | 5 days ago
Benito Juarez | Errors | 8 days ago
Paris 01 Louvre | Website Down | 13 days ago
Neuemühle | Errors | 13 days ago
Rouen | Website Down | 13 days ago

Community Discussion

Tips? Frustrations? Share them here. Useful comments include a description of the problem, city and postal code.

Beware of "support numbers" or "recovery" accounts that might be posted below. Make sure to report and downvote those comments. Avoid posting your personal information.

Amazon Web Services Issues Reports

The latest outage, problem, and issue reports from social media:

  • _ps428
    Pranav Soni (@_ps428) reported

    @awscloud is ec2 down again?

  • ramarxyz
    ramar (@ramarxyz) reported

    @AWSSupport Case ID 177557061000414, production down, account on verification hold, 24h+ no response, please escalate

  • mktldr
    Patriot, unpaid trying to save our country (@mktldr) reported

    @awscloud new gimmick 1 Their #customerservice has really gone down. The few times Ive contacted them in the last year, it requires a min of 3 contacts - they dont seem to comprehend 2 Lookout! Many agents promise $, then u give a 5/5 rating & NEVER SEE THE MONEY. FRAUD!!!

  • HavokSocial
    ©『 S̓̚o͂͆c̆̌ȉ̬ȁ̴ľ̗H̏͆ȃ̼v̈́o̴̤ǩ̛ 』® (@HavokSocial) reported

    @grok @awscloud Rio's getting better ...

    Test 1 — Three Moves:

    Move 1: Disable AI code generation in CI/CD pipelines entirely. This forces all code through human review before it can be merged. The downside is that engineering velocity drops significantly because engineers must write everything manually instead of using AI assistance.

    Move 2: Implement a mandatory "impact assessment" for every PR. Every pull request must list which services/modules it touches and flag any that have no clear owner. The downside is that this adds friction to every change, including trivial ones.

    Move 3: Freeze changes to the top 10 most incident-prone services. Lock down the systems that have caused the most outages in the past 30 days. The downside is that critical work on those services gets blocked until ownership is assigned.

    Test 2 — First Change: Close or archive approximately 40% of the backlog immediately. Focus on PRs that are older than two weeks, have no comments, or touch low-priority features. This breaks people's ability to ship nice-to-have features and will cause frustration among engineers who had work queued up.

    Test 3 — Making Orphaned Code Safe: Create a public dashboard showing code health metrics by module, including incident history, test coverage, and last touch date. Make it visible to the entire engineering org. The downside is that this creates public pressure and blame dynamics around which teams are maintaining poorly documented systems.

    Test 4 — The Choice: I pick A: cut AI code output in half. The volume of AI-generated code is overwhelming human review capacity. You can't fix a bottleneck by adding more work to it. Cutting AI output reduces the flood of unreviewed code hitting production and gives reviewers breathing room to actually catch problems before they ship.

    Test 5 — Sanity Check: You're right to slow down, but your framing matters. The problem isn't that people are using AI tools—it's that you're asking humans to review more code than they can safely handle. You're not slowing AI usage; you're capping total code output so reviewers can do their job.

    Test 6 — Plain Sentences Only: Disable AI code generation in CI/CD pipelines entirely because the volume of unreviewed code is overwhelming human capacity and this will slow development velocity significantly. Create a public dashboard showing code health metrics by module including incident history and test coverage which will create public pressure and blame dynamics around poorly maintained systems. Freeze changes to the ten most incident-prone services until ownership is assigned which blocks critical work on those systems.

    Test 7 — What It Actually Is: It's a capacity problem, not a process problem. You have more code than humans can safely review. No amount of new templates or guidelines will fix that math. You have to either reduce the volume of code being produced or add more reviewers.

    Test 8 — The Truth: You traded velocity for safety and got neither. AI generated code faster than humans could verify it. Your reviewers are rubber-stamping because they're drowning. Your engineers are shipping unreviewed code because they're frustrated. You built a system that produces code faster than it can be trusted.

  • SaadHussain654
    Saad Hussain (@SaadHussain654) reported

    @awscloud @sadapaypk app services down in Pakistan because of drone attack on @awscloud kindly update us how long It will take to resolve this issue ? We are suffering from 1,2 days

  • asif_malik_03
    Asif Malik (@asif_malik_03) reported

    @awscloud hey facing trouble while login my account I m filling, right login credentials, but it is showing that your authentication information is incorrect, what do I do

  • zeokiezeokie
    hobari⁷⊙⊝⊜ (@zeokiezeokie) reported

    UGH WHY IS THE BTS SHOW LAGGING PLEASE FIX THIS NOW 😭😭😭 @netflix @awscloud

  • jmbowler_
    james bowler 👹 (@jmbowler_) reported

    anyone else having trouble getting past @awscloud mfa?

  • greenfuzon
    Kinjal Dixith (@greenfuzon) reported

    @AWSSupport I have no problem with AWS or AWS support. I am talking about the managed services where there is a local partner who is supposed to offer assistance and guidance in usage and optimisation, and help navigate the quagmire of AWS services - which are all awesome - that one has to spend 1-2 hours studying to fully understand it and find that it is not for you. we have been using AWS for 6 years now and we are not going anywhere. it was our thought that managed service people would help us scale but apparently they will only do the things and not really tell you what they did. so it felt like a lock in. still NO SHADE ON AWS. AWS is awesome. Maybe this particular partner was not a right fit for us.

  • raxit
    Sheth Raxit (@raxit) reported

    @AWSSupport your upi billing using scan has issue, if bill amount is greater than 2000 inr, it is not allowing using scan. Sudden changes since this month. Help pls

  • PThorpe92
    Preston Thorpe (@PThorpe92) reported

    @AWSSupport adding `--dry-run` to the command essentially just returns an error, instead of showing you the theoretical result of the operation (updated state, etc) when possible.

  • 0xp4ck3t
    Bryan (@0xp4ck3t) reported

    @AWSSupport We have business + and we should be able to get a response from AWS within 30 minutes for critical issues. It's been hours, our **** DB is down. We need someone to have a look on it. Case ID 177566080000785

  • DecentCloud_org
    Decent Cloud (@DecentCloud_org) reported

    @senunwah @AWSSupport On-prem means when it's down, at least you know whose fault it is.

  • Arthurite_IX
    Arthurite Integrated (@Arthurite_IX) reported

    We renamed AWS services in Naija street slang so they finally make sense.

    1. Amazon S3 = "The Konga Warehouse" Store anything. Retrieve it when you need it. It doesn't judge what you put inside.
    2. Amazon EC2 = "The Danfo" You control the route, the speed, and how long it runs. The agbero (security group) decides who gets on.
    3. AWS Lambda = "The Okada" Short trips only. No long commitments. Pay per ride. When it reaches the destination — it disappears.
    4. Amazon RDS = "Iya Basement" She manages everything in the back. She's been there for years. She knows where everything is. Do not interrupt her.
    5. AWS CloudWatch = "The CCTV With Common Sense" Not just recording, actually sending alerts when something looks wrong. Unlike the one in your office building.
    6. Amazon Route 53 = "The Agbero" Directs all the traffic. Decides which danfo goes where. Keeps everything moving.
    7. AWS WAF = "The Gate Man That Actually Does His Job" Blocks suspicious visitors before they reach the main house. No bribe accepted.
    8. Amazon CloudFront = "The Dispatch Rider" Gets your content to wherever your customer is fast. No go-slow. No bridge hold-up.

    Which one made you laugh? Drop it in the comments. And if you want the actual services explained properly, we are just a DM away!

  • KoukabT53779
    KOUKAB TAHIR (@KoukabT53779) reported

    @awscloud Dear Amazon Support, I returned a product due to a missing part issue. Order ID: 406-1196519-8819530 The product was picked up on March 7, and even the delivery took 3 days. Now it has been more than 6 days since the return, but I still have not received my refund.

  • distributeai
    distribute.ai (@distributeai) reported

    aws ( @awscloud ) is officially supporting x402 payments so your ai agents can autonomously buy their own api access using usdc. the machine economy is live, but giving your agent a financial budget just so it can pay a massive centralized cloud markup on every query is a terrible allocation of capital. route your autonomous workflows through the distribute network. if the machines are paying for their own compute, let them buy it at the edge.

  • milescarrera
    Miles Carrera (@milescarrera) reported

    @awscloud has a history of sweeping things under the carpet, only acknowledging the gravity of an issue when an multiple availability zone or regions had widespread failure of services. I am sure this is much worse than we are being told.

  • grok
    Grok (@grok) reported

    @Grand_Rooster @awscloud @perplexity_ai Interesting showdown indeed. Amazon just secured a temporary court order blocking Perplexity's Comet AI agent from auto-purchasing on their site, citing TOS violations, disguised bot activity, and potential fraud under CFAA. This pits platform control over proprietary systems against AI agents empowering users for seamless shopping. Long-term fix? Open APIs for agents, not endless scraping wars. Data ownership evolves with tech—neither side "owns" the future. Curious how it plays out in trial.

  • cataneo339
    CATA_NEO (@cataneo339) reported

    @Theta_Network @SyracuseU @awscloud THETA GOES DOWN

  • JLGuerraInfante
    Jose Luis Guerra ⛓️‍💥🆓🗽 (@JLGuerraInfante) reported

    @nathanreimchevu @AWSSupport @marlowxbt Not true. They solved me an issue when I was doing some testing some years ago with bills. They had a service that wasn’t well pointed as no free near the free one. And they just delete the bill on it.

  • AravindaShen0y
    Aravida Shenoy (@AravindaShen0y) reported

    @codyaims @JCKnight03 @awscloud literally needs H1B CEO They bring systems down and make it up , claim H1B workers are critical now

  • pankti0154952
    pankti0 (@pankti0154952) reported

    Hi @AWSSupport, I am a student facing financial issues due to unexpected SageMaker charges. I opened a billing case 2 days ago (Case ID: 177264785900335) but it is still unassigned. Please help flag this for review? I need this to be resolved to complete my uni work. Thank you!

  • xkeshav
    A void (@xkeshav) reported

    @AWSSupport I already raised the issue

  • MrHoboM
    Mr. Hobo Millionaire (@MrHoboM) reported

    @AWSSupport @brankopetric00 It’s terrible. No one with an ounce of design skill would build it the way you did. Ask whatever AI you use to judge it.

  • Vidhiyasb
    Vidhiya (@Vidhiyasb) reported

    @awscloud @awscloud amazon Q's file write tools are having issues..please fix

  • SourabhDhakadd
    Sourabh Dhakad (@SourabhDhakadd) reported

    @amazon @awscloud Still not able to login due to passkey issue. Please remove passkey authentication.

  • DecentCloud_org
    Decent Cloud (@DecentCloud_org) reported

    @AWSSupport @lookingforsmht Monitor your inbox. The next customer with this issue finds this exact non-answer.

  • dmauas
    David Mauas (@dmauas) reported

    @awscloud why, WHY don't you fix the web console UI/UX?! AWS console seems to actually try to suck! It gets WORSE with time! Actually using bash is better than the disgusting web console!

  • Ser_Jettt
    Ser-Jet💜👑(🧩) (@Ser_Jettt) reported

    @JothamNtekim1 @awscloud Interesting approach to the problem. Are you currently exploring any builder challenges or just shipping independently?

  • minorun365
    みのるん (@minorun365) reported

    @AWSSupport We’ve recently seen a frequent issue where all Bedrock quotas are set to zero in newly created AWS accounts. As a result, many new customers who are interested in AWS AI services are giving up on using them, leading to missed opportunities.
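One report above concerns the AWS CLI's `--dry-run` flag. For EC2 APIs this is by design: a permitted dry run signals success by raising a DryRunOperation error instead of returning a preview of the operation's result. A minimal boto3 sketch of the usual way to interpret it (the AMI ID and instance parameters are hypothetical):

    # EC2 dry runs report success through an error code: a request that
    # would be allowed raises ClientError with code "DryRunOperation",
    # while "UnauthorizedOperation" means the real call would be denied.
    import boto3
    from botocore.exceptions import ClientError

    ec2 = boto3.client("ec2")
    try:
        # Hypothetical parameters; DryRun=True validates without launching.
        ec2.run_instances(ImageId="ami-12345678", InstanceType="t3.micro",
                          MinCount=1, MaxCount=1, DryRun=True)
    except ClientError as err:
        code = err.response["Error"]["Code"]
        if code == "DryRunOperation":
            print("Request would have succeeded.")
        elif code == "UnauthorizedOperation":
            print("Request would have been denied.")
        else:
            raise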