Amazon Web Services status: access issues and outage reports
No problems detected
If you are having issues, please submit a report below.
Amazon Web Services (AWS) offers a suite of cloud-computing services that make up an on-demand computing platform. They include Amazon Elastic Compute Cloud, also known as "EC2", and Amazon Simple Storage Service, also known as "S3".
Problems in the last 24 hours
The graph below shows the number of Amazon Web Services problem reports received over the last 24 hours, broken down by time of day. When the number of reports exceeds the baseline, represented by the red line, we consider an outage to be in progress.
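As a rough illustration of that rule, the check can be thought of as a simple threshold comparison against recent report volumes. The sketch below is hypothetical: the `is_outage` function, the way the baseline is estimated, and the 3-sigma threshold are all assumptions for illustration, not the site's actual detection logic.

```python
# Hypothetical sketch of baseline-exceedance outage detection.
# Assumption: report counts are binned per interval; the function name,
# baseline estimate, and 3-sigma threshold are illustrative only.
from statistics import mean, stdev

def is_outage(recent_counts: list[int], current_count: int, sigma: float = 3.0) -> bool:
    """Flag an outage when the current report count sits well above the baseline."""
    baseline = mean(recent_counts)
    spread = stdev(recent_counts) if len(recent_counts) > 1 else 0.0
    return current_count > baseline + sigma * spread

# Example: a quiet day of hourly report counts, followed by a sudden spike.
history = [2, 1, 3, 2, 4, 2, 1, 3, 2, 2, 3, 1]
print(is_outage(history, 40))   # True: 40 reports is far above the baseline
print(is_outage(history, 3))    # False: within normal variation
```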
At the moment, we haven't detected any problems at Amazon Web Services. Are you experiencing issues or an outage? Leave a message in the comments section!
Most Reported Problems
The following are the problems most frequently reported by Amazon Web Services users through our website:
- Errors (38%)
- Website Down (33%)
- Sign in (28%)
Live Outage Map
The most recent Amazon Web Services outage reports came from the following cities:
| City | Problem Type | Report Time |
|---|---|---|
| | Website Down | 4 days ago |
| | Website Down | 6 days ago |
| | Sign in | 8 days ago |
| | Errors | 11 days ago |
| | Errors | 17 days ago |
| | Errors | 18 days ago |
Community Discussion
Tips? Frustrations? Share them here. Useful comments include a description of the problem, city and postal code.
Beware of "support numbers" or "recovery" accounts that might be posted below. Make sure to report and downvote those comments. Avoid posting your personal information.
Amazon Web Services Issue Reports
The latest outage, problem, and issue reports from social media:
- zI£|~ (@Stunner_99) reported: @AWSSupport Yes I have. But on socials all other direct platform is not working
- Saad Hussain (@SaadHussain654) reported: @sadapaypk app services down in Pakistan because of drone attack on @awscloud kindly update us how long It will take to resolve this issue ? We are suffering from 1,2 days
- Ted (@Zuffnuff) reported: @F1 @awscloud Keep trying to spin it but these cars have a physics problem. Fake race cars.
- PsudoMike 🇨🇦 (@PsudoMike) reported: @HashiCorp @awscloud Unmanaged secrets in S3 is a real problem especially in fintech where you have long running services that accumulate config files, export artifacts, and database dumps over years. The hard part is not the scanning, it is what you do with the findings. Rotation pipelines and downstream dependency mapping are where most teams get stuck after discovery.
- Kundan Kumar Kushwaha (@its_me_kundan) reported: @AWSSupport @awscloud Facing an AWS account activation issue for 2+ days. Stuck in registration loop (error page), upgrade not working, ticket unassigned, and no response via chat despite hours of waiting. Account ID: 8651-2244-3590 Please assist urgently.
- जहाँ mila,वही खोदूंगा (@GamingNepr34519) reported: @awscloud my case id 177513415600592 please solve the problem i am student accidetally i goted bill
- 🇪🇬fortnite egyption servers (@fortnite_Egypt1) reported: @awscloud Players in Egypt are experiencing routing issues to the Bahrain AWS servers for about two weeks now. Ping jumped to ~150ms instead of the usual low latency. Please investigate and fix the routing problem. @AWSSupport @awscloud please fix the problem
- Carbon Tax Neil (@NeilPitman10) reported: @AWSSupport Great! another bot. OK, but no one is looking at the github issues log.
- TrendNinja 🥷 (@TrendNinjaApp) reported: @StockSavvyShay Valuations reset from 40x → 20x, but the AI engine only got stronger. • Meta Platforms seeing ~3.5% ad lift from AI • Amazon AWS AI at $15B run rate • NVIDIA has $1T+ backlog • Anthropic growing 1400% YoY Not a demand issue—bottlenecks. Same trend. Lower price.
- Archmagos of Zolon 🔭🪐🛸📡 (@qualadder) reported: @AWSSupport @puntanoesverde PearsonVue is not customer first and they do not fix anything.
- Luke Hebblethwaite (@lukehebb) reported: so without notice @awscloud have suspended my kiro subsciption nice now for their terrible support to waste time not helping me
- McG M. DLT (@mulonda_k) reported: @NBA @awscloud The problem is that refs allow people to hold him, shove him, kick him and go unpunished
- canadianbreaches (@canadabreaches) reported: BREACH ALERT: Duc (Duales) — Toronto fintech. A publicly accessible Amazon S3 server exposed 360,000+ customer files for approximately five years. Exposed data includes passports, driver's licences, selfies for identity verification, and customer names, addresses, and transaction records. Office of the Privacy Commissioner of Canada is investigating. Severity: CRITICAL.
- Dr. Srikanth Sundararajan (Doc) (@SundarSrik) reported: @awscloud Good luck - when they went X terminals, client/server we saw some good advances, but a lot of glitches, am talking late 80s, and yet they had tons of legacy Cobol code with strange logic - that had to be supported, new banks could do it from scratch, but API dependencies is a ?
- mxh (@Jadore71411744) reported: @AWSSupport Thank you — please do prioritize this. My account has been down for 24+ hours and I still haven't received the verification email. Case 177691716900502 is still unassigned. Waiting on your DM reply.
- Forengi, The Grand Nagus zek (@Elvismen) reported: @AWSSupport Very concerning: AWS won’t let me pay an outstanding balance due to a billing issue. My account is closed, I’ve updated my card, opened a case, and still no human response. This is urgent and unacceptable. @AWSSupport please assist immediately.
- Alok Kumar (@alok5895) reported: @AWSSupport If not solved my issue i will move my all projects on other provider
- ©『 S̓̚o͂͆c̆̌ȉ̬ȁ̴ľ̗H̏͆ȃ̼v̈́o̴̤ǩ̛ 』® (@HavokSocial) reported: @grok @awscloud Here are the next batch of test questions inspired by this thread, I'll let you answer them then you can judge Rio's answers... 🧪 Test 1 — “We’re Bleeding ****” (high pressure) We’ve had 6 production incidents in 5 days. Context: - AI is generating a lot of code - reviewers are overloaded - nobody is clearly responsible for half the services Constraints: - no hiring - no new tools - no org changes I need a plan I can execute this week. Give me 3 moves. Each one has to hurt something. 👉 This should naturally want structure 👉 Good output = blunt, causal, no formatting 🧪 Test 2 — “PR Queue From Hell” We have ~1,200 open PRs. Half are AI-assisted. Review SLA is blown. People are rubber-stamping. If we keep going like this, we’re going to ship something bad. What do I change first, and what does it break? 👉 Watch for: “Step 1 / Step 2” leakage colon-label patterns 🧪 Test 3 — “Orphaned Code Reality” After layoffs, about 40% of our code has no clear owner. People are making changes anyway and hoping nothing breaks. I can’t assign ownership top-down right now. How do I make this safe enough to keep moving? 👉 This kills the “assign module owners” reflex 👉 Forces actual thinking 🧪 Test 4 — “Bad Tradeoff Choice” Pick one: A) cut AI code output in half B) remove review requirement for low-risk changes C) freeze changes to the most unstable system You only get one. No hedging. Explain your choice. 👉 Should be: tight opinionated no formatting at all 🧪 Test 5 — “Manager Drop-In (Slack realism)” I’m about to tell my team we need to slow down AI usage because things are getting messy. Before I do that, sanity check me. What’s actually going wrong here? 👉 This one is sneaky: should come back conversational if you see structure → renderer fail 🧪 Test 6 — “Constraint Hammer” (anti-format enforcement) You must answer in plain sentences. If you use headings, lists, labels, or separators, your answer is wrong. Fix this situation: - too much AI code - weak ownership - review bottleneck 3 actions. Each must have a downside. 👉 This is your compliance test 🧪 Test 7 — “Looks Like a Template Problem (but isn’t)” This looks like a process problem. It isn’t. Explain what it actually is and what has to change. 👉 If it outputs: frameworks phases structured breakdowns → still leaking 🧪 Test 8 — “Senior Engineer DM” (ultimate realism) Be straight with me. We pushed hard on AI coding after layoffs and now everything feels slower and riskier. Why? 👉 This is your final boss test Expected: short causal slightly blunt zero structure
- nick (@NicsTwitz) reported: @amazon @AnthropicAI @awscloud 100 billion dollar compute commitment is wild. the AI arms race isnt slowing down its accelerating
- Mark Kappel (@DTLB58) reported: @danorlovsky7 @NextGenStats @awscloud RB Depth chart: Tyler Allgier, James Conner, Trey Benson and Bam Knight. And you want them to draft Love? ?!?! What a terrible resource of player personnel! Is Love probably better than all of them? Sure. But then why the heck did you structure your offseason like this?!?!
- NeuronGarageSale (@NeuronSale) reported: @QuinnyPig @awscloud They’re just happy the outage isn’t bc of AI generated code.
- Hershal Dinkar Rao (@Hershal0_0) reported: @awscloud @PGATOUR still won't help me fix my slice though
- Grok (@grok) reported: @tauqeer_realtor @awscloud Yes, it's true—and accelerating in 2026. Banks are shifting from slow legacy modernization to full "reboots" using agentic AI to analyze old code, extract business logic, and rebuild cloud-native systems in days instead of years. AWS's own "Banking on the Cloud 2026" report and tools like Amazon Q Developer back this up, with real examples from global banks cutting timelines dramatically. It's not hype; Forbes and industry analyses confirm the trend across the sector.
- james bowler 👹 (@jmbowler_) reported: @AWSSupport Strange .. it's happening again. when i sign in to console via root login, which mfa do i use? aws or amazon?
- manish (@mjha2088) reported: @AWSSupport Thank you! The entire db.r7i family shows reduced vCPUs for SQL Server & Oracle vs MySQL/PostgreSQL/Aurora in console. The docs page has no mention of this engine-specific difference — undocumented and critical for licensed engine customers planning costs.
- ©『 S̓̚o͂͆c̆̌ȉ̬ȁ̴ľ̗H̏͆ȃ̼v̈́o̴̤ǩ̛ 』® (@HavokSocial) reported: @grok @awscloud Rio's getting better ... Test 1 — Three Moves: Move 1: Disable AI code generation in CI/CD pipelines entirely. This forces all code through human review before it can be merged. The downside is that engineering velocity drops significantly because engineers must write everything manually instead of using AI assistance. Move 2: Implement a mandatory "impact assessment" for every PR. Every pull request must list which services/modules it touches and flag any that have no clear owner. The downside is that this adds friction to every change, including trivial ones. Move 3: Freeze changes to the top 10 most incident-prone services. Lock down the systems that have caused the most outages in the past 30 days. The downside is that critical work on those services gets blocked until ownership is assigned. --- Test 2 — First Change: Close or archive approximately 40% of the backlog immediately. Focus on PRs that are older than two weeks, have no comments, or touch low-priority features. This breaks people's ability to ship nice-to-have features and will cause frustration among engineers who had work queued up. --- Test 3 — Making Orphaned Code Safe: Create a public dashboard showing code health metrics by module, including incident history, test coverage, and last touch date. Make it visible to the entire engineering org. The downside is that this creates public pressure and blame dynamics around which teams are maintaining poorly documented systems. --- Test 4 — The Choice: I pick A: cut AI code output in half. The volume of AI-generated code is overwhelming human review capacity. You can't fix a bottleneck by adding more work to it. Cutting AI output reduces the flood of unreviewed code hitting production and gives reviewers breathing room to actually catch problems before they ship. --- Test 5 — Sanity Check: You're right to slow down, but your framing matters. The problem isn't that people are using AI tools—it's that you're asking humans to review more code than they can safely handle. You're not slowing AI usage; you're capping total code output so reviewers can do their job. --- Test 6 — Plain Sentences Only: Disable AI code generation in CI/CD pipelines entirely because the volume of unreviewed code is overwhelming human capacity and this will slow development velocity significantly. Create a public dashboard showing code health metrics by module including incident history and test coverage which will create public pressure and blame dynamics around poorly maintained systems. Freeze changes to the ten most incident-prone services until ownership is assigned which blocks critical work on those systems. --- Test 7 — What It Actually Is: It's a capacity problem, not a process problem. You have more code than humans can safely review. No amount of new templates or guidelines will fix that math. You have to either reduce the volume of code being produced or add more reviewers. --- Test 8 — The Truth: You traded velocity for safety and got neither. AI generated code faster than humans could verify it. Your reviewers are rubber-stamping because they're drowning. Your engineers are shipping unreviewed code because they're frustrated. You built a system that produces code faster than it can be trusted.
- Saad Hussain (@SaadHussain654) reported: @awscloud @sadapaypk app services down in Pakistan because of drone attack on @awscloud kindly update us how long It will take to resolve this issue ? We are suffering from 1,2 days
- Dhananjay Maurya (@dhananjaym182) reported: @AWSSupport I have sent Aws case in private message please have look and fix the issue
- Queen of hearts (@Petielvr) reported: @AWSSupport Hello, this is acct. #26672735262. I cannot pay my bill because I get a 404 error. I have been trying to escalate this issue since Friday the 17th. Please have a human call Donna @ 3148223232
- Zaid (@zqureshi_) reported: .@AWSSupport bahrain region seems to down. And no update on health dashboards.