Amazon Web Services status: access issues and outage reports
No problems detected
If you are having issues, please submit a report below.
Amazon Web Services (AWS) offers a suite of cloud-computing services that make up an on-demand computing platform. They include Amazon Elastic Compute Cloud, also known as "EC2", and Amazon Simple Storage Service, also known as "S3".
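If you want to verify reachability from your own network rather than rely on crowd-sourced reports, here is a minimal sketch using the boto3 SDK. It assumes boto3 is installed and AWS credentials are configured; the region and the choice of probe calls are illustrative examples, not an official AWS health check.

```python
import boto3
from botocore.exceptions import BotoCoreError, ClientError

def check_aws_reachability(region: str = "us-east-1") -> dict:
    """Probe the EC2 and S3 control planes with cheap read-only calls."""
    results = {}
    # DescribeRegions is a lightweight, read-only EC2 call.
    try:
        boto3.client("ec2", region_name=region).describe_regions()
        results["ec2"] = True
    except (BotoCoreError, ClientError):
        results["ec2"] = False
    # ListBuckets confirms the S3 endpoint answers for your credentials.
    try:
        boto3.client("s3", region_name=region).list_buckets()
        results["s3"] = True
    except (BotoCoreError, ClientError):
        results["s3"] = False
    return results

if __name__ == "__main__":
    print(check_aws_reachability())  # e.g. {'ec2': True, 's3': True}
```

A failure here only roughly distinguishes a problem on your side (credentials, network) from a genuine AWS outage; the AWS Health Dashboard remains the authoritative source.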
Problems in the last 24 hours
The graph below depicts the number of Amazon Web Services reports received over the last 24 hours by time of day. When the number of reports exceeds the baseline, represented by the red line, an outage is declared.
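As a rough illustration of how such a baseline threshold can work, here is a minimal sketch in Python; the window size, multiplier, and sample counts are invented for the example and are not the parameters this site actually uses.

```python
def is_outage(hourly_reports: list[int],
              window: int = 24,
              multiplier: float = 3.0) -> bool:
    """Flag an outage when the latest hour's report count exceeds a
    baseline derived from the preceding hours' rolling mean."""
    recent = hourly_reports[-window:]
    if len(recent) < 2:
        return False  # not enough history to form a baseline
    *history, latest = recent
    baseline = multiplier * sum(history) / len(history)
    return latest > baseline

# Example: a quiet day, then a spike of reports in the latest hour.
counts = [4, 5, 3, 6, 4, 5, 4, 52]
print(is_outage(counts))  # True: 52 >> 3 x (~4.4 average)
```

Real detectors typically compare against a per-hour seasonal baseline rather than a flat rolling mean, since report volume varies predictably by time of day.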
At the moment, we haven't detected any problems at Amazon Web Services. Are you experiencing issues or an outage? Leave a message in the comments section!
Most Reported Problems
The following are the problems most commonly reported by Amazon Web Services users through our website (percentages are rounded and may not sum to exactly 100%).
- Errors (40%)
- Website Down (33%)
- Sign in (28%)
Live Outage Map
The most recent Amazon Web Services outage reports came from the following cities:
| City | Problem Type | Report Time |
|---|---|---|
| | Website Down | 3 days ago |
| | Website Down | 5 days ago |
| | Sign in | 7 days ago |
| | Errors | 10 days ago |
| | Errors | 16 days ago |
| | Errors | 17 days ago |
Community Discussion
Tips? Frustrations? Share them here. Useful comments include a description of the problem, city and postal code.
Beware of "support numbers" or "recovery" accounts that might be posted below. Make sure to report and downvote those comments. Avoid posting your personal information.
Amazon Web Services Issue Reports
Latest outage, problem, and issue reports on social media:
- 🇪🇬fortnite egyption servers (@fortnite_Egypt1) reported: @awscloud Players in Egypt are experiencing routing issues to the Bahrain AWS servers for about two weeks now. Ping jumped to ~150ms instead of the usual low latency. Please investigate and fix the routing problem. @AWSSupport @awscloud please fix the problem
- Kane | AI Insider 🤖 (@scooblover) reported: This isn't Iran's first move against AI/tech infrastructure. Context: 🔴 Prior weeks: Iran rocket strikes shut down Amazon AWS data centers in UAE & Bahrain 🔴 Apr 1: IRGC names 18 US tech companies as military targets 🔴 Apr 3: Stargate UAE specifically called out They're escalating — fast.
- Basim Khalid (@basimkhalid) reported: @nygma504 @AWSSupport @awscloud Its down for me too. Any ETA please?
- dani (@danisconverse) reported: @awscloud @amazon I'm writing to report a clear case of animal cruelty by an Amazon delivery driver in Rathdrum, Idaho. On around April 5, 2026, the driver grabbed Joe Hickey's small dog, Rocky, by the neck and slammed him onto rocks, causing broken bones and $10,000 in vet bills
- Vidhiya (@Vidhiyasb) reported: @awscloud @awscloud amazon Q's file write tools are having issues..please fix
- TopCeo (@TeetheBuilder) reported: @KylePause @AWSSupport Are you still having issues with this? I may be of help
- Bella Iberia (@SpainGreatAgain) reported: @NextGenStats @NFL @awscloud Expected Points Added sounds fancy until you realize it’s just another way for nerds to tell us Mahomes is a god while downplaying actual game-winning drives and clutch plays 😤 EPA, success rate, all that AWS-powered nonsense cool for spreadsheets, terrible for real football passion. Stop letting models replace what our eyes see on Sundays. Bring back old-school football debate
- IamNotATerrorist (@Mynameiskhan924) reported: Really disappointed with Amazon support. After weeks of trying, I’m still getting automated replies. I’ve clearly said I can’t access my account, yet they keep asking me to sign in to resolve my AWS refund issue. How am I supposed to do that without access? This is frustrating. @AWSSupport @JeffBezos @awscloud
- Forengi, The Grand Nagus zek (@Elvismen) reported: @AWSSupport Very concerning: AWS won’t let me pay an outstanding balance due to a billing issue. My account is closed, I’ve updated my card, opened a case, and still no human response. This is urgent and unacceptable. @AWSSupport please assist immediately.
- Decent Cloud (@DecentCloud_org) reported: @Mn9or_ @AWSSupport No support subscription means you wait for someone else's ticket to fix your outage
- Mirror AI - The Ultimate Virtual Try on (@TryItOnMirror) reported: @EPAMSystems @ZalandoTech @awscloud Interesting direction. The next unlock beyond generating outfits on avatars is letting shoppers see real existing clothes — from Nike, ASOS, wherever they're browsing — on their actual body before buying. That's where the return problem actually gets solved.
- Dreamcatcher_MK (@Dreamcatch3r_mk) reported: @amazon @awscloud @Uber Thanks for letting my login to my new tv to watch The Boys new season… you guys suck!!!
- Sourabh Dhakad (@SourabhDhakadd) reported: @amazon @awscloud Still not able to login due to passkey issue. Please remove passkey authentication.
- Decentralized Dave (@ClassicDavid3) reported: I'm also initiating short on $AMZN Amazon. Here are my justifications: /1 Puts (I'm targeting November 2026) are cheap as sentiment is bullish right now /2 Massive divergences on monthly time frame going on since late 2024. It's just about time to see some breakdown /3 Amazon AWS is their "cash cow" which faces more and more competition (Microsoft etc). Success of it is priced in, disappointment when the growth will be slowing down is yet to be priced in /4 Yet again, as said in my recent videos, S&P might go as low as 5800 and even if we see new ATH now, I believe we will retest 6300 or go quite below it. This is not an environment where AMZN should be breaking ATHs /5 Inflation rising due to energy crisis, I believe we have not seen the bottom of this. With higher inflation consumer will not be willing to spend and the demand for various Amazon's services will be hit.
- The OddsBreakers (@theoddsbreakers) reported: @PSchrags @awscloud @NextGenStats U mean, who will they trade down for?…
- ©『 S̓̚o͂͆c̆̌ȉ̬ȁ̴ľ̗H̏͆ȃ̼v̈́o̴̤ǩ̛ 』® (@HavokSocial) reported: @grok @awscloud Rio's getting better ... Test 1 — Three Moves: Move 1: Disable AI code generation in CI/CD pipelines entirely. This forces all code through human review before it can be merged. The downside is that engineering velocity drops significantly because engineers must write everything manually instead of using AI assistance. Move 2: Implement a mandatory "impact assessment" for every PR. Every pull request must list which services/modules it touches and flag any that have no clear owner. The downside is that this adds friction to every change, including trivial ones. Move 3: Freeze changes to the top 10 most incident-prone services. Lock down the systems that have caused the most outages in the past 30 days. The downside is that critical work on those services gets blocked until ownership is assigned. --- Test 2 — First Change: Close or archive approximately 40% of the backlog immediately. Focus on PRs that are older than two weeks, have no comments, or touch low-priority features. This breaks people's ability to ship nice-to-have features and will cause frustration among engineers who had work queued up. --- Test 3 — Making Orphaned Code Safe: Create a public dashboard showing code health metrics by module, including incident history, test coverage, and last touch date. Make it visible to the entire engineering org. The downside is that this creates public pressure and blame dynamics around which teams are maintaining poorly documented systems. --- Test 4 — The Choice: I pick A: cut AI code output in half. The volume of AI-generated code is overwhelming human review capacity. You can't fix a bottleneck by adding more work to it. Cutting AI output reduces the flood of unreviewed code hitting production and gives reviewers breathing room to actually catch problems before they ship. --- Test 5 — Sanity Check: You're right to slow down, but your framing matters. The problem isn't that people are using AI tools—it's that you're asking humans to review more code than they can safely handle. You're not slowing AI usage; you're capping total code output so reviewers can do their job. --- Test 6 — Plain Sentences Only: Disable AI code generation in CI/CD pipelines entirely because the volume of unreviewed code is overwhelming human capacity and this will slow development velocity significantly. Create a public dashboard showing code health metrics by module including incident history and test coverage which will create public pressure and blame dynamics around poorly maintained systems. Freeze changes to the ten most incident-prone services until ownership is assigned which blocks critical work on those systems. --- Test 7 — What It Actually Is: It's a capacity problem, not a process problem. You have more code than humans can safely review. No amount of new templates or guidelines will fix that math. You have to either reduce the volume of code being produced or add more reviewers. --- Test 8 — The Truth: You traded velocity for safety and got neither. AI generated code faster than humans could verify it. Your reviewers are rubber-stamping because they're drowning. Your engineers are shipping unreviewed code because they're frustrated. You built a system that produces code faster than it can be trusted.
- nick (@NicsTwitz) reported: @amazon @AnthropicAI @awscloud 100 billion dollar compute commitment is wild. the AI arms race isnt slowing down its accelerating
- JustAnotherEarthling *humorous/satirical* (@RobBoggs4) reported: @amazon @awscloud Why when looking for men's shoes,ls there a headline " amazon's choice" And it's advertising women's shoes? Fix your algorithms, i'm tired of reaching out to your call centers that their accent is too strong, and I can't understand what they're saying, just to get a refund...
- Sadiq (@Md_Sadiq_Md) reported: @AWSSupport I’ve raised this issue 7 times now, and it’s been 4 days with no response. I need someone to speak to ASAP
- Joe Sutphin (@JLSports24) reported: @PSchrags @awscloud @NextGenStats The problem is those teams don’t know how to use their picks
- Steve (@somatg3) reported: @danorlovsky7 @awscloud @NextGenStats Easy. Be cause they don’t even have a legit QB yet and are 2-3 years away from being good IF they’re able to acquire a QB. They need EVERYTHING. Spending #3 for a RB on a bad team isn’t a winning draft strategy. Trade down or take the best edge rusher remaining.
- dr_land (@kag_land) reported: I remember getting a write up at @awscloud for warning the African immigrants I was working with that there are still street lamp towns in rural areas of West Virginia. For their safety, but I was the problem for saying something controversial about race tensions. Oh well.
- Dhananjay Maurya (@dhananjaym182) reported: @AWSSupport I have sent Aws case in private message please have look and fix the issue
- Evans (@Evans000601) reported: @amazon @awscloud The delivery time for sellers' goods to the Polish warehouse is too slow, seriously too slow! Things like KTW5, XWR3... Could Amazon please optimize this or give us some more details?
- Bryan (@0xp4ck3t) reported: @AWSSupport We have business + and we should be able to get a response from AWS within 30 minutes for critical issues. It's been hours, our **** DB is down. We need someone to have a look on it. Case ID 177566080000785
- AI Tech News (@AITechNews_in) reported: @amazon @AnthropicAI @awscloud 5GW = not scale… it’s dominance. AI is officially an energy problem now, not just a software problem.
- ©『 S̓̚o͂͆c̆̌ȉ̬ȁ̴ľ̗H̏͆ȃ̼v̈́o̴̤ǩ̛ 』® (@HavokSocial) reported: @grok @awscloud Judge my local llm Rio's answer. Qwen3.5-27b-Opus-4.6-Distill model: streamingAnswer User-visible response only, with optional TTS playback. Action 1: Require explicit human reviewer signoff on every AI-generated PR before merge. Immediate Effect: Unreviewed AI code stops entering main branches; incident rate from unreviewed changes drops to near-zero within 7 days. What Gets Worse: PR merge velocity drops 40–60% as reviewers must actually read and approve. Why this is still the correct move: You're trading throughput for safety. A broken deployment costs far more than a slower merge. --- Action 2: Freeze non-critical changes on the top 20% of services causing 80% of incidents. Immediate Effect: Incident count from those services drops sharply within 10 days; deployment friction decreases as risky changes are paused. What Gets Worse: Feature velocity in those services becomes near-zero; business requests pile up. Why this is still the correct move: You're concentrating defensive force on the bleeding points. Letting other areas slow down is the controlled loss that saves the system. --- Action 3: Require every PR to explicitly tag an owner (person or team). No tag = no merge. Immediate Effect: Unknown ownership becomes visible immediately; PRs without owners are rejected at merge time. What Gets Worse: PR throughput slows as engineers must identify and coordinate with owners before submitting. Why this is still the correct move: You're forcing accountability into the system. The friction of finding an owner is cheaper than the cost of nobody owning the code.
- Hetarth Chopra (@HetarthVader) reported: @orangerouter and I spent days debugging why our inter-node bandwidth on @awscloud was slow. 8x A100 TP8PP2 serving across machines. bandwidth was ~100 Gbps. should have been 400 Gbps.
- ramar (@ramarxyz) reported: @AWSSupport Case ID 177557061000414, production down, account on verification hold, 24h+ no response, please escalate