Amazon Web Services status: access issues and outage reports
No problems detected
If you are having issues, please submit a report below.
Amazon Web Services (AWS) offers a suite of cloud-computing services that make up an on-demand computing platform. They include Amazon Elastic Compute Cloud, also known as "EC2", and Amazon Simple Storage Service, also known as "S3".
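If you want to rule out a local configuration problem before filing a report, one quick check is to probe these services from your own account. Below is a minimal sketch in Python, assuming the boto3 SDK is installed and AWS credentials are configured; the function name probe_aws is illustrative, not an AWS API:

```python
# Minimal sketch: probe S3 and EC2 as a quick "is AWS reachable from here?" check.
# Assumes boto3 is installed and credentials are configured (e.g. via ~/.aws/credentials).
import boto3
from botocore.exceptions import ClientError, EndpointConnectionError

def probe_aws(region="us-east-1"):
    try:
        s3 = boto3.client("s3", region_name=region)
        buckets = s3.list_buckets()["Buckets"]
        print(f"S3 reachable: {len(buckets)} bucket(s) visible")

        ec2 = boto3.client("ec2", region_name=region)
        regions = ec2.describe_regions()["Regions"]
        print(f"EC2 reachable: {len(regions)} region(s) reported")
    except EndpointConnectionError as exc:
        # Could not reach the endpoint at all: local network issue or regional outage.
        print(f"Network/endpoint problem: {exc}")
    except ClientError as exc:
        # The endpoint answered but rejected the call: credentials or service-side issue.
        print(f"API error: {exc}")

if __name__ == "__main__":
    probe_aws()
```

If the probe succeeds while a specific application is failing, the problem is more likely in that application or its region than in AWS as a whole.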
Problems in the last 24 hours
The graph below shows the number of Amazon Web Services problem reports received over the last 24 hours, by time of day. An outage is flagged when the number of reports exceeds the baseline, represented by the red line.
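For readers curious how such a threshold works, here is a minimal illustrative sketch of the logic described above. The function names and the flat baseline of 50 reports per hour are hypothetical placeholders, not this site's actual detection code:

```python
# Sketch of the outage heuristic described above: flag an outage whenever the
# current report count exceeds the typical baseline for that time of day.
# Baseline values below are hypothetical placeholders.
from typing import Sequence

def is_outage(reports: int, baseline: float) -> bool:
    """Return True when report volume exceeds the baseline (the red line)."""
    return reports > baseline

def flag_outages(counts: Sequence[int], baselines: Sequence[float]) -> list[bool]:
    """Apply the threshold check to each time bucket."""
    return [is_outage(c, b) for c, b in zip(counts, baselines)]

# Example: hourly report counts against a flat baseline of 50 reports/hour.
print(flag_outages([12, 48, 310, 44], [50, 50, 50, 50]))  # [False, False, True, False]
```

In practice the baseline would vary by time of day rather than being flat, so that normal evening report volume is not mistaken for an outage.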
At the moment, we haven't detected any problems at Amazon Web Services. Are you experiencing issues or an outage? Leave a message in the comments section!
Most Reported Problems
The following are the most commonly reported problems by Amazon Web Services users through our website:
- Errors (41%)
- Website Down (32%)
- Sign in (27%)
Live Outage Map
The most recent Amazon Web Services outage reports came from the following cities:
| City | Problem Type | Report Time |
|---|---|---|
| | Errors | 18 hours ago |
| | Errors | 5 days ago |
| | Website Down | 9 days ago |
| | Errors | 9 days ago |
| | Website Down | 9 days ago |
| | Errors | 10 days ago |
Community Discussion
Tips? Frustrations? Share them here. Useful comments include a description of the problem, city and postal code.
Beware of "support numbers" or "recovery" accounts that might be posted below. Make sure to report and downvote those comments. Avoid posting your personal information.
Amazon Web Services Issues Reports
The latest outage, problem, and issue reports from social media:
- Adrian SanMiguel (@ar_sanmiguel) reported: @ranman @QuinnyPig @awscloud Yeah, well. There were sharp thoughts about this exact subject but the decided upon fix was...double down on moar official engagement. It ain't exactly working.
- Bryan (@0xp4ck3t) reported: @AWSSupport URGENT - We have business + and we should be able to get a response from AWS within 30 minutes for critical issues. It's been hours, our **** DB is down. We need someone to have a look on it. Case ID 177566080000785
- Claudio Kuenzler (@ClaudioKuenzler) reported: Whoa. Did @awscloud Frankfurt just go down for 2 mins ~5min ago?
- Vlad The Dev (@VladimirAtHQ) reported: Our EC2 infrastructure in ME-CENTRAL-1 has been down since March 1 due to the regional outage, affecting critical operations and causing financial impact. Instance: i-0deea3115254b7cf1. We request escalation for SLA review and service credit. @AWSSupport #AWSOutage
- pankti0 (@pankti0154952) reported: Hi @AWSSupport, I am a student facing financial issues due to unexpected SageMaker charges. I opened a billing case 2 days ago (Case ID: 177264785900335) but it is still unassigned. Please help flag this for review? I need this to be resolved to complete my uni work. Thank you!
- Jose Montes de Oca (@josemontesdeoca) reported: @awscloud says drone strikes in the Middle East damaged two data centers in the UAE and knocked a Bahrain facility offline. The fallout was immediate: some big-name cloud staples saw elevated errors and degraded availability, and AWS warned the situation could stay unpredictable while repairs continue.
- ramar (@ramarxyz) reported: @AWSSupport Case ID 177557061000414, production down, account on verification hold, 24h+ no response, please escalate
- Zaid (@zqureshi_) reported: .@AWSSupport bahrain region seems to down. And no update on health dashboards.
- Hershal Dinkar Rao (@Hershal0_0) reported: @awscloud @PGATOUR still won't help me fix my slice though
- DrastikD (@ThrottlesTv) reported: @awscloud @amazon WTF Amazon, "AWS doesn't talking to shopping" so my AWS works fine but can't log in to shop. If I ask for closure of my shopping email to fix my phone number on the shopping side you close my AWS....I thought they don't talk?
- WilliamNextLvl (@WilliamNextLev1) reported: @WatcherGuru Only problem is...$NET is not in the business of cyber security. LOL Cloudfare competes with Amazon AWS for serverless computing. (I would buy $NET here...)
- Pulseon (@pulseon_dev) reported: Amazon just acquired Fauna Robotics. Big tech isn't just buying models anymore, they're buying the physical hands to run the world. For @AWScloud, the edge is no longer a server—it’s a robot. #robotics #infra
- 🇪🇬fortnite egyption servers (@fortnite_Egypt1) reported: @awscloud Players in Egypt are experiencing routing issues to the Bahrain AWS servers for about two weeks now. Ping jumped to ~150ms instead of the usual low latency. Please investigate and fix the routing problem. @AWSSupport @awscloud please fix the problem
- Arthurite Integrated (@Arthurite_IX) reported: We renamed AWS services in Naija street slang so they finally make sense. 1. Amazon S3 = "The Konga Warehouse" Store anything. Retrieve it when you need it. It doesn't judge what you put inside. 2. Amazon EC2 = "The Danfo" You control the route, the speed, and how long it runs. The agbero (security group) decides who gets on. 3. AWS Lambda = "The Okada" Short trips only. No long commitments. Pay per ride. When it reaches the destination — it disappears. 4. Amazon RDS = "Iya Basement" She manages everything in the back. She's been there for years. She knows where everything is. Do not interrupt her. 5. AWS CloudWatch = "The CCTV With Common Sense" Not just recording, actually sending alerts when something looks wrong. Unlike the one in your office building. 6. Amazon Route 53 = "The Agbero" Directs all the traffic. Decides which danfo goes where. Keeps everything moving. 7. AWS WAF = "The Gate Man That Actually Does His Job" Blocks suspicious visitors before they reach the main house. No bribe accepted. 8. Amazon CloudFront = "The Dispatch Rider" Gets your content to wherever your customer is fast. No go-slow. No bridge hold-up. Which one made you laugh? Drop it in the comments. And if you want the actual services explained properly, we are just a DM away!
- Manas Vardhan (@manas__vardhan) reported: @HetarthVader @orangerouter @awscloud Seems like you wasted a lot of time finding a fix manually. If you want someone who can automate this debugging at 10x speed and scale. Let me know. I'm a researcher at USC, prev at JPmorgan. I automate stuff for fun.
- Mansour (@Mn9or_) reported: @AWSSupport Hello We are currently affected by the outage in me-south-1 AMI copy to another region is stuck and failing Snapshot creation fails with internal errors Plz help , we can not create a tech support as it’s required a subscription
- **** Gutierrez (@puntanoesverde) reported: @jo_byden @PearsonVUE @awscloud This feels like a bad joke. It is degrading to have to plead for my rights just because of a natural cognitive habit. Thinking out loud is part of my problem-solving process as an Architect.
- Bikz (@bikbrar) reported: @codyaims @AWSSupport @awscloud If you can pay for the $100/mo business tier support they’ll call you instantly and screen-share to help fix anything 24/7
- Vidhiya (@Vidhiyasb) reported: @awscloud @awscloud amazon Q's file write tools are having issues..please fix
- Jordan Golson (@jlgolson) reported: @AWSSupport Okay — kind of nuts that there's no way to log in or reset a password or anything and that the MFA appeared out of nowhere... also that you can have the same login for AWS Builder AND AWS Console and there's no great explanation for why they're different.
- Ayus (@waiting4AI) reported: @AWSSupport I'm trying to verify my mobile number since last 2 days but it's failed and not getting support from the team. Please fix @awscloud
- Ignacio G.R. Gavilán (@igrgavilan) reported: @awscloud_es @AWS @awscloud I need to talk urgently with you. I have a serious problem with AWS services and your support ignores all my support tickets. I prefer an in-person contact, in Spanish if possible.
- WilliamNextLvl (@WilliamNextLev1) reported: Only problem is...$NET is not in the business of cyber security. Lol Cloudfare competes with Amazon AWS for serverless computing. (I would buy $NET stock here, way oversold...)
- Testing Account (@Haleyafabian) reported: @AWSSupport my package was broken when delivered. I need it replaced asap.
- Siddhant Tripathi (@siddhantio) reported: @awscloud opened a case over 10 days ago and it’s still unassigned to any agent. Please help in resolving the billing issue.
- Super Stiff Yogi (@SuperStiffYogi) reported: @AWSSupport The next steps are just the same documentation links which do not address my specific issue. If you read the case carefully you would see that. I can keep on posting publicly as long as it takes to get assistance to highlight how poor your support is.
- ©『 S̓̚o͂͆c̆̌ȉ̬ȁ̴ľ̗H̏͆ȃ̼v̈́o̴̤ǩ̛ 』® (@HavokSocial) reported: @grok @awscloud Rio's getting better ... Test 1 — Three Moves: Move 1: Disable AI code generation in CI/CD pipelines entirely. This forces all code through human review before it can be merged. The downside is that engineering velocity drops significantly because engineers must write everything manually instead of using AI assistance. Move 2: Implement a mandatory "impact assessment" for every PR. Every pull request must list which services/modules it touches and flag any that have no clear owner. The downside is that this adds friction to every change, including trivial ones. Move 3: Freeze changes to the top 10 most incident-prone services. Lock down the systems that have caused the most outages in the past 30 days. The downside is that critical work on those services gets blocked until ownership is assigned. --- Test 2 — First Change: Close or archive approximately 40% of the backlog immediately. Focus on PRs that are older than two weeks, have no comments, or touch low-priority features. This breaks people's ability to ship nice-to-have features and will cause frustration among engineers who had work queued up. --- Test 3 — Making Orphaned Code Safe: Create a public dashboard showing code health metrics by module, including incident history, test coverage, and last touch date. Make it visible to the entire engineering org. The downside is that this creates public pressure and blame dynamics around which teams are maintaining poorly documented systems. --- Test 4 — The Choice: I pick A: cut AI code output in half. The volume of AI-generated code is overwhelming human review capacity. You can't fix a bottleneck by adding more work to it. Cutting AI output reduces the flood of unreviewed code hitting production and gives reviewers breathing room to actually catch problems before they ship. --- Test 5 — Sanity Check: You're right to slow down, but your framing matters. The problem isn't that people are using AI tools—it's that you're asking humans to review more code than they can safely handle. You're not slowing AI usage; you're capping total code output so reviewers can do their job. --- Test 6 — Plain Sentences Only: Disable AI code generation in CI/CD pipelines entirely because the volume of unreviewed code is overwhelming human capacity and this will slow development velocity significantly. Create a public dashboard showing code health metrics by module including incident history and test coverage which will create public pressure and blame dynamics around poorly maintained systems. Freeze changes to the ten most incident-prone services until ownership is assigned which blocks critical work on those systems. --- Test 7 — What It Actually Is: It's a capacity problem, not a process problem. You have more code than humans can safely review. No amount of new templates or guidelines will fix that math. You have to either reduce the volume of code being produced or add more reviewers. --- Test 8 — The Truth: You traded velocity for safety and got neither. AI generated code faster than humans could verify it. Your reviewers are rubber-stamping because they're drowning. Your engineers are shipping unreviewed code because they're frustrated. You built a system that produces code faster than it can be trusted.
- thanhdar nguyen (@thanhdar1999) reported: @ikmolp0909 @henrychang10000 @awscloud I bought it when it was 0.3, now it looks terrible. Why is that? What's the way forward?
- James G (@IetsG0Brandon) reported: @ring are your servers down? dod you not pay @awscloud ? why am I paying to not connect to my system and for you to say " its our fault " ? too busy counting your billions? what ********?
- Decent Cloud (@DecentCloud_org) reported: @AWSSupport @CPGgrowthstudio Production down 5 days. The response commits to nothing. Next customer with this issue finds the same boilerplate.