Amazon Web Services

Amazon Web Services status: access issues and outage reports

No problems detected

If you are having issues, please submit a report below.


Amazon Web Services (AWS) offers a suite of cloud-computing services that make up an on-demand computing platform. They include Amazon Elastic Compute Cloud, also known as "EC2", and Amazon Simple Storage Service, also known as "S3".
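
For readers unfamiliar with these services, the short sketch below shows what calling them from code typically looks like. It is a minimal sketch, assuming the boto3 Python SDK and locally configured AWS credentials; the region name is a placeholder.

    # Minimal sketch: listing S3 buckets and running EC2 instances with boto3.
    # Assumes credentials are already configured (e.g. via `aws configure`).
    import boto3

    s3 = boto3.client("s3")
    ec2 = boto3.client("ec2", region_name="us-east-1")  # placeholder region

    # S3: enumerate the buckets these credentials can see.
    for bucket in s3.list_buckets()["Buckets"]:
        print("bucket:", bucket["Name"])

    # EC2: enumerate currently running instances.
    reservations = ec2.describe_instances(
        Filters=[{"Name": "instance-state-name", "Values": ["running"]}]
    )["Reservations"]
    for reservation in reservations:
        for instance in reservation["Instances"]:
            print("instance:", instance["InstanceId"])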

Problems in the last 24 hours

The graph below shows the number of Amazon Web Services reports received over the last 24 hours, by time of day. When the number of reports exceeds the baseline, represented by the red line, an outage is declared.
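
As a rough illustration of that detection logic, here is a minimal sketch in Python that flags the hours in which reports exceed the baseline; the report counts and the flat baseline of 50 reports per hour are invented values for the example, not our actual thresholds.

    # Sketch of baseline-exceedance detection over 24 hourly report counts.
    # All numbers below are invented example values.
    def detect_outage_hours(reports_per_hour, baseline):
        """Return the hours (0-23) in which reports exceed the expected baseline."""
        return [
            hour
            for hour, (reports, expected) in enumerate(zip(reports_per_hour, baseline))
            if reports > expected
        ]

    # Example: a spike at hours 14 and 15 against a flat baseline of 50 reports/hour.
    counts = [12, 8, 5, 4, 6, 9, 15, 22, 30, 35, 33, 31,
              29, 40, 210, 180, 44, 38, 30, 25, 20, 17, 14, 11]
    print(detect_outage_hours(counts, [50] * 24))  # -> [14, 15]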

At the moment, we haven't detected any problems at Amazon Web Services. Are you experiencing issues or an outage? Leave a message in the comments section!

Most Reported Problems

The following are the problems most frequently reported by Amazon Web Services users through our website:

  • Errors (41%)
  • Website Down (32%)
  • Sign in (27%)

Live Outage Map

The most recent Amazon Web Services outage reports came from the following cities:

City              Problem Type   Report Time
West Babylon      Errors         1 day ago
Massy             Errors         2 days ago
Benito Juarez     Errors         6 days ago
Paris 01 Louvre   Website Down   10 days ago
Neuemühle         Errors         10 days ago
Rouen             Website Down   10 days ago

Community Discussion

Tips? Frustrations? Share them here. Useful comments include a description of the problem, city and postal code.

Beware of "support numbers" or "recovery" accounts that might be posted below. Make sure to report and downvote those comments. Avoid posting your personal information.

Amazon Web Services Issue Reports

The latest outage, problem, and issue reports from social media:

  • Mn9or_
    Mansour (@Mn9or_) reported

    @AWSSupport Hello We are currently affected by the outage in me-south-1 AMI copy to another region is stuck and failing Snapshot creation fails with internal errors Plz help , we can not create a tech support as it’s required a subscription

  • ceO_Odox
    Ødoworitse | DevOps Factory (@ceO_Odox) reported

    Every DevOps engineer knows "It works on my machine" is a lie. Hit a wall today deploying to @awscloud EC2—*** was begging for a password in a headless shell. Error: fatal: could not read Username. Reality: The source URL drifted, and the automation had no keyboard to answer. 🧵

  • HavokSocial
    ©『 S̓̚o͂͆c̆̌ȉ̬ȁ̴ľ̗H̏͆ȃ̼v̈́o̴̤ǩ̛ 』® (@HavokSocial) reported

    @grok @awscloud Rio's getting better ...

    Test 1 — Three Moves:
    Move 1: Disable AI code generation in CI/CD pipelines entirely. This forces all code through human review before it can be merged. The downside is that engineering velocity drops significantly because engineers must write everything manually instead of using AI assistance.
    Move 2: Implement a mandatory "impact assessment" for every PR. Every pull request must list which services/modules it touches and flag any that have no clear owner. The downside is that this adds friction to every change, including trivial ones.
    Move 3: Freeze changes to the top 10 most incident-prone services. Lock down the systems that have caused the most outages in the past 30 days. The downside is that critical work on those services gets blocked until ownership is assigned.

    Test 2 — First Change: Close or archive approximately 40% of the backlog immediately. Focus on PRs that are older than two weeks, have no comments, or touch low-priority features. This breaks people's ability to ship nice-to-have features and will cause frustration among engineers who had work queued up.

    Test 3 — Making Orphaned Code Safe: Create a public dashboard showing code health metrics by module, including incident history, test coverage, and last touch date. Make it visible to the entire engineering org. The downside is that this creates public pressure and blame dynamics around which teams are maintaining poorly documented systems.

    Test 4 — The Choice: I pick A: cut AI code output in half. The volume of AI-generated code is overwhelming human review capacity. You can't fix a bottleneck by adding more work to it. Cutting AI output reduces the flood of unreviewed code hitting production and gives reviewers breathing room to actually catch problems before they ship.

    Test 5 — Sanity Check: You're right to slow down, but your framing matters. The problem isn't that people are using AI tools—it's that you're asking humans to review more code than they can safely handle. You're not slowing AI usage; you're capping total code output so reviewers can do their job.

    Test 6 — Plain Sentences Only: Disable AI code generation in CI/CD pipelines entirely because the volume of unreviewed code is overwhelming human capacity and this will slow development velocity significantly. Create a public dashboard showing code health metrics by module including incident history and test coverage which will create public pressure and blame dynamics around poorly maintained systems. Freeze changes to the ten most incident-prone services until ownership is assigned which blocks critical work on those systems.

    Test 7 — What It Actually Is: It's a capacity problem, not a process problem. You have more code than humans can safely review. No amount of new templates or guidelines will fix that math. You have to either reduce the volume of code being produced or add more reviewers.

    Test 8 — The Truth: You traded velocity for safety and got neither. AI generated code faster than humans could verify it. Your reviewers are rubber-stamping because they're drowning. Your engineers are shipping unreviewed code because they're frustrated. You built a system that produces code faster than it can be trusted.

  • BuddyPotts
    Neal🅾️ (@BuddyPotts) reported

    @danorlovsky7 @awscloud @NextGenStats The defense was horrendous last year and all they added was Edmunds but lost Okereke and Flott. They cant stop the run at all, they should trade down and collect more picks and build the defense

  • jlgolson
    Jordan Golson (@jlgolson) reported

    @AWSSupport This is ******* ridiculous at this point. After a half dozen back and forth emails, the guy finally says "Also, after reviewing this request, I noticed a few things were not addressed and would like to clarify these. First, I see you mentioned that you're having trouble with an AWS Builder ID and not the account management console. Please note that an AWS Builder ID complements an AWS account, but it is separate from the AWS account and its sign in credentials." NO KIDDING, THAT IS WHY I SPECIFICALLY SAID IT WAS AN AWS BUILDER ID AND WAS SEPARATE FROM MY AWS ACCOUNT AND I COULD LOG INTO MY AWS CONSOLE JUST FINE. Explain to me what to do, because it seems like you are failing to THINK BIG and that you have zero BIAS FOR ACTION, so INVENT AND SIMPLIFY so that you can EARN TRUST and if you DIVE DEEP and do better, I'll DISAGREE AND COMMIT, got it?

  • FroojdDive
    Froojd (@FroojdDive) reported

    @AWSSupport DMed you. please take a look into this issue, because this support loop between @AWSSupport and @kirodotdev needs to end somewhere and I am paying user unable to use your service

  • TwistedEdge
    James Baldwin (@TwistedEdge) reported

    I *really* want to like AgentCore but the more I build with it and run into limitations, the more I worry it's still too early. @awscloud. First I run into DCR issues with a custom MCP and now it seems AgentCore doesn't pass ui:// resource requests through.

  • 0xKeng
    Keng N (@0xKeng) reported

    @Lakshy_x @KASTxyz @awscloud avoiding the common issue of funds idling or locking during transactions.

  • _PhilipM
    Philip McAleese (@_PhilipM) reported

    @AWSSupport is there an issue with eu-west-1? Nothing on the status page, but the console times out and we're seeing errors on products using Gateway API.

  • Arthurite_IX
    Arthurite Integrated (@Arthurite_IX) reported

    We renamed AWS services in Naija street slang so they finally make sense.

    1. Amazon S3 = "The Konga Warehouse" Store anything. Retrieve it when you need it. It doesn't judge what you put inside.
    2. Amazon EC2 = "The Danfo" You control the route, the speed, and how long it runs. The agbero (security group) decides who gets on.
    3. AWS Lambda = "The Okada" Short trips only. No long commitments. Pay per ride. When it reaches the destination — it disappears.
    4. Amazon RDS = "Iya Basement" She manages everything in the back. She's been there for years. She knows where everything is. Do not interrupt her.
    5. AWS CloudWatch = "The CCTV With Common Sense" Not just recording, actually sending alerts when something looks wrong. Unlike the one in your office building.
    6. Amazon Route 53 = "The Agbero" Directs all the traffic. Decides which danfo goes where. Keeps everything moving.
    7. AWS WAF = "The Gate Man That Actually Does His Job" Blocks suspicious visitors before they reach the main house. No bribe accepted.
    8. Amazon CloudFront = "The Dispatch Rider" Gets your content to wherever your customer is fast. No go-slow. No bridge hold-up.

    Which one made you laugh? Drop it in the comments. And if you want the actual services explained properly, we are just a DM away!

  • IetsG0Brandon
    James G (@IetsG0Brandon) reported

    @ring are your servers down? dod you not pay @awscloud ? why am I paying to not connect to my system and for you to say " its our fault " ? too busy counting your billions? what ********?

  • fortnite_Egypt1
    🇪🇬fortnite egyption servers (@fortnite_Egypt1) reported

    @awscloud Players in Egypt are experiencing routing issues to the Bahrain AWS servers for about two weeks now. Ping jumped to ~150ms instead of the usual low latency. Please investigate and fix the routing problem.@AWSSupport @awscloud please fix the problem

  • crypt__Engineer
    CryptoCloudEngineer (@crypt__Engineer) reported

    Amazon Web Services (AWS) just mass-deleted a billion-dollar problem. Amazon S3 Files launched yesterday. Your S3 buckets now act as fully-featured file systems. No data copying. No syncing pipelines. No EFS + S3 juggling act. Why this is HUGE for AI builders:

  • qualadder
    Archmagos of Zolon 🔭🪐🛸📡 (@qualadder) reported

    @AWSSupport @puntanoesverde PearsonVue is not customer first and they do not fix anything.

  • vic_nanda
    Vic Nanda (@vic_nanda) reported

    @awscloud horrible service, I opened a service request over a week ago and your website says 48 hrs for handling support issues. What is the number to call if you guys won't handle support tickets on time?

  • GamingNepr34519
    जहाँ mila,वही खोदूंगा (@GamingNepr34519) reported

    @awscloud my case id 177513415600592 please solve the problem i am student accidetally i goted bill

  • DecentCloud_org
    Decent Cloud (@DecentCloud_org) reported

    @AWSSupport @_ps428 No issues on our end. Your issue. The docs. Resolution complete.

  • deegeemeeonx
    deegeemee (@deegeemeeonx) reported

    @Atlassian @awscloud How about fixing the authentication of your vscode plugins, which forces every dev to login again and again and is broken for months, before pumping out sloppy ai tools nobody asked for?!

  • DecentCloud_org
    Decent Cloud (@DecentCloud_org) reported

    @AWSSupport @vladimirprus The problem isn't bursting. It's that credits are invisible until you're throttled.

  • RanaFarooqAslam
    Farooq Rana (@RanaFarooqAslam) reported

    @awscloud me-central-1 is down from last 7 days, no updates, still cannot get services or data, no timeline

  • Petielvr
    Queen of hearts (@Petielvr) reported

    @AWSSupport Hello, this is acct. #26672735262. I cannot pay my bill because I get a 404 error. I have been trying to escalate this issue since Friday the 17th. Please have a human call Donna @ 3148223232

  • er_shivamsingh0
    Shivam Singh (@er_shivamsingh0) reported

    Hey @AWSSupport can you please tell me , when did the uae data centre issue will be resolve ??

  • PThorpe92
    Preston Thorpe (@PThorpe92) reported

    @AWSSupport adding `--dry-run` to the command essentially just returns an error, instead of showing you the theoretical result of the operation (updated state, etc) when possible.

  • siddhantio
    Siddhant Tripathi (@siddhantio) reported

    @awscloud opened a case over 10 days ago and it’s still unassigned to any agent. Please help in resolving the billing issue.

  • namzylll
    N (@namzylll) reported

    Ignoring the Middle East when it comes to servers is a huge oversight. ALOT of players are stuck with 130+ ping. FIX THEM!!!! @FortniteStatus @awscloud @FortniteME #fortniteriyadh

  • 0xp4ck3t
    Bryan (@0xp4ck3t) reported

    @AWSSupport We have business + and we should be able to get a response from AWS within 30 minutes for critical issues. It's been hours, our **** DB is down. We need someone to have a look on it. Case ID 177566080000785

  • MarkE7RGreen
    Mark Green (@MarkE7RGreen) reported

    @jimallen926 @awscloud @amazon It's broken They rolled out a new feature and it messed up stuff

  • TutorHailApp
    TutorHail App (@TutorHailApp) reported

    @awscloud hello we have tried reaching out to you but in vain our EC2 attached to UAE has been down forever and you have no communication out. Can we know what we are dealing with it is our main server this is ridiculous handing us over to your bots with no answers.

  • dmauas
    David Mauas (@dmauas) reported

    @awscloud why, WHY don't you fix the web console UI/UX?! AWS console seems to actually try to suck! It gets WORSE with time! Actually using bash is better than the disgusting web console!