
Feeling frustrated by the October 29th Microsoft Azure and AWS outage? Here’s a simple explanation of what went wrong and what the impact means for you.

On October 29, 2025, the internet felt broken for many people. Big services from Microsoft and Amazon stopped working correctly. This was not a small glitch. It was a major outage that affected users all over the world. If you had trouble with Outlook or other Microsoft 365 tools, you were not alone.

What Went Wrong?

Think of the internet as a giant map. A system called DNS (the Domain Name System) acts as the map’s guide, translating website names into the numeric addresses computers actually use. On Wednesday, this guide got confused. At the same time, Microsoft made a small mistake in its own system’s rulebook, known as a configuration change.

This combination of problems created a perfect storm. The issues started around 4:00 PM UTC and caused a chain reaction.

  • Azure Front Door Broke: This is like the main entrance for many of Microsoft’s services. When it had problems, people couldn’t get in.
  • DNS Failed: The internet’s map guide couldn’t give proper directions, so even trying to reach a service became difficult (the short sketch after this list shows what that looks like).
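
To make the “map guide” idea concrete, here is a tiny Python sketch (not Microsoft’s or Amazon’s code, just an illustration) of what a normal DNS lookup does and what it looks like when a lookup fails. The website names are placeholders.

    import socket

    def look_up(hostname):
        """Ask DNS to translate a website name into a numeric address."""
        try:
            address = socket.gethostbyname(hostname)
            print(f"{hostname} lives at {address}")
        except socket.gaierror as error:
            # Roughly what apps experienced during the outage: the name
            # could not be translated, so no connection was even attempted.
            print(f"Could not find an address for {hostname}: {error}")

    look_up("example.com")           # normally prints a numeric address
    look_up("no-such-site.invalid")  # fails before any connection is made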

Microsoft quickly confirmed that an “inadvertent configuration change” was the trigger. Another update pointed to DNS issues as a core part of the problem. Because many services depend on Azure and AWS, the impact was felt far and wide.

Who Was Affected?

The problems hit many popular services that people use every day for work and personal life. Users reported slow speeds, error messages, and, in some cases, a complete inability to access their accounts.

The main services with issues included:

  • Microsoft 365 Admin Center
  • Outlook and Exchange Online
  • Microsoft Teams
  • Xbox Live
  • Microsoft Intune and Purview
  • Third-party apps that run on Azure, like CodeTwo

The trouble wasn’t just with Microsoft. Amazon Web Services (AWS) also experienced disruptions. This affected other online tools, such as the cloud dictation service SpeechLive. Even some private websites became unavailable because the DNS problem prevented users from finding them.

How They Fixed It

Microsoft’s engineers worked to solve the problem from multiple angles. They knew they had to act fast to get everything back online.

  1. Reverting the Change: The first step was to undo the bad configuration change. They started rolling back to the last known good set of rules (the short sketch after this list shows the idea).
  2. Rerouting Traffic: As a temporary fix, they redirected users away from the broken parts of the system. They sent traffic to healthy servers that were still working correctly.
  3. Blocking the Error: To prevent more damage, they blocked the faulty change from spreading further across their network.
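
Neither the “last known good” rollback nor the traffic rerouting is special to Azure; both are general techniques. Below is a rough, hypothetical Python sketch of the two ideas. The configuration values and server names are made up for illustration.

    import urllib.request

    # Keep a history of configuration versions so a bad change can be undone.
    config_history = [
        {"version": 1, "rule": "send traffic through the front door"},  # last known good
        {"version": 2, "rule": "the inadvertent, faulty change"},       # the bad update
    ]

    def roll_back():
        """Discard the newest configuration and return to the last known good one."""
        config_history.pop()       # throw away the faulty version
        return config_history[-1]  # the previous, working rules

    def fetch_with_failover(path, servers):
        """Try each server in order, so traffic is rerouted away from broken ones."""
        for server in servers:
            try:
                with urllib.request.urlopen(f"https://{server}{path}", timeout=5) as reply:
                    return reply.read()
            except OSError:
                continue  # this server is unhealthy; try the next one
        raise RuntimeError("No healthy server could answer the request")

    print("Active rules after rollback:", roll_back())
    # Hypothetical server names, shown only to illustrate rerouting:
    # fetch_with_failover("/", ["broken.example.com", "healthy.example.com"])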

While these fixes were being deployed, users slowly began to see services return to normal. Microsoft kept its status page updated so everyone could follow the progress. For a while, even the status page itself was hard to reach, but the company shared a direct link to help users stay informed.