5h
20h
25min
9.5h
A recent service update to an authentication component unintentionally prevented access for a subset of users, resulting in intermittent service unavailability.
19-21h
23h
2.5h
50h
CrowdStrike agent downloaded a new update, causing a reboot loop.
Between 21:40 UTC on 18 July 2024 and 12:15 UTC on 19 July 2024, customers may have experienced issues with multiple Azure services in the Central US region due to an Azure Storage availability event. This issue affected Virtual Machine (VM) availability, which caused downstream impact on multiple Azure services, including failures of service management operations and connectivity or availability of services. Services with dependencies on the impacted Virtual Machines would have been affected.
?
?
14min
2.5h
Duration | Impact | Root cause
2d | Cloudflare control panel and analytics outage | data center outage; high-availability failover did not work
8h | Region West Europe partially down | fiber cut caused by severe weather conditions in the Netherlands
3h | Service degradation of 104 AWS services (those using AWS Lambda) | Lambda scaling crossed a new threshold and hit a functional bug
1d | No connection | fire after a cooling-system water pipe leak
2h | ? | expired certificate
? | ? | automatic OS update took the network down
2d | Service outage | network update caused traffic disruption
6h | Gmail, YouTube, Google Drive partial outage | performance problems in DNS-based load management
3d | OCI Vault, API Gateway, Oracle Digital Assistant and OCI Search with OpenSearch | network configuration error
1h | Worldwide MS Teams outage | unclear
75min | US East2 connectivity issues | backend application service failure
2h | Users unable to send/receive messages | unclear
1h | Worldwide, no meetings possible | software update
1h | Google Search, Google Maps globally unavailable | power failure
20min | AZ1 of US East2 without connectivity | a change to the network configuration in those locations caused an outage [1]
1h 15min | many affected websites | ?
>14 days for some customers | 400 companies and anywhere from 50,000 to 400,000 users had no access to JIRA, Confluence, OpsGenie, JIRA Status page, and other Atlassian Cloud services | global-scale orchestration human error: instead of shutting down a component, product instances were terminated
4h | App Store, Maps, TV | DNS problems
5h | ? | Quote from Slack status page: "A configuration change inadvertently led to a sudden increase in activity on our database infrastructure. Due to this increased activity, the affected databases failed to serve incoming requests to connect to Slack."
4h
6h
2h
4h
4h
??
??
3h
2h
several hours
7h
3h (everyone), 1d (Electron app users)
6h
~7h
3h