ch-dk-2 connectivity issue
Updates
The incident has been resolved. The exact root cause remains to be identified at this stage.
The issue mostly affected IPv6 connectivity on a subset of hypervisor hosts.
While we are still evaluating the exact impact of this incident, the following services were affected:
- SKS control planes: Some SKS control plane backends were hosted on the affected hosts. This resulted in downtime for a subset of SKS control planes.
- SOS: Experienced an increased number of HTTP 500 errors. The issue was fully mitigated by 16:15 CET.
- Block storage: Experienced a brief connection drop. This may have resulted in I/O errors being returned on a subset of volumes. As a result, some of the affected volumes may have been switched to read-only mode by the instance kernel. In that situation, a manual remount is required to bring the affected volumes back into read-write mode.
- Some IPv4/IPv6 connection resets may have been experienced on instances while we were applying the mitigation.
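For volumes that were switched to read-only by the instance kernel, a minimal recovery sketch for a Linux guest (the device and mount point names below are placeholders; substitute the ones on your instance):

```shell
# Sketch: detect a volume the kernel switched to read-only, then remount it
# read-write. DEVICE and MOUNTPOINT are placeholder names, not actual values
# from the incident.
DEVICE=/dev/vdb
MOUNTPOINT=/mnt/data

# Field 4 of /proc/mounts lists the active mount options; look for the "ro" flag.
if awk -v dev="$DEVICE" '$1 == dev && $4 ~ /(^|,)ro(,|$)/ {found=1} END {exit !found}' /proc/mounts; then
    echo "$DEVICE is mounted read-only; remounting read-write"
    mount -o remount,rw "$MOUNTPOINT"
fi
```

If the remount fails, the filesystem may have recorded errors; unmounting and running a filesystem check (e.g. `fsck`) before mounting again may be necessary.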
All services are available again. We are monitoring the situation.
Mitigation has been applied. Affected services are converging. We are monitoring the recovery.
We are applying a set of mitigations, which is improving the current situation.
We are still investigating the origin of the IPv6 network issue.
During the investigation, some brief connection resets may be experienced.
We are still investigating the origin of the IPv6 network issue.
The API of some SKS clusters is fully unavailable. We are still investigating.
The impact of the incident has extended to the following services: SOS and Block storage, as a side effect of the underlying connectivity issue.
The issue appears to be related to a partial loss of IPv6 connectivity. We are still investigating.
The issue may be related to an underlying network problem. We are still investigating.
We are currently experiencing an issue with SKS in ch-dk-2.
We’re investigating the issue and will communicate when we have more information.