Update - We're seeing partial restoration of OOB management connectivity, but some issues remain depending on the specific upstream path. We're continuing to monitor.
Feb 10, 2026 - 01:40 EST
Investigating - We're monitoring a partial disruption at Kushida. Although the site prefix is still reachable, we have lost OOB management connectivity. This node is not currently participating in anycast due to unrelated issues affecting our ability to apply traffic engineering, so there is no impact at this time.
Feb 10, 2026 - 01:31 EST
Welcome to the Kazehana Networks service status page. Below are all of our servers that run external services, along with their current status. During incidents, we will post details about the incident along with any updates we have. Updates will also be provided during scheduled maintenance.
Completed -
The scheduled maintenance has been completed.
Feb 6, 02:00 EST
In progress -
Scheduled maintenance is currently in progress. We will provide updates as necessary.
Feb 6, 01:00 EST
Scheduled -
LONAP is conducting maintenance in THN14, which will result in ports & sessions going down for up to an hour. LONAP will gracefully close sessions, so we expect little impact beyond some routing changes.
Feb 2, 23:44 EST
Resolved -
This incident has been resolved.
Jan 28, 22:50 EST
Monitoring -
We've completed the restarts and all nodes are looking normal. We will continue to monitor for stability.
Jan 28, 20:05 EST
Update -
This appears to be a known issue with systemd in environments with a high route count. Restarting the nodes has resolved the problem, and we are going to slowly roll the restarts out to the remaining nodes. We will keep an eye out for recurrences and consider other avenues to resolve the problem if it continues to show up.
Jan 28, 19:53 EST
Identified -
We are working on an issue affecting 3 of the 4 anycast nodes that is resulting in high CPU load. This does not appear to be affecting traffic flow at this time, but it is making the machines slow to respond when running commands. We have identified that systemd-resolved is chewing up CPU on these machines, and this began on all three within roughly the same hour, but we are not yet sure why.
We have seen this problem before, but only at individual sites. We are working to resolve the issue without disrupting service, but we may need to temporarily withdraw prefixes in order to restart services and/or the nodes themselves.
Jan 28, 19:41 EST