We are continuing to monitor for any further issues.
Posted Feb 26, 2024 - 19:41 UTC
Update
We are still monitoring the ingestion latency. A substantial number of dots are currently queued for ingestion.
To mitigate the TTS Plugin data ingestion issue, we have deployed a new version of the plugin that no longer uses the affected endpoint.
Posted Feb 26, 2024 - 14:39 UTC
Monitoring
We are actively working on implementing changes to mitigate the issue.
Posted Feb 24, 2024 - 11:00 UTC
Identified
We are currently investigating a significant performance degradation in our bulk data processing system. This bottleneck is slowing the bulk data endpoint for all users and may cause significant delays in triggering conditional events and alarms. Our team is working to identify the root cause and restore normal operations as quickly as possible.
We have identified that the TTS Plugin is also affected, because the plugin relies on the Bulk endpoint to ingest data.
We apologize for any inconvenience this may cause and appreciate your patience.
Further updates will be provided as we make progress toward resolving this issue.
Posted Feb 24, 2024 - 10:45 UTC
This incident affected: America (HTTP Post Toronto, TCP Toronto, MQTT Publish Toronto, Events Engine Toronto).