Partially Degraded Service
Functions Operational
90 days ago
100.0 % uptime
Today
America Operational
90 days ago
99.98 % uptime
Today
Login Apps Toronto Operational
90 days ago
100.0 % uptime
Today
HTTP Post Toronto Operational
90 days ago
99.96 % uptime
Today
TCP Toronto Operational
90 days ago
99.98 % uptime
Today
MQTT Publish Toronto Operational
90 days ago
99.98 % uptime
Today
MQTT Subscribe Toronto Operational
90 days ago
99.98 % uptime
Today
Events Engine Toronto Operational
90 days ago
99.95 % uptime
Today
UDP Toronto Operational
90 days ago
99.99 % uptime
Today
Ubidots Australia (Private deployment) Degraded Performance
90 days ago
99.99 % uptime
Today
Login Apps Sydney Operational
90 days ago
100.0 % uptime
Today
MQTT Publish Sydney Operational
TCP Sydney Operational
90 days ago
99.99 % uptime
Today
Events Engine Sydney Degraded Performance
90 days ago
99.98 % uptime
Today
HTTP Operational
90 days ago
99.99 % uptime
Today
SMS Degraded Performance
Twilio REST API Operational
Twilio SMS, Latin America Degraded Performance
Twilio SMS, Europe Operational
Voice Operational
Twilio TwiML Operational
Twilio Voice, Latin America Operational
Twilio Voice, Europe Operational
Past Incidents
Nov 28, 2020

No incidents reported today.

Nov 27, 2020
Resolved - This incident has been resolved.
Nov 27, 22:02 UTC
Update - We experienced issues with our private ingestion engine in Australia during the following time windows:


## REST API response: From 18:40:23 UTC - 18:54:56 UTC

During this time window, our REST API responded with 50x error codes over HTTP, and data may have been lost over other protocols, as follows:

### From 17:54:35 UTC - 18:21:22 UTC

We responded to about 80% of the data received through TCP or UDP with a 50x response code; through MQTT, no ACK message was returned. These Dots cannot be recovered, so users may see data gaps in their variables.

### From 18:21:22 UTC - 18:54:56 UTC

We responded to about 100% of the data received through TCP or UDP with a 50x response code; through MQTT, no ACK message was returned. These Dots cannot be recovered, so all of our users will see data gaps in their variables during this time window.

The root cause of the issue was in our Kubernetes cluster, which suffered an unexpected increase in task service demand, causing a general degradation of the cluster.

Please remember: if you received a 50x response code or did not receive an ACK message from our server, something went wrong with the request, and your request or socket message must be sent again.
Nov 27, 22:02 UTC
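The resend guidance above (retry whenever the server answers with a 50x code) can be sketched as follows. This is a minimal illustration, not part of the Ubidots API: the `send` callable and the payload are hypothetical stand-ins for a real HTTP or TCP client.

```python
import time

def send_with_retry(send, payload, retries=3, backoff=1.0):
    """Resend a request whenever the server answers with a 50x code.

    `send` is any callable that transmits `payload` and returns the
    status code of the response (a hypothetical stand-in for a real
    HTTP/TCP client).
    """
    status = send(payload)
    for attempt in range(1, retries + 1):
        if status < 500:                   # success or client error: stop retrying
            return status
        time.sleep(backoff * attempt)      # simple linear backoff between resends
        status = send(payload)
    return status

# Usage: a fake transport that fails twice with 503, then succeeds.
responses = iter([503, 503, 200])
result = send_with_retry(lambda p: next(responses), {"temperature": 21.5}, backoff=0)
```

A real client would cap total retries and log the final failure; the sketch only shows the resend-on-50x rule itself.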
Monitoring - We experienced issues receiving data through MQTT, TCP, and HTTP at our private Australia deployment. We have released a hot patch and are monitoring the services.
Nov 27, 19:18 UTC
Nov 26, 2020

No incidents reported.

Nov 25, 2020

No incidents reported.

Nov 24, 2020

No incidents reported.

Nov 23, 2020
Resolved - # Report

We experienced issues with our ingestion engine during two different time windows:


## Data ingestion workers: From 11:39 UTC - 15:43 UTC

Even though our REST API responded with a 200 or 201 response code, the action of saving Dots and reflecting them in our users' accounts took longer than usual. This issue affected data visualization at the dashboard and variable-view levels. Additionally, synthetic variables and the events engine were also impacted by these delays.
GET requests did not respond with the updated data during this time window.


## REST API response: From 14:08 UTC - 15:42 UTC

During this time window, our REST API responded with 50x error codes over HTTP, and data may have been lost over other protocols, as follows:

### From 14:08 UTC - 14:27 UTC

We responded to about 70% of the data received through TCP, UDP, or HTTP with a 50x response code; through MQTT, no ACK message was returned. These Dots cannot be recovered, so users may see data gaps in their variables.

### From 14:27 UTC - 14:54 UTC

We responded to about 100% of the data received through TCP, UDP, or HTTP with a 50x response code; through MQTT, no ACK message was returned. These Dots cannot be recovered, so all of our users will see data gaps in their variables during this time window.

**Details per protocol:**

* **HTTP, from 14:48 UTC - 15:21 UTC:** About 10% of the data requests got a 50x response code. These Dots cannot be recovered, so users may see data gaps in their variables.
* **MQTT/TCP/UDP, from 14:27 UTC - 15:21 UTC:** About 100% of the data received through MQTT did not get an ACK message. These Dots cannot be recovered, so all of our users will see data gaps in their variables during this time window.


Please remember: if you received a 50x response code or did not receive an ACK message from our server, something went wrong with the request, and your request or socket message must be sent again.
Nov 23, 20:07 UTC
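For MQTT, where failure shows up as a missing ACK rather than a status code, the resend rule means waiting on the acknowledgement with a timeout and republishing if it never arrives. A minimal, transport-agnostic sketch, assuming the `on_ack` method is wired to a real MQTT client's publish-acknowledged callback (that wiring is not shown):

```python
import threading

class AckTracker:
    """Signal when the server acknowledges a published message."""

    def __init__(self):
        self._ack = threading.Event()

    def on_ack(self, *args):
        # Wire this to the MQTT client's on_publish (PUBACK) callback.
        self._ack.set()

    def acked_within(self, timeout):
        # True if the ACK arrived before the timeout; False means resend.
        return self._ack.wait(timeout)

# Usage: no ACK arrives within the timeout, so the publish should be resent.
tracker = AckTracker()
must_resend = not tracker.acked_within(timeout=0.05)

# Once the (simulated) ACK callback fires, the wait succeeds immediately.
tracker.on_ack()
acked = tracker.acked_within(timeout=0.05)
```

Publishing with MQTT QoS 1 or higher is what makes the broker send the acknowledgement this tracker waits for; at QoS 0 there is no ACK to detect.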
Monitoring - All our ingestion services are up and running again. We are working on recovering the data from this outage window.
Nov 23, 15:51 UTC
Update - We are currently experiencing issues with our data ingestion service. We are working to fix it as soon as possible.
Nov 23, 15:05 UTC
Update - We are continuing to investigate this issue.
Nov 23, 14:05 UTC
Update - We are still investigating the root cause of this issue. We are currently deploying a data logger to check whether the issue is related to our microservice environment.
Nov 23, 14:05 UTC
Investigating - We are currently experiencing an issue with data ingestion received through MQTT, TCP, UDP, or HTTP. Some Dots are not being reflected in users' databases. No Dots have been lost so far.
Nov 23, 13:09 UTC
Nov 22, 2020

No incidents reported.

Nov 21, 2020

No incidents reported.

Nov 20, 2020

No incidents reported.

Nov 19, 2020

No incidents reported.

Nov 18, 2020

No incidents reported.

Nov 17, 2020

No incidents reported.

Nov 16, 2020

No incidents reported.

Nov 15, 2020

No incidents reported.

Nov 14, 2020

No incidents reported.