Unable to connect to the remote server

Description
Starting around 10:00 AM US Eastern on 1/29/2022, we started seeing connection errors on all API calls. We are currently troubleshooting our environment to identify any changes that could account for it, but we also wanted to reach out and see whether there are any connectivity issues that could be identified on the Zoom side.

Error
Unable to connect to the remote server.
A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond.

Which App Type (OAuth / Chatbot / JWT / Webhook)?
JWT

Which Endpoint/s?
All

It’s been down many times over the weekend. Is there a status page for the Zoom API product?

Found it here: https://status.zoom.us/

But it is not accurate. The services have been up and down throughout the weekend, yet there is no mention of it in the outage history.

I'm still facing this problem as of today. How is your connection now?

The disruption in connectivity seems to coincide with an update to the certificate on that server. The certificate expired on 1/29 and was renewed with a valid one that morning. I am still able to call the APIs with our integration from my local environment over HTTP connections, so it would appear to be an issue with the new certificate on that server.

Is the timeout error we're seeing the expected response from Zoom API calls if there has been a lapse in a server certificate? Would that event flag something on the Zoom side that needs to be cleared before calls would be successful again?
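One way to rule the certificate in or out from the client side is to check whether the connection even reaches the TLS handshake. Here's a minimal sketch in Node (assuming Node 18+); a timeout here happens before any certificate is exchanged, while a certificate problem would instead surface as a TLS error after the TCP connection succeeds:

import * as tls from "node:tls";

// Connect to the API host and print the certificate it presents.
// If this times out instead, the failure happens before the TLS
// handshake, and the certificate cannot be the cause.
const socket = tls.connect(
  { host: "api.zoom.us", port: 443, servername: "api.zoom.us" },
  () => {
    const cert = socket.getPeerCertificate();
    console.log("subject:", cert.subject);
    console.log("valid from:", cert.valid_from);
    console.log("valid to:", cert.valid_to);
    socket.end();
  }
);
socket.setTimeout(5000, () => {
  console.error("timed out before completing the TLS handshake");
  socket.destroy();
});
socket.on("error", (err) => console.error("TLS/socket error:", err.message));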

We’ve been having issues since Saturday morning Central time as well. We only appear to be having issues with two IPs, 170.114.10.84 and 170.114.10.85. On Saturday, we’d intermittently resolve to 52.202.62.238 and would have no problems with that IP. At some point over the weekend, it switched so that we are almost exclusively resolving api.zoom.us to 170.114.10.84 and 170.114.10.85. We are currently proxying traffic from one data center through another that seems to be unaffected.
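For anyone comparing environments, here's a quick sketch (Node's dns module, nothing assumed beyond a standard resolver) to check which addresses api.zoom.us resolves to from a given host:

import { resolve4 } from "node:dns/promises";

async function main() {
  // Query the A records directly, so we see what the resolver returns
  // rather than what a local cache or hosts-file override would give us.
  const addresses = await resolve4("api.zoom.us");
  console.log("api.zoom.us resolves to:", addresses.join(", "));
}

main().catch((err) => console.error("DNS lookup failed:", err.message));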

I’ll take a look at our certs and see if anything stands out.

Why the silence from Zoom? Your APIs are down. Here’s a log from one of our prod servers:

Jan 31 08:02:11 apps08-prd1 node: Error: connect ETIMEDOUT 170.114.10.85:443
Jan 31 08:47:47 apps08-prd1 node: Error: connect ETIMEDOUT 170.114.10.84:443
Jan 31 08:52:59 apps08-prd1 node: Error: connect ETIMEDOUT 170.114.10.84:443
Jan 31 09:02:12 apps08-prd1 node: Error: connect ETIMEDOUT 170.114.10.84:443
Jan 31 09:32:38 apps08-prd1 node: Error: connect ETIMEDOUT 170.114.10.84:443
Jan 31 09:44:31 apps08-prd1 node: Error: connect ETIMEDOUT 170.114.10.85:443
Jan 31 10:02:12 apps08-prd1 node: Error: connect ETIMEDOUT 170.114.10.84:443
Jan 31 10:57:34 apps08-prd1 node: Error: connect ETIMEDOUT 170.114.10.85:443
Jan 31 11:00:46 apps08-prd1 node: Error: connect ETIMEDOUT 170.114.10.85:443
Jan 31 11:22:08 apps08-prd1 node: Error: connect ETIMEDOUT 170.114.10.85:443
Jan 31 11:24:56 apps08-prd1 node: Error: connect ETIMEDOUT 170.114.10.84:443
Jan 31 11:27:28 apps08-prd1 node: Error: connect ETIMEDOUT 170.114.10.84:443
Jan 31 11:29:47 apps08-prd1 node: Error: connect ETIMEDOUT 170.114.10.85:443
Jan 31 11:40:34 apps08-prd1 node: Error: connect ETIMEDOUT 170.114.10.85:443
Jan 31 11:56:08 apps08-prd1 node: Error: connect ETIMEDOUT 170.114.10.84:443
Jan 31 12:11:50 apps08-prd1 node: Error: connect ETIMEDOUT 170.114.10.84:443
Jan 31 12:24:57 apps08-prd1 node: Error: connect ETIMEDOUT 170.114.10.85:443
Jan 31 12:47:45 apps08-prd1 node: Error: connect ETIMEDOUT 170.114.10.85:443
Jan 31 13:02:13 apps08-prd1 node: Error: connect ETIMEDOUT 170.114.10.85:443
Jan 31 13:08:30 apps08-prd1 node: Error: connect ETIMEDOUT 170.114.10.84:443
Jan 31 13:11:06 apps08-prd1 node: Error: connect ETIMEDOUT 170.114.10.84:443
Jan 31 13:21:12 apps08-prd1 node: Error: connect ETIMEDOUT 170.114.10.84:443

That was helpful, thanks. Our server was attempting calls against 170.114.10.84 and 170.114.10.85 as well, and we were able to get calls through again once we forced api.zoom.us to resolve to 52.202.62.238.

I don’t think we want that to be our lasting solution though, so hopefully Zoom will have some feedback to help us better understand the behavior we’re seeing.
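In case it helps anyone else, the pin can be done per-request without touching /etc/hosts. A rough sketch; the address comes from this thread and could change at any time, and the endpoint and token below are placeholders, so treat this strictly as a temporary diagnostic measure:

import * as https from "node:https";

const PINNED_IP = "52.202.62.238"; // from this thread; not guaranteed to stay valid

const req = https.request(
  {
    host: "api.zoom.us", // keep the real hostname so SNI and cert validation still match
    path: "/v2/users/me", // placeholder endpoint
    method: "GET",
    headers: { Authorization: "Bearer <JWT>" }, // placeholder token
    // Override DNS for this request only: always hand back the pinned address.
    lookup: (_hostname, _options, callback) => callback(null, PINNED_IP, 4),
  },
  (res) => console.log("status:", res.statusCode)
);
req.on("error", (err) => console.error("request failed:", err.message));
req.end();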

We found that in our primary data center, the IPs Zoom transitioned to had been flagged as malicious in the data center's IP reputation management (IPRM) system, so the data center was actually blocking the traffic. It was fishy that we weren't even getting to the certificate exchange. We are having an issue in another data center as well, so we are still trying to track that down.
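For anyone else hitting this, here's a way to confirm that kind of block quickly: try a bare TCP connection to those addresses. A minimal sketch; if even this times out, the block sits below TLS entirely, which points at network filtering rather than certificates:

import * as net from "node:net";

for (const ip of ["170.114.10.84", "170.114.10.85"]) {
  const socket = net.connect({ host: ip, port: 443, timeout: 5000 });
  socket.on("connect", () => {
    console.log(`${ip}: TCP connect OK (any block is above layer 4)`);
    socket.end();
  });
  socket.on("timeout", () => {
    console.log(`${ip}: timed out before the TCP handshake completed`);
    socket.destroy();
  });
  socket.on("error", (err) => console.log(`${ip}: ${err.message}`));
}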

What I’m seeing this morning is that we are able to make calls against those .84 and .85 IPs again in our environment. I’m a little hesitant to remove our workaround for the time being though.

It appears that in our other data center it was again due to the transition to the new IPs.

The exact same thing happened to us. Our hosting provider had to unblock the IP addresses as they had been marked as malicious.
