Scaling a headless bot using Docker

We have built a headless bot using the Meeting SDK for Linux, based on the headless sample, which deploys it to a Docker container. Now we are worried whether it can handle the increase in load we are expecting: the bot should be able to handle at least around 200 requests per second. Currently, it runs in a single container deployed on an AWS EC2 instance.

So, is it possible to have the requests that come in from the Zoom SDK for one meeting (in our case, chat message events) spread out across multiple containers?

@noahviktorschenk, if you end up needing something that can support higher load, you could check out the Recall.ai API for your meeting bots instead.

It’s a simple third-party API that lets you use meeting bots to get raw audio/video from meetings without spending months building, scaling, and maintaining those bots yourself.

Let me know if you have any questions!

@noahviktorschenk when you mention 200 requests per second, what is the “request” you are referring to?

In this case it would be the on-chat-message events from the in-meeting chat.

@noahviktorschenk that should be fine.
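For what it's worth, a common way to spread per-meeting events across containers is to have the SDK bot publish each chat event to a shared queue that multiple worker containers consume from. Below is a minimal in-process sketch of that fan-out pattern, with threads standing in for worker containers and Python's `queue.Queue` standing in for a real broker (e.g. SQS or Redis); the event shape and names here are illustrative, not the Meeting SDK's actual payload.

```python
import queue
import threading

# Stand-in for a message broker; in production the bot's chat-message
# callback would publish each event here instead.
event_queue = queue.Queue()

processed = []          # (worker_id, event) pairs, for demonstration
lock = threading.Lock()

def worker(worker_id):
    """Models one consumer container pulling events off the queue."""
    while True:
        event = event_queue.get()
        if event is None:          # shutdown sentinel
            event_queue.task_done()
            break
        with lock:
            processed.append((worker_id, event))
        event_queue.task_done()

NUM_WORKERS = 4
threads = [threading.Thread(target=worker, args=(i,)) for i in range(NUM_WORKERS)]
for t in threads:
    t.start()

# Simulate 200 chat message events arriving from a single meeting.
for n in range(200):
    event_queue.put({"meeting_id": "abc", "message": f"msg-{n}"})

# Signal shutdown and wait for the workers to drain the queue.
for _ in threads:
    event_queue.put(None)
for t in threads:
    t.join()

print(len(processed))  # every event handled exactly once, across workers
```

The key point is that the SDK bot itself stays a single process per meeting (it has to, to hold the meeting session), but the downstream work per event can scale horizontally by swapping the in-process queue for a managed broker.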