How do I get raw video in multiple files of a certain length when using Meeting Linux Bot (meetingsdk-linux-raw-recording-sample)?

While working on the bot, I get a single output.yuv file containing the whole meeting recording. However, I want to process the video in real time, so I need the bot to record video in segments of a fixed duration. Alternatively, can I relay the raw stream the bot receives directly? I want to process the stream with a neural network built in Python with TensorFlow and Keras.

@emotioniq before the frames are saved into output.yuv, the onRawDataReceived callback provides you with the individual YUV420 video frames.

You can manipulate the frames there, or stream them somewhere else.
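If you do stick with the sample's single output.yuv, one simple approach to getting fixed-length pieces is to split that raw file by frame count. This is a minimal sketch, not part of the sample: the function names are illustrative, and it assumes the capture is planar I420 at a known, constant width, height, and frame rate.

```python
# Sketch: split a raw I420 capture (e.g. output.yuv) into segments of a
# fixed duration. Assumes constant resolution and frame rate; all names
# here are illustrative, not part of the SDK sample.

def i420_frame_size(width, height):
    # An I420 frame is a full-resolution Y plane plus quarter-resolution
    # U and V planes: w*h + 2*(w/2 * h/2) = w*h*3/2 bytes.
    return width * height * 3 // 2

def split_yuv(path, width, height, fps, seconds_per_segment, out_prefix="segment"):
    frame_size = i420_frame_size(width, height)
    bytes_per_segment = frame_size * fps * seconds_per_segment
    segment_paths = []
    with open(path, "rb") as src:
        index = 0
        while True:
            chunk = src.read(bytes_per_segment)
            if not chunk:
                break
            out_path = f"{out_prefix}_{index:04d}.yuv"
            with open(out_path, "wb") as dst:
                dst.write(chunk)
            segment_paths.append(out_path)
            index += 1
    return segment_paths
```

For true real-time processing you would apply the same frame-size arithmetic inside the callback instead, buffering frames and rotating the output file every N frames rather than post-processing one big file.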

Note: if you do too much processing (in onRawDataReceived) on the same machine without enough compute, memory, GPU, or storage IOPS, you will run into performance issues.

If you’re processing these with a neural network, one thing you may want to do is figure out what frame rate you actually need for your use case. Then you could encode the YUV frames into H.264 and stream them to a different machine to do the processing.
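A cheap way to act on that frame-rate decision is to decimate the capture stream before encoding, so the downstream model only ever sees the rate it needs. A minimal sketch, with illustrative names and a simple accumulator-style decimation scheme (not from the SDK sample):

```python
# Sketch: drop frames from capture_fps down to target_fps by keeping a
# frame whenever the accumulated target position advances. Function
# names are illustrative assumptions.

def keep_frame(frame_index, capture_fps, target_fps):
    # Keep roughly target_fps of every capture_fps frames: a frame is
    # kept when floor(i * target/capture) steps up relative to i - 1.
    before = ((frame_index - 1) * target_fps) // capture_fps
    after = (frame_index * target_fps) // capture_fps
    return after != before

def decimate(frames, capture_fps, target_fps):
    return [f for i, f in enumerate(frames) if keep_frame(i, capture_fps, target_fps)]
```

The surviving frames can then be piped to an encoder such as ffmpeg (reading raw input via `-f rawvideo -pix_fmt yuv420p` and encoding with libx264) for streaming to the processing machine.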

The reason is that you want to maximize your GPU utilization, and, as Chun Siong said, you don’t want to do too much processing on the machine capturing the video; otherwise you will likely run into performance issues with the capture itself.

Hopefully that helps!

Another alternative is to use Recall.ai for your meeting bots instead. It’s a simple third-party API that lets you use meeting bots to get raw audio/video from meetings without spending months building, scaling, and maintaining the bots yourself.

We also provide an easy way to get real-time PNG video frames, which are well suited to processing with AI.

Let me know if you have any questions!