I’m using Zoom’s Meeting SDK to build a bot that joins and records meetings. To capture individual participant audio streams, I’m using the IZoomSDKAudioRawDataDelegate::onOneWayAudioRawDataReceived method. However, I noticed that the length of the recorded audio doesn’t match the full duration for which the participant was unmuted.
I’m trying to understand whether onOneWayAudioRawDataReceived continuously provides an audio buffer as long as the participant remains unmuted, or if it only sends data when the participant is actively speaking.
Could I be missing something in the implementation? Any insights would be greatly appreciated!
To my understanding, onOneWayAudioRawDataReceived streams audio continuously for as long as a participant is unmuted, even when they’re silent. If your recorded audio is shorter than the time the participant was unmuted, check for:
Dropped Frames: Buffer overflows or timing issues can cause missed frames.
Sample Rate/Timestamp Mismatch: Verify you’re using the same sample rate Zoom delivers and accurately handling timestamps.
Recording Timing: Make sure you start appending data as soon as the first callback arrives for that participant and keep appending until they mute or leave, rather than gating the recording on detected speech.
Silent Frame Handling: Ensure you’re not accidentally discarding silent frames at the beginning or in between speech.
Addressing these areas should help resolve any discrepancies in your final recording.