I wanted to share my experience with the raw data collection process using the Zoom Meeting SDK on Windows. I followed the tutorial provided by Zoom, but unfortunately, I was not able to achieve my desired results.
As per the tutorial, I implemented an instance of IZoomSDKRendererDelegate to process the raw video data and set up an IZoomSDKAudioRawDataDelegate for the audio raw data. However, even after following the steps mentioned in the tutorial, I did not get the expected output.
Which Windows Meeting SDK version?
zoom-sdk-windows-5.13.5.12103
Thank you in advance for your help.
To reproduce
As per Zoom’s recommendation, I would like to suggest adding a “To Reproduce (if applicable)” section to the tutorial. This will help others understand the problem better and offer possible solutions.
Furthermore, I suggest restructuring the tutorial and adding an example showing how to implement the delegate interfaces in their corresponding files.
@jsarmiento, totally understand that it was difficult to capture and process the raw data from Zoom - audio and video are especially tricky to work with.
It might be worth checking out Recall.ai, which offers a hosted solution for this, so it’s just a simple API call instead of you needing to spin up Windows containers at scale.
We have not succeeded yet. We managed to activate the custom recording notification, but the callback events do not deliver the data (video and audio).
Here are some steps to get raw audio working with the Windows Meeting SDK.
I’m assuming you are currently using the Custom UI.
Create an instance of IZoomSDKAudioRawDataDelegate
MyZoomDelegate.h
#pragma once
#include "stdafx.h"
#include "rawdata/rawdata_audio_helper_interface.h"
#include <iostream>
using namespace std;
using namespace ZOOM_SDK_NAMESPACE;
class MyZoomDelegate :
    public ZOOM_SDK_NAMESPACE::IZoomSDKAudioRawDataDelegate
{
public:
    virtual void onMixedAudioRawDataReceived(AudioRawData* data_);
    virtual void onOneWayAudioRawDataReceived(AudioRawData* data_, uint32_t node_id);
};
MyZoomDelegate.cpp
#include "stdafx.h"
#include "rawdata/rawdata_audio_helper_interface.h"
#include "MyZoomDelegate.h"
#include <iostream>
using namespace std;
using namespace ZOOM_SDK_NAMESPACE;
void MyZoomDelegate::onOneWayAudioRawDataReceived(AudioRawData* audioRawData, uint32_t node_id)
{
    std::cout << "Received onOneWayAudioRawDataReceived" << std::endl;
    // add your code here
}

void MyZoomDelegate::onMixedAudioRawDataReceived(AudioRawData* audioRawData)
{
    std::cout << "Received onMixedAudioRawDataReceived" << std::endl;
    // add your code here
}
Now in your CustomizedUIRecordMgr.cpp
You would want to run something like this when starting the recording.
Do not run m_pRecordController->StartRecording() and m_pRecordController->StartRawRecording() one after the other. You should run only one of them, and in this case, the latter.
Thank you so much, @chunsiong.zoom! The code samples were really helpful and worked great for us. We were able to get raw audio data during the meeting.
I do have one question though, does everyone need to have their microphone open for the audio to be recorded properly in onMixedAudioRawDataReceived()? Or is there a way to record the audio even if some participants have their microphones muted?
is it possible to have both onOneWayAudioRawDataReceived and onMixedAudioRawDataReceived events active at the same time? If so, how can we choose which one to use for recording the audio?
I created a pull request to your repository with solutions to a specific problem we had with the video YUV format.
I do have one question though, does everyone need to have their microphone open for the audio to be recorded properly in onMixedAudioRawDataReceived()? Or is there a way to record the audio even if some participants have their microphones muted?
No, you don’t need everyone’s microphone to be open / unmuted for onMixedAudioRawDataReceived to work. onMixedAudioRawDataReceived will be called whenever anyone speaks into their unmuted microphone.
is it possible to have both onOneWayAudioRawDataReceived and onMixedAudioRawDataReceived events active at the same time? If so, how can we choose which one to use for recording the audio?
They are both active at the same time. I’ve just tested with a breakpoint. During recording, both onOneWayAudioRawDataReceived and onMixedAudioRawDataReceived are called.
I created a pull request to your repository with solutions to a specific problem we had with the video YUV format.
Thank you! The sample code does not catch all the cases. A cleaner way to handle the YUV buffer frame would be to use the OpenCV library to process and convert it.
I have added the code you made available in your repo. However, after starting to record in the meeting (the SDK has permission to do so, as I set it as co-host), I do not know whether any raw recording is happening. The audio.pcm file your sample code creates does not seem to exist, and I do not see any logs in the run output.
Could you give me some insight on how to debug this properly? I am quite new to this SDK and to C++ in general.
I followed the README given in the repo and I got GetRawAudioData working! Thank you very much! I ran it in Debug and x86 mode and had to fix some errors: installing jsoncpp and adding it to the project properties, adding #include “meeting_service_components/meeting_audio_interface.h” in meeting_participants_ctrl_interface.h, and finally adding #include in rawdata_renderer_interface.h.