Is sendVideoFrame() supposed to be a function that we build ourselves, and where would I find documentation to help with that?
Which Desktop Video SDK version?
Device (please complete the following information):
Device: Lenovo Ideapad 320
OS: Windows 10
I am trying to create a Zoom app that implements the IZoomInstantSDKVideoSource class, and the documentation for how to do this seems incomplete. The subheading for this step of the Render Video section has the exact same text as the Receive video frames section above it, so no context is provided for the included code snippet on sending raw video data. After trying to incorporate the code example into my app, it required me to create an IZoomInstantSDKVideoSender to pass into the onInitialize() method, but when I tried to declare an instance of IZoomInstantSDKVideoSender, or to create a subclass of it, neither would compile: the sendVideoFrame() method is a pure virtual function, so both classes are forced to be abstract.
No, this function is simply called when a video frame is meant to be sent. The SDK implements this function; you just need to call it on the sender object you received from the onInitialize() callback.
Ah yep, you are right. Looks like this was a copy-paste mistake in the documentation. We will get that fixed as soon as possible.
Hmm, you should only have to implement the functions of IZoomInstantSDKVideoSource in your video source class; then, when the SDK calls onInitialize, you just grab the sender from the parameter. Let me share my code with some things stripped away:
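To keep the snippet self-contained I have stubbed the SDK interfaces with minimal stand-ins below; the onInitialize()/sendVideoFrame() shapes follow the Instant SDK docs, but the field names and everything else here are illustrative, so check them against your actual header:

```cpp
#include <vector>

// --- Minimal stand-ins for the SDK types (illustrative only; use the real SDK headers) ---
struct IZoomInstantSDKVideoCapability {
    int width = 0;
    int height = 0;
    int frame = 0;  // target fps
};

class IZoomInstantSDKVideoSender {
public:
    virtual ~IZoomInstantSDKVideoSender() = default;
    // Implemented by the SDK -- you never subclass this, you only call it.
    virtual void sendVideoFrame(char* frameBuffer, int width, int height,
                                int frameLength, int rotation) = 0;
};

class IZoomInstantSDKVideoSource {
public:
    virtual ~IZoomInstantSDKVideoSource() = default;
    virtual void onInitialize(IZoomInstantSDKVideoSender* sender,
                              const std::vector<IZoomInstantSDKVideoCapability>& supportCapList,
                              IZoomInstantSDKVideoCapability& suggestCap) = 0;
};

// --- Your side: implement the source, keep the sender the SDK hands you ---
class MyVideoSource : public IZoomInstantSDKVideoSource {
public:
    void onInitialize(IZoomInstantSDKVideoSender* sender,
                      const std::vector<IZoomInstantSDKVideoCapability>& supportCapList,
                      IZoomInstantSDKVideoCapability& suggestCap) override {
        if (!sender) return;
        sender_ = sender;                       // keep the SDK-owned sender
        for (const auto& cap : supportCapList)  // remember what we may send
            supported_.push_back(cap);
        (void)suggestCap;
    }

    // Call this from your own capture loop once onInitialize has fired.
    void sendFrame(char* buf, int w, int h, int len) {
        if (sender_)
            sender_->sendVideoFrame(buf, w, h, len, /*rotation=*/0);
    }

private:
    IZoomInstantSDKVideoSender* sender_ = nullptr;
    std::vector<IZoomInstantSDKVideoCapability> supported_;
};
```

The important part is that the sender pointer stays null until the SDK invokes onInitialize; you never construct an IZoomInstantSDKVideoSender yourself.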
Thank you for giving such a detailed response and providing some example code as guidance.
I looked at your examples and tried to incorporate them into my project, but I kept getting null-pointer exceptions whenever I tried to send a video frame. Eventually I determined that the callbacks on my VideoSource class were never being triggered.
My OnInitialize() method is identical to yours, and I do assign the VideoSender object from the parameter to my own class member. However, after declaring an instance of my video class, I check the value of the VideoSender pointer and it is null. I also have a vector of supported capabilities that is filled during the for-loop in OnInitialize(). I tested whether the problem was the initial if-statement in OnInitialize() that checks whether the sender parameter is valid, but execution never reaches that code.
When is the OnInitialize callback supposed to be triggered?
I tried extending IZoomInstantSDKVideoSource in its own class, as described in the documentation here, and I also tried mimicking your example code by extending IZoomInstantSDKDelegate and IZoomInstantSDKVideoSource in one class, but neither approach triggered the callbacks when the object was declared.
One error I did see pop up several times while testing was ‘The stream number provided was invalid.’, but I can’t find details anywhere on what it means.
Last night I was able to figure out the callback issue by adding my video source to the sessionContext object and turning off the other sources of video, but I am still unable to receive the video frames on a different computer.
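For reference, the registration that finally made OnInitialize fire looked roughly like this. I have stubbed the SDK session types here so the fragment stands alone; the real field names may differ by SDK version, so treat them as illustrative:

```cpp
// Stand-ins for the SDK session types (illustrative; the real ones come from the SDK headers).
struct ZoomInstantSDKVideoOption {
    bool localVideoOn = false;
};

struct ZoomInstantSDKSessionContext {
    ZoomInstantSDKVideoOption videoOption;
    // In my setup this takes a pointer to the IZoomInstantSDKVideoSource implementation.
    void* externalVideoSource = nullptr;
};

// Build the context before joining the session: register the custom source
// and disable the default camera source so they do not compete.
ZoomInstantSDKSessionContext makeContext(void* myVideoSource) {
    ZoomInstantSDKSessionContext ctx;
    ctx.videoOption.localVideoOn = false;     // turn off the built-in camera source
    ctx.externalVideoSource = myVideoSource;  // hand the SDK our source; onInitialize fires on join
    return ctx;
}
```

Once the context with the external source is passed to the join call, the SDK invokes OnInitialize on the registered source and supplies the sender.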
This error persists: ‘The stream number provided was invalid.’ I am wondering whether it has something to do with the way we are supposed to prepare the data before sending it through the sendVideoFrame() method.
while (ReadFrameData(fp, nFrameLength, pFrameData))
if (pDemoDlg->m_send_size_type != send_size_type)
sender->sendVideoFrame(pFrameData, w, h, nFrameLength, 0);
Here you call a ReadFrameData() method, which I assume is an abstraction for our own data handling, but I have trouble working out what needs to be done to prepare that buffer for sendVideoFrame().
Do we have to convert all videos we send to one of the supported video capabilities?
If we do convert a video to one of the supported capabilities, does any additional data need to go along with it? I see that OnRawDataFrameReceived() takes a YUVRawDataI420 object, but I haven’t found any information on how that differs from a standard YUV buffer. Is that conversion handled internally by the SDK?
I am using OpenCV to convert videos to the YUV420 type.
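For anyone following along, this is my current understanding of the buffer layout that sendVideoFrame() expects: a planar I420 frame, i.e. a full-resolution Y plane followed by quarter-resolution U and V planes, giving a total length of w*h*3/2 bytes. The helper names are mine and this is my reading of I420, not something I found in the SDK docs:

```cpp
#include <cstddef>

// I420 (planar YUV 4:2:0) layout, assuming even width and height:
//   Y plane: w * h bytes
//   U plane: (w/2) * (h/2) bytes
//   V plane: (w/2) * (h/2) bytes
constexpr std::size_t i420FrameLength(std::size_t w, std::size_t h) {
    return w * h + 2 * ((w / 2) * (h / 2));  // = w * h * 3 / 2
}

// Byte offsets of the three planes within a single contiguous I420 buffer.
struct I420Planes {
    std::size_t yOffset;
    std::size_t uOffset;
    std::size_t vOffset;
};

constexpr I420Planes i420Offsets(std::size_t w, std::size_t h) {
    const std::size_t ySize = w * h;
    const std::size_t cSize = (w / 2) * (h / 2);
    return {0, ySize, ySize + cSize};
}
```

With OpenCV, cv::cvtColor(bgr, yuv, cv::COLOR_BGR2YUV_I420) produces a single-channel Mat with exactly this plane order, so passing yuv.data with frameLength = w*h*3/2 is what I am attempting, though I cannot yet confirm this is what the SDK wants.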