Currently, the audio data chunks and video data frames are received in separate callbacks in the JS code sample. Is there a way to receive a combined audio+video stream instead of separate ones? If not, which algorithm do you recommend we use to synchronize and merge the audio and video streams correctly into one stream?
@xgantan I’ve written a sample app for this
The high-level idea is to timestamp all the buffers received. When users mute their audio or video, save "empty" frames into the audio or video track so both timelines keep advancing and stay aligned.
Do note that this sample doesn't implement the logic for multiple users; it is just to showcase how it can be done.
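To make the idea concrete, here is a minimal sketch of the timestamp-and-pad approach in TypeScript. The names (`TimedChunk`, `TrackBuffer`, `interleaveByTimestamp`), the frame-interval handling, and the gap-filling details are illustrative assumptions, not the actual sample app's code; the merged output would typically be produced by handing the interleaved buffers to a muxer such as ffmpeg.

```typescript
// A minimal sketch of the timestamp-and-pad approach described above.
// Names and details (frame interval, gap-filling) are illustrative assumptions,
// not the actual sample app.

interface TimedChunk {
  timestampMs: number;  // capture time of this buffer
  data: Uint8Array;     // raw PCM (audio) or YUV (video) payload
}

class TrackBuffer {
  private chunks: TimedChunk[] = [];

  constructor(
    private frameIntervalMs: number,          // e.g. ~33 ms for 30 fps video
    private makeEmptyFrame: () => Uint8Array, // silent audio / black video frame
  ) {}

  // Call this from the SDK's raw-data callback with the buffer and its timestamp.
  push(data: Uint8Array, timestampMs: number): void {
    this.fillGapBefore(timestampMs);
    this.chunks.push({ timestampMs, data });
  }

  // While the user was muted, write "empty" placeholder frames so the track's
  // timeline keeps advancing and stays aligned with the other track.
  private fillGapBefore(timestampMs: number): void {
    const last = this.chunks[this.chunks.length - 1];
    if (!last) return;
    let t = last.timestampMs + this.frameIntervalMs;
    while (t < timestampMs) {
      this.chunks.push({ timestampMs: t, data: this.makeEmptyFrame() });
      t += this.frameIntervalMs;
    }
  }

  drain(): TimedChunk[] {
    const out = this.chunks;
    this.chunks = [];
    return out;
  }
}

// Merge step: interleave both tracks by timestamp before handing them to a
// muxer (e.g. ffmpeg) that writes a single audio+video file.
function interleaveByTimestamp(audio: TimedChunk[], video: TimedChunk[]) {
  return [
    ...audio.map(c => ({ ...c, kind: 'audio' as const })),
    ...video.map(c => ({ ...c, kind: 'video' as const })),
  ].sort((a, b) => a.timestampMs - b.timestampMs);
}
```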
I tried setting `video.data_opt` to 4 (mixed video stream using speaker view) or 5 (mixed video stream using gallery view), but then nothing is sent to the client server. Are these values supported? If not, when can we expect them to be supported? Thanks.
@xgantan Currently, as of this post, only one view is supported: type 3 (Video Single Active Stream).
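For reference, a hedged sketch of what that setting might look like. Only the `video.data_opt` field and the type values come from this thread; the surrounding object shape and the `streamConfig` name are assumptions.

```typescript
// Hypothetical sketch only: request the supported single active video stream
// (type 3) rather than the mixed views (4/5), which are not available yet.
// The exact shape of the surrounding payload is an assumption here.
const streamConfig = {
  video: {
    data_opt: 3, // 3 = video single active stream; 4/5 (mixed views) not supported yet
  },
};
```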