Video SDK - Live Streaming Video Compositing and Supported Layouts

Hi,

I have a few questions about Video SDK support for live streaming a session, specifically around video compositing and how participants' video is rendered in the stream, which I'd appreciate some help with. For background, I've already read the following sections of the SDK docs:

1) Can you confirm that the Video SDK supports live streaming the audio/video streams of multiple users/participants in a video session concurrently?

Assuming so, when the SDK method to start live streaming is invoked, does this stream the video of every user/participant in the session, or only that of the user on whose behalf the SDK call is made?
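
For reference, this is roughly how I'm invoking it today. It's a minimal sketch against the Windows Video SDK; the header paths, names and exact signatures are from my own reading of the SDK package, so please correct me if any of them are off:

```cpp
// Minimal sketch (Windows Video SDK, C++). Header paths, names and
// signatures are as I read them from the SDK package; apologies if I've
// misremembered any of them.
#include "zoom_video_sdk_interface.h"
#include "helpers/zoom_video_sdk_livestream_helper_interface.h"

using namespace ZOOMVIDEOSDK;

// Called after this client has successfully joined the session.
void startStreaming(IZoomVideoSDK* sdk)
{
    IZoomVideoSDKLiveStreamHelper* helper = sdk->getLiveStreamHelper();
    if (!helper)
        return;

    // This is the call question 1 is about: does a successful start here
    // produce an RTMP stream composited from all participants' video, or
    // only the video of the user this SDK instance joined the session as?
    ZoomVideoSDKErrors err = helper->startLiveStream(
        L"rtmp://live.example.com/app",    // stream URL (placeholder)
        L"my-stream-key",                  // stream key (placeholder)
        L"https://live.example.com/view"); // broadcast URL (placeholder)

    if (err != ZoomVideoSDKErrors_Success)
    {
        // Handle the failure (log / surface to the app).
    }
}
```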

2) Assuming the answer to 1) is yes, how are the video streams composited and laid out in the generated RTMP live stream by default?

What out-of-the-box compositions and layouts are supported for the live stream, and how do they vary depending on which media (e.g. webcam or screen share) is active for a given participant?

Are there any documented examples of the different compositions/layouts that are supported?

And does the Video SDK support programmatically selecting a desired composition/layout for the live stream? (The sketch below illustrates the kind of control I have in mind.)
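
To make that last part concrete, the snippet below sketches the sort of API I'm hoping exists. This is entirely hypothetical pseudo-API on my part; every name in it is imagined, not something I've found in the SDK headers:

```cpp
// Purely hypothetical: none of these names exist in the SDK as far as I
// know. They only illustrate the kind of control question 2 is asking about.
enum class LiveStreamLayout
{
    ActiveSpeaker,        // active speaker full-frame
    GalleryGrid,          // equal-sized grid of all participants
    ScreenShareWithStrip  // shared screen large, participant thumbnails in a strip
};

// Imagined call: pick the composition used for the RTMP output, e.g. switch
// to a share-centric layout when a participant starts screen sharing.
// helper->setLiveStreamLayout(LiveStreamLayout::ScreenShareWithStrip);
```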

3) Am I correct in understanding that, as of today, the desktop Video SDKs (macOS and Windows) are the only means of enabling live streaming for a video session? (I've already found that this function isn't supported by the Video Web SDK.)

Do you have any plans to offer a public web API for the Video SDK service to support managing and reporting on Video SDK sessions, similar to how the Zoom REST API supports this for Zoom Meetings today? (For comparison, the sketch below shows the kind of reporting call we use for Meetings.)
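
To illustrate the Meetings comparison: today we can pull session-level data from the Zoom REST API's Dashboard endpoints, e.g. GET /v2/metrics/meetings. A sketch of that call via libcurl is below (the token and query values are placeholders); an equivalent for Video SDK sessions is what I'm after:

```cpp
// For comparison: the kind of reporting call the Zoom REST API offers for
// Meetings today (GET /v2/metrics/meetings from the Dashboard API), made
// here with libcurl. Token and query values are placeholders.
#include <curl/curl.h>

int main()
{
    curl_global_init(CURL_GLOBAL_DEFAULT);
    CURL* curl = curl_easy_init();
    if (!curl)
        return 1;

    struct curl_slist* headers = nullptr;
    headers = curl_slist_append(headers, "Authorization: Bearer <access-token>");

    curl_easy_setopt(curl, CURLOPT_URL,
                     "https://api.zoom.us/v2/metrics/meetings?type=live");
    curl_easy_setopt(curl, CURLOPT_HTTPHEADER, headers);

    // With no write callback set, the JSON response body goes to stdout.
    CURLcode res = curl_easy_perform(curl);

    curl_slist_free_all(headers);
    curl_easy_cleanup(curl);
    curl_global_cleanup();
    return res == CURLE_OK ? 0 : 1;
}
```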

Thanks in advance. Regards,

Neil.