Thanks for posting in the Zoom Developer Forum – I am happy to help here! Great question. We highly recommend implementing one of these three options to improve performance for the Web Meeting SDK and Web Video SDK in Chrome and Edge.
Thanks for your feedback. To be honest, it was not the gallery view so much as the performance that was the main thrust behind our team looking into the Zoom SDK.
Your own performance report was really impressive, particularly the performance in low-bandwidth network conditions and the lower CPU usage, which we think would be great for mobile devices, especially lower-spec ones.
I suppose what’s not really mentioned in the report is how the results change when SharedArrayBuffer is not accessible, for example when the user joins the call from an iPad.
Does the absence of SharedArrayBuffer affect the performance of 1:1 calls (we are a telemedicine company), or is it mostly needed for multi-party calls? Nearly all our calls are 1:1, so multi-party is not an issue, but performance is a massive one.
Hoping you can shed some light on this for me. I have tried to research the subject in depth, but I’m still not 100% sure what role SharedArrayBuffer plays in the Zoom architecture, so it’s hard to make the call to go ahead with the switch when I know a significant portion of users won’t have access to it.
Thanks so much for your thoughts on the issue. My team and I would really appreciate any insight you may have - it could save us a tonne of dev work!
SharedArrayBuffer is a high-performance way to transfer video frames between the Zoom SDK and the app. However, it may not be possible to use it if the app includes third-party tags that prevent the cross-origin isolation headers (COOP/COEP) the browser requires from being applied.
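Because availability depends on the page being cross-origin isolated, a quick runtime check can tell you which code path a given user will get. A minimal sketch, assuming you want to branch in your own app code (the helper name `canUseSharedArrayBuffer` is ours, not part of any SDK):

```javascript
// Returns true only when the page can actually use SharedArrayBuffer:
// the constructor must exist AND the page must be cross-origin isolated
// (i.e. served with the COOP/COEP headers). On a non-isolated page, or a
// browser that does not expose the constructor, this returns false, so
// you can fall back to a non-SharedArrayBuffer code path.
function canUseSharedArrayBuffer() {
  return (
    typeof SharedArrayBuffer !== "undefined" &&
    globalThis.crossOriginIsolated === true
  );
}

console.log(canUseSharedArrayBuffer());
```

Checking `crossOriginIsolated` in addition to the constructor matters because some environments define `SharedArrayBuffer` while still refusing to share its memory across contexts.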
One possible solution is to isolate the Zoom SDK in its own iframe or web worker, which would have its own isolated environment for the header setup. This would require some additional development effort, but it would let the app keep using third-party tags without interfering with the Zoom SDK.
Another option is to use an approach to video streaming that doesn’t rely on SharedArrayBuffer. For example, you could use a WebRTC implementation that doesn’t require this feature, or a different video conferencing SDK that doesn’t depend on it.
Ultimately, the best approach will depend on the specific requirements and constraints of your app. You may need to experiment with different approaches to find the best solution for your particular use case.
I kind of figured that might be the case, but thanks for confirming that we weren’t missing anything obvious. We already have a fully working custom WebRTC setup, but the performance report released by Zoom got the dev team talking.
I think for us the best solution would be to isolate the page that contains the video stream. Obviously that will take a bit of work: for an SPA that uses a state manager, routing, and classes for API abstraction, it’s not an insignificant piece of work to split a whole section of the app out into its own mini-app. So I think we’ll hold off for now, but at least I know the way forward if/when we do decide to go for it.
As a side note - I do think this information should be shared more widely in the Zoom docs so devs know what they are getting into and the limitations of using the Zoom SDK. Our team had basically finished the integration by the time we realised the extent of this issue and the scale of the fix needed to work around it. Just a little warning box at the top of the docs about domain isolation would have saved our team days of work.
Almost every commercial app in the world is going to have a load of third-party tags for all kinds of things (I mean, who doesn’t have analytics?). It’s not realistic to expect devs to be able to domain-isolate an already established web application, and many times it won’t even be within their control if management uses a third-party app for reporting or something.
Anyway - thanks for your reply, it did help confirm everything for us.
Enabling SharedArrayBuffer will improve the user experience, including reduced audio latency, gallery view support, 720p video, etc.
But in the case of 1:1 calls the improvement is limited, and the change in CPU and memory usage is small. Network conditions are a big factor in the user experience, and that’s what the Zoom Video SDK is good at handling.
It’s great being able to beat out the competition, but it has to be an apples-to-apples comparison, and if Zoom’s SDK is used without SharedArrayBuffer in most web apps, then I feel that’s the version that should be tested.
I’m not trying to be awkward with all this. I work for a telehealth company that does hundreds of thousands of sessions a year, and each session needs to work well or the company refunds the client. That’s our policy: if the tech fails (even if it’s because they have poor internet), we refund, because we feel that’s the best way to drive adoption of these kinds of technologies.
That’s why it’s so important for us to have reliable data on which technology to use. We currently use two different WebRTC providers: one is off the shelf and one runs on our own servers as a backup.
After seeing the report we started experimenting with the Zoom SDK (we had looked into it before, but mobile browsers weren’t supported until recently).
So, I suppose my question is: do you think we should still look at Zoom without using SharedArrayBuffer (early tests didn’t look great, to be honest), or is there a potential future version of Zoom that doesn’t use SharedArrayBuffer?
Can you see a world where Zoom is deployed as a WebRTC media server? Could Zoom then use the data layer of WebRTC and move its best-in-class video processing to the server?
According to past test reports, without SharedArrayBuffer the audio latency will be slightly higher than a WebRTC solution and the video is on par, but the Zoom solution holds up better on a weak network.
We are continuously optimizing the video experience, including the WebRTC layer on the server.