I’m building a Next.js 15 telehealth app on Zoom Video SDK Web (v2.1.10, WebRTC).
Question: Is it better to show the self‑view via the SDK’s renderVideo()/startVideo() or capture a separate local preview with navigator.mediaDevices.getUserMedia?
I’m mainly concerned about:
• CPU / memory (my laptop gets hot during tests)
• Accurate feedback—seeing what remote participants actually receive
• Any side‑effects (camera conflicts, duplicate capture, etc.)
@vic.yang
Hey @nventurino
Thanks for your feedback.
Is it better to show the self‑view via the SDK’s renderVideo()/startVideo() or capture a separate local preview with navigator.mediaDevices.getUserMedia?
It depends on the effect you want to achieve. If you want the self-view to be visible to remote users, use the startVideo method together with attachVideo. If it's only for a local self-view preview, you can use getUserMedia or the Video SDK's localVideoTrack to achieve that.
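To make the two paths concrete, here is a minimal TypeScript sketch. It is written against a small structural type rather than the real `@zoom/videosdk` import so it stays self-contained; in a real app `stream` comes from `client.getMediaStream()`. The method names (`startVideo`, `attachVideo`, `createLocalVideoTrack`) follow the Video SDK docs, but exact signatures and enum values vary by version, so treat this as a sketch, not the definitive API.

```typescript
// Structural stand-in for the parts of the SDK media stream this sketch
// touches; in a real app this is the object from client.getMediaStream().
type SdkStreamLike = {
  startVideo: () => Promise<void>;
  // In the real SDK, attachVideo resolves to a player element you append
  // to the DOM; quality is a VideoQuality enum value.
  attachVideo: (userId: number, quality: number) => Promise<unknown>;
};

// Path 1: self-view that remote users also receive. startVideo begins
// capturing, encoding, and sending; attachVideo then renders that same
// outgoing stream locally, so what you see matches what peers get.
async function selfViewSentToPeers(stream: SdkStreamLike, selfUserId: number) {
  await stream.startVideo();
  // 2 is assumed here to stand for VideoQuality.Video_360P -- check the
  // enum in your SDK version before relying on it.
  return stream.attachVideo(selfUserId, 2);
}

// Path 2: local-only preview, nothing sent. With the real SDK this is:
//   const track = ZoomVideo.createLocalVideoTrack();
//   await track.start(videoElement);
// or a direct navigator.mediaDevices.getUserMedia capture.
```

The point of the injected `SdkStreamLike` shape is only to keep the ordering visible: send first, then attach the self-view to the already-outgoing stream.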
CPU / memory (my laptop gets hot during tests)
As for the performance concern, we are continuously working on improvements. We rolled out the WebRTC video solution starting from version 2.1.0, so please keep the Video SDK up to date.
Thanks
Vic
Yes, I mean during a Video SDK session, so yes, I want to send video to the peer. However, I noticed that the self-view from getUserMedia seems to be higher quality, and I can put it in a <video> element and style it how I want (draggable container, object-fit: cover), so I just wanted to check whether I'm overlooking something by doing both.
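A sketch of that separate-preview approach follows. The 1280x720 "ideal" values are illustrative choices, not SDK requirements. One caveat worth noting against the original "accurate feedback" goal: the raw capture bypasses the SDK's encode/downscale path, which is likely why it looks sharper, but it also means the preview no longer reflects what remote participants actually receive.

```typescript
// Build constraints for a local-only camera preview. Pure helper so the
// constraint logic is testable outside a browser.
function previewConstraints(deviceId?: string): MediaStreamConstraints {
  return {
    audio: false, // preview only; the SDK owns the audio path
    video: {
      ...(deviceId ? { deviceId: { exact: deviceId } } : {}),
      width: { ideal: 1280 },  // illustrative, not an SDK requirement
      height: { ideal: 720 },
    },
  };
}

// Browser-side wiring: attach the raw capture to a styleable <video>.
async function attachPreview(video: HTMLVideoElement, deviceId?: string) {
  const stream = await navigator.mediaDevices.getUserMedia(
    previewConstraints(deviceId)
  );
  video.srcObject = stream;
  video.muted = true; // never play back local audio
  await video.play();
  // Styling is plain CSS on the element/container, e.g.
  //   video { object-fit: cover; }
  //   .preview { position: fixed; /* draggable via pointer events */ }
}
```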
Hi @nventurino
The Video SDK allows developers to use getUserMedia to capture and render video themselves, without side effects.
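Even so, since both captures open a camera, one practical hedge against device conflicts is pinning the SDK and the preview to the same deviceId. `pickCamera` below is a hypothetical helper name, not an SDK API:

```typescript
// Hypothetical helper: choose one camera deviceId so that both the SDK's
// video send and a getUserMedia preview open the same physical device.
function pickCamera(
  devices: MediaDeviceInfo[],
  preferredId?: string
): string | undefined {
  const cams = devices.filter((d) => d.kind === 'videoinput');
  return (
    cams.find((d) => d.deviceId === preferredId)?.deviceId ??
    cams[0]?.deviceId // fall back to the first camera, if any
  );
}

// In the browser (after a permission grant, so device ids are populated):
//   const devices = await navigator.mediaDevices.enumerateDevices();
//   const cameraId = pickCamera(devices, savedCameraId);
//   // then pass cameraId to the SDK's startVideo options and to the
//   // preview's getUserMedia constraints (option names vary by version)
```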
Thanks
Vic