All integrations were built by following the Zoom Video SDK documentation, but I'm running into an issue that the documentation doesn't resolve.
When a participant turns on their camera, the other participants see only a black screen; no video is displayed.
In this case, the participant who turned on the camera had user ID 16786432. After they enable their camera, the DOM looks like this:
<video-player-container classname="ZoomMeeting_videoPlayerContainer__G9DcX"
  style="min-height: 700px; border: 1px solid var(--brd-color); border-radius: 5px; padding: 0.5rem; position: relative; display: block;">
  <div class="ZoomMeeting_focusedUsersWrapper__No4Ue">
    <ul class="user-list ZoomMeeting_focusedUserList__gFAkH" style="grid-template-columns: repeat(2, 1fr);">
      <div class="video-cell ZoomMeeting_focusedVideoCell__C9G0d">
        <span class="ZoomMeeting_userName__Ic8sd">Yan ADM</span>
      </div>
      <div class="video-cell ZoomMeeting_focusedVideoCell__C9G0d">
        <video-player data-userid="16786432" classname="ZoomMeeting_videoPlayer__en__2" node-id=""
          media-type="video" style="display: block;">
          <div style="width: 100%; height: 100%;"></div>
        </video-player>
        <span class="ZoomMeeting_userName__Ic8sd">Yan Ramos</span>
      </div>
    </ul>
  </div>
  <video-player node-id="16786432" data-userid="16786432" media-type="video" style="display: block;">
    <div style="width: 100%; height: 100%;"></div>
  </video-player>
</video-player-container>
Below is my rendering code, based on the excerpt cited in the documentation:
export const renderParticipantVideos = async (client, stream, sessionParticipants, videoContainer) => {
  const participants = client.getAllUser()
  const existingVideos = Array.from(videoContainer?.children || [])
  // data-userid attributes are strings, so compare against stringified userIds
  const existingUserIds = existingVideos.map(video => video.getAttribute('data-userid'))
  const usersToRender = participants?.filter(p => p.bVideoOn && !existingUserIds.includes(p.userId.toString())) || []
  // Render the videos of users that are not attached yet
  for (const user of usersToRender) {
    try {
      const userVideo = await stream?.attachVideo(user.userId, 3) // 3 = VideoQuality.Video_720P
      userVideo.setAttribute('data-userid', user.userId)
      videoContainer?.appendChild(userVideo)
    } catch (error) {
      console.error('Error rendering video for user:', user.userId, error)
    }
  }
}
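One subtlety I noticed while debugging: getAttribute('data-userid') returns strings, while the SDK's participant userId is a number, so the de-duplication comparison has to stringify before comparing. That check can be pulled out into a small pure helper (the helper name is mine, not part of the SDK):

```javascript
// Returns the participants whose video is on and who are not rendered yet.
// existingIds come from data-userid attributes, so they are strings;
// participant userIds are numbers, hence the toString() before comparing.
const selectUsersToRender = (participants, existingIds) =>
  (participants || []).filter(
    p => p.bVideoOn && !existingIds.includes(p.userId.toString())
  )
```

Keeping this logic pure makes it easy to unit-test without a live session or DOM.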
The documentation states that the enforceMultipleVideos option must be enabled for multiple-camera rendering to work correctly, so I pass this parameter when starting the session:
export const tryJoinSession = async (client, sessionName, user, updateSessionInfos, updateSessionParticipants) => {
  try {
    const jwt = await getVideoSDKJWT(sessionName)
    // init() takes a single options object as its third argument
    await client.init('pt-BR', 'Global', { patchJsMedia: true, leaveOnPageUnload: true, enforceMultipleVideos: true })
    const joinResponse = await client.join(sessionName, jwt, `${user?.firstName} ${user?.lastName}`, '123')
    if (joinResponse?.zoomID) {
      updateSessionInfos({ isJoined: true, zoomInfos: client.getSessionInfo(), hostInfos: client.getCurrentUserInfo() })
      updateSessionParticipants(client.getAllUser())
      return joinResponse
    }
  } catch (error) {
    console.error("Error joining session:", error)
    throw error
  }
}
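For reference, getVideoSDKJWT fetches a Video SDK JWT that I generate server-side. Its payload is roughly the following (field names as I understand them from the Video SDK auth docs; the actual HS256 signing with the SDK secret is done by a JWT library and omitted here):

```javascript
// Sketch of the Video SDK JWT payload; signing is done elsewhere
// with a JWT library and the SDK secret (never exposed to the client).
const buildVideoSDKPayload = (sessionName, roleType = 1) => {
  const iat = Math.floor(Date.now() / 1000)
  return {
    app_key: process.env.ZOOM_VIDEO_SDK_KEY, // SDK key, kept server-side
    tpc: sessionName,                        // must match the name passed to client.join()
    role_type: roleType,                     // 1 = host, 0 = participant
    version: 1,
    iat,
    exp: iat + 60 * 60 * 2                   // token valid for 2 hours
  }
}
```

I mention this mainly to rule out the token: join succeeds and joinResponse?.zoomID is present, so the JWT itself appears valid.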
I also saw in the documentation that enabling SharedArrayBuffer is optional (for better performance) and not required for camera rendering to work, so I didn't enable it.
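For completeness: my understanding is that enabling SharedArrayBuffer would require cross-origin isolation headers, which in Next.js would look roughly like the sketch below. I have not applied this in my project; I include it only to show what I deliberately left out:

```javascript
// next.config.js (sketch, not applied in my project) – cross-origin
// isolation headers that SharedArrayBuffer requires in modern Chrome.
module.exports = {
  async headers() {
    return [
      {
        source: '/(.*)',
        headers: [
          { key: 'Cross-Origin-Opener-Policy', value: 'same-origin' },
          { key: 'Cross-Origin-Embedder-Policy', value: 'require-corp' },
        ],
      },
    ]
  },
}
```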
Below is more information about the versions and system I am using:
Video SDK version: 1.12.5
React version: 18
Next JS version: 14.2.2
OS: Windows 11
Browser: Google Chrome 129.0.6668.70 (Official Build) (64-bit)