I have a client that teaches deaf children. They are trying to use Zoom to conduct class. In a group video chat, Zoom decides where to focus based on who is making noise. This doesn’t work for a group of deaf people. So I’m trying to determine whether sufficient API methods are exposed to build an app that would change the way the focusing works. Any ideas on a technical approach for this?
If the teacher is the host, the teacher should have control over the room and which students are muted or have video enabled.
While I applaud your desire to help these children, let’s see if we can get them help faster using existing functionality of Zoom.
Have you and the instructor read through these Zoom KB docs and watched this video?
I think it might help achieve the desired results faster.
Yes, she’s aware of all this but finds it cumbersome while teaching, and was hoping it could be automatic, like it is for people who are speaking.
@hunter, could you please expand on that, sir, and share more about the experience and what in particular is cumbersome for the educator? I would very much like to learn more about the problem and its root cause in the hope of building a solution.
Is the goal here so the teacher is the only person to ever be displayed in the video stream?
If yes, the educator can disable participant video, either while creating the meeting or once it has started, and can also mute all participants.
- Disable student video: Turn off a student’s video to block distracting content or inappropriate gestures while class is in session.
- Mute students: Mute/unmute individual students or all of them at once. Mute Upon Entry (in your settings) is also available to keep the clamor at bay when everyone files in.
A side note: To ensure the virtual classroom (meeting) is secure for students and educator, I highly recommend following these guidelines: https://blog.zoom.us/wordpress/2020/03/27/best-practices-for-securing-your-virtual-classroom/
Looking forward to hearing back from you!
The goal is not so that the teacher is the only person to ever be displayed in the video stream.
So think about a group of 30 deaf people trying to use sign language to communicate with each other via Zoom. Nobody is making noise, so muting has nothing to do with what they’re trying to do. If there are 30 people, then some of the students are not even being displayed on the teacher’s screen. If they are not displayed, she cannot even tell that they are trying to communicate with sign language. And if everyone is on screen in gallery view, they are so small that it is difficult to determine who might be signing. So the idea here is to highlight the person with the most movement, the same way Zoom currently highlights the person with the loudest audio, so that the person using sign language would automatically be focused on for everyone.
Does that make sense?
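To make the idea concrete, here is a minimal sketch of the "most movement" heuristic: score each participant by the mean absolute pixel difference between their last two video frames, then pick the highest scorer. This is an illustration only; the frame format (lists of grayscale rows) and the function names are assumptions, not anything from Zoom's APIs.

```python
def motion_score(prev_frame, curr_frame):
    """Mean absolute pixel difference between two grayscale frames.

    Frames are represented here as lists of rows of 0-255 intensity
    values (an assumed toy format). A higher score means more movement
    between the two frames.
    """
    total, count = 0, 0
    for prev_row, curr_row in zip(prev_frame, curr_frame):
        for p, c in zip(prev_row, curr_row):
            total += abs(c - p)
            count += 1
    return total / count if count else 0.0


def most_active(streams):
    """Given {participant_id: (prev_frame, curr_frame)}, return the id
    with the highest motion score -- the participant most likely signing."""
    return max(streams, key=lambda pid: motion_score(*streams[pid]))


# Toy usage: one still participant, one whose frame changed.
still = [[10, 10], [10, 10]]
moved = [[10, 200], [10, 10]]
print(most_active({"teacher": (still, still), "student_a": (still, moved)}))
```

A real implementation would work on actual decoded frames and would likely need background subtraction and smoothing so that camera noise or someone shifting in their chair doesn't outrank a person signing, but the ranking principle is the same.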
You could use our client SDKs (iOS example), which have the ability to spotlight a user. However, you would have to create or use some sort of movement-detection technology that watches each user’s video to know when, and for whom, to call the spotlight function.
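The "know when and who to call the spotlight function for" part can be sketched as a small control loop: take the latest motion scores, and switch the spotlight only when the leader changes, clears a threshold, and a hold period has elapsed (so the spotlight doesn't flicker between signers). Everything here is assumed: `spotlight_fn` stands in for whatever real SDK call spotlights a user, and the threshold/hold values are arbitrary tuning numbers.

```python
import time

SPOTLIGHT_THRESHOLD = 20.0   # minimum motion score before switching (assumed tuning value)
HOLD_SECONDS = 2.0           # keep a signer spotlighted briefly to avoid flicker

class SpotlightController:
    """Decides when to switch the spotlight based on per-participant motion scores.

    `spotlight_fn` is a placeholder for whatever SDK call actually
    spotlights a user -- the real Zoom SDK method will differ.
    """
    def __init__(self, spotlight_fn, clock=time.monotonic):
        self.spotlight_fn = spotlight_fn
        self.clock = clock
        self.current = None              # currently spotlighted participant id
        self.last_switch = float("-inf")  # time of the last spotlight change

    def update(self, scores):
        """`scores` maps participant id -> motion score for the latest frames."""
        if not scores:
            return
        pid = max(scores, key=scores.get)
        now = self.clock()
        if (scores[pid] >= SPOTLIGHT_THRESHOLD
                and pid != self.current
                and now - self.last_switch >= HOLD_SECONDS):
            self.spotlight_fn(pid)
            self.current = pid
            self.last_switch = now
```

The hold period is the important design choice: without hysteresis, two people signing back and forth would cause the spotlight to bounce on every frame.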
Thanks Tommy, now we’re talking! So let’s say I had the computer vision technology to determine when someone was moving, and to what degree. I’d still need to be able to grab the video stream of each user to do that analysis in real time. Is there anywhere in the SDKs where I can access the video streams?
You’d have to ask @Carson_Chen in the Mobile SDK channels about getting video streams.
Hey @hunter! How did this turn out?
I’m doing the same here in Spain: I teach deaf people and hold meetings with them, and it would be amazing to have a feature like that.
Waiting for your reply!
Hey @adriangm44, I hit a roadblock unfortunately. The API methods I really need are only available via the mobile SDKs, but I wanted to build a desktop plugin. So I’m a bit stuck on it.