How can we determine who is speaking when via the API?
I am aware of a TIMELINE recording file but cannot see any documentation on it: is it only for cloud recordings? Which format is the file? Does it only get created if transcription is enabled?
Hey @jimig,
Can you provide more details, such as whether you are trying to detect the speaker during the meeting, or after the meeting using the recordings/transcripts?
You can detect who is speaking in real time via our SDKs.
For example,
https://zoom.github.io/zoom-sdk-android/us/zoom/sdk/InMeetingVideoController.html#activeVideoUserID--
Thanks,
Tommy
Hi Tommy,
It could be during or after.
The users have the Zoom app, so how would those SDKs help?
Hey @jimig,
The SDKs would basically put “The Zoom App” inside your app, if you wanted to customize the experience and detect the active speaker in real time.
Do you mind sharing your use case so I can better understand what you want to accomplish?
Thanks,
Tommy
Our use case is performing analysis on the meetings.
We’d like to know which participants spoke and for how long.
Our customer would authenticate with our platform to grant access to read all of their meetings/recordings. I was hoping to find this kind of data via the REST API.
Thanks for sharing your use case.
Unfortunately we do not have an endpoint to see reporting on which participants spoke and for how long.
Yes, it is only for Cloud Recordings. The TIMELINE file is a .json file. We are currently working on improvements to match the TIMELINE file with the audio_transcript.vtt file so that you can accomplish this.
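(For anyone landing here later: since the TIMELINE file's schema isn't officially documented, here is a minimal sketch of turning one into per-participant "active speaker" durations. It assumes the file contains a "timeline" array whose entries each have a "ts" offset and a "users" list of the participants active at that moment, which matches timeline files observed in practice; the field names and the sample data below are assumptions, not official documentation.)

```python
# Sketch: estimate per-participant active-speaker time from a Zoom
# TIMELINE .json file. Schema assumed (undocumented): top-level
# "timeline" list, each entry with a "ts" offset and a "users" list.
from collections import defaultdict

def parse_ts(ts):
    """Parse an 'HH:MM:SS.fff' offset string into seconds."""
    h, m, s = ts.split(":")
    return int(h) * 3600 + int(m) * 60 + float(s)

def speaking_seconds(timeline_json, recording_length):
    """Attribute the gap until the next timeline entry to the users
    active at the current entry; the last entry runs to the end of
    the recording (recording_length, in seconds)."""
    entries = timeline_json["timeline"]
    totals = defaultdict(float)
    for i, entry in enumerate(entries):
        start = parse_ts(entry["ts"])
        if i + 1 < len(entries):
            end = parse_ts(entries[i + 1]["ts"])
        else:
            end = recording_length
        for user in entry.get("users", []):
            totals[user["username"]] += end - start
    return dict(totals)

# Hypothetical sample data, illustrating the assumed shape only.
sample = {
    "timeline": [
        {"ts": "00:00:00.000", "users": [{"username": "Alice"}]},
        {"ts": "00:00:10.000", "users": [{"username": "Bob"}]},
        {"ts": "00:00:25.000", "users": [{"username": "Alice"}]},
    ]
}
print(speaking_seconds(sample, recording_length=30.0))
# → {'Alice': 15.0, 'Bob': 15.0}
```

In a real script you would `json.load()` the downloaded TIMELINE file instead of using the inline sample, and take the recording length from the recording_start/recording_end fields of the recordings response.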
Jira: ZOOM-68283
Thanks,
Tommy
Thanks Tommy, how do I go about creating a timeline file?
Currently I can only see audio files when doing a cloud recording.
Hey @jimig,
The timeline file will be included in the GET /users/{userId}/recordings response body.
For example,
{
  "meeting_id": "VCadzFUxSwezKq4g+u+V5w==",
  "recording_start": "2019-09-16T19:09:13Z",
  "recording_end": "2019-09-16T19:09:58Z",
  "file_type": "TIMELINE",
  "download_url": "https://api.zoom.us/recording/download/9a1cd1ab-c788-4697-9579-e224f1dc8600"
}
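(Editor's note: a short sketch of pulling TIMELINE download URLs out of that response. It assumes the standard shape of the GET /users/{userId}/recordings body, where meetings are listed under "meetings" and files under "recording_files"; the ACCESS_TOKEN/USER_ID placeholders and the sample response are assumptions you'd replace with your own OAuth credentials and live data.)

```python
# Sketch: find TIMELINE file download URLs in a
# GET /users/{userId}/recordings response body.
import json
import urllib.request

ACCESS_TOKEN = "YOUR_OAUTH_TOKEN"  # assumption: bearer token from your OAuth flow
USER_ID = "me"

def timeline_urls(recordings_response):
    """Collect download_url values of files whose file_type is TIMELINE.
    Assumes meetings under 'meetings' and files under 'recording_files'."""
    urls = []
    for meeting in recordings_response.get("meetings", []):
        for f in meeting.get("recording_files", []):
            if f.get("file_type") == "TIMELINE":
                urls.append(f["download_url"])
    return urls

# The live call would look roughly like this (not run here):
# req = urllib.request.Request(
#     f"https://api.zoom.us/v2/users/{USER_ID}/recordings",
#     headers={"Authorization": f"Bearer {ACCESS_TOKEN}"})
# body = json.loads(urllib.request.urlopen(req).read())

# Hypothetical sample response, trimmed to the fields used above.
sample_body = {
    "meetings": [
        {
            "recording_files": [
                {"file_type": "M4A", "download_url": "https://example.invalid/audio"},
                {"file_type": "TIMELINE",
                 "download_url": "https://api.zoom.us/recording/download/9a1cd1ab-c788-4697-9579-e224f1dc8600"},
            ]
        }
    ]
}
print(timeline_urls(sample_body))
```

Note that downloading the file itself also requires authorization (e.g. appending an access token to the download_url, per the cloud recording docs).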
Here are the cloud recording settings I have: [screenshot of cloud recording settings not shown]
Let me know if you have any other questions!
Thanks,
Tommy
Will one be able to detect the active speaker (userId) with websdk@1.8.0?
There is inMeetingServiceListener, but will an onActiveVideoUserID event be available there?
Also, I’ve asked for the documentation to be improved to clarify the list of available events: Web SDK ZoomMtg inMeetingServiceListener
Hey @aleksandr.borovsky,
Thanks for your feedback. Once 1.8.0 is released, there will be a list of available events.
Thanks,
Tommy