We are getting user-based audio via the onOneWayAudioRawDataReceived() callback (link to docs). It provides a uint32_t node_id which seems to represent speakers. How do we resolve those to participant names?
@gibron, IMeetingParticipantsController has a GetUserByID method, which returns an IUserInfo object. This IUserInfo object has a GetUserName method.
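For reference, a minimal sketch of that lookup, assuming the Windows/Linux Meeting SDK headers. Note the header I have names the lookup GetUserByUserID rather than GetUserByID, so verify the exact name against your SDK version:

```cpp
#include <cstdint>
#include "meeting_service_interface.h"            // Zoom Meeting SDK
#include "meeting_participants_ctrl_interface.h"

using namespace ZOOMSDK;

// Returns the display name for a node_id, or nullptr if no such user is
// found. zchar_t is wchar_t on Windows builds and char on Linux builds.
const zchar_t* ResolveParticipantName(IMeetingService* meeting_service,
                                      uint32_t node_id) {
    IMeetingParticipantsController* participants =
        meeting_service->GetMeetingParticipantsController();
    if (!participants) return nullptr;

    // The node_id delivered by onOneWayAudioRawDataReceived() is expected
    // to match the participant's user id.
    IUserInfo* user = participants->GetUserByUserID(node_id);
    return user ? user->GetUserName() : nullptr;
}
```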
Okay @chunsiong.zoom, we have had the opportunity to try. The GetUserName method of the IUserInfo class only seems to return numbers, such as 66 or 71 or 114. I was expecting this to return the participant's name as a string. Are these ids referencing something else? How do we use those values?
Also, for the onOneWayAudioRawDataReceived method of the IZoomSDKAudioRawDataDelegate class, is the expected output a continuous audio stream simultaneously for each meeting participant? So while one person is talking, the rest of the audio streams will remain silent?
This might be a separate question, so please let me know if you'd like me to edit this and create a new question. But after working on this a bit, it feels like we have to open a websocket connection for each participant, apply our own timestamp to each piece of audio (we are transcribing it), and then try to assemble the pieces in order in another process outside of the individual websocket connections.
Does that feel like the recommended use? It feels like there should be a better way of doing this than reconstructing the meeting audio streams. Is there another more straightforward way?
@gibron,

> The GetUserName method of the IUserInfo class only seems to return numbers, such as 66 or 71 or 114. I was expecting this to return the participant's name as a string. Are these ids referencing something else? How do we use those values?
Odd… did they use numbers for their name?
> Also, for the onOneWayAudioRawDataReceived method of the IZoomSDKAudioRawDataDelegate class, is the expected output a continuous audio stream simultaneously for each meeting participant? So while one person is talking, the rest of the audio streams will remain silent?
onOneWayAudioRawDataReceived() returns a node_id too. This is a unique value for each individual; if I remember correctly, it should match the user id in IUserInfo. In the case where two people speak at the same time, you might have onOneWayAudioRawDataReceived() firing at roughly the same time, but with two different node_id values.
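A minimal sketch of a delegate that keeps the streams separate per node_id (buffering in memory here just for illustration). The callback names follow the raw-data headers I recall, so check them against your SDK version; newer SDKs may declare additional pure-virtual callbacks you must also override:

```cpp
#include <cstdint>
#include <map>
#include <vector>
#include "rawdata/rawdata_audio_helper_interface.h"  // Zoom Meeting SDK

using namespace ZOOMSDK;

class PerSpeakerAudioDelegate : public IZoomSDKAudioRawDataDelegate {
public:
    // Fired once per audio chunk per participant. Two people talking at
    // once simply means calls arriving close together in time, each with
    // its own node_id.
    void onOneWayAudioRawDataReceived(AudioRawData* data,
                                      uint32_t node_id) override {
        std::vector<char>& pcm = buffers_[node_id];  // one buffer per speaker
        pcm.insert(pcm.end(), data->GetBuffer(),
                   data->GetBuffer() + data->GetBufferLen());
    }

    // The mixed (all speakers combined) and share-audio callbacks are
    // unused here, since we want per-speaker audio.
    void onMixedAudioRawDataReceived(AudioRawData* data) override {}
    void onShareAudioRawDataReceived(AudioRawData* data) override {}

private:
    std::map<uint32_t, std::vector<char>> buffers_;
};
```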
> This might be a separate question, so please let me know if you'd like me to edit this and create a new question. But after working on this a bit, it feels like we have to open a websocket connection for each participant, apply our own timestamp to each piece of audio (we are transcribing it), and then try to assemble the pieces in order in another process outside of the individual websocket connections.
I might open a websocket for each unique node_id. Another alternative would be to save the audio individually for each unique node_id and then pass it to a transcription service.
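A minimal sketch of that second alternative, using only standard C++: append each raw chunk to a file per node_id (the PerSpeakerPcmWriter name and the .pcm naming are just illustrative), and hand each finished file to whatever transcription service you use afterwards:

```cpp
#include <cstdint>
#include <fstream>
#include <map>
#include <string>

class PerSpeakerPcmWriter {
public:
    // Appends one raw-audio chunk to "<node_id>.pcm", opening the file on
    // first use. Call this from onOneWayAudioRawDataReceived().
    void WriteChunk(uint32_t node_id, const char* buf, unsigned int len) {
        auto it = files_.find(node_id);
        if (it == files_.end()) {
            it = files_.emplace(node_id,
                     std::ofstream(std::to_string(node_id) + ".pcm",
                                   std::ios::binary | std::ios::app)).first;
        }
        it->second.write(buf, len);
    }

private:
    std::map<uint32_t, std::ofstream> files_;  // one output file per speaker
};
```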