How are the input streams for AudioRecord inside the Zoom SDK selected?

Description
Hi,

I opened mobilertc.aar using Android Studio and found the AudioDeviceAndroid.class file.
In this file, there are three places where an instance of AudioRecord is created.

<mobilertc.aar/classes.jar/org/webrtc/voiceengine/AudioDeviceAndroid.class>

There are three things that I want to know.

  1. There are three “new AudioRecord()” calls. Under what condition is each one invoked? (A generic example of such a call is shown after this list.)

  2. How is the value of mBestAudioSource determined?

  3. Is there an API that allows me to select the mBestAudioSource value?
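For context, here is a generic sketch of how an Android app normally creates an AudioRecord with a chosen audio source. This is not the SDK’s internal code, just the standard platform pattern, so it is clear what I mean by an audio source (e.g. MIC vs. VOICE_COMMUNICATION):

```java
import android.media.AudioFormat;
import android.media.AudioRecord;
import android.media.MediaRecorder;

// Generic Android pattern only -- NOT the decompiled SDK code.
// Requires the RECORD_AUDIO permission at runtime.
public final class AudioSourceExample {

    // Creates a mono 16 kHz PCM recorder for the given audio source constant,
    // e.g. MediaRecorder.AudioSource.MIC or MediaRecorder.AudioSource.VOICE_COMMUNICATION.
    public static AudioRecord createRecorder(int audioSource) {
        int sampleRate = 16000; // typical VoIP sample rate
        int channelConfig = AudioFormat.CHANNEL_IN_MONO;
        int encoding = AudioFormat.ENCODING_PCM_16BIT;
        int bufferSize = AudioRecord.getMinBufferSize(sampleRate, channelConfig, encoding);
        return new AudioRecord(audioSource, sampleRate, channelConfig, encoding, bufferSize);
    }
}
```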

Which Android Client SDK version?
I’ll post it once I know.

Hi @k.doi, thanks for using our SDK.

Unfortunately, when it comes to logic internal to the SDK, we cannot disclose much information, as some of it may be proprietary in nature. If you have any questions related to the publicly available SDK methods, please let me know and I will be more than happy to help. 🙂

Thanks!

All right then, does the Android Client SDK have a method to select which audio device (a specific microphone or speaker) to use in a meeting?

Hi @k.doi,

There isn’t a lot of control offered through the SDK for audio input, unfortunately.

There is a small amount of control over audio output through the InMeetingAudioController class.
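As a rough sketch, switching the output between the earpiece and the loudspeaker looks roughly like the following; please verify the exact method names and return types against the SDK reference for the version you are using:

```java
import us.zoom.sdk.InMeetingAudioController;
import us.zoom.sdk.InMeetingService;
import us.zoom.sdk.ZoomSDK;

public final class AudioOutputHelper {

    // Routes meeting audio to the loudspeaker (true) or the earpiece (false).
    // Only meaningful while the SDK is initialized and you are in a meeting;
    // method names follow the public SDK reference and should be double-checked
    // against your SDK version.
    public static void routeToLoudspeaker(boolean useLoudspeaker) {
        ZoomSDK sdk = ZoomSDK.getInstance();
        if (!sdk.isInitialized()) {
            return;
        }
        InMeetingService meetingService = sdk.getInMeetingService();
        InMeetingAudioController audioController = meetingService.getInMeetingAudioController();
        if (audioController.canSwitchAudioOutput()) {
            audioController.setLoudSpeakerStatus(useLoudspeaker);
        }
    }
}
```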

Please let me know if you have any questions on how to use this, or if there is anything missing that you would like us to consider adding in a future release.

Thanks!

Thanks for the information.

Hi @alexsunny123, thanks for using the dev forum.

Are you looking for help with a similar issue?

Thanks!
