Cannot get Camera Mode to work in Layers API

Context:
Working with NodeJS and VueJS

Description
The issues I’m experiencing at the moment, which I’d like to forward to the Zoom developer team, are the following:

  • Camera mode never initiates on my Mac after calling runRenderingContext with the options { view: "camera" }. If I change the value to { view: "immersive" }, I can see the running context change to "inImmersive", but I never see "inCamera" when I set the view to camera. Also, I’d be glad to know if there is a way to use developer tools in camera mode, as it isn’t very helpful that one cannot determine when runRenderingContext has switched to "inCamera".

  • Neither drawWebView nor drawImage seems to work, yet drawParticipant works in immersive mode, so it is not clear why the other two fail in camera mode; those two are important for rendering the webview (the feature). This might also be because camera mode is not being initiated, as discussed in the first point.
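Since developer tools aren’t available inside the camera view, one workaround is to poll getRunningContext from the panel and log when the context flips. This is only a sketch, assuming the promise-based Zoom Apps SDK; `waitForContext` and its options are illustrative names, and the SDK object is passed in as a parameter:

```javascript
// Sketch: poll getRunningContext until the context reaches the wanted
// value (e.g. "inCamera"), so the panel can tell when the switch happened.
// `sdk` is passed in rather than imported so this is easy to exercise.
async function waitForContext(sdk, wanted, { tries = 10, delayMs = 500 } = {}) {
  for (let i = 0; i < tries; i++) {
    const { context } = await sdk.getRunningContext();
    if (context === wanted) return true;
    // Wait a bit before polling again.
    await new Promise((resolve) => setTimeout(resolve, delayMs));
  }
  return false; // gave up: the context never changed
}
```

Called from the panel right after `runRenderingContext({ view: "camera" })`, e.g. `await waitForContext(zoomSdk, "inCamera")`, this at least tells you whether the switch ever completed.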

How To Reproduce
Call runRenderingContext with camera mode. Currently, neither drawWebView nor drawImage works after calling runRenderingContext:

await zoomSdk.runRenderingContext({ view: "camera" })
  .then((ctx) => {
    console.log("runRenderingContext returned", ctx);
  })
  .catch((e) => {
    console.log(e);
  });

Then call drawWebView

await zoomSdk.drawWebView({
  webviewId: "speaking-time-overlay",
  x: 0,
  y: 0,
  width: 300,
  height: 300,
  zIndex: 2
})
  .then((ctx) => {
    console.log("drawWebView returned", ctx);
  })
  .catch((e) => {
    console.log(e);
  });

OR call

await zoomSdk.drawImage({
  imageData: imageData,
  x: 0, y: 0, zIndex: 3
})
  .then((ctx) => {
    console.log("drawImage returned", ctx);
    console.log("drawImage returned imageId", ctx.imageId);
  })
  .catch((e) => {
    console.log(e);
  });

This is the getImageData function:

const getImageData = (width, height) =>
  new Promise((resolve, reject) => {
    const canvas = document.createElement("canvas");
    canvas.width = width;
    canvas.height = height;

    const img = new Image();
    // Image loading is asynchronous: the pixels must be read in the
    // onload handler, otherwise drawImage runs before the image has
    // loaded and the canvas stays blank.
    img.onload = () => {
      const ctx = canvas.getContext("2d");
      ctx.drawImage(img, 0, 0, width, height);
      resolve(ctx.getImageData(0, 0, width, height));
    };
    img.onerror = reject;
    img.src = "HowTo2.png"; // our image url - change baseurl
  });

Did you call zoomSdk.config with the capabilities first? I am also coding in Vue.js and initialize the application like this:

      zoomSdk.config({
        popoutSize: { width: 480, height: 360 },
        capabilities: [
          'getMeetingUUID',
          'getRunningContext',
          'getMeetingContext',
          'runRenderingContext',
          'closeRenderingContext',
          'drawParticipant',
          'clearParticipant',
          'drawImage',
          'clearImage',
          'drawWebView',
          'clearWebview',
          'postMessage',
          'sendAppInvitationToAllParticipants',
          'getVideoState',
          // Events
          'onRenderedAppOpened',
          'onMessage',
          'onMyMediaChange',
          'onSendAppInvitation',
          'onAppPopout'
        ],
      }).then(
        (configResponse) => {
          this.inZoom = true
          this.checkContext().then(() => {
            if (this.controllerMode) {
              zoomSdk.getVideoState().then((state) => {
                if (state.video) {
                  this.startCamera()
                }
              })
              zoomSdk.onMyMediaChange((event) => {
                if (event.media.video.state) {
                  this.startCamera()
                } else {
                  this.stopCamera()
                }
              })
            }
            if (this.cameraMode) {
              zoomSdk.getVideoState().then((state) => {
                if (state.video) {
                  this.renderWebView()
                }
              })
              zoomSdk.onMyMediaChange((event) => {
                if (event.media.video.state) {
                  this.renderWebView()
                }
              })
            }
          })
        }
      ).catch(e => {
        this.controllerMode = false
        this.inZoom = false
      })
    }

renderWebView looks like this:

    renderWebView() {
      zoomSdk.drawWebView({
        webviewId: 'MyCamera',
        x: 0, y: 0, width: 1280, height: 720, zIndex: 5
      })
      .then((ctx) => {
        this.logger("Web View Rendered")
      })
      .catch((e) => {
        // console.log(e);
      });
    },

The webview is another browser window running the same app, but you cannot see its console log. You can use zoomSdk.postMessage and zoomSdk.onMessage to send messages between the two. So my component has a template like:

  <div class="container">
    <zoom-camera-app v-if="cameraMode"></zoom-camera-app>
    <zoom-controller-app v-if="controllerMode"></zoom-controller-app>
  </div>
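Since the camera webview’s console is invisible, wrapping postMessage/onMessage in a small helper makes the panel-to-camera channel easier to reason about. A sketch under the same promise-based SDK assumptions; the `makeBridge` name and the `type`/`data` envelope are mine, not part of the SDK:

```javascript
// Sketch: a tiny message bridge between the panel and camera components.
// The sdk is injected so the helper can be tested outside of Zoom.
function makeBridge(sdk) {
  return {
    // Wrap a type + data envelope and post it to the other webview.
    send(type, data) {
      return sdk.postMessage({ payload: JSON.stringify({ type, data }) });
    },
    // Decode incoming messages; some environments deliver the payload
    // as a string rather than an object, so parse defensively.
    listen(handler) {
      sdk.onMessage(({ payload }) => {
        const msg = typeof payload === "string" ? JSON.parse(payload) : payload;
        handler(msg.type, msg.data);
      });
    },
  };
}
```

The panel might call `bridge.send("draw", { url: "/overlay" })`, while the camera component calls `bridge.listen(...)` and dispatches on the message type.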

drawWebView doesn’t work in immersive mode; it only works in camera mode.

drawImage and drawParticipant should work in both immersive and camera mode.

If you call the drawImage, or drawParticipant functions from the sidebar (inMeeting) or the immersive window (inImmersive) they only affect the immersive view.
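These rules can be captured in a small dispatcher that checks the running context before drawing. This is only a sketch of the behavior described above; the `drawForContext` helper is mine, and the draw parameter names follow recent Zoom Apps SDK releases and may differ in older versions:

```javascript
// Sketch: pick the draw call that matches the current rendering context.
// `webviewId` and `participantUUID` are supplied by the caller.
async function drawForContext(sdk, { webviewId, participantUUID }) {
  const { context } = await sdk.getRunningContext();
  if (context === "inCamera") {
    // drawWebView only takes effect in the camera view.
    return sdk.drawWebView({ webviewId, x: 0, y: 0, width: 1280, height: 720, zIndex: 5 });
  }
  // Called from inMeeting or inImmersive, drawParticipant (like drawImage)
  // affects the immersive view.
  return sdk.drawParticipant({ participantUUID, x: 0, y: 0, width: 1280, height: 720, zIndex: 1 });
}
```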

We are working on docs in this area!


Thank you so much @henry2. Super helpful.

How do I load dynamic content into the webview, considering there’s no place to retrieve a webviewId? Or would you suggest adding the code in the .then block (where you added this.logger in your code)? Switching the view in my template as explained in your example affects the sidebar. The sidebar currently displays what I want to show in camera mode on my webview.

Please help. Thank you

As Jon states, you cannot use camera and immersive mode at the same time.

Your code can check whether you are in the camera or the panel using getVideoState:

zoomSdk.getVideoState().then((state) => {
  if (state.video) {
    this.renderWebView()
  }
})

That is also what I effectively use to determine which component to display in my template (panel and camera are in separate components)

Then, if I have a button or something that initiates a display in the camera, I use zoomSdk.postMessage and zoomSdk.onMessage to send a message between the panel and the camera components. postMessage can send a JSON structure, so your onMessage handler can determine what to do based on that.

By the way, I found that for some reason with Vue 3 the message was received as a string instead of JSON, contrary to the Zoom docs. So I parse it back into a JSON structure if it was not received as such:

processMessage({ payload }) {
  // For some reason the payload arrives as a string form of the JSON
  let pl = (typeof payload == "string") ? JSON.parse(payload) : payload
  // ......
}

Thanks @henry2

I understand what you said, and to be clear, I’m not using immersive mode together with camera mode. All I want to do is use camera mode; I only switch to immersive mode to confirm that the code runs in another mode as it should. The issue I’m having, based on the example code you provided above, is that after calling runRenderingContext, the camera view does not seem to be rendered, even when specifying { view: "camera" }. I’m testing with the drawImage function, so I do this:

zoomSdk.getVideoState().then((state) => {
  if (state.video) { // remember this is after calling view: "camera" and by now should be in inCamera mode
    drawImage()
  }
})

The above code does not work in camera view, but if I change runRenderingContext to { view: "immersive" }, I see the image. Note: I only switch to immersive mode to confirm that the code works, and I’m testing with the drawImage function since it works in both immersive and camera view.

Please help. Also, when I switch to immersive just for testing, I see the template I want to show in camera view on the canvas, BUT drawWebView and drawImage are not working in camera mode with the same code.

I use a method that checks the running context and initialises the state. It contains code such as:

zoomSdk.getRunningContext().then(
  (ctx) => {
    this.controllerMode = ctx.context == 'inMeeting';
    this.cameraMode = ctx.context == 'inCamera';
  }
).catch(
  (e) => {
    console.debug("*** ", e)
  }
)

Then I use zoomSdk.drawWebView if the running context is inCamera. It takes a few seconds for the display to appear in the layer over the camera, but it does work. The camera has to be running, so I also have code that tests the video state: when the state changes (camera start or stop), I start or stop the rendering context.

I think the key is that starting the rendering context has to happen after the camera is turned on.
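That ordering can be wired up once with onMyMediaChange. A sketch, assuming the promise-based SDK with the runRenderingContext/closeRenderingContext capabilities configured; `bindCameraLifecycle` is an illustrative name:

```javascript
// Sketch: start the rendering context only after the camera turns on,
// and close it again when the camera turns off.
function bindCameraLifecycle(sdk) {
  sdk.onMyMediaChange(async (event) => {
    if (event.media.video.state) {
      // Camera just came on: safe to start the camera rendering context.
      await sdk.runRenderingContext({ view: "camera" });
    } else {
      // Camera turned off: tear the rendering context down.
      await sdk.closeRenderingContext();
    }
  });
}
```

Calling this once after zoomSdk.config keeps the rendering context's lifetime tied to the camera's, which avoids starting it while the camera is still off.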


Thank you @henry2 for all your help. I’ll try this again.
