Hi-res images in setVirtualForeground or drawImage

I have developed my first Zoom App, which displays certain imagery overlaid on the user’s camera feed. I am using the Zoom Apps SDK, and the app is written in JavaScript.

Now, locally the app works just as expected and everything is to my satisfaction. Remotely however - after the image has been encoded into the video feed and sent over the wire - the overlays are often very pixelated (depending on the remote user’s bandwidth, I guess) and, if they contain text, barely legible or not legible at all.

I have tried two ways of setting the overlay: setVirtualForeground and drawImage. While the setVirtualForeground documentation makes no mention of hi-res images, the drawImage documentation says the images should be adjusted for HiDPI displays and scaled by the resolution factor (I used window.devicePixelRatio). The thing is: when I do this, the drawImage-drawn image simply comes out at double the size, with no improvement in quality.

So: how do I fix my drawImage issue? And is there any way to improve the quality of such overlays inside Zoom Apps, or could I write the app differently to achieve my goal?

Thanks in advance,

Chris

@cz2022 In the customlayout sample app, I ended up drawing quadrants due to API/SDK limitations at the time. In that process, I didn’t scale the width or the height of the image up. I only found the center and then the quadrants within it, while accounting for padding.

Do you have any screenshots or a git repo that shows the issue you’re encountering?

To add to my last note, here is a tip from one of our engineers:

Unfortunately setVirtualForeground is affected by the video feed, and won’t be high resolution during low bandwidth.

Hi Max,

thanks a lot for your reply. I went through your code, and it is pretty straightforward…
Compared to what I am doing, there is really not much of a difference. However, I am running in the camera context and drawing the image there. I was under the impression that if I use window.devicePixelRatio for sizing and drawing, get the imageData, and drawImage it, the Zoom client would “apply” the same devicePixelRatio when drawing. This is not the case.

Also, while we’re at it: any idea why I sometimes get error 10063 and sometimes 10001? They seem really random, and I was not able to find any definition for these error codes.

It’s a shame setVirtualForeground doesn’t work here. Its handling is way easier.

A minimal example: I want to black out the top-left quadrant of the video. This is the way I would do it with a normal canvas in an HTML page. However, in the Zoom client, with devicePixelRatio returning 2, the whole screen turns black.

(async () => {
  const result = await zoomSdk.config({
    capabilities: [
      'runRenderingContext',
      'closeRenderingContext',
      'drawImage',
      'clearImage',
    ]
  });

  const {width, height} = result.media?.renderTarget;
  const scaleFactor = window.devicePixelRatio;

  await zoomSdk.runRenderingContext({view: "camera"});

  const canvas = document.createElement("canvas");
  canvas.width = width * scaleFactor;
  canvas.height = height * scaleFactor;

  const ctx = canvas.getContext("2d");
  ctx.scale(scaleFactor, scaleFactor); // apply scale according to devicePixelRatio

  ctx.fillStyle = "#000";
  ctx.fillRect(0, 0, width * 0.5, height * 0.5); // intended: black out a quarter of the screen

  const imageData = ctx.getImageData(0, 0, canvas.width, canvas.height);
  await zoomSdk.drawImage({imageData, x: 0, y: 0, zIndex: 1}); // -> the whole screen is black instead of just the quadrant
})();

Am I misunderstanding Zoom’s handling of resolutions when drawing images?

Cheers,

Chris

There is a renderTarget field in the config response that contains the resolution of camera mode’s off-screen renderer. This isn’t scaled by screen resolution. Also, for all versions of Zoom that currently support camera mode, it is 1280x720.

If you drawWebView at x: 0, y: 0, width: config.media.renderTarget.width, height: config.media.renderTarget.height, then the webview will cover the entire camera area.

And you can drawImage at 0,0,100,100 to put a square in the top left corner.
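For illustration, here is a minimal sketch of that drawImage call, assuming config() already succeeded with the drawImage capability and a camera rendering context is running. Note that there is no devicePixelRatio scaling; everything stays in renderTarget coordinates:

// Sketch: draw a 100x100 black square in the top-left corner of the camera
// area. Assumes zoomSdk.config() has been called with the 'drawImage'
// capability and runRenderingContext({view: "camera"}) has succeeded.
(async () => {
  const canvas = document.createElement("canvas");
  canvas.width = 100;
  canvas.height = 100;

  const ctx = canvas.getContext("2d");
  ctx.fillStyle = "#000";
  ctx.fillRect(0, 0, 100, 100);

  // renderTarget coordinates are used as-is, no devicePixelRatio scaling
  const imageData = ctx.getImageData(0, 0, 100, 100);
  await zoomSdk.drawImage({imageData, x: 0, y: 0, zIndex: 1});
})();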

For the error codes, the numbers are paired with a message string. Try saving the error in a global window variable in the catch part of the promise, e.g. catch (err) { window.zoomError = err; }, and then you can inspect its contents better. I think when the error object is converted to a string it may only show the number, but inside it has more information.
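For example, a sketch of that (window.zoomError is just an arbitrary name for inspection):

// Sketch: stash the full error object in a global so it can be inspected
// from the developer console instead of being flattened to its code.
// imageData is assumed to have been prepared elsewhere, as above.
zoomSdk.drawImage({imageData, x: 0, y: 0, zIndex: 1})
  .then((result) => console.log(result))
  .catch((err) => {
    window.zoomError = err; // then inspect window.zoomError in the console
  });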

Hope that helps! Let me know if I can clarify anything else.

Hi @Robert.Wallis , @MaxM ,

first of all: thanks a lot for your replies. I am already using the dimensions of the renderTarget property. But I guess this answers my question of whether I can draw the images at the device resolution, which seems not to be the case.

Regarding your suggestion: you were right, the error message was just not stringified and only the code was present. I will get back to those shortly… Regarding drawWebView: I cannot draw the webview. The webview stays on the side as a control panel and has a completely different UI from what I actually draw (images), which would also be fine. BUT:

Now, sadly, my “fight” is still far from over :wink: Now that I have moved from setVirtualForeground to drawImage based on Max’s answer, I have found an issue with running a “camera” rendering context:

  • I get either {message: “success”} as a result or
  • error 10063: The app already called render Js-api first.

In either case, getRunningContext returns “inMeeting”. I see a short white flash when initializing the context, and right after (even with message: success) getRunningContext returns “inMeeting”. In general, drawing images works; however, sometimes (maybe every 10 draws) I get a spontaneous:

  • 10001:The zoom client encountered an error while processing the request.{“imageId”:…}

Could this be related to the initialization?

Here is the code…

(async () => {
  const configResult = await zoomSdk.config({
    capabilities: [
      'runRenderingContext',
      'closeRenderingContext',
      'drawImage',
      'clearImage',
    ]
  });

  try {
    const result = await zoomSdk.runRenderingContext({view: "camera"});
    console.log(result); // sometimes {message: "success"}, if not thrown
    const context = await zoomSdk.getRunningContext();
    console.log(context); // *always* inMeeting, expected inCamera
  } catch (error) {
    console.log(error);
  }
})();

I have figured out by now that the 10063 occurs when I reload the app without exiting and closing it first. The reload doesn’t close the context, so a new initialization returns the error. The other case, however, makes no sense to me. I have seen another post here referring to the same issue; it was related to a bug fix and was resolved by specifying a version in the config call, but since I am using the latest npm package, I assume a version identifier would make no difference.
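For reference, this is the kind of defensive initialization I am experimenting with for the reload case (a sketch; it assumes closeRenderingContext can safely be called, and its error ignored, when no context is open):

// Sketch: close any rendering context left over from a previous page load
// before starting a new one, to avoid the 10063 error after a reload.
async function startCameraContext() {
  try {
    await zoomSdk.closeRenderingContext();
  } catch (error) {
    // assumed safe to ignore: no context was open
  }
  return zoomSdk.runRenderingContext({view: "camera"});
}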

Beyond that, I have tried reinstalling my app and rebooting. Neither worked.

Any suggestions would be highly appreciated, as this is the only part missing to finalize my product.

Cheers,

Chris

One thing that may help:
The sidebar and the camera mode apps are two entirely different webviews. You could think of the sidebar as being opened in Safari or Edge, and the camera mode app as being opened in Chrome. In fact, it’s exactly that, except we use the webview equivalents of those three browsers: WKWebView or EdgeWebView2, and CEF.

So the sidebar will always be “inMeeting”. The developer console will always be the sidebar’s developer console. There is no way to open the developer console for CEF. So you will never see a “console.log” with “inCamera”.

However, the sidebar cannot drawParticipant, drawImage, or drawWebView on itself. So when the sidebar sends these commands, they are instead sent to the camera mode webview.

I find it a bit more useful to have the camera mode app call drawParticipant and drawWebView on itself, because of the multi-process timing issue where runRenderingContext succeeds but the camera mode webview hasn’t fully loaded yet.
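As a rough sketch of that pattern, assuming all webviews load the same home url and that getRunningContext resolves to an object with a context field (the exact response shape may differ by SDK version):

// Sketch: the same page loads in every webview, so branch on the context.
// Only the camera webview draws on itself; the sidebar just opens it.
(async () => {
  await zoomSdk.config({
    capabilities: ['runRenderingContext', 'getRunningContext', 'drawWebView'],
  });

  const {context} = await zoomSdk.getRunningContext(); // shape assumed
  if (context === 'inMeeting') {
    // sidebar: open the camera webview and do nothing else
    await zoomSdk.runRenderingContext({view: 'camera'});
  } else if (context === 'inCamera') {
    // camera webview: by definition fully loaded at this point, so it can
    // draw on itself without the multi-process timing issue;
    // renderTarget is currently 1280x720 per the note above
    await zoomSdk.drawWebView({x: 0, y: 0, width: 1280, height: 720});
  }
})();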

Hope this clears up some debugging issues.


Hey Robert,

thanks for the detailed explanation. It actually does help a lot and after re-reading the docs I think I got it.

I would like to follow your suggested approach and have the camera mode app draw on itself. However: when I send a debug notification from the app that was started via runRenderingContext({view: “camera”}), it says it is in inMainClient mode and not inCamera. A call to drawImage from within this app fails with the error “API can only be called when the zoom app is running in a meeting”. Any idea what is happening there?

Cheers,

Chris

I see. runRenderingContext is only available from the meeting sidebar app (“inMeeting”), not the main client Apps tab (“inMainClient”).

Hey Robert,

I get that, but this is not the issue here. You suggested:

I find it a bit more useful to have the camera mode app call drawParticipant and drawWebView on itself, because of the multi-process timing issue where runRenderingContext succeeds but the camera mode webview hasn’t fully loaded yet.

I am trying to do exactly this:

  • the sidebar app is inMeeting and calls runRenderingContext: camera
  • the main client app loads
  • I check getRunningContext in the main client app and it says inMainClient instead of inCamera
  • I try to drawImage from the app in the main client, but an error is thrown that only an inMeeting app can call drawImage

Have I missed something here?

Cheers,

Chris

There are 4 different webviews that can run a Zoom App:

  • inMainClient: The “Apps” tab from the main Zoom window.
  • inMeeting: The sidebar app when in a meeting.
  • inCamera: In a meeting, over my video feed.
  • inImmersive: In a meeting, replacing the grid view / speaker view area.

The only possible way to get to inCamera or inImmersive is from a runRenderingContext call in the inMeeting webview.

It won’t work from inMainClient.

The right-click-inspect debuggers are only available for inMainClient, inMeeting, and inImmersive. All of these webviews open the same home url when they start.

When you call getRunningContext it will only return the context for the webview that is running the API. For example getRunningContext from the sidebar will always return inMeeting.

So in order for getRunningContext to return inCamera, the camera webview has to be open. And in order for you to see “inCamera”, you’ll have to either display it on the screen, or use postMessage and then onMessage on another webview to receive that message. console.log() or the right-click-debug inspector will not show “inCamera”.
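A sketch of the postMessage route (the payload handling here is an assumption about the message shape, so treat it as illustrative):

// Sketch, assuming config() has granted getRunningContext, postMessage,
// and onMessage. In the camera webview: report the running context.
(async () => {
  const {context} = await zoomSdk.getRunningContext(); // shape assumed
  await zoomSdk.postMessage({context});
})();

// In the sidebar webview: receive it and make it visible. The sidebar's
// developer console is accessible, so console.log works here.
zoomSdk.onMessage((event) => {
  // event.payload is assumed to carry the posted object
  console.log('received from camera webview:', event.payload.context);
});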

We are working on making camera mode easier to use, but this is how it works today.

Thank you for your patience.

Hi Robert,

no, thank you for your patience :wink: I think things are getting lost here :wink: I understand all of what you are saying. I am using postMessage and showNotification to help my debugging (btw, I rolled my own debug mechanisms).

So, again:

  • the sidebar app (inMeeting, as expected) invokes runRenderingContext: “camera”
  • the app loaded after runRenderingContext invokes the config call and shows via showNotification: inMainClient
  • another attempt (triggered manually later): the sidebar app makes a postMessage call
  • the notification (from the onMessage handler) shows “inMainClient” instead of inCamera

The same approach with runRenderingContext “immersive” makes the notification show “inImmersive” as expected. It’s only the camera mode that seems to work differently than “advertised” :wink:

Cheers,

Chris

@cz2022 If you’re seeing inMainClient in your postMessage() call, then you are in the client instance instead of the in-meeting instance, which is where I think the discrepancy is coming from.

Instead, once you runRenderingContext: camera and the page loads again, you can check whether the runningContext is inCamera.

However, if you’re posting a message containing inMainClient, then that means it was sent from the client instance instead.

I hope that helps! Let me know if you have any questions.