Runaway process created using Layers API

Zoom Apps Configuration

  • Any Zoom App using the layers API (runRenderingContext)
  • Mac desktop client, latest version: 5.11.6 (9890)

Description
Invoking the Layers API (i.e., calling runRenderingContext and then drawing layers) creates running processes on my computer. However, these processes remain even after the video stops displaying, and even after the meeting ends and the Zoom client is completely closed.

With repeated use of apps that call this API, the computer becomes slow and eventually unusable. Here is a screenshot of my processes after testing an app for some time (note that Zoom is completely closed at this point; none of these should still be running):

How To Reproduce

  1. Invoke the Layers API a few times (see the sketch after these steps)
  2. Quit Zoom
  3. Look at your running processes
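
For step 1, this is roughly what a minimal reproduction could look like, assuming the Zoom Apps JS SDK (@zoom/appssdk) has already been set up via zoomSdk.config with the runRenderingContext and closeRenderingContext capabilities; the view value and loop count here are purely illustrative:

import zoomSdk from "@zoom/appssdk";

// Open and close a rendering context a few times; each cycle spawns
// renderer processes which, per this report, are never cleaned up.
async function cycleRenderingContext(times = 3) {
  for (let i = 0; i < times; i++) {
    await zoomSdk.runRenderingContext({ view: "immersive" });
    await zoomSdk.closeRenderingContext();
  }
}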

Thank you for reporting this bug; we will take a look at it. This is very helpful.

Hi @Robert.Wallis and team, we’re still getting users complaining about this. It seems like a significant security and basic-functionality issue, and I’m surprised it hasn’t been addressed. Can you please provide an update? Thanks!

Hi @vandalayindustries, we committed a patch for this in 5.12.0 in August. Are you using a client later than 5.12.0 and still seeing this issue? If you’re on 5.13.3 and still seeing it, please let me know and we’ll look into it more.

I asked the team that worked on it and they aren’t having the issue on 5.13.3, so we’ll need to figure out how to reproduce the issue if it’s still happening.


Hi @Robert.Wallis, I just tested this myself on Mac version 5.12.2 (11434), and I can confirm that loads of ZoomCefAgent processes remain running after the Zoom client application is closed.

It should be straightforward to reproduce: call runRenderingContext, then draw layers (and iterate this a few times), and you’ll see the runaway processes created.

@vandalayindustries It seems you’ve been able to get camera mode in the Layers API working. I have posted in this forum with no reply: Cannot get Camera Mode to work in Layers API.

After calling runRenderingContext with view set to "camera", nothing happens when I then use drawWebView. There is also no way to inspect whether camera mode is actually working.

Also, how do I get the drawWebViewId? I have looked everywhere for where to get this but haven’t found a solution. Even when I use a random string, nothing works.

Please help :pray:

What are you looking to use drawWebViewId for? Can you point me to the documentation that mentions this?

https://marketplace.zoom.us/docs/zoom-apps/guides/layers-manipulating-ui/ is what I’m following; can you see where the webviewId comes from?

Context:
Working with Node.js and Vue.js

Description

  • Camera mode never gets initiated on my Mac after calling runRenderingContext with options { view: "camera" }. If I change the view value to immersive ({ view: "immersive" }), I can see the runningContext change to "inImmersive", whereas I never get "inCamera" when view is set to camera. I’d also be glad if there were a way to use developer tools in camera mode, as it isn’t very helpful that one cannot determine when runRenderingContext has switched to "inCamera" (see the sketch after this list).
  • When I use the drawWebView and drawImage APIs, neither of them seems to work, yet the drawParticipant API works in immersive mode, so it is not clear why drawWebView and drawImage wouldn’t work in camera mode; these two are important for rendering the webview (the feature). This issue might also be caused by camera mode not being initiated, as discussed in the first point.
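
For the first point, here is a hedged sketch of one way to watch the running context from inside the app, assuming the getRunningContext and onRunningContextChange capabilities of the Zoom Apps JS SDK are configured (names taken from the SDK reference; verify them against your SDK version):

import zoomSdk from "@zoom/appssdk";

// Log any change of running context; the report above is that this
// fires with "inImmersive" for view: "immersive", but a corresponding
// "inCamera" change never arrives for view: "camera".
zoomSdk.onRunningContextChange((event) => {
  console.log("running context changed:", event);
});

// Log the context the app is currently in.
const ctx = await zoomSdk.getRunningContext();
console.log("current running context:", ctx);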

How To Reproduce
Call runRenderingContext with camera mode. Currently, neither drawWebView nor drawImage works after calling runRenderingContext:

await zoomSdk.runRenderingContext({ view: "camera" })
  .then((ctx) => {
    console.log("runRenderingContext returned", ctx);
  })
  .catch((e) => {
    console.log(e);
  });

Then call drawWebView

await zoomSdk.drawWebView({
  webviewId: "speaking-time-overlay",
  x: 0,
  y: 0,
  width: 300,
  height: 300,
  zIndex: 2,
})
  .then((ctx) => {
    console.log("drawWebView returned", ctx);
  })
  .catch((e) => {
    console.log(e);
  });

OR call

await zoomSdk.drawImage({
  imageData: imageData,
  x: 0, y: 0, zIndex: 3,
})
  .then((ctx) => {
    console.log("drawImage returned", ctx);
    console.log("drawImage returned imageId", ctx.imageId);
  })
  .catch((e) => {
    console.log(e);
  });

This is the getImageData function. Note that the image must finish loading before it can be drawn to the canvas, so the function returns a promise that resolves with the ImageData once the image has loaded:

const getImageData = (width, height) =>
  new Promise((resolve, reject) => {
    const canvas = document.createElement("canvas");
    canvas.width = width;
    canvas.height = height;

    const img = new Image();
    img.onload = () => {
      const ctx = canvas.getContext("2d");
      ctx.drawImage(img, 0, 0, width, height);
      resolve(ctx.getImageData(0, 0, width, height));
    };
    img.onerror = reject;
    img.src = "HowTo2.png"; // our image url - change baseurl
  });
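
Since getImageData resolves asynchronously, the drawImage call has to await the pixel data first. A usage sketch (the 300×300 dimensions are illustrative):

const imageData = await getImageData(300, 300);
await zoomSdk.drawImage({ imageData, x: 0, y: 0, zIndex: 3 });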

Thanks for sending that my way! Looking at the reference, it doesn’t look like that option is supported:

https://marketplace.zoom.us/docs/zoom-apps/js-sdk/reference/#installation

I’ll work with our documentation team to make sure that our documentation matches the current version of our code.

We’ll work to make camera mode more debuggable, as I can see why that would be a barrier when developing. In the meantime, you should be able to use the promise resolution in conjunction with zoomSdk.postMessage and zoomSdk.onMessage to pass information between instances of the application; a sketch of this pattern follows the links below.

We have documentation on this process here:

and an example of how this can be done in the following repository:
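
As a hedged sketch of that pattern (the payload shape here is an assumption; both the meeting-view instance and the camera-mode instance of the app would register the handler):

import zoomSdk from "@zoom/appssdk";

// Listen for messages broadcast by other running instances of this app.
// The exact shape of the delivered event may vary by SDK version.
zoomSdk.onMessage((message) => {
  console.log("message from other instance:", message);
});

// In the camera-mode instance, announce that it actually loaded.
await zoomSdk.postMessage({ event: "cameraViewLoaded" });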

I’m not sure why this wouldn’t be working. Once you have debugging set up via the postMessage functions, I expect you can confirm whether camera mode is initialized correctly.