RTMS Node SDK segfaults after successful join on second session in same process (Node 24, leave + uninitialize called)

Hi all — we’re running into what looks like a native crash in the RTMS Node SDK and wanted to check if this is expected behavior or a bug.

Summary

When we run multiple RTMS sessions sequentially in the same Node process:

  • First session works perfectly
  • We cleanly shut it down (leave() + uninitialize())
  • Second session:
    • joins successfully
    • then immediately segfaults

If we restart the container (fresh process), the next session works again.

So the problem appears to be tied to reusing the SDK's native state within a long-lived process.


Environment

  • Node: 24.14.0 (per RTMS recommendation)
  • Platform: linux-x64
  • Container: node:24-trixie-slim (Debian)
  • Runtime: AWS ECS (Fargate)
  • SDK: @zoom/rtms (latest as of March 2026)

Behavior

First session

  • join succeeds
  • captions stream normally
  • shutdown is clean

Successfully joined
...
<RTMS data received>
...
Successfully left

Second session (same process)

Joining meeting
Successfully joined
Segmentation fault (core dumped)

This happens even when:

  • using a completely new meeting
  • using a new RTMS session
  • calling both leave() and uninitialize()

Shutdown code
We are doing full cleanup:

client.leave?.();
client.uninitialize?.();

We also clear any timers, sockets, and other resources we created for the session.
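For context, our teardown can be sketched as below. `activeTimers` and `activeSockets` are our own bookkeeping for resources we created alongside the session, not part of the SDK, and the optional-chaining guards are defensive in case either method is absent:

```javascript
// Sketch of our full per-session teardown (our own bookkeeping, not SDK API).
function teardownSession(client, activeTimers = [], activeSockets = []) {
  for (const t of activeTimers) clearTimeout(t); // clear timers we scheduled
  for (const s of activeSockets) s.destroy?.();  // close sockets we opened
  client.leave?.();                              // stop polling, release client resources
  client.uninitialize?.();                       // release SDK-level resources
}
```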

What we observed

  • Crash happens AFTER successful join
  • Not related to auth or meeting data
  • Not fixed by upgrading to Node 24
  • Not fixed by adding uninitialize()
  • Fresh container works every time for the first meeting join

Question

Is the RTMS Node client expected to support multiple sequential sessions in the same process?

If yes:
is there additional cleanup required beyond leave() + uninitialize()?

If not:
is there a known limitation that requires one RTMS session per process/container?

Current workaround

We are currently forcing a process restart after each session to avoid the segfault, but this prevents us from supporting multiple concurrent or sequential streams in one container.

Happy to provide more logs or a minimal repro if helpful.

Hi @Chris33, this looks like a bug to me rather than documented expected behavior. Per the Zoom RTMS API surface, a Client is tied to a single meeting, but you can create multiple client instances; leave() stops polling and releases client resources, while Client.uninitialize() releases SDK-level resources once you're done using the SDK entirely. Since it reproduces consistently, this looks like something the Zoom team will need to address internally.
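To make that lifecycle concrete, here's a rough wrapper along the lines the docs suggest. `createClient` is a placeholder for however you construct an @zoom/rtms Client instance (an assumption on my part, not the SDK's actual factory):

```javascript
// Sketch of one-client-per-meeting: a fresh Client for each session,
// with leave() as the per-client cleanup. Client.uninitialize() is
// reserved for when the process is done with the SDK entirely.
function withFreshClient(createClient, run) {
  const client = createClient(); // fresh Client instance per meeting
  try {
    return run(client);          // join, stream captions, etc.
  } finally {
    client.leave?.();            // stop polling, release client resources
  }
}
```

That said, if this pattern still segfaults on the second session, it points at native state that leave() isn't releasing.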

If you’re looking to use Zoom RTMS out-of-the-box without managing the infrastructure on your end, I wanted to note that we’re a Zoom RTMS Preferred Partner. We’ve helped thousands of developers integrate with Zoom and can help you with your RTMS integration too!