Hi all — we’re running into what looks like a native crash in the RTMS Node SDK and wanted to check if this is expected behavior or a bug.
Summary
When we run multiple RTMS sessions sequentially in the same Node process:
- First session works perfectly
- We cleanly shut it down (leave() + uninitialize())
- Second session:
- joins successfully
- then immediately segfaults
If we restart the container (fresh process), the next session works again.
So this appears tied to reusing the SDK in a long-lived process.
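Distilled, the failing sequence looks like the sketch below. newClient() is a stand-in for however the client is constructed, and join() is shown as async purely for illustration; only the join/leave/uninitialize ordering matters.

```javascript
// Distilled failing sequence. `newClient` is a hypothetical factory standing in
// for client construction; the crash depends only on the call ordering shown.
async function runTwoSessions(newClient) {
  const first = newClient();
  await first.join();   // succeeds, captions flow
  first.leave();
  first.uninitialize(); // clean shutdown, "Successfully left" is logged

  const second = newClient();
  await second.join();  // "Successfully joined" is logged...
  // ...and the process segfaults immediately afterwards.
}
```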
Environment
- Node: 24.14.0 (per RTMS recommendation)
- Platform: linux-x64
- Container: node:24-trixie-slim (Debian)
- Runtime: AWS ECS (Fargate)
- SDK: @zoom/rtms (latest as of March 2026)
Behavior
First session
- join succeeds
- captions stream normally
- shutdown is clean
Successfully joined
...
<RTMS data received>
...
Successfully left
Second session (same process)
Joining meeting
Successfully joined
Segmentation fault (core dumped)
This happens even when:
- using a completely new meeting
- using a new RTMS session
- calling both leave() and uninitialize()
Shutdown code
We are doing full cleanup:
client.leave?.();
client.uninitialize?.();
Also clearing timers, sockets, etc.
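For completeness, our full per-session teardown is roughly the following. Only leave() and uninitialize() are SDK calls; the timer/socket bookkeeping is our own application state, and the field names (timers, sockets) are our own.

```javascript
// Full per-session teardown (sketch). Only leave()/uninitialize() are SDK
// calls; `timers` and `sockets` are application-side bookkeeping.
function teardownSession(session) {
  const { client, timers, sockets } = session;

  // Stop our own periodic work first so nothing touches the client mid-shutdown.
  for (const t of timers) clearInterval(t);
  for (const s of sockets) s.destroy();

  // SDK cleanup, optional-chained because we saw partially constructed
  // clients during early testing.
  client.leave?.();
  client.uninitialize?.();
}
```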
What we observed
- Crash happens AFTER successful join
- Not related to auth or meeting data
- Not fixed by upgrading to Node 24
- Not fixed by adding uninitialize()
- Fresh container works every time for the first meeting join
Question
Is the RTMS Node client expected to support multiple sequential sessions in the same process?
If yes:
is there additional cleanup required beyond leave() + uninitialize()?
If not:
is there a known limitation that requires one RTMS session per process/container?
Current workaround
We currently force a process restart after each session to avoid the segfault, but this prevents us from running multiple sequential or concurrent streams in one long-lived container.
Happy to provide more logs or a minimal repro if helpful.