Hi everyone,
I’m working on a federated graph that handles project state synchronization for a remote collaboration environment. Our team uses desktop video-editing software to generate a high volume of local render previews, and we need to push the status of those renders (bitrate, frame counts, and encoding progress) to our Apollo Server so other collaborators can see live updates in a web dashboard.
I’m currently using GraphQL subscriptions over WebSockets for this, but as the number of concurrent “render-start” events increases, I’m seeing significant lag in message delivery to clients. I suspect the bottleneck is either the Redis PubSub implementation or the way I’ve structured the schema to return the entire metadata object instead of just the changed fields.
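To make the “changed fields only” idea concrete, here is a minimal TypeScript sketch of what I mean by a delta payload. The `RenderStatus` shape and field names are illustrative, not our real schema:

```typescript
// Hypothetical shape of the render metadata pushed over the subscription;
// the field names here are illustrative, not our actual schema.
interface RenderStatus {
  renderId: string;
  bitrateKbps: number;
  framesDone: number;
  progressPct: number;
}

// Return only the fields that changed since the last published snapshot,
// so the subscription payload is a small delta rather than the full object.
function diffStatus(prev: RenderStatus, next: RenderStatus): Partial<RenderStatus> {
  const delta: Partial<RenderStatus> = { renderId: next.renderId };
  if (prev.bitrateKbps !== next.bitrateKbps) delta.bitrateKbps = next.bitrateKbps;
  if (prev.framesDone !== next.framesDone) delta.framesDone = next.framesDone;
  if (prev.progressPct !== next.progressPct) delta.progressPct = next.progressPct;
  return delta;
}
```

The idea is that each resolver publishes `diffStatus(lastSnapshot, current)` instead of `current`, and the dashboard merges deltas into its local cache.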
I’m curious whether anyone has dealt with similar performance issues when bridging native desktop software outputs with a GraphQL backend:
- Would it be more efficient to use `@defer` on the initial query and then use subscriptions only for the smallest possible delta updates?
- Has anyone tried using Apollo Router with a custom Rhai script to throttle these high-frequency updates before they hit the subgraphs?
- I’m also seeing memory pressure on the server side when multiple users push binary metadata blobs; should I move the actual file info to a separate REST connector and use GraphQL only for orchestration?
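On the throttling question, this is roughly the kind of coalescing I have in mind, as a plain TypeScript sketch. The `publish` callback is a placeholder for however the update actually reaches subscribers (e.g. wrapping the Redis PubSub publish); whether this logic belongs in a Rhai script at the router or inside the subgraph is exactly what I’m unsure about:

```typescript
// Minimal server-side coalescing sketch. High-frequency updates for the same
// render are buffered, and only the latest one per render is flushed every
// `intervalMs`, which bounds the subscription fan-out rate no matter how fast
// render events arrive. `publish` is a hypothetical callback, not a real API.
class UpdateCoalescer<T> {
  private pending = new Map<string, T>();
  private timer: ReturnType<typeof setInterval>;

  constructor(
    private publish: (id: string, latest: T) => void,
    intervalMs: number
  ) {
    this.timer = setInterval(() => this.flush(), intervalMs);
  }

  // Record an update; a newer update for the same id overwrites the older one.
  push(id: string, update: T): void {
    this.pending.set(id, update);
  }

  // Publish the latest buffered update for each render, then clear the buffer.
  flush(): void {
    for (const [id, latest] of this.pending) this.publish(id, latest);
    this.pending.clear();
  }

  stop(): void {
    clearInterval(this.timer);
  }
}
```

With something like this, a render emitting hundreds of progress events per second still produces at most one subscription message per render per interval.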
I’m really trying to keep the UI snappy so the progress bars for these edits feel “real-time,” but right now the overhead of the subscription lifecycle makes the whole experience feel sluggish. Any advice on scaling this for high-throughput media assets would be a lifesaver!