We’re facing some challenging performance issues: certain queries take an extremely long time to respond, often in excess of 20s and frequently timing out at 60s, yet the very same queries can also complete in ~500ms. We can’t reproduce these extreme timings in local dev environments, even under load with tools like graphql-bench and autocannon.
We have a federated graph setup with one instance of Apollo Gateway (running ~0.24) and 2 instances of Apollo Server (running 3.x) providing subgraphs. All three services use NestJS 7 on top of Apollo Server + Express. Our resolvers use TypeORM to interface with a reasonably large and complex MSSQL database. The database is heavily over-provisioned and resource monitors never show it above 10% RAM & CPU.
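For anyone wondering whether the time is being spent in SQL: TypeORM’s maxQueryExecutionTime can log slow statements, which is the kind of thing we can enable to rule the database in or out. A rough sketch of what that looks like in a NestJS module (connection details and paths are placeholders, not our real config):

```ts
// app.module.ts — sketch only; host/db values are placeholders, not our real config.
import { Module } from '@nestjs/common';
import { TypeOrmModule } from '@nestjs/typeorm';

@Module({
  imports: [
    TypeOrmModule.forRoot({
      type: 'mssql',
      host: process.env.DB_HOST,
      database: process.env.DB_NAME,
      username: process.env.DB_USER,
      password: process.env.DB_PASSWORD,
      entities: [__dirname + '/**/*.entity{.ts,.js}'],
      // Log any statement that takes longer than 1s, so we can see whether
      // the 20s+ responses are actually spent in SQL or somewhere else.
      maxQueryExecutionTime: 1000,
      logging: ['error', 'warn'],
    }),
  ],
})
export class AppModule {}
```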
The API servers are hosted on AWS ECS and are generously resourced. Like the DB server, CPU and RAM usage averages out at 10-20%, although the CPU does occasionally spike above 100%.
Tracing in Apollo Studio appears to show some high-level resolvers “hanging” or “waiting” for long periods of time, e.g. 27s out of a 30s response.
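To corroborate what Studio shows, something like the plugin below could log operation timings from inside the subgraph itself. This is just a sketch against the Apollo Server 3 plugin API; the threshold and logging are arbitrary choices, not something we have in production:

```ts
// Sketch: an Apollo Server 3 plugin that logs any operation slower than a threshold,
// so server-side timings can be compared with the Apollo Studio traces.
import { ApolloServerPlugin } from 'apollo-server-plugin-base';

const SLOW_MS = 5000; // arbitrary threshold for "suspiciously slow"

export const slowOperationLogger: ApolloServerPlugin = {
  async requestDidStart() {
    const startedAt = Date.now();
    return {
      async willSendResponse(requestContext) {
        const elapsed = Date.now() - startedAt;
        if (elapsed > SLOW_MS) {
          console.warn(
            `Slow operation ${requestContext.operationName ?? '(anonymous)'}: ${elapsed}ms`,
          );
        }
      },
    };
  },
};
```

In a NestJS setup this kind of plugin can be passed via the GraphQLModule's `plugins` option.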
We do have some N+1 issues, and we’ve made limited use of DataLoader in places to resolve them, although we could definitely apply it more widely. I’m not convinced the N+1 issues are responsible for the extreme timings though, given the wild range and inconsistency of the timings.
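For context, this is roughly the shape of the DataLoader pattern where we have applied it (simplified; the entity, repository, and field names are illustrative, not our real schema):

```ts
// Simplified sketch of batching N+1 lookups with DataLoader over a TypeORM repository.
import DataLoader from 'dataloader';
import { In, Repository } from 'typeorm';
import { Author } from './author.entity'; // illustrative entity

export function createAuthorLoader(authorRepo: Repository<Author>) {
  return new DataLoader<number, Author | undefined>(async (ids) => {
    // One query for the whole batch instead of one query per resolved field.
    const authors = await authorRepo.find({ where: { id: In([...ids]) } });
    const byId = new Map<number, Author>();
    for (const author of authors) byId.set(author.id, author);
    // DataLoader requires results in the same order as the requested keys.
    return ids.map((id) => byId.get(id));
  });
}
```

We create the loader per request (in the GraphQL context factory) so batches and caches don’t leak between requests.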
Does this scenario sound familiar to anyone? Does anyone have any ideas or pointers on how we might go about tracking down the root issue and debugging it? We’re a bit stumped tbh, especially as the issues seem to be impossible to repro locally.