dsy401
August 21, 2023, 1:57am
1
Hi developers,
Recently (around 17 August) I migrated Apollo Server from v3 to v4, and since then I am seeing spikes in the load balancer's average response time.
I also checked the server memory, and it is not stable either.
So I suspect this is related to the Apollo Server v4 migration.
I used k6 load testing to simulate high-concurrency requests against my local server (Apollo Server v4), and my logging shows the memory heap usage increasing rapidly.
Then I switched back to Apollo Server v3, ran the same high-concurrency requests against my local server, and the logging shows the heap usage increasing much more smoothly than with Apollo Server v4.
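The logging I mention is roughly the following (a minimal sketch, not my exact code; it just prints the process heap numbers every second so the trend under load is visible):

```js
// Minimal heap-usage logging sketch (interval and formatting are arbitrary).
const toMiB = (bytes) => (bytes / 1024 / 1024).toFixed(1);

setInterval(() => {
  const { heapUsed, heapTotal, rss } = process.memoryUsage();
  console.log(`[mem] heapUsed=${toMiB(heapUsed)}MiB heapTotal=${toMiB(heapTotal)}MiB rss=${toMiB(rss)}MiB`);
}, 1000).unref(); // unref so the timer does not keep the process alive on its own
```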
I also tried switching the Node version between 14, 16 and 18, but the result was the same.
I have no idea what the issue might be. Can anyone help?
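For reference, the k6 test was along these lines (a sketch; the endpoint URL, query, and virtual-user numbers are placeholders rather than my exact setup):

```js
// k6 script sketch: hammer the local GraphQL endpoint with concurrent POSTs.
import http from 'k6/http';
import { check } from 'k6';

export const options = {
  vus: 200,        // concurrent virtual users (placeholder value)
  duration: '5m',  // long enough to make heap growth visible
};

export default function () {
  const res = http.post(
    'http://localhost:4000/graphql',                 // placeholder local endpoint
    JSON.stringify({ query: '{ __typename }' }),     // placeholder query
    { headers: { 'Content-Type': 'application/json' } }
  );
  check(res, { 'status is 200': (r) => r.status === 200 });
}
```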
Dependencies
"graphql": "^15.7.2", // also tried to upgrade to 16, still get the same result
"@apollo/server": "^4.9.1"
node version: 14.16
Hey @dsy401, thanks for reporting. Can you tell me if the issue still exists when you add the ApolloServerPluginUsageReportingDisabled() plugin to your plugins configuration? This might be related to an issue that's already open:
(Issue opened 12 Jul 2023, 07:26 UTC)
### Issue Description
We've been running Apollo server for a while in a couple of APIs, and we have always noticed a memory leak in both, which appeared to be linearly proportional to the number of requests handled by each API.
While investigating the memory leak, v8 heap snapshots were taken from the running servers at two different timestamps, 6 hours apart. The later heap snapshot was compared to the earlier one in order to track which new objects were in the JS heap that were not there 6 hours before, and there are thousands of new retained `Request`-like objects that reference the "usage-reporting.api.apollographql.com" host, and hundreds of new `TLSSocket` objects that reference this same host.
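For context, capturing two comparable snapshots from a running Node process can be done along these lines (an illustrative sketch, not the exact tooling used for the numbers above):

```js
// Write a V8 heap snapshot now and another one 6 hours later, then load both
// in Chrome DevTools (Memory tab, "Comparison" view) to see what was retained.
import v8 from 'node:v8';

function dumpHeap() {
  const file = v8.writeHeapSnapshot(`${Date.now()}.heapsnapshot`);
  console.log(`heap snapshot written to ${file}`);
}

dumpHeap();                                // first snapshot
setTimeout(dumpHeap, 6 * 60 * 60 * 1000);  // second snapshot, 6 hours later
```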
### Some objects that are leaking in the JS memory:
<details>
<summary><b>Request-like object</b></summary>
```
body::Object@13534193
cache::"default"@729
client::Object@13537293
credentials::"same-origin"@54437
cryptoGraphicsNonceMetadata::""@77
destination::""@77
done::system / Oddball@73
headersList::HeadersList@13537317
historyNavigation::system / Oddball@75
initiator::""@77
integrity::""@77
keepalive::system / Oddball@75
localURLsOnly::system / Oddball@75
map::system / Map@130579
method::"POST"@49427
mode::"cors"@84517
origin::system / Oddball@67
parserMetadata::""@77
policyContainer::Object@13537295
preventNoCacheCacheControlHeaderModification::system / Oddball@75
priority::system / Oddball@71
properties::system / PropertyArray@13537319
redirect::"follow"@53093
referrer::"no-referrer"@85507
referrerPolicy::system / Oddball@67
reloadNavigation::system / Oddball@75
replacesClientId::""@77
reservedClient::system / Oddball@71
responseTainting::"basic"@102749
serviceWorkers::"none"@519
taintedOrigin::system / Oddball@75
timingAllowFailed::system / Oddball@75
unsafeRequest::system / Oddball@75
url::URL@13537301
<symbol context>::URLContext@13538143
fragment::system / Oddball@71
host::"usage-reporting.api.apollographql.com"@13538145
map::system / Map@135759
path::Array@13538147
port::system / Oddball@71
query::system / Oddball@71
scheme::"https:"@6945
username::""@77
__proto__::Object@135757
<symbol query>::URLSearchParams@13538149
map::system / Map@135741
__proto__::URL@135739
urlList::Array@13537299
useCORSPreflightFlag::system / Oddball@75
useCredentials::system / Oddball@75
userActivation::system / Oddball@75
window::"no-window"@87117
__proto__
```
</details>
<details>
<summary><b>TLSSocket object</b></summary>
```
<symbol blocking>::system / Oddball@75
<symbol client>::Client@131765
<symbol connect-options>::Object@13536139
<symbol error>::InformationalError@13536143
<symbol kBuffer>::system / Oddball@71
<symbol kBufferCb>::system / Oddball@71
<symbol kBufferGen>::system / Oddball@71
<symbol kCapture>::system / Oddball@75
<symbol kHandle>::system / Oddball@71
<symbol kSetKeepAlive>::system / Oddball@75
<symbol kSetNoDelay>::system / Oddball@73
<symbol maxRequestsPerClient>::system / Oddball@67
<symbol no ref>::system / Oddball@73
<symbol parser>::system / Oddball@71
<symbol pendingSession>::system / Oddball@71
<symbol res>::system / Oddball@71
<symbol reset>::system / Oddball@75
<symbol timeout>::system / Oddball@71
<symbol verified>::system / Oddball@73
<symbol writing>::system / Oddball@75
_SNICallback::system / Oddball@71
_closeAfterHandlingError::system / Oddball@75
_controlReleased::system / Oddball@73
_events::Object@13536133
_hadError::system / Oddball@75
_host::"usage-reporting.api.apollographql.com"@131813
_maxListeners::system / Oddball@67
_newSessionPending::system / Oddball@75
_parent::system / Oddball@71
_peername::Object@13536141
_pendingData::system / Oddball@71
_pendingEncoding::""@77
_readableState::ReadableState@13536135
_rejectUnauthorized::system / Oddball@73
_requestCert::system / Oddball@73
_secureEstablished::system / Oddball@73
_securePending::system / Oddball@75
_server::system / Oddball@71
_sockname::system / Oddball@71
_tlsOptions::Object@13536129
_writableState::WritableState@13536137
allowHalfOpen::system / Oddball@75
alpnProtocol::system / Oddball@75
authorizationError::system / Oddball@71
authorized::system / Oddball@73
connecting::system / Oddball@75
domain::system / Oddball@71
encrypted::system / Oddball@73
map::system / Map@130053
properties::system / PropertyArray@13536145
secureConnecting::system / Oddball@75
server::system / Oddball@67
servername::"usage-reporting.api.apollographql.com"@13536131
ssl::system / Oddball@71
__proto__::Socket@147607
```
</details>
Here is a chart showing the memory usage of the last two days for one of the APIs:
![Screenshot 2023-07-12 at 09 17 21](https://github.com/apollographql/apollo-server/assets/45515538/683cae90-0b57-4312-a53c-29ed54612800)
For the first half of the chart (the first day), the Apollo server was running with the `ApolloServerPluginUsageReporting` plugin enabled, and the memory kept increasing linearly; for the second half (the second day), exactly the same code was running but with `ApolloServerPluginUsageReportingDisabled` passed to the plugins, so that usage reporting was disabled. In this last case no memory was being leaked.
We are using `@apollo/server` version `4.3.0`.
### Link to Reproduction
https://github.com/GabrielMusatMestre/apollo-server-memory-leak-repro
### Reproduction Steps
Steps are described in the README.md of the reproduction repo.
This is not a reliable reproduction, as the memory leak might only become noticeable after running the server under heavy load for hours or days, and it needs a properly configured `APOLLO_KEY` and `APOLLO_GRAPH_REF` that will actually publish usage reports to Apollo.
If it seems unrelated, would you please open another issue on the apollo-server repo with as much detail as you have available and a reproduction? Without more details I'm not sure I can reproduce this, so please be as thorough as possible.
dsy401
August 21, 2023, 10:03pm
3
Thanks, I will add the ApolloServerPluginUsageReportingDisabled() plugin and see whether it helps, roughly along the lines of the sketch below.
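A minimal sketch of what I mean, assuming a standalone server and a placeholder schema (my actual setup has more configuration):

```js
import { ApolloServer } from '@apollo/server';
import { startStandaloneServer } from '@apollo/server/standalone';
import { ApolloServerPluginUsageReportingDisabled } from '@apollo/server/plugin/disabled';

// Placeholder schema; the real server uses our existing typeDefs/resolvers.
const typeDefs = `#graphql
  type Query { ping: String }
`;
const resolvers = { Query: { ping: () => 'pong' } };

const server = new ApolloServer({
  typeDefs,
  resolvers,
  // Disable the usage reporting plugin suspected of leaking requests/sockets.
  plugins: [ApolloServerPluginUsageReportingDisabled()],
});

const { url } = await startStandaloneServer(server, { listen: { port: 4000 } });
console.log(`Server ready at ${url}`);
```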
dsy401
August 22, 2023, 1:33am
4
@trevor.scheer Looks like there is still a memory leak and a performance issue. I will create an issue in the apollo-server repo.
@dsy401 did you create the issue? We are also seeing a memory leak, but we have not been able to track it down yet.