First, thanks a lot for helping us!
I’ve stared at this quite a bit while running experiments, and I feel there must be something wrong here.
So, I have two queries that have nothing to do with each other: they return completely non-overlapping data. One query returns a large result; let’s call it getStacks(). The second returns a tiny result; let’s call it dummyQuery(). Now, depending on whether the large getStacks result is in the cache, processing the results of dummyQuery is either very fast or very slow:
- If I run getStacks with the no-cache fetch policy, processing the dummyQuery results takes ~1ms.
- If the getStacks results are cached, processing the dummyQuery results takes ~500ms.
- If, additionally, I turn dummyQuery into a mutation, processing its results takes even longer: usually about double (~1s), but sometimes ~2s.
I believe this duration is entirely about Apollo cache updating; it has nothing to do with other parts of my app re-rendering or the like. I’m pasting the code that does the measurement and the relevant Chrome profiles.
Another perhaps interesting fact: if the dummyQuery results are not put in the cache when the query returns (for example because I run it with no-cache, or because dummyQuery is a mutation whose result is not normalized), then processing those results is fast.
So it would appear that the presence of the large getStacks results in the cache makes caching the dummyQuery results expensive.
Is this supposed to ever happen / how can we investigate this further?
Here are the profiles showing the 500ms processing of the dummyQuery results, and the 2s processing when dummyQuery is a mutation. I’m also putting a screenshot of the mutation one below.
Here is the code doing the measurement, for clarity.
async function onClick() {
  const start = Date.now();
  const {data} = await client.mutate({
    mutation: DUMMY_MUTATION,
    // fetchPolicy: "no-cache",
  });
  console.log(
    `done mutation. took ${Date.now() - start}ms. stats:`,
    stats,
    JSON.stringify(stats, null, 2),
  );
}
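For reference, the measurement pattern above generalizes to a small helper; `timed` is a hypothetical name I’m using for illustration, not an Apollo API:

```javascript
// Times any async operation and returns both its result and the elapsed
// milliseconds, following the same Date.now() pattern as onClick() above.
async function timed(label, fn) {
  const start = Date.now();
  const result = await fn();
  const ms = Date.now() - start;
  console.log(`${label} took ${ms}ms`);
  return { result, ms };
}

// Usage (client and DUMMY_MUTATION as in the snippet above):
// await timed("dummyMutation", () =>
//   client.mutate({ mutation: DUMMY_MUTATION }),
// );
```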
This is the output of client.getMemoryInternals(), printed immediately
after the dummyQuery results have been cached.
{
  "limits": {
    "parser": 1000,
    "canonicalStringify": 1000,
    "print": 2000,
    "documentTransform.cache": 2000,
    "queryManager.getDocumentInfo": 2000,
    "PersistedQueryLink.persistedQueryHashes": 2000,
    "fragmentRegistry.transform": 2000,
    "fragmentRegistry.lookup": 1000,
    "fragmentRegistry.findFragmentSpreads": 4000,
    "cache.fragmentQueryDocuments": 1000,
    "removeTypenameFromVariables.getVariableDefinitions": 2000,
    "inMemoryCache.maybeBroadcastWatch": 5000,
    "inMemoryCache.executeSelectionSet": 50000,
    "inMemoryCache.executeSubSelectedArray": 10000
  },
  "sizes": {
    "print": 25,
    "parser": 8,
    "canonicalStringify": 4,
    "links": [],
    "queryManager": {
      "getDocumentInfo": 9,
      "documentTransforms": []
    },
    "cache": {
      "fragmentQueryDocuments": 0
    },
    "addTypenameDocumentTransform": [
      {
        "cache": 9
      }
    ],
    "inMemoryCache": {
      "executeSelectionSet": 45973,
      "executeSubSelectedArray": 10000,
      "maybeBroadcastWatch": 13
    },
    "fragmentRegistry": {}
  }
}
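One thing that stands out to me in the numbers above: inMemoryCache.executeSelectionSet sits at 45973 of its 50000 limit, which I understand to be the memoized result cache. If that is the relevant mechanism, one way to probe it might be to disable result caching entirely and re-measure. A sketch of what I mean; the endpoint and the client wiring are placeholders, and resultCaching is the documented InMemoryCache option:

```javascript
import { ApolloClient, InMemoryCache } from "@apollo/client";

// Placeholder client setup; only the resultCaching flag matters here.
// resultCaching: false disables the memoized result cache that backs the
// executeSelectionSet / executeSubSelectedArray sizes reported above.
const probeClient = new ApolloClient({
  uri: "/graphql", // placeholder endpoint
  cache: new InMemoryCache({ resultCaching: false }),
});
```

If processing the dummyQuery results becomes fast with this flag off, that would point at result-cache maintenance rather than the normalized data itself; but that is just a guess on my part.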
The relevant schema types look like this:
type Query {
  getStacks(snapshotID: Int!, filters: [StacksFilter!]): SnapshotData!
}

type Mutation {
  dummyMutation: DummyType!
}

type DummyType {
  id: ID!
  dummyNumber: Int!
}

type SnapshotData {
  id: ID!
  # Goroutines grouped by process.
  GoroutinesByProcess: [GoroutinesGroup!]!
}

# GoroutinesGroup describes a set of goroutines with identical stack traces.
type GoroutinesGroup {
  # The IDs of the goroutines in this group, grouped by process.
  IDs: [GoroutinesIDs!]!
  Data: [FrameData!]!
  # Frames are ordered leaf to root.
  Frames: [FrameInfo!]!
}
The data inside GoroutinesGroup is not normalized, fwiw.
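To spell out what I mean by “not normalized”: GoroutinesGroup has no id or key fields, so (as I understand Apollo’s normalization) the group objects are stored inline inside the parent SnapshotData entry rather than as separate top-level cache entries. A rough illustration with plain objects; the shapes and the "SnapshotData:1" / "GoroutinesGroup:7" keys are made up, not actual cache dumps:

```javascript
// Normalized shape (hypothetical, as if GoroutinesGroup had a key field):
// the group gets its own top-level entry, and the parent stores a
// reference ({ __ref }) to it.
const normalizedShape = {
  "SnapshotData:1": {
    __typename: "SnapshotData",
    id: 1,
    GoroutinesByProcess: [{ __ref: "GoroutinesGroup:7" }],
  },
  "GoroutinesGroup:7": { __typename: "GoroutinesGroup" },
};

// Not normalized (our actual case): the group objects are embedded inline
// in the parent SnapshotData entry.
const embeddedShape = {
  "SnapshotData:1": {
    __typename: "SnapshotData",
    id: 1,
    GoroutinesByProcess: [{ __typename: "GoroutinesGroup", Frames: [] }],
  },
};
```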