Hey
We’re using Apollo Client and are generally satisfied with it. However, we’ve run into performance issues for some of our customers with very large accounts, which leads to large responses and a large store cache.
The UI stutters and drops frames to a varying degree, and it’s very noticeable. It seems to happen whenever a response comes in and triggers a cache update.
Based on the discussion here: Slow updates with large cache
we’ve looked into our cache sizes and limits and adjusted them accordingly.
The cache does end up fairly large, but we’ve raised the limits to accommodate it:
```json
{
  "limits": {
    "parser": 1000,
    "canonicalStringify": 1000,
    "print": 2000,
    "documentTransform.cache": 2000,
    "queryManager.getDocumentInfo": 2000,
    "PersistedQueryLink.persistedQueryHashes": 2000,
    "fragmentRegistry.transform": 2000,
    "fragmentRegistry.lookup": 1000,
    "fragmentRegistry.findFragmentSpreads": 4000,
    "cache.fragmentQueryDocuments": 1000,
    "removeTypenameFromVariables.getVariableDefinitions": 2000,
    "inMemoryCache.maybeBroadcastWatch": 5000,
    "inMemoryCache.executeSelectionSet": 250000,
    "inMemoryCache.executeSubSelectedArray": 150000
  },
  "sizes": {
    "print": 26,
    "parser": 38,
    "canonicalStringify": 15,
    "links": [],
    "queryManager": {
      "getDocumentInfo": 27,
      "documentTransforms": []
    },
    "cache": {
      "fragmentQueryDocuments": 0
    },
    "addTypenameDocumentTransform": [
      {
        "cache": 27
      }
    ],
    "inMemoryCache": {
      "executeSelectionSet": 128121,
      "executeSubSelectedArray": 92948,
      "maybeBroadcastWatch": 49
    },
    "fragmentRegistry": {}
  }
}
```
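For context, we raise these limits before constructing the client. A minimal sketch of how we do it, assuming Apollo Client ≥ 3.9 (where the `cacheSizes` memory-management API was introduced); the endpoint URL is a placeholder:

```typescript
import { ApolloClient, InMemoryCache } from "@apollo/client";
import { cacheSizes } from "@apollo/client/utilities";

// Overrides must be set before any cache/client is constructed,
// otherwise the default limits are baked into the memoizers.
cacheSizes["inMemoryCache.executeSelectionSet"] = 250_000;
cacheSizes["inMemoryCache.executeSubSelectedArray"] = 150_000;
cacheSizes["inMemoryCache.maybeBroadcastWatch"] = 5_000;

const client = new ApolloClient({
  uri: "https://example.com/graphql", // placeholder endpoint
  cache: new InMemoryCache(),
});
```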
The data should be normalized, at least to a very high degree: we have lint rules that ensure that wherever an ID field is available on a type, it is included in the query fragment for that type.
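To illustrate the pattern our lint rule enforces (the `Account` type and its fields here are hypothetical, just for the example):

```typescript
import { gql } from "@apollo/client";

// Every selection on a type that exposes an `id` must include it,
// so InMemoryCache can normalize the object under `__typename:id`
// instead of embedding it inside its parent.
const ACCOUNT_FIELDS = gql`
  fragment AccountFields on Account {
    id # required by our lint rule
    name
  }
`;
```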
Any advice on how we can improve performance for the client/cache while managing large data sets would be much appreciated.
Thanks!