Hello,
I feel like I’m misunderstanding something essential, and it’s resulting in excessive network requests throughout our application; I’d expect these requests to be fulfilled by the cache far more often.
Problem: Throughout the app, I am now noticing queries sent twice in many places, while some queries are only ever executed once. For example, the query for ExternalAdminDashboard might always be executed 2x on a specific page load, while the query for myProfileForm will only ever be executed once on its page load.
This problem is widespread enough that certain pages execute many queries 3x, resulting in 40 queries on a single view. Linked is an example flame graph, where the named queries are just a few of many that are executed 3x. And sequentially! This happens despite their data being correctly added to the cache, and the cache “looking fine”. Our cache has no special configuration: it’s a plain InMemoryCache object, which we instantiate outside of React so it remains stable across re-renders.
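For context, our setup is roughly this (the endpoint below is just a placeholder):

import { ApolloClient, InMemoryCache } from "@apollo/client";

// Cache and client are created once at module scope (outside any React
// component), so the same instances survive re-renders.
const cache = new InMemoryCache();

export const client = new ApolloClient({
  uri: "https://example.com/graphql", // placeholder endpoint
  cache,
});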
I’ve done some further experimentation, and I can make the problem go away if I add this policy to the offending queries: { fetchPolicy: "network-only", nextFetchPolicy: "cache-only" }. But this feels odd, because the default behavior, as I understand it, is to hit the network only if we don’t find the data in the cache; if the data is in the cache, we grab that and forgo the network request. Since the above requests happen sequentially, I’d 100% expect the data to be in the cache and retrievable.
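Concretely, the workaround per offending query looks something like this (the query document and component here are just stand-ins, not our real code):

import { gql, useQuery } from "@apollo/client";

// Placeholder document standing in for one of the affected queries.
const EXTERNAL_ADMIN_DASHBOARD_QUERY = gql`
  query ExternalAdminDashboard {
    currentUser {
      id
    }
  }
`;

function Dashboard() {
  const { data, loading } = useQuery(EXTERNAL_ADMIN_DASHBOARD_QUERY, {
    // Always hit the network on the first execution of this query...
    fetchPolicy: "network-only",
    // ...then only read from the cache on subsequent executions.
    nextFetchPolicy: "cache-only",
  });
  return loading ? null : <pre>{JSON.stringify(data)}</pre>;
}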
What’s weirder is that when I run experiments to force re-renders of React components via a timer after the initial page load, Apollo does not make any additional network requests. So even though we make all these requests on initial page load/rendering, we do seem to hit the cache afterward.
I’m also not entirely sure why a minority of requests behave exactly as you’d expect. My guess is that those specific requests live in React components that don’t re-render, so the conditions for making the excessive requests never arise. Am I missing something here? I do understand the concept of the normalized cache, and I’m certain that the large majority of these requests are cacheable; I see this problem even with requests that only ask for the currentUser’s id, which should always be served from the cache after the first execution.
Any thoughts here?
Thanks!
Hey @rmjensen 
This does not sound typical, so there is definitely more at play here. It’s a bit difficult to diagnose without seeing what’s actually happening in your app, but I can at least offer a few things to look at.
Are you querying for any non-normalized objects that overlap with other queries in the app? More often than not, when I see multiple fetches of a query, it indicates that more than one query is asking for an overlapping, non-normalized object without using the same selection set. When that non-normalized object gets written to the cache, the default behavior is to overwrite, not merge, the existing object. Apollo Client ensures queries can fulfill their data requirements at all times, so if a newly written non-normalized object overwrites fields and causes another query’s data to go missing, that can trigger a refetch. That refetch would then overwrite the other query’s data, which might cause it to fetch again because now it has missing fields (hopefully you can see the problem). There is some defensive code in Apollo Client to prevent infinite refetches in this case (something we call a feud), but multiple fetches are typically an indication that this is happening.
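To make that concrete, here’s a made-up sketch of the shape I’m describing; the type and field names are hypothetical, but the key point is that settings has no key fields and the two queries select different fields on it:

import { gql } from "@apollo/client";

// Query A writes currentUser.settings into the cache with only { theme }...
const DASHBOARD_SETTINGS = gql`
  query DashboardSettings {
    currentUser {
      id
      settings {
        theme
      }
    }
  }
`;

// ...then Query B overwrites that same non-normalized object with only
// { emailNotifications }, which makes Query A's data incomplete and can
// trigger a refetch (and vice versa).
const NOTIFICATION_SETTINGS = gql`
  query NotificationSettings {
    currentUser {
      id
      settings {
        emailNotifications
      }
    }
  }
`;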
Can you check if you are querying for any non-normalized data and if that overlaps with other queries on the page? That might be a good place to start.
Oh wow - thank you so much! That was super helpful - and it is exactly the problem! Ha. I didn’t understand that those later non-normalized entities would actually totally overwrite the entry in the cache. (leading to even the queries with normalized entities having to run again to re-fetch their now-absent data)
So then the solution moving forward is to always include the fields needed to ensure normalized entities. Now the challenging part is enforcing it in a distributed micro-frontend setup.
But seriously, thanks!
Glad to help! Happy to hear that’s what the problem was 
So then the solution moving forward is to always include the fields needed to ensure normalized entities
Yes! This is the best way to avoid this problem. Querying the key fields for the entity ensures the cache can always identify it so it can safely merge data from various fields in different queries.
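For example (the field names here are hypothetical), as long as every query selecting the user includes its key field, the cache can normalize it and merge fields from different queries:

import { gql } from "@apollo/client";

// Because the query selects the entity's key field (id by default; __typename
// is added automatically), the cache stores currentUser as a normalized
// User:<id> entry and can safely merge name/email with fields fetched by
// other queries elsewhere in the app.
const MY_PROFILE_FORM_QUERY = gql`
  query MyProfileForm {
    currentUser {
      id
      name
      email
    }
  }
`;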
If, though, you find yourself needing to query non-normalized data (there are valid reasons for this), let me add one more tip. If those non-normalized objects are safe to merge (i.e. the existing object represents the same thing as the incoming object), I’d recommend adding a type policy for that field that tells the cache to merge the object rather than overwrite it. That way you can still query for subsets of fields on that object in different queries without having to maintain the same selection set in each.
new InMemoryCache({
  typePolicies: {
    YourType: {
      fields: {
        theNonNormalizedField: {
          // always merge the existing with the incoming data
          merge: true,
        },
      },
    },
  },
});
We overwrite the non-normalized object by default because we have no way of knowing whether the existing object represents the same thing as the incoming one. Try this as well for objects you know can safely be merged together!
Perfect, thanks for the advice. I feel like I’ll definitely be needing it at some point as I try to wrangle all these queries into working with the cache.