CQRS Backend - Ensure local cache is persisted for a specified time period

Background

Because of our CQRS backend, in most cases we never get an updated entity back; we simply get a 200 with a basic payload stating that our event has been acknowledged. As a result our local cache contains the most recent version of our record, but a subsequent request returns stale data from the backend and overwrites our local cache :cry:

Currently we keep this updated version of the data in a reactive variable, which means we have two data stores containing the same data, which is less than ideal. Please also note that our entity records don’t contain any timestamp values such as updatedAt, so we can’t do a comparison there.

Solution

At present I’m looking at building a solution around the Apollo cache so that the local cache is preserved for a defined time period (2 mins); after this period the local cache can be overwritten by the matching record from the backend.

I’m thinking we could have an object that maps the cache identifier key to an associated timestamp, so a lookup would be extremely quick, e.g.

 {
   "Person:cGVvcGxlOjE=": 1638867005,
   "Planet:cGxhbmV0czox": 1638868005,
   ...
 }

We’d just need a hook that records the identifier and timestamp when the record is added to the local cache, and then a way of inspecting incoming records from the server: if the local entry is still within the window indicated by its timestamp, it is deemed newer and we return the existing record over the incoming one.
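That bookkeeping could be a couple of tiny helpers. A minimal sketch, assuming the map shape above — the function names and the 2-minute TTL are my own:

```typescript
// The 2-minute window during which the local cache wins.
const CACHE_TTL_MS = 2 * 60 * 1000;

// Cache identifier (e.g. "Person:cGVvcGxlOjE=") -> epoch millis of the local update.
const localTimestamps: Record<string, number> = {};

// Hook: call this right after writing the updated record to the local cache.
function markLocallyUpdated(identifier: string, now: number = Date.now()): void {
  localTimestamps[identifier] = now;
}

// True while the locally cached record should still win over incoming server data.
function isLocalFresh(identifier: string, now: number = Date.now()): boolean {
  const stamp = localTimestamps[identifier];
  return stamp !== undefined && now - stamp < CACHE_TTL_MS;
}
```

The `now` parameter defaults to `Date.now()` but can be injected, which keeps the freshness check easy to test deterministically.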

I’ve read the documentation around type policies but wondered whether this would be the best place to handle the logic, or if there is another approach I could use. Can you define a top-level merge function for an entity, e.g. Person, that could inspect the identifier/timestamp above and decide whether to merge?

Any help would be appreciated.

This might be possible with type policies; I believe you can specify a merge function at the type level instead of the field level. However, I would personally suggest solving this at the server level rather than the client level. A server that almost always returns out-of-date data sounds like something that could cause issues no matter how robust the client is.
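As a rough sketch of what that type-level merge could look like, consulting the identifier/timestamp map from the question — Apollo’s option types are stubbed out here so the logic stands on its own, and the key format, map, and TTL are assumptions carried over from above, not Apollo APIs:

```typescript
const TTL_MS = 2 * 60 * 1000; // the 2-minute window from the question

// Cache identifier -> epoch millis of the local update (assumed to be
// populated by a write hook elsewhere).
const localTimestamps: Record<string, number> = {};

// Loosely typed stand-ins for the helpers Apollo passes to merge functions.
type MergeOptions = {
  mergeObjects: (existing: any, incoming: any) => any;
  readField: (field: string, from: any) => any;
};

const personTypePolicy = {
  // Would be registered as typePolicies: { Person: personTypePolicy }.
  merge(existing: any, incoming: any, { mergeObjects, readField }: MergeOptions) {
    const id = readField("id", incoming);
    const key = id != null ? `Person:${id}` : undefined;
    const stamp = key !== undefined ? localTimestamps[key] : undefined;
    if (existing && stamp !== undefined && Date.now() - stamp < TTL_MS) {
      return existing; // local cache still deemed newer; drop the incoming data
    }
    return mergeObjects(existing, incoming); // window elapsed; let the server win
  },
};
```

Note that in a real type policy the `incoming` value may be a normalized Reference rather than a plain object, which is why the sketch reads the id via `readField` instead of accessing it directly.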