How exactly do you update an array in the cache after a mutation?

Hi, after executing a getItems query, it returns an array of Items, each consisting of an _id and a title, and this appears in my cache like so:

items = [{_id:"someID", title:"someTitle"},{_id:"someID", title:"someTitle"},...]

I have a mutation to add an item:

const ADD_ITEM = gql`
    mutation AddItem($title: String!) {
        addItem(title: $title) {
            _id
            title
        }
    }
`

And a corresponding useMutation hook:

const [addItem] = useMutation(ADD_ITEM, {
        update(cache, { data: { addItem } }) {
            const { _id, title } = addItem
            cache.modify({
                fields: {
                    getItems(existingItems = []){
                        const newItemRef = cache.writeFragment({
                            fragment: gql`
                                fragment NewItem on Item {
                                    _id
                                    title
                                }
                            `,
                            data: {
                                _id,
                                title
                            }
                        })
                        return [...existingItems, newItemRef]
                    }
                }
            })
        }
    })

Now, my question is, how do I update this array by pushing the returned data into it?

From what I understand of the docs, the data field is supposed to identify the query, but I’m not sure how to identify the cached query. When I inspect the cached query using the Apollo Dev Tools, I don’t see any unique ID or anything of that sort associated with it.

I tried identifying it with data: { "userID" : Meteor.userId() } (returns the matching userID), but this doesn’t work either.

How should I be approaching this?

This is unfortunately still not an easy problem to solve. You’ll need the second parameter of the getItems field modifier function. It contains a number of helpers, one of which is storeFieldName, which in turn contains the data you need: getItems({"userID":"zH4anDoRfNBQAZRe"}). The problem is, you’ll have to extract the userID from that string yourself so you can decide whether or not to push the new item to the array in the cache.
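
To illustrate, sticking with your writeFragment approach, that check could look roughly like this (a sketch; it assumes Meteor.userId() returns the same userID the query was run with, as in your attempt above):

const [addItem] = useMutation(ADD_ITEM, {
    update(cache, { data: { addItem } }) {
        const { _id, title } = addItem
        cache.modify({
            fields: {
                getItems(existingItems = [], { storeFieldName }) {
                    // storeFieldName looks like: getItems({"userID":"zH4anDoRfNBQAZRe"})
                    const json = storeFieldName.slice(
                        storeFieldName.indexOf('{'),
                        storeFieldName.lastIndexOf('}') + 1
                    )
                    const variables = JSON.parse(json)
                    // Only touch the array that belongs to the current user
                    if (variables.userID !== Meteor.userId()) {
                        return existingItems
                    }
                    const newItemRef = cache.writeFragment({
                        fragment: gql`
                            fragment NewItem on Item {
                                _id
                                title
                            }
                        `,
                        data: { _id, title }
                    })
                    return [...existingItems, newItemRef]
                }
            }
        })
    }
})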

I’ve recently published a package that just so happens to attempt to solve exactly this problem among other things. You can find it here: apollo-augmented-hooks - npm

And here is a lengthy guide explaining exactly how the cache works and how to best tackle your specific problem: apollo-augmented-hooks/CACHING.md at master · appmotion/apollo-augmented-hooks · GitHub

By the way, a little hint: You can replace the entire cache.writeFragment section with a helper doing essentially the same thing:

getItems(existingItems = [], { toReference }) {
    return [...existingItems, toReference(addItem)]
}
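
(As far as I know, this works because Apollo has already written the mutation result to the cache by the time update runs - provided it contains __typename and _id - so toReference only has to build a reference to that normalised object rather than write any data itself.)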

Thank you very much for this, I will have a read through and will try implementing this shortly. I will update on the progress with the final working code.

Hi @mindnektar, I’ve read through your guide and have implemented my mutation as such to successfully update the cache.

const [addItem] = useMutation(ADD_ITEM, {
        update(cache, mutationResult) {
            cache.modify({
                fields: {
                    getItems: (previous, { toReference }) => {
                        return [...previous, toReference(mutationResult.data.addItem)]
                    }
                }
            })
        }
    })

Thank you very much for the help, this seems to work like a charm so far.

I’ve also checked out your package, and it seems to solve a lot of the problems that make dealing with Apollo a bit of a pain due to the immense nesting required. I haven’t actually used any of the new hooks yet, but I’m very keen to do so. I’m working on a pretty large project, so I’ll probably look into migrating all the hooks over to the augmented ones gradually, and if I hit more complex use cases along the way I may use the augmented ones straight away.

The guide and documentation are extremely well done, thank you for that!


Glad I could help and thanks for the compliments! Migration can easily be done step by step; you could migrate a single hook, see if it works for you and then migrate more whenever convenient.


I have questions related to this topic as well, so I’m writing here to avoid creating a new post.

I have a query that fetches elements from the server and writes them to the cache.
The query returns the elements in a specific order, by date.
Then I run a mutation that updates one element on the server, which also updates its date, since it was edited.
Logically, when I refresh the page, the query is rerun and returns the results in the new order, which is great.
However, I would like the cache to reflect the updated order without having to reload the query, since we are operating on the same data and have merely reordered it.

I’ve tried this code, but it’s not working.

const [shallowUpdate] = useMutation(SHALLOW_UPDATE, {
    update(cache) {
      cache.modify({
        fields: {
          getCuratedVideos(previous = []) {

            const sorted = [...previous];
            sorted.sort((a, b) => {
              const date1 = new Date(a.updatedAt);
              const date2 = new Date(b.updatedAt);

              return date2.getTime() - date1.getTime();
            });

            return sorted;
          },
        },
      });
    },
  });

Can you tell me how to update cache order after a mutation?

Yep, this is one of the most frequent gotchas with Apollo cache updates. It’s not working because the array elements you are sorting don’t actually have updatedAt fields - they are not the actual video objects, just references to them in the cache. In order to access the updatedAt field (and any other fields), you’ll need the readField helper:

          getCuratedVideos(previous = [], { readField }) {
            const sorted = [...previous];
            sorted.sort((a, b) => {
              // readField takes the field name first and the reference second
              const date1 = new Date(readField('updatedAt', a));
              const date2 = new Date(readField('updatedAt', b));

              return date2.getTime() - date1.getTime();
            });

            return sorted;
          },

The reason you’re not getting the actual objects is that the cache is normalized - no matter how often you request a particular object, it will only exist once in the cache and be referenced wherever it is needed.
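
To make that concrete, the normalised cache looks roughly like this (an illustrative shape, assuming a Video typename keyed by _id - not your exact contents):

{
    ROOT_QUERY: {
        getCuratedVideos: [{ __ref: 'Video:1' }, { __ref: 'Video:2' }],
    },
    'Video:1': { __typename: 'Video', _id: '1', updatedAt: '2021-06-02T10:00:00Z' },
    'Video:2': { __typename: 'Video', _id: '2', updatedAt: '2021-06-01T09:00:00Z' },
}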

I suggest you have a read through the guide I’ve linked above to get a clearer understanding of how the cache is structured and how to properly interact with it, so you’ll have an easier time in the future!


Hi @mindnektar, so now I’m trying to update an item already existing in the cache (just a single field).

I have the following query in my cache:

getInfo({"_id":"someID"}) {
    name: "someName",
    ...some other data
}

There can be multiple such queries with different _ids in the cache. There is a specific field, toggle: false, that I wish to flip to true after a mutation completes.

I tried the following:

        update(cache, { data: { submitInfo }}) {
            cache.modify({
                fields: {
                    getInfo(existingInfo = []) {
                        const newInfo = cache.writeFragment({
                            data: submitInfo,
                            fragment: gql`
                                fragment NewInfo on Info {
                                    _id
                                    toggle
                                }
                            `
                        })
                        // I know the following statement is wrong because
                        // existingInfo is not iterable. 
                        return [...existingInfo, newInfo]
                    }
                }
            })
        }

As per the guide I also tried to identify the cache object as such:

cache.modify({
     id: `{"_id":"${someID}"}`,

But this didn’t work either. All I need to do is modify this field within this query to change the Boolean value. In some future mutations I’ll need to do other field updates, for strings and so on, and I believe the method of modification would be the same once I figure out how to update it in this instance.

On a separate note, is there a cache modification cheat sheet somewhere? I’ve searched but I’ve not been able to find any.

After I complete the slew of cache modifications I need to do I’ll create a cheat sheet open for any contributions. It’s been a bit of a pain trying to figure out how to implement some of this with reference to the official documentation. I think these are common use cases which should have immediate references up somewhere.

Ignore the previous comment, I didn’t catch your example properly. Sorry about that.

Your use case is exactly the main one I’m seeking to solve with apollo-augmented-hooks. You don’t need to pass an id to the cache.modify call, because you want to update a root query, the root query at hand being getInfo. The problem is that your current implementation would update every single getInfo item in your cache, not just the one with the id you want. You’d have to use the storeFieldName helper to find out which getInfo item is being targeted in each iteration and how to modify it.

Search for this section in my caching guide: “Imagine our todos query was parameterised” (I realize I should probably add some anchor links to it…). That’s where I’m explaining your use case in-depth.


Thank you so much once again. So how I implemented it is as such:

update(cache) {
    cache.modify({
        fields: {
            getInfo(previous, { toReference, storeFieldName }) {
                const jsonVariables = storeFieldName.substring(
                    storeFieldName.indexOf('{'),
                    storeFieldName.lastIndexOf('}') + 1
                )
                const variables = JSON.parse(jsonVariables)
                // The id variable below is from outside this function
                // because I have it available in my React component.
                if (variables._id === id) {
                    const temp = {...previous}
                    temp.toggle = false
                    return temp
                }
                return previous
            }
        }
    })
}

Some notes:

  1. I did not need to actually use my mutationResult because on a successful mutation I know the state will always be false, so I manually update it.
  2. I need to spread the data into an object, not an array, as I just need to update a single field, and the data itself doesn’t consist of an array and none of the fields consist of arrays either.
  3. For more complex objects/arrays, I envision usage of cloneDeep comes in handy.

Your guide, once again, is a lifesaver.

It’s just odd that this kind of information isn’t captured at all in the documentation. I searched for storeFieldName but couldn’t find a single occurrence of it.

I will definitely be making a cheat sheet/quick reference (for both the original way of doing things and apollo-augmented-hooks) and linking it here for anyone else to contribute. That should significantly speed up development and ease the learning curve, since some of these things are rather trivial to do, yet unfortunately they aren’t presented that way in the documentation.

I do have two follow-up questions though. This doesn’t seem very efficient from a design point of view: if the cache inherently has unique keys associated with each query, why is it that a modify call must hit every cached query, when we should be able to specify the specific key to change in the call in the first place? Typically the cache may not store that much information, sure, but I’m curious as to why there isn’t a way to directly specify which query-key to update, without having to iterate over every single query. Is this inefficiency addressed by apollo-augmented-hooks?

My other question is, is it necessary to make a copy of the object stored in the query? This data is definitely read-only, and I was wondering if making a copy, changing what needs to be changed, and returning that copy is “hacky” or if that’s the correct way to do it, because it feels incomplete that there’s no simpler way to directly modify the data without having to copy it. Or is that what fragments are for? Although I’m not sure how fragments would integrate in this method of updating the cache.

A couple of points:

Your specific use case here can be simplified if instead of the getInfo root query you modify the normalised Info cache object:

update(cache) {
    cache.modify({
        id: `Info:${id}`, // assuming `Info` is the correct typename
        fields: {
            toggle: () => false,
        }
    })
}
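
If you’d rather not build that id string by hand, cache.identify can derive it from the typename and key field (same assumption that the typename is Info and that id is available in the component):

update(cache) {
    cache.modify({
        // identify() turns { __typename, _id } into the cache id, e.g. "Info:someID"
        id: cache.identify({ __typename: 'Info', _id: id }),
        fields: {
            toggle: () => false,
        }
    })
}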

This should also answer this question of yours:

I’m curious as to why there isn’t a way to directly specify which query-key to update

There is, and this is how. I had previously assumed that getInfo contained an array of objects, in which case you’d usually decide whether or not to add or delete an item from it after a mutation, but if it is a single object and you only want to change the value of one of its properties, it makes a lot more sense to target the specific normalised cache object by providing its id. That way, the property will be updated wherever the Info object is referenced throughout the application.

The problem with how you solved it is this: previous does not actually contain the data of the Info object in your cache - instead, it will look something like this:

{ __ref: 'Info:some-id' }

It is simply an object holding a reference to the actual object in the cache. So if you handle the cache modification the way you did, you will return the following object:

{ __ref: 'Info:some-id', toggle: false }

This should not be working, though according to you it does - maybe some internal consolidation logic makes it happen. Still, you’re not actually updating the normalised cache object, so if that object is referenced elsewhere in the application, the change to toggle will not be reflected. You should be able to verify this by console.log-ging your temp object.

In conclusion: In the vast majority of use cases, root query modifications should only be done if they are arrays that need to be updated. If you only need to update a single object, target that object directly by providing its cache id.

One other detail on this topic:

why is it that a modify call must hit every cached query

It does not actually hit every cached query, but only the ones that match the field name you provided, ignoring any variables. So if the field to be modified is getInfo, all the root queries with getInfo will be targeted, but not any others. This is particularly useful if e.g. your variables contain filters like time intervals that are freely selectable by the user, and you have to decide whether a new item should be added to each cached list depending on whether it matches those intervals.
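
As a rough sketch of that kind of decision, reusing the getItems example from the start of the thread (hypothetical from/to variables and a createdAt field on the new item):

getItems(existingItems = [], { storeFieldName, toReference }) {
    const variables = JSON.parse(storeFieldName.slice(
        storeFieldName.indexOf('{'),
        storeFieldName.lastIndexOf('}') + 1
    ))
    const createdAt = new Date(addItem.createdAt)
    const matchesInterval =
        (!variables.from || createdAt >= new Date(variables.from)) &&
        (!variables.to || createdAt <= new Date(variables.to))
    // Only add the new item to the cached lists whose interval filters it matches
    return matchesInterval ? [...existingItems, toReference(addItem)] : existingItems
}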

As for your last question: Apollo expects all cache data to be immutable, so never modify the original data. Making a copy is the correct way to do it.

Which leads me to one last note:

For more complex objects/arrays, I envision usage of cloneDeep comes in handy.

This should not be necessary and will probably not work due to the root queries only storing references and the cache objects themselves being normalised. Make sure to always target the specific cache objects that need updating, and only target the root queries if you need to add or delete a reference.
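
For completeness, the delete case on a root query would look something like this (a sketch for a hypothetical delete mutation, with deletedId taken from the mutation result):

cache.modify({
    fields: {
        getItems(existingItems = [], { readField }) {
            // Drop the reference whose _id matches the deleted item
            return existingItems.filter(itemRef => readField('_id', itemRef) !== deletedId)
        }
    }
})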


Your specific use case here can be simplified if instead of the getInfo root query you modify the normalised Info cache object:

The problem with how you solved it is this: previous does not actually contain the data of the Info object in your cache - instead, it will look something like this:

Okay, I’ve figured out why there’s a discrepancy here. My getInfo query returned a type without an _id, which meant the query stored all the data directly, without any __ref pointing to a separate cache object.

This is why my previous method worked. After changing the return type to include the _id field, it works as you described, and the previous method doesn’t, because the previous object then only contains a __ref and not the actual data. Very interesting.

And with regards to the following statement:

why is it that a modify call must hit every cached query

I should have been clearer, I meant hitting every cached getInfo query. But yes with the method you just described it makes a lot more sense now.

Is it a must to always return an _id field? I’ve actually not been doing this for many return types because it seemed unnecessary - the _id field merely returned information I already had from a parent component passing down a prop. In fact, I need the _id to perform the query in the first place: it is passed as an argument to the query, which is why I didn’t think it was necessary to return it.

But then this brings about the inconsistency in terms of the cached queries. If the return type does not have an _id field or something similar, the query does not store a __ref to the object but rather the entire object itself. Not sure about the consequences of this, if any.

I expect that you should always return an _id if there are multiple queries which have the same return type, and if a query returns a type which no other query returns, then it’s okay to omit the _id field. But this is just a guess, not sure about the best practice for this.
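
For what it's worth, the docs say the cache normalises objects by __typename plus an id or _id field by default, and that you can declare a different key per type via a type policy - something like this, with infoCode as a made-up example field:

import { InMemoryCache } from '@apollo/client'

const cache = new InMemoryCache({
    typePolicies: {
        Info: {
            // Normalise Info objects by this field instead of id/_id
            keyFields: ['infoCode'],
        },
    },
})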

Ah yes, that makes perfect sense!

But then this brings about the inconsistency in terms of the cached queries. If the return type does not have an _id field or something similar, the query does not store a __ref to the object but rather the entire object itself. Not sure about the consequences of this, if any.

I don’t know how your application is structured, but just imagine the same Info object was present in multiple root queries throughout your cache. If each of these occurrences contained the actual object (with the toggle field) rather than a reference to the normalised cache item, you would have to manually keep track of all these occurrences and modify each of them yourself. That would be completely unmaintainable in a growing application. If they all reference the same object, however, you only ever need to modify that particular object.

So I would strongly suggest to always return an _id field, if possible. An additional advantage is that Apollo can perform many cache updates automatically. This could even work in your specific example, if the toggle field is provided by your API (which I assume it is):

mutation yourMutation {
    yourMutation {
        _id
        toggle
    }
}

If you call your mutation like this (and always provide an _id), Apollo will be able to identify the correct normalised cache object itself (by comparing the _id and the __typename) and update it automatically, making your cache.modify call obsolete.

This is also covered in my guide: https://github.com/appmotion/apollo-augmented-hooks/blob/master/CACHING.md#how-do-i-update-the-cache-after-a-mutation


I don’t know how your application is structured, but just imagine the same Info object was present in multiple root queries throughout your cache. If each of these occurrences contained the actual object (with the toggle field) rather than a reference to the normalised cache item, you would have to manually keep track of all these occurrences and modify each of them yourself. That would be completely unmaintainable in a growing application. If they all reference the same object, however, you only ever need to modify that particular object.

Great, I think this echoes the sentiment I laid out previously. My Info object for any unique ID is only present once in the cache, and it will never be referred to anywhere else, because I have other return types to cover that. This Info return type is very lean and only contains what I need for that specific page, and no other page contains duplicate info, so it works in this case.

If you call your mutation like this (and always provide an _id ), Apollo will be able to identify the correct normalised cache object itself (by comparing the _id and the __typename ) and update it automatically, making your cache.modify call obsolete.

And yes, you do make a good point; I read this in your guide. However, I cannot implement it here: the return type of the mutation is not an entire Info object - it’s just a boolean, so it’ll have a different __typename. But all’s good, I think this method works out for me so far!

Once again thanks a lot for the help. Sincerely appreciated.

Gotcha! Then we’ve got it all cleared up now. 🙂


Hey again @mindnektar, so building off previous discussion, I was curious on your personal practice when it comes to allowing Apollo to update the cache automatically (basically point 1 in your guide).

If say you have a query that returns the following type:

type Example {
  _id: String!
  title: String!
  name: String!
  count: Int!
}

And say you have a query which returns this entire Example type, and we have the following mutations to edit the respective fields:

editTitle(title: String!): Example
editName(name: String!): Example
editCount(count: Int!): Example

These result in automatic cache updates. However, if we declared different types as follows:

type TitleReturn {
  _id: String!
  title: String!
}
type NameReturn {
  _id: String!
  name: String!
}
type CountReturn {
  _id: String!
  count: Int!
}

And our mutations correspond to each of those:

editTitle(title: String!): TitleReturn
editName(name: String!): NameReturn
editCount(count: Int!): CountReturn

The cache is not automatically updated because the types are different.

What I have done is follow the first method, where they all return Example, but my resolvers only return { _id, title }, { _id, name } and { _id, count } respectively. So even though the schema allows ALL the Example fields to be queried, the resolver will not actually return the full Example.

This works fine, but I was wondering from a best practices point of view if this is acceptable or if this behavior should be avoided. I feel like I’m cutting a corner here.

If your editTitle, editName and editCount resolvers are supposed to return Example objects as defined by your API, then they should be able to return the entire Example object, not just parts. A GraphQL API implementation should be able to handle every single scenario that matches the API definition. Is there a reason why these resolvers can’t return the entire Example? In fact, is there a reason why you can’t just have a single editExample mutation that takes care of the entire Example? That would also be a lot more future-proof and maintainable, if Example ends up growing and having more fields added.
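
For instance, the schema side of such a catch-all mutation could be as small as this (a sketch with optional arguments so any combination of fields can be updated - the argument names are just illustrative):

editExample(_id: String!, title: String, name: String, count: Int): Example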


Yes, I thought so too (about implementing a single editExample mutation). But I recently attended a talk, Best Practices for Designing Federated GraphQL Schemas (YouTube link), and at 17:37 it’s mentioned that opting for finer-grained mutations is preferable. The return type they show is the entire Account type, but they didn’t really go into detail about the resolver implementation, as the talk was focused on schema design.

It doesn’t really make sense to return the entire Account type when only a single field was updated, so I was wondering if this is wasteful or if this waste is acceptable. Because realistically, the mutation will only ever be asking for the _id and the updated field in its return type (for the finer-grained mutations).

Even if you return the entire Account, Apollo Server will conveniently make sure to filter out the properties that have not been requested. So there’s no waste here and you should be fine.

As for the talk you’ve linked and its advocacy of finer-grained mutations, this may work for types with a smaller number of fields, but I can only reiterate my previous point. I feel it would be cumbersome to create a new mutation and a new resolver for every single field that can be modified, and it would also bloat your API a fair bit. Having a single mutation to change any arbitrary combination of fields is a thing you do once, and then you’re done with your resolver. And since the GraphQL API clearly and strictly defines what parameters are allowed for your mutation (and Apollo Server takes care of the validation itself), usually there should be no further validation work needed on your part. Depending on your server implementation and the complexity of your business logic, an update resolver may even be a one-liner, no matter how many fields are added.
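
To make the “one-liner” point concrete, a catch-all resolver could look roughly like this (a sketch assuming the hypothetical editExample mutation above and a MongoDB-style Examples collection, since you appear to be on Meteor):

const resolvers = {
    Mutation: {
        editExample: async (root, { _id, ...fields }) => {
            // $set only touches the fields that were actually passed in
            await Examples.updateOne({ _id }, { $set: fields })
            return Examples.findOne({ _id })
        },
    },
}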

I’m more than willing to have my mind changed, but to me, “one resolver to rule them all” as the presenter called it seems to produce the cleanest and most maintainable code.


Yeah, I completely resonate with your stance on it, well said. I think I’ll just go ahead and follow the paradigm of returning the entire type, as it does make sense with one resolver. Glad to hear your opinion on this, thanks once again. 🙂
