RefetchQueries + Pagination: how to keep partial data in cache

Hey,

So I have a table with offset/limit pagination, with custom merge and read functions tailored to my needs.

There is a particular scenario where I’m trying to understand how Apollo Client handles it, and what I could potentially do to maintain some of the data. Here’s what I mean:

  1. Let’s say I navigate to the second page of my table, with a limit of 10 items, which I fetched using the fetchMore method from useQuery. Now my cache has 20 items, and my read function properly slices the data to show me what I want (offset = 10, limit = 10).
  2. Now, while on this second page, I decide to remove an item. In this mutation I have both an update function and a refetchQueries array. My update function properly removes the item from the cache, leaving me with 19 items out of a total of, let’s say, 30; I have not yet fetched the rest, so they are not in my cache. Because there are another 10 items left and I don’t know what that data is, I use my refetchQueries array to get the missing data from the server and update my UI with a new item, since my limit is still 10. So far so good; this all works fine.
  3. What I’m noticing, however, is that in my cache the first 10 items are now null; the references to them have been lost. I believe this is due to the refetchQueries array, which only refetches based on the input of offset = 10 and limit = 10. I can see that in my custom merge function the existing data after the refetch is undefined.
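To make the symptom concrete, here is a minimal standalone sketch (plain arrays, no Apollo involved; `writeAtOffset` is a hypothetical helper mirroring my merge loop) of what seems to happen when the merge writes incoming items at their offset into an empty existing array:

```typescript
// Hypothetical reproduction of the "null holes" symptom: if `existing` is
// lost (undefined) and incoming items are written at offset 10, indices
// 0-9 are never filled.
function writeAtOffset<T>(
  existing: (T | undefined)[] | undefined,
  incoming: T[],
  offset: number,
): (T | undefined)[] {
  const merged = existing ? existing.slice(0) : []
  for (let i = 0; i < incoming.length; ++i) {
    merged[offset + i] = incoming[i]
  }
  return merged
}

const afterRefetch = writeAtOffset<string>(undefined, ['u10', 'u11'], 10)
// afterRefetch[0] through afterRefetch[9] are empty slots (read back as
// undefined, stored as null by the cache); only indices 10-11 hold data.
```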

My question is: how can I maintain these first 10 items in this scenario? I’m trying to figure out if this is possible and, if so, how.

My one thought is that I could update my refetchQueries to account for where I am in the pagination and refetch all of that data plus everything before it, which would populate my cache properly. But this doesn’t seem practical: if I have, say, 1k items and I’m on page 80, I would then run a query to fetch all 790 previous items. This example is on a small scale, but you can see my point; with extremely large data sets this is not practical at all and becomes an expensive query.

Is there a way to refetch the query from the server but ensure it does not overwrite my cache, setting it to null? I’m trying to determine if it’s something I’m doing wrong, as I expected Apollo to just update the offset = 10, limit = 10 data set and not affect the previous data set of offset = 0, limit = 10. Thanks for any feedback.

Hi @yeltrah :wave: can you share some more detail about the type policies being used here? Maybe there’s something in your merge function we can help adjust?

Heya,

Thanks for the response. Yes I can; here is my merge function:

export const findAllByKey = (obj, keyToFind) => {
  return Object.entries(obj).reduce(
    (acc, [key, value]) =>
      key === keyToFind
        ? acc.concat(value)
        : // guard against null: typeof null === 'object', and
          // Object.entries(null) would throw
        typeof value === 'object' && value !== null
        ? acc.concat(findAllByKey(value, keyToFind))
        : acc,
    [],
  )
}
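For context, this is how findAllByKey is meant to behave on a nested args object (a standalone sketch: the function is repeated so the snippet runs on its own, and the args shape is just an example, not my real input):

```typescript
// Copy of findAllByKey (with a null guard) so this snippet is standalone.
// It collects every value stored under `keyToFind` anywhere in the tree.
const findAllByKey = (obj: any, keyToFind: string): any[] =>
  Object.entries(obj ?? {}).reduce(
    (acc: any[], [key, value]) =>
      key === keyToFind
        ? acc.concat(value)
        : typeof value === 'object' && value !== null
        ? acc.concat(findAllByKey(value, keyToFind))
        : acc,
    [],
  )

// Example: pagination is nested under `input`, but is still found.
const args = { input: { filter: {}, pagination: { offset: 10, limit: 10 } } }
const found = findAllByKey(args, 'pagination')
// found is [{ offset: 10, limit: 10 }]
```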

const mergePaginateResults = <T>(props: IPaginateResultsProps<T>) => {
  const { args, incomingData, existingData } = props

  const data = existingData

  // find pagination
  const pagination = findAllByKey(args, 'pagination')
  // Assume an offset of 0 if pagination.offset is omitted.
  const { offset = 0 } = pagination?.[0] || {}

  for (let i = 0; i < incomingData.length; ++i) {
    data[offset + i] = incomingData[i]
  }

  return data
}
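As a sanity check, the offset placement is expected to accumulate across pages like this (a standalone sketch of the same loop, using a simplified `merge` helper rather than my actual function):

```typescript
// Simplified version of the offset-placement loop in mergePaginateResults:
// each page of results is written into the cached array at its offset.
const merge = <T>(
  existing: (T | undefined)[] | undefined,
  incoming: T[],
  offset: number,
): (T | undefined)[] => {
  const data = existing ? existing.slice(0) : []
  for (let i = 0; i < incoming.length; ++i) data[offset + i] = incoming[i]
  return data
}

// Page 1 then page 2: the cached list grows contiguously.
const page1 = merge<string>(undefined, ['a', 'b'], 0)
const page2 = merge(page1, ['c', 'd'], 2)
// page2 is ['a', 'b', 'c', 'd']
```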

const offsetLimitMerge =
  <T>(field: string, paginationInfoHandler: TPaginationInfoHandler<T>) =>
  (
    existing: Readonly<T>,
    incoming: Readonly<T>,
    options: FieldFunctionOptions<T>,
  ) => {
    const { args } = options

    const existingData = field ? existing?.[field] : existing
    const incomingData = field ? incoming[field] : incoming

    // guard: if `existing` is set but the field is missing, fall back to []
    let merged = existingData ? existingData.slice(0) : []

    if (args) {
      merged = mergePaginateResults<T>({
        args,
        incomingData,
        existingData: merged,
      })
    } else {
      // It's unusual (probably a mistake) for a paginated field not
      // to receive any arguments, so you might prefer to throw an
      // exception here, instead of recovering by appending incoming
      // onto the existing array.
      merged = [...merged, ...incomingData]
    }

    let mergedResult = merged

    if (field) {
      mergedResult = {
        // if incoming data is empty use existing obj
        ...(incoming?.[field]?.length === 0 ? existing : incoming),
        [field]: merged,
      }
    }

    return {
      ...mergedResult,
      paginationInfo: paginationInfoHandler
        ? paginationInfoHandler(mergedResult)
        : mergedResult?.paginationInfo,
    }
  }

And here is my read function, if it helps:

const readPaginateResults = <T>(props: IReadHelperProps<T>) => {
  const { args, existingData } = props

  // find pagination
  const pagination = findAllByKey(args, 'pagination')

  const { offset, limit } = pagination?.[0] || {}

  let data = existingData

  // if offset is not undefined then assume
  // we want paged results
  if (offset >= 0) {
    data = data?.slice(offset, offset + limit)
  }

  return data
}
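The read side can be summarized like this (a standalone sketch with a simplified `readPage` helper; the data is an example, not my real schema):

```typescript
// Simplified read-side slice: given the full cached list, return only
// the requested page when an offset is provided.
const readPage = <T>(data: T[] | undefined, offset?: number, limit?: number) =>
  offset !== undefined && offset >= 0 && limit !== undefined
    ? data?.slice(offset, offset + limit)
    : data

const cached = ['a', 'b', 'c', 'd', 'e', 'f']
const page = readPage(cached, 2, 2)
// page is ['c', 'd']
```

One thing worth double-checking in the original: if offset is set but limit is undefined, `offset + limit` evaluates to NaN and `slice(offset, NaN)` returns an empty array, so guarding limit explicitly (as the sketch does) may be safer.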

Hey @yeltrah !

Appreciate the code snippets! I’ll be honest, without understanding your schema or the data returned, I’m having a bit of a hard time following what the inputs and outputs are here. I’d also like to understand how these functions are used with your type policies. I’m guessing something like this?

new InMemoryCache({
  typePolicies: {
    SomeType: {
      fields: {
        foo: {
          read: readPaginateResults,
          merge: offsetLimitMerge('foo', (merged) => { /* returned paginationInfo */ })
        }
      }
    }
  }
})

Perhaps you can either provide some sample inputs and/or your schema shape to understand how these pieces fit together.

You’re right in that the cache should behave as you expect. My guess is something is awry in your merge function. I’d try placing some console.log statements in some key areas and ensuring the return value looks as you expect. Perhaps the value returned by merge includes those nulls like you’re seeing. The Apollo Dev Tools also might help here to see what the cache looks like after your merge function runs (I suspect you’re using them, but in case you’re not, they might be super helpful here).

If you’re seeing the output you expect from your merge function and you’re still getting null values, a reproduction of the issue would be super helpful. Again, it’s a bit difficult to understand what I’m looking at without knowing the shape of the data. Feel free to use our error template if you need a place to get started.

Thanks!

Hey,

Ok, so it’s good to know that it should behave as I expect, so I will do some more debugging. I already use the DevTools as you suggested, and console logs everywhere.

My problem is really that I did put in logs, and in the merge function I could see right away that the existing data came back undefined in the function’s params, before even entering the logic. That is where I began to wonder whether there was a misunderstanding on my part about what to expect.

I will provide some information here, and in the meantime see if I can use the error template for a reproduction while continuing to debug on my end now that you have indicated what should be expected.

  1. Yes, it is more or less as you suggest; the field is really just the nested object we have in our schema, which would look like below:
{
 users: [{ }]
 paginationInfo: {}
}

and the input would look something like

{
 filter: {}
 pagination: {
  offset: number
  limit: number
 }
 order: {}
}

Glad you’re already using the debugging tools :smile: . I figured that was the case but thought I’d mention it.

I think the thing I’m stuck on is the return value from your merge and the schema sample you just provided.

{
  users: [{ }]
  paginationInfo: {}
}

This suggests to me there is some parent type with these 2 fields in it. Which means your schema looks something like the following correct? (totally a guess on the names :stuck_out_tongue_winking_eye:)

type Foo {
  users: [User!]!
  paginationInfo: PaginationInfo
}

In your offsetLimitMerge function, I see this returned:

return {
  ...mergedResult,
  paginationInfo: paginationInfoHandler
    ? paginationInfoHandler(mergedResult)
    : mergedResult?.paginationInfo,
}

This suggests to me that you’re returning data back as if existing was the parent type (Foo in my example). Our type policies only allow you to specify merge functions at the field level, so this return value raises red flags. I would expect a type policy to be defined on your end something like this:

typePolicies: {
  Foo: {
    fields: {
      users: offsetLimitMerge('users')
    }
  }
}

Given the above, the users merge function should work on the array of users, not the whole Foo object. paginationInfo is a different field under Foo, and again, merge functions don’t work at the type level. Seems to me your return value would need to be a list of users.
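For example, a field-level merge on users would look roughly like this (a sketch under my guessed schema; `usersMerge` and the args shape are assumptions, not your actual code):

```typescript
// A field-level merge on `users` receives and returns arrays only.
// `paginationInfo` is a sibling field and is merged separately by the cache.
const usersMerge = (
  existing: readonly string[] | undefined,
  incoming: readonly string[],
  options: { args: { input?: { pagination?: { offset?: number } } } | null },
): string[] => {
  const merged: string[] = existing ? existing.slice(0) : []
  const offset = options.args?.input?.pagination?.offset ?? 0
  for (let i = 0; i < incoming.length; ++i) merged[offset + i] = incoming[i]
  return merged
}

// Page 2 (offset 1 here for brevity) is written after the existing page.
const result = usersMerge(['a'], ['b', 'c'], {
  args: { input: { pagination: { offset: 1 } } },
})
// result is ['a', 'b', 'c']
```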

That being said, there could be something I’m fundamentally misunderstanding on how you’re configuring the fields. If so, would you post a snippet of your type policy configuration? I’m just not understanding how you’re configuring them given the code samples you’ve provided so far. That would be truly helpful!


Hey,

So, you are right; I should be more complete in the data/inputs I provide, apologies. I was trying to be brief and show the main bits, but I can see how that just creates more confusion and leaves out the information needed for proper support.

Here are my types. I’ve left out the other input types, as pagination is the main part of UserGetInput that matters for the merge function:

type NamespaceUserQuery {
	get(input: UserGetInput!): UserGetPayload!
}

type UserGetPayload {
	users: [User!]!
	paginationInfo: PaginationInfo!
}

input UserGetInput {
	filter: UserGetFilterArg
	exclude: UserGetExcludeArg
	pagination: PaginationArg
	search: UserGetSearchArg
	order: [UserGetOrderArg!]
}

input PaginationArg {
	"""Number of items in one page"""
	limit: NonNegativeInt

	"""Number of ITEMS to skip (not pages) before returning the result"""
	offset: NonNegativeInt
}

And here is my type policy

typePolicies: {
   NamespaceUserQuery: {
      fields: {
        get: offsetLimitPagination({
          field: 'users',
          useOffsetLimitRead: true,
          keyArgs: ({ input }) => {
            const { filter, exclude } = input ?? {}
            const inputArg = {
              input: {
                filter,
                ...(exclude ? { exclude } : {}),
              },
            }
            return hashKeyArgs(inputArg)
          },
        }),
      },
    },
}

And here is the custom offset limit pagination method

export const offsetLimitPagination = <T>(
  props?: IPagination<T>,
): FieldPolicy<T> => {
  const {
    keyArgs = false,
    field,
    paginationInfoHandler,
    useOffsetLimitRead = false,
    ...rest
  } = props ?? {}
  return {
    keyArgs,
    merge: offsetLimitMerge<T>(field, paginationInfoHandler),
    read: useOffsetLimitRead ? offsetLimitRead<T>(field) : undefined,
    ...rest,
  }
}

This should be more complete and helpful. The offsetLimitMerge and offsetLimitRead functions are the ones I provided initially.


@yeltrah thanks so much for sticking with me through this. This is super helpful.

Reading through your code, I don’t see anything inherently wrong with it, especially since you say you’ve been able to console.log the correct values.

Re-reading your initial post, this part stuck out to me:

I can see that in my custom merge function the existing data after the refetch is undefined.

I went spelunking through the code and discovered an option that I think might be the culprit here as the value of that option determines what result you’ll see as existing in your merge function.

Try setting refetchWritePolicy to merge in your initial query. If you’re using React, this would be set in your useQuery:

useQuery(QUERY, {
  refetchWritePolicy: 'merge'
})

If you’re using the client directly, this would be set in options:

client.watchQuery({ refetchWritePolicy: 'merge' })

Turns out refetchQueries calls refetch under the hood, which sets the refetchWritePolicy to overwrite when the network status is refetch. This would explain why you’re seeing undefined as existing.
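To illustrate, here’s a rough model of the difference (a simplification for intuition only, not Apollo’s actual implementation):

```typescript
// Simplified model: under 'overwrite', the refetch result replaces the field
// and your merge sees `existing` as undefined; under 'merge', the previously
// cached value is passed through to your merge function.
type WritePolicy = 'merge' | 'overwrite'

const applyRefetchWrite = <T>(
  policy: WritePolicy,
  cached: T[] | undefined,
  mergeFn: (existing: T[] | undefined, incoming: T[]) => T[],
  incoming: T[],
): T[] => mergeFn(policy === 'overwrite' ? undefined : cached, incoming)

// With a merge that appends incoming onto existing:
const concatMerge = (e: string[] | undefined, i: string[]) => [...(e ?? []), ...i]
const overwritten = applyRefetchWrite('overwrite', ['a'], concatMerge, ['b'])
const merged = applyRefetchWrite('merge', ['a'], concatMerge, ['b'])
// overwritten is ['b']; merged is ['a', 'b']
```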

I’m hoping this helps! This option is not well documented unfortunately.

Hey,

First want to thank you guys for taking the time to look into this and respond. I know you have plenty of other things going on while supporting us on this OSS. Thanks, always appreciated!

I have tried the suggestion, and unfortunately the issue is still happening. In my function I can see, after running the mutation, that the query returns undefined for the existing param even with refetchWritePolicy: 'merge'. Is there something else I need to set for this to take effect?

I am using Apollo Client version 3.7.3 with Next.js 13 SSR, if that helps at all.

Unfortunately I’m out of ideas :confused: . Would you be willing to try and get us a reproduction of this issue? You can use our error template as a starting place.

Without a failing test or reproduction where we can poke around, I’m not sure there is much I can offer you in terms of advice. If you’re confident that the values returned by your merge function are accurate, then this seems to me like a weird edge case bug. There is no reason your items should just disappear like that.