It seemed obvious at first, and articles on the topic cover either one approach or the other, but never both. I'm building a search dropdown (autocomplete) with infinite scroll, and I'm looking for a way to efficiently fetch the minimum possible data. Currently I'm using offset-based pagination, and it worked well until I took a close look at my search results.
For simplicity, I will have a simple database (just ids):
[a1, a2, a3, b1, b2, b3, c1, c2, c3]. Just for this example, the dropdown will only show one item at a time. My query is: get 2 records and skip records.length (the current list length is the offset). The search box is empty, so I query for all records and get the first 2: a1 and a2.
Then I type in b and get b1 and b2. I scroll down to b2, and since that is the last element on my list, I fetch a further 2, skipping the current length of my list, which is 2. There is only one item matching b left once we skip 2, and it's b3.
Then I set the search box value to 3. I already have b3 in the cache, so it's loaded as the first element, but because it's the last one as well, I fetch 2 more, skipping 1 to match the search results I already have. In the database the first element matching 3 is a3, which will be skipped. Instead, b3 will be returned as a duplicate, along with c3.
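The whole sequence can be reproduced in a few lines of TypeScript; fetchPage below is a hypothetical stand-in for my real query, not actual API code:

```typescript
// Simulated in-memory "database" of ids, and an offset-based page fetch.
const db = ["a1", "a2", "a3", "b1", "b2", "b3", "c1", "c2", "c3"];

function fetchPage(search: string, offset: number, limit: number): string[] {
  // Filter by the search term first, then apply offset/limit, like the real query.
  return db.filter((id) => id.includes(search)).slice(offset, offset + limit);
}

// Searching "3" with b3 already cached: offset 1 skips a3, not b3.
const cached = ["b3"];
console.log(JSON.stringify(fetchPage("3", cached.length, 2))); // ["b3","c3"]
```

The offset counts items in the filtered result set, but my cache only knows how many items it holds, so a3 is lost and b3 comes back twice.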
This is my problem, and I can only think of 3 ways to solve it:
- remove caching, and always re-load everything from the start (so on the 10th “page”, always load the previous 9 pages as well), which doesn’t seem very efficient
- instead of providing an offset, provide the list of items I already have and skip those (where id is not in existing), which doesn’t seem very efficient either, as the list grows with every page, both for the API call and for the database (for each record in the database, check whether its id is in the list of excluded ids)
- every time the search field changes, empty the cache, jump the dropdown list back to the top, and load the first 2 items matching the new search condition
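For comparison, the second option would look something like the sketch below (fetchExcluding is a hypothetical name, not a real API). It avoids the skip/duplicate problem, but the exclude list grows with every page fetched:

```typescript
// Sketch of option 2: exclude already-fetched ids instead of using an offset.
const db = ["a1", "a2", "a3", "b1", "b2", "b3", "c1", "c2", "c3"];

function fetchExcluding(search: string, exclude: string[], limit: number): string[] {
  return db
    .filter((id) => id.includes(search) && !exclude.includes(id))
    .slice(0, limit);
}

console.log(JSON.stringify(fetchExcluding("3", ["b3"], 2))); // ["a3","c3"]
```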
The third solution seems to be the only maintainable one. For my case it is basically equal to not having a merge type policy at all, so new results simply override existing ones. But that would also mean that when I scroll down to my final result, fetchMore's result would override the existing results, which is not what I want. I want to merge them together, which I can do safely because the search conditions are the same: I can append the results one by one without being afraid of skipping an item or getting a duplicate.
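If a type policy ever becomes unavoidable, my understanding is that pairing the merge function with keyArgs on the search term would keep separate cache entries per search, so pages are only ever merged within one search condition. Here is a sketch of such a merge function as plain TypeScript (the offset arg name is an assumption about my schema); in a real setup it would live under typePolicies in InMemoryCache:

```typescript
// Field-policy-style merge: write each incoming page at its offset.
// Safe only if keyArgs separates entries per search term.
function merge(
  existing: string[] = [],
  incoming: string[],
  { args }: { args: { offset?: number } | null }
): string[] {
  const offset = args?.offset ?? 0;
  const merged = existing.slice(0);
  incoming.forEach((item, i) => {
    merged[offset + i] = item; // pages land at their own positions
  });
  return merged;
}

console.log(JSON.stringify(merge(["b1", "b2"], ["b3"], { args: { offset: 2 } })));
// ["b1","b2","b3"]
```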
The updateQuery option is very handy here: I just merge the previous data with the incoming data. However, it is being deprecated.
Now, as you can see, I clearly don't have very deep knowledge of how Apollo Client works, but it seems to me that type policies update the cache per type name, while updateQuery updates that specific local query hook's result (which is what I need).
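Concretely, the callback I pass to fetchMore's updateQuery option is just a pure merge of the two results; the { items } shape here is an assumption about my query, not my real schema:

```typescript
type Result = { items: string[] };

// Pure merge of the previous query result with fetchMore's result,
// in the shape updateQuery expects: (prev, { fetchMoreResult }) => next.
function mergeResults(
  prev: Result,
  { fetchMoreResult }: { fetchMoreResult: Result }
): Result {
  return { items: [...prev.items, ...fetchMoreResult.items] };
}

console.log(JSON.stringify(mergeResults(
  { items: ["b1", "b2"] },
  { fetchMoreResult: { items: ["b3"] } }
))); // {"items":["b1","b2","b3"]}
```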
So far I'm happy with updateQuery, but I can't be for long. Hopefully I've given a basic idea of my problem and the behaviour I expect. Any help would be much appreciated.