OG Champ Thread: How to best handle request-based localization in a supergraph?

[Editor note: Sharing the original thread Tyler started about localization from the Champions Slack workspace]

@tylergoelz

This may not be the right place, but starting somewhere. Does anybody have any patterns or lessons learned from incorporating localization into the graph? The caveat is that we don’t have (and don’t have a plan for) user or account-based settings to check within the graph. We’ll likely determine the locale based on the browser/device. I’d rather not have to include an input argument and pass it down through the resolvers, but maybe that’s the best (only? lol) option?

@greg-apollo

You could pass the locale as a header and ensure it’s propagated to all subgraphs. Arguably the locale is already there based on the Accept-Language header if you’re OK using the locale preferences of the browser/device.
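Propagating a header to every subgraph can be configured in the Apollo Router. A minimal sketch (assuming a recent Router version; check your version's docs for the exact keys):

```yaml
# router.yaml - forward the client's Accept-Language header to all subgraphs
headers:
  all:
    request:
      - propagate:
          named: "accept-language"
```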

@tylergoelz

Thanks, Greg - Are there any Apollo docs or talks on this that I haven’t come across in my searches yet that you would recommend?

@andywgarcia

Here are the Accept-Language header docs from Mozilla: Accept-Language - HTTP | MDN
I have learned not to deviate from the standards. I am currently in the process of removing locale from our keys in our supergraph…


Depending on the platform you are building the subgraphs on, there is likely a prebuilt open-source package that can parse the header for you and give you the language(s) you can support.
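In Node, a package such as "negotiator" or "accept-language-parser" handles this; the core logic can also be sketched by hand (a simplified parser that ignores wildcards and other edge cases a real package covers):

```javascript
// Sketch: pick a supported locale from an Accept-Language header value.
// Real packages handle wildcards ("*") and malformed input more robustly.
function pickLocale(acceptLanguage, supported, fallback = "en") {
  const ranges = (acceptLanguage || "")
    .split(",")
    .map((part) => {
      const [tag, ...params] = part.trim().split(";");
      const qParam = params.find((p) => p.trim().startsWith("q="));
      const q = qParam ? parseFloat(qParam.trim().slice(2)) : 1.0;
      return { tag: tag.toLowerCase(), q: Number.isNaN(q) ? 0 : q };
    })
    .filter((r) => r.tag)
    .sort((a, b) => b.q - a.q); // highest quality value first

  for (const { tag } of ranges) {
    // Exact match first, then primary-subtag match (e.g. "en-US" -> "en").
    const exact = supported.find((s) => s.toLowerCase() === tag);
    if (exact) return exact;
    const primary = tag.split("-")[0];
    const partial = supported.find(
      (s) => s.toLowerCase().split("-")[0] === primary
    );
    if (partial) return partial;
  }
  return fallback;
}

console.log(pickLocale("fr-CH,fr;q=0.9,en;q=0.8", ["en", "fr"])); // "fr"
```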

@tylergoelz

Our subgraph(s) will likely be built in Node. But it sounds like the accepted approach is to leverage the Accept-Language header between clients and the graph? Love this group for getting tried and tested answers.

@andywgarcia

I’d say that is the current modern standard for locale selection from browser to server. I have just seen it done so many other ways, and it has never played out as well as just letting the browser select the language and passing that through the system (specifically through the standard header).

@greg-apollo

I would say the only reason to do otherwise is if you have a locale selection UI in your client and some form of validation in the back end. At that point, you’d want a more deterministic “what was the user’s valid locale selection?” rather than using the accept-language header. Since you neither have nor plan to have such a client experience, accept-language should be a good solution.

@Dan_Boerner

Hey @saurav_t, can you share how Expedia’s graph does this?

@saurav_t

We started with passing the locale as an input arg, since we had the philosophy that everything a query needs to resolve properly should be in the query itself. That did not scale over the years (read: auth/z, spoofing). Our current approach is to pass it via standardized headers.

@juancarlosjr97

If you are using (or planning to use) entity caching at the router level with federation, keep in mind that headers are not taken into consideration when building the signature for the cached record.

@Dan_Boerner

Are you saying that the accept-language header is not used to construct the cache key used in entity caching?

@juancarlosjr97

Yes, I was planning to open a ticket with Apollo about this interesting case. I just have not had time yet, but I will do it tomorrow. However, this is likely the expected behavior, because entity caching only uses the request body to construct the cache key.

I know the accept-language header is not being taken into consideration. I have already tested it, but I am not sure whether any other headers are being taken into consideration.

@greg-apollo

By default, I think you’re right that headers are not part of the entity caching cache key. If you were to read the locale from the header, then assign it to a field and include it as part of the key, it would be considered for entity caching.

```graphql
type Product @key(fields: "sku locale") {
  sku: String!
  """From the accept-language header"""
  locale: String!
  """Localized using the locale"""
  name: String!
}
```

That could result in its own issues, though, since you’d be creating a compound key. I also think you can define your own cache-key logic in the Router, but I don’t have experience with that.

@juancarlosjr97

So, there are two main ways to handle cache keys (I think): surrogate keys, which I don’t think make any difference in this case (though I’d have to test it), and customized keys, which might work if you do something like SHA generation using the header values.

@tylergoelz

Hey Jay Anslow - I know you didn’t give this talk, but Dominic’s account is deactivated and I figured you’d still be able to provide some insight. https://www.youtube.com/watch?v=jJoXS8f3yGo

I’m working on creating a PADU for localization through the graph and came across this talk Dominic gave at Summit 22. How did this pattern work out for Babylon? Lessons learned? Things to avoid? Things that worked really well? Would you still recommend this approach (namely, “misusing” context as the entity key)?

@saurav_t and @andywgarcia - you both mentioned experience with the pattern of setting locale as an input argument not scaling well.

Could you share some examples of that pattern not scaling well?

I’m working on creating a preferred/acceptable/discouraged/unacceptable decision record around some options for our architectural review board and I’d like to expand on why that pattern doesn’t scale well.

@andywgarcia

It didn’t scale well because non-standard patterns become more things that developers have to remember. It becomes “tribal knowledge” for how something works. It’s more about keeping things simple and close to the basics, in my eyes, than about the technical solution working or not. At some point, when you are long gone from that team or tech (maybe even in years, maybe even months), someone is going to come along and ask “why the heck did we do it this way instead of that way?” It doesn’t matter how elegant the solution was, because the “why” behind why something was chosen inevitably gets forgotten. This happens less frequently when something adheres to common standards.

The next reason is that non-standard implementations tend to have the most bugs and tend to be the most difficult to maintain, if the above paragraph has yet to apply. Because it requires more information for people to know and remember than following standard implementations, it leaves more room for error and inevitably leads to more bugs. At some point, someone will forget to pass the localization input argument to some upstream data source for the correct filtering, and then, when the bug is found, it becomes a larger-than-necessary refactor because you have to figure out how to get that input argument all the way through the stack. When done as headers, however, your code usually has relatively global access to the request context, so you can get the header anywhere you need without having to pass it as arguments in your functions. It’s “context” rather than arguments.
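The “context rather than arguments” point can be sketched with a subgraph resolver. This is a hypothetical schema and a simulated request, assuming an Apollo Server-style setup where a context function sees the incoming HTTP request:

```javascript
// Sketch: read the locale once from the request context instead of
// threading a `locale` input argument through every resolver.
// (Hypothetical Product schema; assumes an Apollo Server-style context.)
const resolvers = {
  Product: {
    name: (product, _args, context) =>
      // Any resolver can reach the locale via context, no matter how
      // deep it sits in the query, without extra input arguments.
      product.translations[context.locale] ?? product.translations.en,
  },
};

// The context is built once per request from the propagated header.
// (Naive parsing for illustration; a real parser would honor q-values.)
function buildContext({ req }) {
  return { locale: (req.headers["accept-language"] || "en").split(",")[0] };
}

// Simulated request to show the flow end to end.
const ctx = buildContext({
  req: { headers: { "accept-language": "fr,en;q=0.8" } },
});
const product = { translations: { en: "Chair", fr: "Chaise" } };
console.log(resolvers.Product.name(product, {}, ctx)); // "Chaise"
```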

At the end of the day, using input arguments works. I have just never seen it (nor really any other fancy/non-standard solution, even with good reasons) play out as nicely over the course of five years as any standard implementation.


Thanks for the help on this @andywgarcia!


Such helpful context @andywgarcia. A perfect example of how sharing our hard earned lessons helps one another. Well done!