Query complexity in a federated API

Hello there,

We’re writing a GraphQL API for every new service moving forward and have chosen a federated architecture. It’s working amazingly well, and we’re now moving to Federation 2.0 for the public version of our gateway. (Before, it was internal only, while we were still learning.)

Now, I’d like to limit query complexity: have a complexity score calculated per request and the request rejected if it costs more than some threshold.

How do I go about doing that? Do the individual subgraphs reject queries (that doesn’t seem good), or does the gateway know what each field costs? And how would it? Is there a special directive we can use to indicate cost, so the gateway knows how much a query costs?

How are you all doing it? My subgraphs are in Go, and the gateway is the Federation 2 version running on Node.js.

Thanks in advance.

I was hoping to be able to just define a cost directive on each field (defaulting to 0), so the gateway could calculate query complexity from there.
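For illustration, the kind of schema-level directive I had in mind would look something like this (the directive name, argument, and costs here are hypothetical, not a built-in Apollo feature):

```graphql
directive @complexity(value: Int! = 0) on FIELD_DEFINITION

type Query {
  # cheap: served from cache
  me: User @complexity(value: 1)
  # expensive: fans out to a search backend
  search(term: String!): [Result!]! @complexity(value: 10)
}
```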


We have implemented exactly such a complexity evaluation mechanism in our gateway: an Apollo plugin that looks at every request and calculates a complexity score, using the graphql-query-complexity library for the actual complexity calculation.
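For anyone looking for the general shape of this, here is a minimal sketch of such a gateway-side plugin, assuming Apollo Server 3 and graphql-query-complexity are installed; the `MAX_COMPLEXITY` budget and the use of `simpleEstimator` are assumptions, not what @ehardy necessarily used:

```javascript
// Sketch: Apollo Server plugin that scores each operation and rejects
// anything over a (hypothetical) complexity budget.
const { GraphQLError, separateOperations } = require('graphql');
const { getComplexity, simpleEstimator } = require('graphql-query-complexity');

const MAX_COMPLEXITY = 100; // hypothetical budget; tune for your schema

function complexityPlugin(schema) {
  return {
    requestDidStart: async () => ({
      async didResolveOperation({ request, document }) {
        const complexity = getComplexity({
          schema,
          // Score only the operation actually being executed.
          query: request.operationName
            ? separateOperations(document)[request.operationName]
            : document,
          variables: request.variables,
          // Every field costs 1 unless a more specific estimator applies.
          estimators: [simpleEstimator({ defaultComplexity: 1 })],
        });
        if (complexity > MAX_COMPLEXITY) {
          throw new GraphQLError(
            `Query complexity ${complexity} exceeds limit ${MAX_COMPLEXITY}`
          );
        }
      },
    }),
  };
}

module.exports = { complexityPlugin };
```

The plugin would then be passed in the `plugins` array when constructing the gateway’s `ApolloServer` instance.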

Hope that helps.


Thanks for the reply. I tried the same, but during my query complexity calculation I don’t see my custom directives on the schema.

I have a @complexity directive in my subgraphs, but when it’s time to calculate, getComplexity doesn’t see @complexity. I logged schema.getDirectives() and I only see @include, @skip, @deprecated, and @specifiedBy.

I’m unclear how a directive-based approach (using e.g. @complexity or @cost) could ever work here, as the Apollo Federation 2 docs explicitly state:

Custom directives are not included in your graph’s composed supergraph schema. The composition process strips all subgraph directives. Only a given subgraph is aware of its own directives.

Which means the costing metadata does not survive the trip from the subgraphs to the gateway, and hence cannot be used as the basis for complexity calculations there.

So I’m interested to know @ehardy how you sourced costing metadata in your gateway plugin.

We didn’t use directives as the basis of our cost evaluations; otherwise we would have run into the issue you reported, @Alan_Boshier. Instead, we implemented some heuristics-based calculations, similar to how GitHub does it with their own GraphQL API.

Hope that helps.

Thanks @ehardy, that’s an interesting approach! If you don’t mind me asking, what in your opinion are the strengths and weaknesses of this compared to using schema directives?

Hello everyone,

Sorry I missed the discussion. Yes, you’re right: you’ll hit the issue where the cost directive isn’t present on the gateway. The supergraph schema does retain those directives (you can see them clearly in Apollo Studio), but since we can’t get at that from the gateway, we have two options: use a poor man’s estimation, or fetch the supergraph schema from Apollo Studio.

I contacted support, and they let me know they’re already looking into this problem and will tackle it in the future. That’s why I went the poor man’s estimation route; later, I’ll simply switch to whatever solution the Apollo team presents.

My poor man’s estimation uses the AST to count how many times the query expands a type. In our case (pagination aside), every expansion is a new DB query, while adding a plain field doesn’t change the DB query at all. So the number of times you enter a type is the complexity of the query — but that holds only for our schema, so feel free to implement it however fits yours. I’m just hoping the cost directive is soon tackled by the Studio team :slight_smile:
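To make the idea concrete, here is a rough, dependency-free sketch of the “count type expansions” heuristic. A production version would walk the parsed AST (e.g. with graphql-js `visit`, counting Field nodes that carry a selectionSet); the brace scan below is a simplification that ignores braces inside string arguments:

```javascript
// Poor man's complexity: each nested selection set ("entering a type")
// costs 1; plain scalar fields cost nothing.
function estimateComplexity(query) {
  let expansions = 0;
  for (const ch of query) {
    if (ch === '{') expansions++;
  }
  // The outermost selection set is the operation itself, not an expansion.
  return Math.max(0, expansions - 1);
}

const q = `
  query {
    user(id: 1) {
      name
      friends {
        name
      }
    }
  }
`;
console.log(estimateComplexity(q)); // 2: expands User, then [User]
```

Here `user { ... }` and `friends { ... }` each count as one expansion (one DB query in our setup), while `name` is free.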

Hope this helps someone in the future.

The other approach that works for now is to use directives in the subgraph schemas, have each subgraph map those directives into a JSON object, and expose a REST endpoint that the supergraph calls on initial connection to fetch that object.

The supergraph can then aggregate the objects returned by all of its subgraphs and use that info to perform complexity calculations on any incoming request.
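A sketch of the gateway-side aggregation step, assuming each subgraph’s (hypothetical) endpoint returns a flat `{ "Type.field": cost }` JSON object — the endpoint shape, the key format, and the max-wins merge rule are all assumptions:

```javascript
// Merge the cost maps fetched from each subgraph's REST endpoint into one
// lookup table the gateway can consult when scoring an incoming query.
// Keys are assumed to be "Type.field" strings; values are integer costs.
function mergeCostMaps(perSubgraphMaps) {
  const merged = {};
  for (const costs of Object.values(perSubgraphMaps)) {
    for (const [field, cost] of Object.entries(costs)) {
      // If two subgraphs both cost a shared field, keep the higher estimate.
      merged[field] = Math.max(merged[field] ?? 0, cost);
    }
  }
  return merged;
}

// Example: maps as they might come back from two subgraphs.
const merged = mergeCostMaps({
  users: { 'Query.user': 1, 'User.friends': 5 },
  search: { 'Query.search': 10, 'User.friends': 3 },
});
console.log(merged);
```

Taking the maximum on conflicts is just one policy; summing or flagging the conflict would work equally well depending on how you interpret shared fields.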

That seems like a fairly complicated solution, but I’m sure it’ll help someone. In my case, I’ll wait for the Apollo team to come up with the solution they mentioned they’re looking into :slight_smile: