Feedback thread: Future deprecation of serviceList in ApolloGateway

In federated architectures, the ApolloGateway class currently supports a serviceList option that allows the gateway to fetch subgraph schemas and perform composition on them at runtime.

This option is strongly discouraged in most environments due to important limitations (described in the Apollo docs). Instead, we recommend either:

  • Composing your supergraph schema with the Rover CLI and providing it to the gateway via the supergraphSdl option (as described here)

  • Using managed federation for supergraph schema composition

If you currently use serviceList and neither of these alternatives fits your use case, please let us know by responding to this topic with a description of your concerns. We hope to deprecate serviceList in the coming months, but we want to address important gaps in its replacements before doing so.

Thanks for reading!


Not using managed federation here.

One of the issues I can see from a workflow standpoint is the need to automatically detect changes to the subschemas.

For example, if a child schema is updated, we’d have to detect that schema change, then apparently introspect the running service, do the same for all of the other services, and then stitch everything together into one supergraph.

We use kubernetes, and it seems like we’d need to introspect each service after it runs to get its schema at runtime (which will probably be very difficult inside of a CI job), then trigger another CI job with the updated schemas for all of the services as artifacts, compose the supergraph, and then output a new artifact from that build stage.

For our use case the hardest part here would be fetching the introspection result at runtime for each of these child services. Does rover support creating a subgraph schema from a directory of .gql files, without introspecting a running service?

Otherwise, inside of CI we’d have to run this service and get its runtime schema, or worse, connect to the service from the context of a CI job in order to introspect.

For us, being able to generate the introspection result from a static set of files (an area where @graphql-codegen is quite robust) is looking more and more appealing. We don’t currently use rover, but if we can robustly generate the same schema we would see at runtime from a directory of files (likely detecting .gql/.graphql files and .js/.ts files using the gql tag), then we should be ok to migrate off serviceList.

Without it, migrating off serviceList might require a much more complex set of operations in the context of CI, and be a big pain point.


I work in a sensitive space and we do not expose our schemas without authorization, which makes automating the supergraph process extremely difficult. We cannot use your managed federation, as it runs on a non-approved system and we cannot expose our schemas to anyone outside our organization. This option would allow us to still use graphql federation without huge overhead. We understand the limitations, but the need to create a supergraph adds layers of complexity to our ci/cd process that are not necessary. Please look into resolving some of the limitations rather than deprecating this function. If you do deprecate it, we will have to move forward with path separation rather than federation.


I appreciate this feedback thread, thank you. This shift away from serviceList has caused us many hours of investigation time, and our production graph is still running v2.22.2 to retain certain functionality. We are really looking to the future and hoping we can find a new solution, as we are very interested in getting our bits up to the latest versions; I may break out in hives being so far behind.

We run a multi-function GraphQL endpoint in GCP’s Google Cloud Run, which contains 6 child sub-graph endpoints and a Gateway endpoint. All of our code is open source in GitHub. CI/CD is managed in GitHub Actions, where approved PRs to main are deployed to Google Cloud Run, verified, and finally the gateway is instructed to reload the schema. For the most part this has been a flawless approach for the last two years.

With all of our graph endpoints running in GCP’s Google Cloud Run, the shift to a polling paradigm has been a hard pill to swallow. For a containerized function, having the process poll every 10 seconds seems like a waste (and may have other issues if it runs as a background process). Our current configuration uses apolloGateway.load() to instruct the gateway to refetch its schema. In most cases this works fine, and we prefer the push/trigger approach to updating the gateway over always polling. It would be preferable to have managed federation trigger the gateway when there is a schema change and let it grab the latest supergraph, rather than continuously polling to see if there are changes. apolloGateway.load() was removed in federation v2.23.

I have read the limitations list for serviceList and can’t disagree with them. Though in my experience with a federated gateway, if an endpoint doesn’t respond when the gateway fetches (and rebuilds) the schema, the API will probably be broken anyway, as there are many extended types between the sub-graphs.

So what we are seeing is that there are only two paths for a federated gateway going forward: managed federation and composing a supergraph. I have already explained one hesitation we have with the managed federation approach. I am guessing we can add scripts to our CI/CD process to build a supergraph and redeploy or restart the gateway, though honestly that does feel wrong and error-prone.

We considered restarting the gateway when there is a sub-graph update, though this is actually not a straight-forward thing to do in Google Cloud Run. And now knowing that serviceList will be going away, this will be a sticking point for trying to use a supergraph as well.

In summary, after getting all that out of my head, I am not strongly against using managed federation, but with the environment we run in, polling is a really poor approach. In addition I find it frustrating that a third-party service must be used to properly utilize a software library like Apollo Federation.

Again, thank you for providing this space for feedback.


Addressing some of the limitations described in the documentation:

  • Composition might fail.

I could pretty easily push and pull a last-known-good SDL for the gateway with a single key in S3.

If the gateway fails to compose, we just load the last good SDL via something like:

// Gateway (hypothetical hooks)
new ApolloGateway({
  serviceList: [...], // maybe rename to subgraphs
  onCompose: async (success: boolean, supergraph?: Supergraph): Promise<Supergraph> => {
    if (success) {
      await s3.upload({
        Body: supergraph.toString(),
        // ...bucket/key config
      }).promise();
      return supergraph;
    }

    // Composition failed: fall back to the last good SDL in S3
    const prevSupergraph = await s3.getObject({ /* ...bucket/key config */ }).promise();
    return prevSupergraph;
  },
});
  • Gateway instances might differ.

I could use the same process as above for the individual services, which could publish their individual schemas to s3 and just pull the most recent from each one.

For example with kubernetes:

  • In the child services, set an environment variable for the original Date the deployment was created. If I update the deployment, new pods have a more recent Date.
  • When a child service pod comes online, go get the most recent schema from S3. If that schema was part of a deployment more recent than the Date in our environment variable, don’t upload our schema. Otherwise, upload our newer schema.
// Child service
const deploymentDate = new Date(process.env.DEPLOYMENT_DATE);

new ApolloServer({
  onStart: async (schema: Schema): Promise<void> => {
    // I don't know the syntax for this off the top of my head
    const s3DeploymentDateString = await s3.getObjectTag({
      key: "...",
      tag: "x-deployment-date"
    });

    const s3DeploymentDate = new Date(s3DeploymentDateString);
    if (deploymentDate > s3DeploymentDate) {
      // Our deployment is newer: upload our schema with its deployment date
      await s3.upload({
        tags: {
          "x-deployment-date": deploymentDate.toISOString()
        }
      });
    }
  },
});
  • In the gateway, periodically (via a retry time or some other method) go get the most recent child schemas from S3. Since we have the most recent deployments’ schemas, compose them to get the most recent Supergraph. Upload our supergraph, and exit early upon successful “polls” if the checksum still matches, or some other logic we want to use.
// Gateway
new ApolloGateway({
  getSubgraph: async (serviceName: string): Promise<SubgraphSchema> => {
    const schema = await s3.getObject({ ..., key: serviceName }).promise();
    return schema;
  },
  shouldUpdateSupergraph: async (supergraph: Schema): Promise<boolean> => {
    const latestSupergraphChecksum = await s3.getObjectTag({
      key: "supergraph",
      tag: "x-checksum-or-whatever"
    });
    return supergraph.checksum !== latestSupergraphChecksum;
  },
});
  • In the gateway, calculate a checksum for each child service’s subgraph under the hood.
  • In child services, calculate a checksum for its own graph under the hood.
  • Upon being load balanced from gateway to child service, do not allow communication if the gateway and child service do not have the same checksum for the child service’s graph.
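
A minimal sketch of that checksum handshake (all names here are hypothetical, and the whitespace normalization is my own assumption) might look like:

```typescript
import { createHash } from "crypto";

// Hypothetical helper: gateway and child service each compute the same
// checksum over the subgraph SDL they currently hold.
function schemaChecksum(sdl: string): string {
  // Normalize whitespace so formatting-only differences don't change the hash.
  const normalized = sdl.replace(/\s+/g, " ").trim();
  return createHash("sha256").update(normalized).digest("hex");
}

// Refuse gateway -> child communication when the checksums disagree.
function mayCommunicate(gatewaySideChecksum: string, serviceSideChecksum: string): boolean {
  return gatewaySideChecksum === serviceSideChecksum;
}
```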

Via something like this, I probably wouldn’t even have to redeploy the gateway, and even though the idea of a gateway and a child service negotiating in this way is maybe a bit imperfect, I’m not sure how you’d otherwise get around it even using pre-defined supergraph composition. So to me, this is more reliable than using rover beforehand in CI.

For the record, I would much prefer something where I have things I can hook into and provide my own logic rather than being told “you need to do it all in CI/CD”, or worse, at runtime. To me, this seems a lot less complicated for devops, doesn’t need rover, and since you can provide your own logic, it can pretty easily accommodate any architecture.

That, and making rover, a tool that isn’t part of the gateway itself, be more authoritative than the gateway seems really weird to me.

I want more hooks with a well-defined contract, a la “do anything you want in the middle, but you use X to make Y” not a rigid contract like “use rover and only rover” or too loose like “we don’t provide an API for this, BYO everything”.


We use serviceList with the poll interval setting for dynamically created environments. We can’t move to the managed schema with the described CI workflow. Details are here: Create/Delete graph variants using Rover CLI · Issue #722 · apollographql/rover · GitHub
In general: we are blocked by graph variants that can’t be managed using Rover. In my opinion, each of our stage envs should have its own variant to provide its implementing service address.


Thanks for the feedback. We agree and we’re going to look into adding this functionality in an upcoming version of Rover.


Thanks for sharing your use-case. We appreciate the feedback and we certainly want to help you traverse any obstacles you’re still finding.

Using the rover supergraph compose command with a corresponding configuration file (which, in addition to pointing to a graph reference in the Apollo Studio Registry, can point to a subgraph’s local SDL file or its introspection endpoint), your schema never leaves your environment. In this mode, you don’t need to use Apollo Studio or managed federation.

We have a number of customers that work in spaces that need to be protective of their schemas (e.g., field names that reveal secret business functions). In practice, we’ve found that many customers in this space can still use the Apollo Studio Registry after getting it approved since they also usually have other demanding operational requirements and since having a source of truth offers visibility and accountability into how the graph is evolving. We’d encourage you to get in touch with us directly to discuss these organizational obstacles, though certainly understand if that’s an obstacle that you’re very familiar with already! :grinning_face_with_smiling_eyes:

In a similar spirit to particular operational constraints, users (both large and small) have found that serviceList was a bit brittle for their liking, due to real-time composition happening within the Gateway at startup (which can itself be slow on large graphs, since composition is CPU-intensive) against subgraphs that may be in various states of evolution. This was fraught with runtime validation errors (and thus failures to compose, and thus inoperable gateways). Most problematically, it was not a static artifact that could be analyzed and validated in pre-flight, and one which stuck around for later analysis, for example, after an outage. Since pre-compiled supergraph files resolve these concerns, we believe they are a stable direction. I should note that existing managed federation still did composition within the gateway, but the registry acted as the source of truth, so Gateways with schemas that didn’t validate were never put into service.

That’s all to say, I’d still suspect you would benefit from the use of a supergraph file! It’s possible we haven’t made the benefits of supergraphs over serviceList clear enough so far, so I hope this helps a bit.

To dig into one of your struggles a bit, you mentioned that the process was “super difficult”, has “huge overhead” and “adds layers of complexity” — can you elaborate on that? To build the supergraph you should merely need to run a rover supergraph compose or, if using managed federation, rover subgraph push command. We’ve looked at a number of workflows in designing this and they’ve all tried to be considerate of CI/CD environments. Can you help me understand?

We hope you can get up to date too! We don’t want that! It’s worth noting that the serviceList functionality is merely deprecated right now, and it should still work as it has since managed federation was first introduced. The load() function, on the other hand, has been more of an implementation detail since we introduced native support for gateway on ApolloServer itself (thus removing the need for load), so perhaps what you’re finding difficult here is related to load? If that’s the case, there are probably some other experimental hooks that can help you solve it. I’d encourage you to open an issue or Discussion on the Apollo Server GitHub repository if you’re finding it problematic.

I agree! The polling is particularly less than ideal from our side of things too, since users can have massive fleets of Cloud Run containers polling for updates! Transmitting the signal with the new schema poses a similar challenge (e.g., if containers were to receive webhooks, we would need to know where they are). My hunch is that supergraphs actually let us move in a better direction here, but there are some implementation details still worth chatting about.

I’m curious if you’ve experimented with Google Cloud Run’s ConfigMap service? I believe this may function in a similar way to Kubernetes ConfigMaps where they can be mounted as volumes and the files on those volumes can be watched.

In this case, a configuration that might be worth entertaining is having your Gateway “watch” a config-map volume which has a supergraph file on disk. With your Gateway fleet watching that volume and supergraph file, you could update it using rover, writing the updated supergraph when the new subgraphs have been deployed and having the gateways roll over. The Gateway doesn’t currently support “watching” supergraphs in this way, but it’s something we’re considering adding. (For those that do want their Gateways to update reactively, we know this kind of functionality works well in Kubernetes, so it’s really a matter of whether it works for you on Google Cloud Run.)
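
To make the change-detection half of that concrete, here is a rough sketch (the names and the reload callback are hypothetical; today you would have to wire this up yourself, e.g. with fs.watchFile on the mounted volume):

```typescript
// Hypothetical reloader: call the returned function with the current file
// contents; it invokes onNewSdl only when the SDL actually changed.
function makeSupergraphReloader(onNewSdl: (sdl: string) => void) {
  let lastSdl: string | undefined;
  return (sdl: string): boolean => {
    if (sdl === lastSdl) return false; // unchanged: nothing to do
    lastSdl = sdl;
    onNewSdl(sdl);
    return true;
  };
}

// Wiring sketch (not run here): poll the mounted ConfigMap volume.
// import { watchFile, readFileSync } from "fs";
// const reload = makeSupergraphReloader(sdl => reloadGateway(sdl)); // reloadGateway is hypothetical
// watchFile("/etc/supergraph/supergraph.graphql", { interval: 10000 },
//   () => reload(readFileSync("/etc/supergraph/supergraph.graphql", "utf-8")));
```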

There’s also a whole subset of users who are using tools like Argo to manage their deployments who really prefer to avoid the fully-reactive updating and opt instead for blue-green deployments that gradually roll over (and back), and we think that Supergraphs also help there, but that’s a longer discussion probably outside of the scope of this thread!

Do you have any CI workflows running on your subgraphs? If your subgraphs merely registered when they deploy — using rover subgraph publish — the supergraph would be updated automatically plus it’d let you know if you’re breaking client operations or if the supergraph didn’t compose successfully.

Introspection is an action taken against a running service, but you don’t need to use rover subgraph introspect: you can directly compose the supergraph either by publishing the subgraph’s SDL file with rover subgraph publish (to generate the supergraph in Studio) or locally using rover supergraph compose with a configuration file. If you have multiple .gql files for a subgraph, you can often just concatenate them, e.g.:

$ cat schemas/*.gql |
  rover subgraph publish my-supergraph@my-variant \
    --schema - \
    --name accounts \
    --routing-url https://accounts.example.com/graphql

We currently don’t natively detect gql tags and extract tagged template literals in Rover — this can get tricky since you can interpolate dynamic values within them, which we discourage — but if you can use .graphql files you should be good. Also, since rover accepts schemas piped via STDIN — and provided you take care not to interpolate values — you could also just use other tools (from npm) that extract gql tagged template literal contents and either write them to files or pipe them directly into Rover’s rover subgraph publish command.
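
As a toy illustration of that extraction step (a regex sketch, not a real npm package; real tools parse the AST and handle edge cases this ignores):

```typescript
// Toy extractor: pull the contents of gql`...` template literals out of
// source text, rejecting interpolated ${...} values as discouraged above.
function extractGqlBlocks(source: string): string[] {
  const blocks: string[] = [];
  const re = /gql`([^`]*)`/g;
  let match: RegExpExecArray | null;
  while ((match = re.exec(source)) !== null) {
    if (match[1].includes("${")) {
      throw new Error("interpolation inside gql tags is not supported here");
    }
    blocks.push(match[1].trim());
  }
  return blocks;
}
```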

Addressing some of your bullet-points in your follow-up response (Thanks for those thoughts!):

You can build this, but we think we can provide excellent tooling that helps enable it. At the very least, the supergraph file is intended to be your artifact (that’s definitely been one of its design principles!).

You would need to be mindful of whether the subgraphs themselves are in an unexecutable state, though, which is something we’re designing workflows to facilitate and orchestrate. We think Studio and Rover can both help here, and Kubernetes is definitely a primary workflow consideration.

There are also other non-Apollo open-source DevOps tools that can help coordinate these things, too. We do think that our managed federation can help avoid needing to roll a lot of this on your own though, and roughly what you’re describing is part of our free offering (and backed by cloud storage, just as reliably).

That said, I do think rover can help you build many parts of this on your own using well-defined and well-tested interfaces that have been purpose built to be defensive!

In a similar spirit to what I wrote above, I think Rover can help you if you want to build this. I’ll also note that Kubernetes ConfigMaps are a great way to hot-reload configuration on Pods that are deployed. A Gateway could conceivably watch a file and you could have a separate process merely update the supergraph (via Rover) and have all the Gateways reload. This watching functionality doesn’t exist yet, but you could build it yourself, and we’re riffing on some workflows that might take us there.

We have mechanisms for the first two of these bullet-points already that are utilized by our Registry, and we’re considering more durable hand-shaking between Gateways and services once we work out some runtime environment details where that can be tricky. Good idea though!

I think we probably need to take some more time to document and write about these workflows, so I’m glad we’re having this conversation. I do think there’s a blend here that we’ve been refining in our own iterations on this both internally and with large customers that’s becoming more crisp over time.

Sure, you can do everything without Studio, Rover, the Registry, or any DevOps tooling, but I think it’s worth being cautious about how much you roll on your own. Rover offers a lot of free functionality that you’d just have to rebuild yourself. We’re purpose-building Rover to be part of exactly these workflows, so if you’re not finding it at all useful, that’s surprising to me. (We’re putting a lot of time into evolving all of these tools to solve pretty much all of the challenges you’re describing!)

Runtime is not where we want most of this to happen either, since it’s more difficult to be confident in up front and more challenging to analyze later! CI and CD, however, is where most of our users seem to want to instrument this stuff, since it allows them to be really certain prior to roll-outs that they have a defensive, well-tested, and reproducible build going into production.

Typically, we haven’t found introducing a command here or there in CI or CD workflows to be a particular challenge, but I’d be curious to understand your challenges with introducing preflight and static build commands to your existing CI/CD workflows.

Ok, I think I hear you. We’ve touched on a few subjects in this post, which I hope were helpful and enlightening and this is great feedback, so thanks for sharing. I think we’ll get to the right blend eventually, but it will hopefully be all the right amounts of flexibility, build steps, tooling, webhooks, etc. :smiley:

Thank you for the reply, it is much appreciated

Yes, our difficulty is related to [the removal of] load (who moved my cheese?) and finding a new way to update our gateway on sub-graph updates. I did ask the question on this forum and it went unanswered, so perhaps the question belongs on GitHub.

What are the “hooks” you refer to? Is there documentation you can point me to?

The idea of having a supergraph monitored by one or more gateway functions appears to be a good approach. I am still thinking through how && when we rebuild supergraphs in our CI/CD process which currently runs on subgraph merges.

This is not something we have experimented with. We are using the fully managed implementation of Google Cloud Run, which doesn’t appear to have this option. We have been toying with the idea of keeping a supergraph file in a bucket and having the gateway instances monitor that, or, since monitoring the file isn’t currently an option, redeploying the gateway when the supergraph changes using that central supergraph file.

This is an interesting approach though perhaps overkill for us at this point.


Do you have any CI workflows running on your subgraphs?

Not right now; we just do a normal deployment for kube, specifically using a helm chart. For validating the schema without issue in prod, right now we use tooling such as Tilt, which lets you easily spin up workloads in kubernetes locally, with images either from a registry or built locally. For our gateway, we have references to our child services, deploy everything (and their dependencies) recursively (not as crazy as it sounds), and then spin up the gateway when everything comes online. Takes about 5 minutes and requires 1 command: tilt up.

In production, we log to DataDog for issues with the gateway, but we’ve never had a schema fail to compose in prod. We have about 15 services attached to our gateway.

Introspection is an action taken against a running service, but you don’t need to use rover subgraph introspect

Sure, but I don’t currently give my CI jobs access to a running server to play with, and I don’t expose ingress for the child services, only the gateway. I would have to spin up the service in CI and perform introspection there, or I would have to give our CI job access to said service.

Being able to use rover against static files would probably be ideal for me. If the gateway is going to be given a static schema at runtime, I don’t really see why it would be weird to do the same for the child services; instead of loading a directory of .gql files, I would just load one pre-built schema from rover.

That said, spinning up a service locally that suddenly requires an artifact is probably a non-trivial change to the local workflow. If that’s what you need for the gateway with rover, then yeah, that’s another thing on the list to migrate.

You can build this, but we think we can provide excellent tooling that helps enable it.

There are also other non-Apollo open-source DevOps tools that can help coordinate these things, too.

The reason that I personally went with the API of a bunch of hooks on the server/gateway is that it would allow pretty simple plugins for certain use cases. Want things to use S3/GCP/etc? Use a utility pack for that use case which provides a few ready-made hooks, similar to things like graphql-scalars.

That way, you can let the ecosystem do all of that work for you and let the packages duke it out until maybe they get adopted by The Guild or Apollo or the like; the only thing that would need to be really consistent is the individual hooks’ inputs and outputs and the core federation workflow.

I’ll also note that Kubernetes ConfigMaps are a great way to hot-reload configuration on Pods that are deployed.

I’m not sure if I’d want to use this, as I think there might be certain cases, such as a rollout in progress with a schema change, where I wouldn’t want to globally trigger an update. ConfigMap is a solid option, though; I’d just need to be able to tell the gateway instances what I’d like them to do, such as reloading from the ConfigMap in a rolling fashion, so as to not have all the gateways update at exactly the same time.
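
For what it’s worth, the stagger logic itself would be simple. Assuming each gateway instance knows its ordinal (e.g. from a StatefulSet hostname; the helper name here is hypothetical), reloads could be spread evenly across a window:

```typescript
// Spread N replicas' reloads evenly across windowMs so they never all
// reload the supergraph at the same instant.
function reloadDelayMs(replicaIndex: number, replicaCount: number, windowMs: number): number {
  if (replicaCount <= 0) throw new Error("replicaCount must be positive");
  return Math.floor((replicaIndex / replicaCount) * windowMs);
}

// e.g. replica 0 of 4 reloads immediately, replica 2 waits half the window:
// setTimeout(() => reloadFromConfigMap(), reloadDelayMs(myIndex, 4, 60_000));
```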

I think it’s worth being cautious about how much you roll on your own. Rover offers a lot of free functionality that you’d just have to rebuild yourself.

I agree that deviation is obviously something you can go too far with, but in general APIs exist to accommodate easy deviation, because “deviation” there is often the actual work being done.

Right now it just looks like rover would require us to do things quite differently from the way we already do them, so to us it doesn’t really matter whether we do it in CI or “roll our own” via hooks; both approaches would require work to make sure they play well with our existing setup (such as networking rules in CI). The hook approach, to me, has a lot more power and a lot less restriction, and it would be really easy to have one dev sit down and noodle on it, rather than having devops and dev figure out this process together, with the ongoing communication forever after.

Runtime is not where we want most of this to happen either, since it’s more difficult to be confident in up front and more challenging to analyze later! CI and CD, however, is where most of our users seem to want to instrument this stuff, since it allows them to be really certain prior to roll-outs that they have a defensive, well-tested, and reproducible build going into production.

If your service needs to be running in order to use rover, how is this particularly different from doing it at runtime? Seems like the same process with extra steps, and things being “farther away” due to being in CI. Sure, it’s safer because you can stop the CI job beforehand, but you could do the same thing at runtime and create actionable alerts just the same. Seems like the difference between rover and runtime right now is that rover forces these steps to be made transparent, whereas at runtime these individual steps are not exposed; it’s all one step right now to my knowledge, so of course you can’t stop it early.

My concern is that once you implement in CI, it’s a lot harder to remove and change than at runtime. Depending on your organization, any future workflow change, regardless of how much it improves things, is just harder to make because of the communication involved. Removing such a workflow is even harder, and since it would cross-cut teams, diffusion of responsibility would be much more likely to kick in, causing teams to take forever to upgrade in the event of a breaking API/workflow change.


Thanks for your feedback. Just wanted to jump back in with one note:

I feel like one of the things I noted above may have been missed, which is that this is not the case — that’s what I meant by this suggestion:

There is no runtime subgraph here. This is coming from a file. (Note that --routing-url is the location where the graph can eventually be accessed at runtime, not where it is running right now)


There is no runtime subgraph here.

Ah, ok. Sorry, my mistake.


Thanks for your follow-ups!

Documentation is lacking, but take a look at this code. I can’t promise this will always be around, and we’re working on covering the use cases through a more principled API.

On the point of going too far with “rolling your own”, it seems to me like a simple S3 bucket for schemas is not unlike the basic concept of a schema registry, so I totally see your point.

However, it seems like some people in this thread would be unable to use the schema registry for security or legal reasons, at which point such a custom schema registry might be in scope for them.

Obviously that would increase scope on an operational level, but I imagine there’s overlap in setup between what you would need with a custom schema registry, and rover.

Not saying that it’s a great idea, per se, but that is to say that an internal schema registry might be in scope for some organizations. We don’t use the schema registry right now, so I’m not really sure what it would need to work. Assuming you can’t use Apollo Studio for whatever reason, and therefore don’t need the advanced features, it seems at a glance that it wouldn’t be that much work to build a simple registry.
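
To sketch what I mean by a “simple registry” (a toy, in-memory stand-in with made-up names; a real one would back this with S3 or a database and add auth and validation):

```typescript
// Toy internal schema registry: keep every published version per subgraph,
// serve the latest on request.
class TinySchemaRegistry {
  private versions = new Map<string, string[]>();

  publish(subgraph: string, sdl: string): number {
    const history = this.versions.get(subgraph) ?? [];
    history.push(sdl);
    this.versions.set(subgraph, history);
    return history.length; // 1-based version number
  }

  latest(subgraph: string): string | undefined {
    const history = this.versions.get(subgraph);
    return history?.[history.length - 1];
  }
}
```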

I’m using managed federation here; apollo-studio works, and in our company’s case this wouldn’t necessarily impact anyone at all.

But I would like to point out that managed federation doesn’t work for everyone. While it works for many customers, there are enterprises who won’t want a dependency on a different platform, there are security requirements in some companies, etc.

I do however suggest there be an in-house hosted version of apollo-studio which enterprises can deploy by themselves, similar to how github-enterprise works. (Unless the enterprise plan is exactly that.)

Without that, if I were to do managed schemas, I’d have no option whatsoever.

One option always exists: the graphql schema of apollographql is quite open, and I’m sure people looking at the schema can build a schema registry by themselves, but I think the apollo team providing a self-hostable solution is definitely one of the things you should look into :slightly_smiling_face:

We use managed schemas for our integration and production environments, but I’m not sure how this is supposed to work for local development. Currently, we pull the repositories for each of the sub-graph services, start them up, and then start the graph gateway to pull the schema from each of the local instances running on our laptops. (We’ve added custom code to the gateway to keep polling the subgraph services until they’re up.) Is the idea here that instead of the gateway pulling the sub-graphs automatically, we’d have to add a manual step to do the same thing, so we could provide that unified schema to the gateway?

In other words, how do you see graph federation development working for developers running on their individual machines?

Hi @StephenBarlow @abernix

We are looking at Apollo Federation at the moment and investigating with our internal teams whether we can use managed federation, for security/legal reasons. I also looked at alternatives in case we cannot use managed federation, including generating the supergraph file using Rover, and got that working in a POC. We could have that as part of our CI/CD process and push the supergraph file to AWS S3 or similar. For the gateway to pick up changes to the supergraph file, I found some “experimental” hooks on the GatewayConfig object. What is happening with these “experimental” properties, and are they going to stay or go? See the example below.


import { GatewayConfig } from '@apollo/gateway'
import fs from 'fs';

export const gatewayConfig: GatewayConfig = {
  experimental_pollInterval: 10000,
  experimental_updateSupergraphSdl: async (config) => {
    console.log('reading supergraphSdl file');
    // this could be pulled from say AWS S3 or similar
    const supergraphSdl = fs.readFileSync('prod-schema.graphql', { encoding: 'utf-8' });

    return {
      id: new Date().toISOString(),
      supergraphSdl,
    };
  },
};


import 'reflect-metadata';
import { ApolloServer } from 'apollo-server';
import { ApolloGateway } from '@apollo/gateway';
import { listen as userListen } from './user-subgraph/index'
import { listen as transactionsListen } from './transactions-subgraph/index'
import { listen as paymentsListen } from './payments-subgraph/index'
import { gatewayConfig } from './gatewayConfig'

async function bootstrap() {

  const gateway = new ApolloGateway(gatewayConfig);

  const server = new ApolloServer({
    gateway,
    tracing: false,
    playground: true,
    subscriptions: false,
  });

  await Promise.all([userListen(), transactionsListen(), paymentsListen()]);

  server.listen({ port: 3000 }).then(({ url }) => {
    console.log(`Apollo Gateway ready at ${url}`);
  });
}

bootstrap();

Would be great to get a response on this and see if this solution would be viable going forward.
Many thanks,
John Gobl