
Serverless / On the Edge

Hive Gateway can be deployed on the edge, which means you can run it in a serverless environment like AWS Lambda, Cloudflare Workers, or Azure Functions.

💡

Please read the following sections carefully, most importantly the Bundling problem section. Serverless and edge platforms are very specific environments that come with very specific requirements.

Distributed Caching

You need to be aware of the limitations of these environments. For example, in-memory caching is not possible, so you have to set up a distributed cache like Redis or Memcached.

See here to configure cache storage.
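As an illustrative sketch only, a distributed cache is passed to the gateway runtime through the cache option. The Redis cache package, its default export, and its constructor options below are assumptions; the cache storage documentation linked above is the source of truth.

index.ts
import { createGatewayRuntime } from '@graphql-hive/gateway-runtime'
import http from '@graphql-mesh/transport-http'
// Assumed package and export; pick the cache storage that fits your platform.
import RedisCache from '@graphql-mesh/cache-redis'
import supergraph from './supergraph.js' // see "Loading the supergraph from a file" below
 
const gateway = createGatewayRuntime({
  supergraph,
  transports: { http },
  // A distributed cache shared across invocations, instead of the default in-memory cache.
  cache: new RedisCache({
    // Connection options are assumptions; check the cache storage docs.
    host: process.env.REDIS_HOST!,
    port: Number(process.env.REDIS_PORT)
  })
})
 
export default { fetch: gateway }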

Bundling problem

When bundled for serverless or edge environments, Hive Gateway can neither dynamically import the required dependencies nor load the supergraph from the file system. So if you are not using a schema registry such as Hive or Apollo GraphOS, you need to save the supergraph as a code file (supergraph.js or supergraph.ts) and import it. You also need to manually configure the transports needed to communicate with your subgraphs.

Loading the subgraph transport

Transports are a key component of the gateway runtime’s execution: they allow the gateway to communicate with the subgraphs.

For example, @graphql-mesh/transport-rest is used to communicate with REST subgraphs generated by the OpenAPI and JSON Schema source handlers, while GraphQL subgraphs use the GraphQL HTTP transport @graphql-mesh/transport-http.

To avoid loading unnecessary transports and to let you provide your own, Hive Gateway loads these modules dynamically. This means the bundler can’t know statically which transport packages should be included in the bundle.

When running in a bundled environment like serverless or edge, you need to statically configure the transports needed to communicate with your upstream services. This way, the transport modules are statically referenced and will be included in the bundle.

index.ts
import { createGatewayRuntime } from '@graphql-hive/gateway-runtime'
import http from '@graphql-mesh/transport-http'
import supergraph from './supergraph.js'
 
const gateway = createGatewayRuntime({
  supergraph,
  transports: {
    // Add here every transport used by your subgraphs.
 
    http // For example, the `http` transport for GraphQL based subgraphs.
  }
})
 
export default { fetch: gateway }
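
If your supergraph also contains REST subgraphs generated from OpenAPI or JSON Schema sources, register their transport the same way. A short sketch, assuming the default export of @graphql-mesh/transport-rest follows the same pattern as the HTTP transport:

index.ts
import { createGatewayRuntime } from '@graphql-hive/gateway-runtime'
import http from '@graphql-mesh/transport-http'
import rest from '@graphql-mesh/transport-rest'
import supergraph from './supergraph.js'
 
const gateway = createGatewayRuntime({
  supergraph,
  transports: {
    http, // GraphQL subgraphs
    rest // REST subgraphs generated by the OpenAPI / JSON Schema handlers
  }
})
 
export default { fetch: gateway }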

Loading the supergraph from a file

Since the file system is not available, we need to find a way to include the supergraph in our code.

For this, we need our supergraph SDL in a .js or a .ts file, so that we can import it in our script:

import { createGatewayRuntime } from '@graphql-hive/gateway-runtime'
import http from '@graphql-mesh/transport-http'
import supergraph from './supergraph.js'
 
const gateway = createGatewayRuntime({
  supergraph,
  transports: { http }
})
 
export default { fetch: gateway }

The following sections explain how to obtain the supergraph as a .js or .ts file.

Compose supergraph with Mesh

GraphQL Mesh can save the supergraph as a JavaScript file if you provide an output file name ending with .js:

mesh.config.ts
import { defineConfig } from '@graphql-mesh/compose-cli'
 
export const composeConfig = defineConfig({
  output: 'supergraph.js',
  subgraphs: [
    //...
  ]
})
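
For example, a GraphQL subgraph served over HTTP could be declared like this. The subgraph name and endpoint are placeholders, and the example assumes the loadGraphQLHTTPSubgraph helper exported by @graphql-mesh/compose-cli:

mesh.config.ts
import { defineConfig, loadGraphQLHTTPSubgraph } from '@graphql-mesh/compose-cli'
 
export const composeConfig = defineConfig({
  // An output file name ending with `.js` makes Mesh emit the supergraph as code
  output: 'supergraph.js',
  subgraphs: [
    {
      sourceHandler: loadGraphQLHTTPSubgraph('accounts', {
        endpoint: 'https://accounts.example.com/graphql'
      })
    }
  ]
})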

You can then generate the supergraph file using the mesh-compose CLI from GraphQL Mesh:

npx mesh-compose supergraph

Compose supergraph with Apollo Rover

Apollo Rover can only export the supergraph as a GraphQL document, so we have to wrap its output in a JavaScript file:

echo "export default /* GraphQL */ \`$(rover supergraph compose)\`;"

Compose with other methods

A generic way to turn a supergraph into a JavaScript file is to do it by hand.

In a supergraph.js file, you need to export the supergraph:

export default /* GraphQL */ `
  # Place your supergraph SDL here
`
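
If you already have the supergraph composed as a supergraph.graphql file, a small build-time script can do this wrapping for you. A minimal sketch (the script name and file names are hypothetical), escaping the characters that are special inside a template literal:

generate-supergraph.mjs
import { readFileSync, writeFileSync } from 'node:fs'
 
// Read the composed SDL and escape backslashes, backticks and `${` sequences
const sdl = readFileSync('supergraph.graphql', 'utf8')
const escaped = sdl
  .replace(/\\/g, '\\\\')
  .replace(/`/g, '\\`')
  .replace(/\$\{/g, '\\${')
 
// Emit a JavaScript module that default-exports the SDL string
writeFileSync('supergraph.js', `export default /* GraphQL */ \`\n${escaped}\`\n`)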