» Envoy Integration

Consul Connect has first class support for using Envoy as a proxy. Consul configures Envoy by optionally exposing a gRPC service on the local agent that serves Envoy's xDS configuration API.

Currently Consul only supports TCP proxying between services; however, HTTP and gRPC features are planned for the near future, along with first-class ways to configure them in Consul.

As an interim solution, custom Envoy configuration can be specified in the proxy service definition, allowing more powerful Envoy features to be used.

» Supported Versions

Consul's Envoy support was added in version 1.3.0. It has been tested against Envoy 1.7.1 and 1.8.0.

» Getting Started

To get started with Envoy and see a working example you can follow the Using Envoy with Connect guide.

» Limitations

The following are limitations of the Envoy integration as released in 1.3.0. All of these are planned to be lifted in the near future.

  • Default Envoy configuration only supports Layer 4 (TCP) proxying. More advanced listener configuration is possible but experimental and requires deep Envoy knowledge. First-class workflows for configuring Layer 7 features across the cluster are planned for the near future.
  • There is currently no way to override the configuration of upstream clusters, which makes it impossible to configure Envoy features like circuit breakers, load balancing policy, custom protocol settings, etc. This will be fixed in a near-future release, first with an "escape hatch" similar to the one for listeners below, then later with first-class support.
  • The configuration delivered to Envoy is currently suitable only for a sidecar proxy. Later we plan to support the flexibility to configure Envoy as an edge router, gateway, and similar.
  • There is currently no way to disable the public listener and have a "client only" sidecar for services that don't expose a Connect-enabled service themselves but want to consume others. This will be fixed in a near-future release.
  • Once authorized, a persistent TCP connection will not be closed if the intentions change to deny access. This is currently a limitation of how the TCP proxy and network authorization filters work in Envoy. All new connections will be denied, though, and destination services can limit exposure by periodically closing inbound connections or, as an emergency measure, by a rolling restart of the destination service.

» Bootstrap Configuration

Envoy requires an initial bootstrap configuration that directs it to the local agent for further configuration discovery.

To assist in generating this, Consul 1.3.0 adds a consul connect envoy command. The command can either output the bootstrap configuration directly or can generate it and then exec the Envoy binary as a convenience wrapper.
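For example (a sketch; `-sidecar-for` selects the sidecar proxy registered for the named service, and flag spellings are as of Consul 1.3.0):

```shell
# Generate the bootstrap and exec Envoy in one step, as the sidecar
# registered for the "web" service:
consul connect envoy -sidecar-for web

# Or only emit the bootstrap configuration for inspection or customization:
consul connect envoy -sidecar-for web -bootstrap > envoy-bootstrap.json
```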

Some Envoy configuration options, such as metrics and tracing sinks, can currently only be specified via the bootstrap config, so in those cases a custom bootstrap must be used. In order to work with Connect, it's necessary to start with the following basic template and add additional configuration as needed.

  admin:
    # access_log_path and address are required by Envoy, Consul doesn't care
    # what they are set to though and never accesses the admin API.
    access_log_path: /dev/null
    address:
      socket_address:
        address: 127.0.0.1
        port_value: 19000
  node:
    # cluster is required by Envoy but Consul doesn't use it
    cluster: "<cluster_name>"
    # id must be the ID (not name if they differ) of the proxy service
    # registration in Consul
    id: "<proxy_service_id>"
  static_resources:
    clusters:
    # local_agent is the "cluster" used to make further discovery requests for
    # config and should point to the gRPC port of the local Consul agent
    # instance.
    - name: local_agent
      connect_timeout: 1s
      type: STATIC
      # tls_context is needed if and only if Consul agent TLS is configured
      tls_context:
        common_tls_context:
          validation_context:
            trusted_ca:
              filename: "<path to CA cert file Consul is using>"
      http2_protocol_options: {}
      hosts:
      - socket_address:
          address: "<agent's local IP address, usually 127.0.0.1>"
          port_value: "<agent's grpc port, default 8502>"
  dynamic_resources:
    lds_config:
      ads: {}
    cds_config:
      ads: {}
    ads_config:
      api_type: GRPC
      grpc_services:
        initial_metadata:
        - key: "x-consul-token"
          token: "<Consul ACL token with service:write on the target service>"
        envoy_grpc:
          cluster_name: local_agent

This configures a "cluster" pointing to the local Consul agent and sets that as the target for discovering all types of dynamic resources.
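As a rough illustration, the same template can also be produced programmatically. Envoy accepts JSON bootstrap files as well as YAML, so the sketch below emits an equivalent JSON document; the function name and parameters are invented for this example, and the optional tls_context section is omitted.

```python
import json

def make_bootstrap(cluster_name, proxy_service_id, agent_address="127.0.0.1",
                   agent_grpc_port=8502, acl_token=""):
    """Build the minimal Connect bootstrap described above as a dict.

    Field names mirror the YAML template. Dump the result with
    json.dumps() for a usable starting point; add tls_context only
    when the Consul agent itself is configured with TLS.
    """
    return {
        "admin": {
            # Required by Envoy; Consul never accesses the admin API.
            "access_log_path": "/dev/null",
            "address": {"socket_address": {"address": "127.0.0.1",
                                           "port_value": 19000}},
        },
        "node": {"cluster": cluster_name, "id": proxy_service_id},
        "static_resources": {
            "clusters": [{
                # The "cluster" Envoy uses for all further xDS discovery.
                "name": "local_agent",
                "connect_timeout": "1s",
                "type": "STATIC",
                "http2_protocol_options": {},
                "hosts": [{"socket_address": {"address": agent_address,
                                              "port_value": agent_grpc_port}}],
            }],
        },
        "dynamic_resources": {
            "lds_config": {"ads": {}},
            "cds_config": {"ads": {}},
            "ads_config": {
                "api_type": "GRPC",
                "grpc_services": {
                    "initial_metadata": [{"key": "x-consul-token",
                                          "token": acl_token}],
                    "envoy_grpc": {"cluster_name": "local_agent"},
                },
            },
        },
    }

print(json.dumps(make_bootstrap("my-cluster", "web-sidecar-proxy"), indent=2))
```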

» Advanced Listener Configuration

Consul 1.3.0 includes initial Envoy support which includes automatic Layer 4 (TCP) proxying over mTLS, and authorization. Near future versions of Consul will bring Layer 7 features like HTTP-path-based routing, retries, tracing and more.

For advanced users there is an "escape hatch" available in 1.3.0. The proxy.config map in the proxy service definition may contain a special key called envoy_public_listener_json. If this is set, its value must be a string containing the serialized proto3 JSON encoding of a complete Envoy listener config. Each upstream listener may also be customized in the same way by adding an envoy_listener_json key to the config map of the upstream definition.

The JSON supplied may describe a protobuf types.Any message with @type set to type.googleapis.com/envoy.api.v2.Listener, or it may be the direct encoding of the listener with no @type field.
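That acceptance rule can be illustrated with a small sketch. This operates on plain dicts in Python, not Consul's actual protobuf-based parsing, and the function name is invented:

```python
import json

LISTENER_TYPE = "type.googleapis.com/envoy.api.v2.Listener"

def parse_listener_json(raw):
    """Accept either encoding described above: a types.Any wrapper whose
    @type is the Listener type URL, or the bare listener object with no
    @type field. Returns the listener fields without the @type key."""
    obj = json.loads(raw)
    type_url = obj.pop("@type", None)
    if type_url is not None and type_url != LISTENER_TYPE:
        raise ValueError("unexpected @type: %s" % type_url)
    return obj

# Both forms yield the same listener config:
wrapped = '{"@type": "%s", "name": "custom"}' % LISTENER_TYPE
bare = '{"name": "custom"}'
assert parse_listener_json(wrapped) == parse_listener_json(bare)
```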

Once parsed, it is passed to Envoy in place of the listener config that Consul would typically configure. The only modifications Consul will make to the config provided are noted below.

» Public Listener Configuration

For the proxy.config.envoy_public_listener_json, every FilterChain added to the listener will have its TlsContext overwritten with the Connect TLS certificates. This means there is no way to override Connect TLS settings or the requirement for all inbound clients to present valid Connect certificates.

Also, every FilterChain will have the envoy.ext_authz filter prepended to the filters array to ensure that all incoming connections must be authorized explicitly by the Consul agent based on their presented client certificate.
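Taken together, the two modifications described above amount to something like the following sketch. It operates on plain dicts for illustration; this is not Consul's real code, and the ext_authz filter shown is stripped to its name only:

```python
import copy

def apply_connect_overrides(listener, connect_tls_context):
    """Illustrate the two documented modifications to a custom public
    listener: each filter chain gets its TlsContext overwritten with the
    Connect certificates, and the envoy.ext_authz filter is prepended to
    its filters array. Field names follow the Envoy v2 API JSON form."""
    out = copy.deepcopy(listener)  # leave the caller's config untouched
    for chain in out.get("filterChains", []):
        # Always overwritten -- custom TLS settings cannot survive.
        chain["tlsContext"] = connect_tls_context
        # Prepended so authorization runs before any user-supplied filter.
        chain["filters"] = ([{"name": "envoy.ext_authz"}] +
                            chain.get("filters", []))
    return out
```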

To work properly with Consul Connect, the public listener should bind to the address and port the proxy service is registered with, so that it is discoverable. It may also use the special cluster name local_app to forward requests to a single local app instance if the proxy was configured as a sidecar.

» Example

The following example shows a public listener being configured with an HTTP connection manager. As specified, this behaves exactly like the default TCP proxy filter; however, it also provides metrics on HTTP request volume and response codes.

If additional config outside of the listener is needed (for example the top-level tracing configuration to send traces to a collecting service), those currently need to be added to a custom bootstrap. You may generate the default connect bootstrap with the consul connect envoy -bootstrap command and then add the required additional resources.

service {
  kind = "connect-proxy"
  name = "web-http-aware-proxy"
  port = 8080
  proxy {
    destination_service_name = "web"
    destination_service_id = "web"
    config {
      envoy_public_listener_json = <<EOL
          "@type": "type.googleapis.com/envoy.api.v2.Listener",
          "name": "public_listener:",
          "address": {
            "socketAddress": {
              "address": "",
              "portValue": 8080
          "filterChains": [
              "filters": [
                  "name": "envoy.http_connection_manager",
                  "config": {
                    "stat_prefix": "public_listener",
                    "route_config": {
                      "name": "local_route",
                      "virtual_hosts": [
                          "name": "backend",
                          "domains": ["*"],
                          "routes": [
                              "match": {
                                "prefix": "/"
                              "route": {
                                "cluster": "local_app"
                    "http_filters": [
                        "name": "envoy.router",
                        "config": {}

» Upstream Listener Configuration

For the upstream listeners proxy.upstreams[].config.envoy_listener_json, no modification is performed. The Clusters served via the xDS API all have the correct client certificates and verification contexts configured so outbound traffic should be authenticated.

Each upstream may separately choose to define a custom listener config. If multiple upstreams define them, care must be taken to ensure they all listen on separate ports.
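A simple pre-flight check along these lines can catch port collisions before Envoy fails to bind. The helper and its input shape are invented for illustration:

```python
def check_upstream_ports(upstreams):
    """Given (upstream_name, listener_port) pairs extracted from each
    upstream's custom envoy_listener_json, raise if two custom listeners
    would bind the same port."""
    seen = {}
    for name, port in upstreams:
        if port in seen:
            raise ValueError("upstreams %r and %r both bind port %d"
                             % (seen[port], name, port))
        seen[port] = name

check_upstream_ports([("db", 9191), ("cache", 9192)])  # distinct ports: ok
```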

Currently there is no way to disable a listener for an upstream, or modify how upstream service discovery clusters are delivered. Richer support for features like this is planned for the near future.