»Consul Servers Outside of Kubernetes
If you have a Consul cluster already running, you can configure your Consul clients inside Kubernetes to join this existing cluster.
The following `config.yaml` file shows how to configure the Helm chart to install Consul clients that will join an existing cluster.

The `global.enabled` value first disables all chart components by default so that each component is opt-in. This allows us to set up only the client agents. We then opt in to the client agents by setting `client.enabled` to `true`.

Next, `client.exposeGossipPorts` can be set to `true` or `false` depending on whether you want the clients to be exposed on the Kubernetes internal node IPs (`true`) or their pod IPs (`false`).

Finally, `client.join` is set to an array of valid `-retry-join` values. In the example below, a fake cloud auto-join value is specified. This should be set to resolve to the proper addresses of your existing Consul cluster.
```yaml
global:
  enabled: false

client:
  enabled: true
  # Set this to true to expose the Consul clients using the Kubernetes node
  # IPs. If false, the pod IPs must be routable from the external servers.
  exposeGossipPorts: true
  join:
    - 'provider=my-cloud config=val ...'
```
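With this `config.yaml` in place, applying it typically looks like the following (a sketch; the release name `consul` and the `hashicorp/consul` chart reference are assumptions about your setup):

```shell
# Add the official HashiCorp Helm repository (if not already added).
helm repo add hashicorp https://helm.releases.hashicorp.com
helm repo update

# Install the chart using the configuration above.
# "consul" is an example release name; adjust for your environment.
helm install consul hashicorp/consul -f config.yaml
```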
Networking: Note that for the Kubernetes nodes to join an existing cluster, the nodes (and specifically the agent pods) must be able to connect to all other server and client agents inside and outside of Kubernetes over LAN. If this isn't possible, consider running a separate Consul cluster inside Kubernetes and federating it with your cluster outside Kubernetes. You may also consider adopting Consul Enterprise for network segments.
»Configuring TLS with Auto-encrypt
Note: Consul on Kubernetes currently does not support external servers that require mutual authentication for the HTTPS clients of the Consul servers, that is, when servers have either `verify_incoming` or `verify_incoming_https` set to `true`. As noted in the Security Model, that setting isn't strictly necessary to support Consul's threat model, as it is recommended that all requests contain a valid ACL token.
Consul's auto-encrypt feature allows clients to automatically provision their certificates by making a request to the servers at startup. If you would like to use this feature with external Consul servers, you need to configure the Helm chart with information about the servers so that it can retrieve the clients' CA to use for securing the rest of the cluster. To do that, you must add the following values, in addition to the values mentioned above:
```yaml
global:
  tls:
    enabled: true
    enableAutoEncrypt: true

externalServers:
  enabled: true
  hosts:
    - 'provider=my-cloud config=val ...'
```
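For auto-encrypt to work, the external servers themselves must be willing to sign client certificates. A minimal sketch of the relevant server-side Consul agent configuration (HCL), assuming you already manage the CA and server certificates outside Kubernetes and the file paths shown are placeholders:

```hcl
# Server agent configuration (outside Kubernetes).
# auto_encrypt.allow_tls lets the servers issue certificates
# to clients that connect with auto-encrypt enabled.
verify_incoming_rpc = true
verify_outgoing     = true
ca_file             = "/etc/consul.d/consul-agent-ca.pem"
cert_file           = "/etc/consul.d/server-cert.pem"
key_file            = "/etc/consul.d/server-key.pem"

auto_encrypt {
  allow_tls = true
}
```

Note that this enables mutual TLS on RPC only; as the note above explains, `verify_incoming` and `verify_incoming_https` must not be set to `true`.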
In most cases, `externalServers.hosts` will be the same as `client.join`; however, both keys must be set because they are used for different purposes: one for Serf LAN and the other for HTTPS connections. Please see the reference documentation for more info. If your HTTPS port is different from Consul's default `8501`, you must also set `externalServers.httpsPort`.
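For example, if your servers serve the HTTPS API on a non-default port, the additional setting would look like this (the port `443` here is only an illustration):

```yaml
externalServers:
  enabled: true
  hosts:
    - 'provider=my-cloud config=val ...'
  # Set this if your servers' HTTPS API is not on the default 8501.
  httpsPort: 443
```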
»Configuring ACLs
If you are running external servers with ACLs enabled, there are a couple of ways to configure the Helm chart to initialize ACL tokens for Consul clients and consul-k8s components for you.
»Manually Bootstrapping ACLs
If you would like to call the ACL bootstrapping API yourself or if your cluster has already been bootstrapped with ACLs, you can provide the bootstrap token to the Helm chart. The Helm chart will then use this token to configure ACLs for Consul clients and any consul-k8s components you are enabling.
First, create a Kubernetes secret containing your bootstrap token:
```shell
kubectl create secret generic bootstrap-token --from-literal='token=<your bootstrap token>'
```
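To confirm the secret was stored correctly, you can read it back and decode it (a quick check, assuming the secret lives in the current namespace):

```shell
# Fetch the token value from the secret and base64-decode it.
kubectl get secret bootstrap-token -o jsonpath='{.data.token}' | base64 --decode
```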
Then provide that secret to the Helm chart:
```yaml
global:
  acls:
    manageSystemACLs: true
    bootstrapToken:
      secretName: bootstrap-token
      secretKey: token
```
The bootstrap token requires the following minimal permissions:

- `acl:write`
- `operator:write` if enabling Consul namespaces
- `agent:read` if using WAN federation over mesh gateways
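If you prefer not to hand the chart a full bootstrap token, the permissions above can be expressed as a dedicated ACL policy attached to a purpose-made token. A sketch in Consul's ACL policy language (HCL):

```hcl
# Minimal policy for the token given to the Helm chart.
acl = "write"

# Only needed if enabling Consul namespaces (Enterprise).
operator = "write"

# Only needed if using WAN federation over mesh gateways.
agent_prefix "" {
  policy = "read"
}
```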
Next, configure external servers. The Helm chart will use this configuration to talk to the Consul servers' API to create policies, tokens, and an auth method. If you are enabling Consul Connect, `k8sAuthMethodHost` should be set to the address of your Kubernetes API server so that the Consul servers can validate a Kubernetes service account token when using the Kubernetes auth method.
```yaml
externalServers:
  enabled: true
  hosts:
    - 'provider=my-cloud config=val ...'
  k8sAuthMethodHost: 'https://kubernetes.example.com:443'
```
Your resulting Helm configuration will end up looking similar to this:
```yaml
global:
  enabled: false
  acls:
    manageSystemACLs: true
    bootstrapToken:
      secretName: bootstrap-token
      secretKey: token

client:
  enabled: true
  # Set this to true to expose the Consul clients using the Kubernetes node
  # IPs. If false, the pod IPs must be routable from the external servers.
  exposeGossipPorts: true
  join:
    - 'provider=my-cloud config=val ...'

externalServers:
  enabled: true
  hosts:
    - 'provider=my-cloud config=val ...'
  k8sAuthMethodHost: 'https://kubernetes.example.com:443'
```
»Bootstrapping ACLs via the Helm chart
If you would like the Helm chart to call the bootstrapping API and set the server tokens for you, then the steps are similar. The only difference is that you don't need to set the bootstrap token. The Helm chart will save the bootstrap token as a Kubernetes secret.
```yaml
global:
  enabled: false
  acls:
    manageSystemACLs: true

client:
  enabled: true
  # Set this to true to expose the Consul clients using the Kubernetes node
  # IPs. If false, the pod IPs must be routable from the external servers.
  exposeGossipPorts: true
  join:
    - 'provider=my-cloud config=val ...'

externalServers:
  enabled: true
  hosts:
    - 'provider=my-cloud config=val ...'
  k8sAuthMethodHost: 'https://kubernetes.example.com:443'
```