»Consul Clients Outside Kubernetes

Consul clients running on non-Kubernetes nodes can join a Consul cluster running within Kubernetes.

»Networking

Within one datacenter, Consul typically requires a fully connected network. This means the IPs of every client and server agent should be routable by every other client and server agent in the datacenter. Clients need to be able to gossip with every other agent and make RPC calls to servers. Servers need to be able to gossip with every other agent. See Architecture for more details.
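
For example, a quick way to confirm that an out-of-cluster node can reach a Consul server is to probe the gossip and server RPC ports (8301 and 8300 by default) with a tool such as nc. The address below is a placeholder; substitute an actual Consul server pod or host IP:

$ nc -vz 10.0.0.10 8301   # Serf LAN gossip
$ nc -vz 10.0.0.10 8300   # server RPC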

Consul Enterprise customers may use network segments to enable non-fully-connected topologies. However, out-of-cluster nodes must still be able to communicate with the server pod or host IP addresses.

»Auto-join

The recommended way to join a cluster running within Kubernetes is to use the "k8s" cloud auto-join provider.

The auto-join provider dynamically discovers IP addresses to join using the Kubernetes API. It authenticates with Kubernetes using a standard kubeconfig file. This works with all major hosted Kubernetes offerings as well as self-hosted installations. The token in the kubeconfig file needs to have permissions to list pods in the namespace where Consul servers are deployed.
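
As a quick sanity check, you can verify that the credentials in that kubeconfig are allowed to list pods in the namespace where the Consul servers run. The consul namespace below is an assumption; substitute the namespace you installed Consul into:

$ kubectl auth can-i list pods --namespace consul
yes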

The auto-join command below joins a Consul server cluster that was deployed with the official Helm chart:

$ consul agent -retry-join 'provider=k8s label_selector="app=consul,component=server"'

Note: This auto-join command only connects on the default gossip port 8301, whether you are joining over the pod network or via host ports. A Consul server or client agent that is already a member of the datacenter must be listening on this port for the external client agent to auto-join.

»Auto-join on the Pod network

In the default Consul Helm chart installation, Consul clients and servers are routable only via their pod IPs for server RPCs and gossip (HTTP API calls to Consul clients can also be made through host IPs). This means any external client agent joining the Consul cluster running on Kubernetes must have connectivity to those pod IPs.

In many hosted Kubernetes environments, you will need to explicitly configure your hosting provider to ensure that pod IPs are routable from external VMs. See Azure AKS CNI, AWS EKS CNI and GKE VPC-native clusters.

Assuming you have installed the official Helm chart with its default values, follow these steps to join an external client agent.

  1. Make sure the pod IPs of the clients and servers in Kubernetes are routable from the VM and that the VM can access port 8301 (for gossip) and port 8300 (for server RPC) on those pod IPs.

  2. Make sure that the client and server pods running in Kubernetes can route to the VM's advertise IP on its gossip port (default 8301).

  3. Make sure you have the kubeconfig file for the Kubernetes cluster in $HOME/.kube/config on the external VM.

  4. On the external VM, run:

    consul agent \
      -advertise="$ADVERTISE_IP" \
      -retry-join='provider=k8s label_selector="app=consul,component=server"' \
      -bind=0.0.0.0 \
      -hcl='leave_on_terminate = true' \
      -hcl='ports { grpc = 8502 }' \
      -config-dir=$CONFIG_DIR \
      -datacenter=$DATACENTER \
      -data-dir=$DATA_DIR
    
  5. Check if the join was successful by running consul members. Sample output:

    / $ consul members
    Node                                           Address           Status  Type    Build  Protocol  DC   Segment
    consul-consul-server-0                         10.138.0.43:9301  alive    server  1.9.1  2         dc1  <all>
    external-agent                                 10.138.0.38:8301  alive    client  1.9.0  2         dc1  <default>
    gke-external-agent-default-pool-32d15192-grs4  10.138.0.43:8301  alive    client  1.9.1  2         dc1  <default>
    gke-external-agent-default-pool-32d15192-otge  10.138.0.44:8301  alive    client  1.9.1  2         dc1  <default>
    gke-external-agent-default-pool-32d15192-vo7k  10.138.0.42:8301  alive    client  1.9.1  2         dc1  <default>
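
Once the external agent shows up as alive, you can optionally verify from the external VM that it can serve catalog queries through the servers. The output will vary with the services registered in your datacenter; at a minimum, the consul service itself is listed:

$ consul catalog services
consul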
    

»Auto-join via host ports

If your external VMs can't connect to Kubernetes pod IPs but can connect to the internal host IPs of the nodes in the Kubernetes cluster, you can instead expose the client and server ports on those host IPs.
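
The host IPs in question are the Kubernetes nodes' internal addresses, shown in the INTERNAL-IP column when you list the nodes with kubectl (this assumes kubectl is already configured for the cluster):

$ kubectl get nodes -o wide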

  1. Install the official Helm chart with the following values:

    client:
      exposeGossipPorts: true # exposes client gossip ports as hostPorts
    server:
      exposeGossipAndRPCPorts: true # exposes the server gossip and RPC ports as hostPorts
      ports:
        # Configures the server gossip port
        serflan:
          # Note that this needs to be different than 8301, to avoid conflicting with the client gossip hostPort
          port: 9301
    

    This will expose the client gossip ports, the server gossip ports, and the server RPC port at hostIP:hostPort, where hostIP is the internal IP of the Kubernetes node (VM) that the client or server pod is running on.

  2. Make sure the IPs of the Kubernetes nodes are routable from the VM and that the VM can access ports 8301 and 9301 (for gossip) and port 8300 (for server RPC) on those node IPs.

  3. Make sure the client and server pods running in Kubernetes can route to the VM's advertise IP on its gossip port (default 8301).

  4. Make sure you have the kubeconfig file for the Kubernetes cluster in $HOME/.kube/config on the external VM.

  5. On the external VM, run (note the addition of host_network=true in the retry-join argument):

    consul agent \
      -advertise="$ADVERTISE_IP" \
      -retry-join='provider=k8s host_network=true label_selector="app=consul,component=server"' \
      -bind=0.0.0.0 \
      -hcl='leave_on_terminate = true' \
      -hcl='ports { grpc = 8502 }' \
      -config-dir=$CONFIG_DIR \
      -datacenter=$DATACENTER \
      -data-dir=$DATA_DIR
    
  6. Check if the join was successful by running consul members. Sample output:

    / $ consul members
    Node                                           Address           Status  Type    Build  Protocol  DC   Segment
    consul-consul-server-0                         10.138.0.43:9301  alive    server  1.9.1  2         dc1  <all>
    external-agent                                 10.138.0.38:8301  alive    client  1.9.0  2         dc1  <default>
    gke-external-agent-default-pool-32d15192-grs4  10.138.0.43:8301  alive    client  1.9.1  2         dc1  <default>
    gke-external-agent-default-pool-32d15192-otge  10.138.0.44:8301  alive    client  1.9.1  2         dc1  <default>
    gke-external-agent-default-pool-32d15192-vo7k  10.138.0.42:8301  alive    client  1.9.1  2         dc1  <default>
    

»Manual join

If you are unable to use auto-join, you can still follow the instructions in either of the auto-join sections, but instead of using a provider key in the -retry-join flag, pass the address of at least one Consul server, e.g. -retry-join=$CONSUL_SERVER_IP:$SERVER_SERFLAN_PORT. A kubeconfig file is not required when using manual join.

Rather than hardcoding an IP, however, it's recommended to set up a DNS entry that resolves to the Consul servers' pod IPs (if using the pod network) or to the host IPs of the nodes the server pods run on (if using host ports).
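
For example, assuming a DNS record named consul-servers.example.internal that resolves to the server addresses, and the host-port Helm values shown above (server Serf LAN on port 9301), a manual join from the external VM might look like the following. The hostname and port are placeholders for your own values:

$ consul agent \
    -advertise="$ADVERTISE_IP" \
    -retry-join='consul-servers.example.internal:9301' \
    -bind=0.0.0.0 \
    -config-dir=$CONFIG_DIR \
    -datacenter=$DATACENTER \
    -data-dir=$DATA_DIR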
