## What is Cluster Peering?
Cluster peering is currently in beta. Functionality associated with cluster peering is subject to change, and you should not use the beta release in secure environments or production scenarios. Features in beta may have performance issues, scaling issues, and limited support.
Cluster peering is not currently available in the HCP Consul offering.
You can create peering connections between two or more independent clusters so that services deployed to different partitions or datacenters can communicate.
Cluster peering allows Consul clusters in different datacenters to communicate with each other. The cluster peering process consists of the following steps:
- Create a peering token in one cluster.
- Use the peering token to establish peering with a second cluster.
- Export services between clusters.
- Create intentions to authorize services for peers.
For detailed instructions on setting up cluster peering, refer to [Create and Manage Peering Connections].
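As a rough illustration of the first two steps, the beta exposes peering endpoints on Consul's HTTP API (`/v1/peering/token` on the first cluster and `/v1/peering/establish` on the second). The sketch below only builds the JSON request bodies; the peer names are invented for illustration, and both the endpoint paths and field names should be verified against your Consul version, since the beta API surface may change.

```python
import json

def token_request(peer_name):
    """Body for POST /v1/peering/token, run against the first cluster.

    `PeerName` is the name the first cluster will use to refer to the
    second cluster once the connection is established.
    """
    return json.dumps({"PeerName": peer_name})

def establish_request(peer_name, peering_token):
    """Body for POST /v1/peering/establish, run against the second cluster.

    `PeeringToken` is the token returned by the first cluster's
    /v1/peering/token call.
    """
    return json.dumps({"PeerName": peer_name, "PeeringToken": peering_token})

# Cluster one names the peer and generates a token for it:
print(token_request("cluster-02"))
# Cluster two presents that token to establish the connection:
print(establish_request("cluster-01", "<token returned by cluster one>"))
```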
## Differences between WAN federation and cluster peering
WAN federation and cluster peering are different ways to connect clusters. The most important distinction is that WAN federation assumes clusters are owned by the same operators, so it maintains and replicates global states such as ACLs and configuration entries. As a result, WAN federation requires a primary datacenter to serve as an authority for replicated data.
Regardless of whether you connect your clusters through WAN federation or cluster peering, human and machine users can discover services in other clusters and dial them through the service mesh.
| | WAN Federation | Cluster Peering |
|---|---|---|
| Connects clusters across datacenters | ✅ | ✅ |
| Shares support queries and service endpoints | ✅ | ✅ |
| Connects clusters owned by different operators | ❌ | ✅ |
| Functions without declaring primary datacenter | ❌ | ✅ |
| Replicates exported services for service discovery | ❌ | ✅ |
| Forwards service requests for service discovery | ✅ | ❌ |
| Shares key/value stores | ✅ | ❌ |
| Uses gossip protocol | ✅ | ❌ |
## Beta release features and constraints
The cluster peering beta includes the following features and functionality:
- Mesh gateways for service-to-service traffic between clusters are available. For more information on configuring mesh gateways across peers, refer to Service-to-service Traffic Across Peered Clusters.
- You can use both the API and the UI to generate peering tokens; establish, list, read, and delete peerings; and manage intentions for peering connections.
- You can configure transparent proxies for peered services.
- You can use the `peering` rule for ACL enforcement of peering APIs.
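The `peering` ACL rule mentioned above is granted like any other resource rule in an ACL policy. A minimal sketch, assuming the beta's rule syntax (verify against your Consul version):

```hcl
# ACL policy fragment granting management of peering connections.
# "write" permits generating tokens and establishing or deleting peerings;
# "read" would permit only listing and reading them.
peering = "write"
```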
Not all features and functionality are available in the beta release. In particular, consider the following technical constraints:
- Mesh gateways for server-to-server traffic are not available.
- Services with node, instance, and check definitions totaling more than 4MB cannot be exported to a peer.
- Dynamic routing features such as splits, custom routes, and redirects cannot target services in a peered cluster.
- Configuring service failover across peers is not supported for service mesh.
- Consul datacenters that are already federated stay federated. You do not need to migrate WAN federated clusters to cluster peering.
- The `consul intention` CLI command is not supported. To manage intentions that specify services in peered clusters, use configuration entries.
- Accessing key/value stores across peers is not supported.
- Because non-Enterprise Consul instances are restricted to the `default` namespace, Consul Enterprise instances cannot export services from outside of the `default` namespace to non-Enterprise peers.
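Exporting services to a peer and authorizing the peer's callers (the last two steps of the peering process) are both handled through configuration entries. A hedged sketch in HCL, with cluster and service names invented for illustration; the field names (`Consumers`, `Peer`) follow the peering docs, but the beta surface may change, so verify them against your Consul version. Note the `default` namespace constraint above when an Enterprise cluster peers with a non-Enterprise cluster.

```hcl
# exported-services.hcl -- applied on the exporting cluster.
Kind = "exported-services"
Name = "default"
Services = [
  {
    Name = "backend"
    Consumers = [
      { Peer = "cluster-02" }  # the peer name chosen when the token was generated
    ]
  }
]

# service-intentions.hcl (a separate config entry) -- authorizes the peer's
# "frontend" service to call the exported "backend" service.
Kind = "service-intentions"
Name = "backend"
Sources = [
  {
    Name   = "frontend"
    Peer   = "cluster-02"
    Action = "allow"
  }
]
```

Each entry is written with `consul config write <file>`; intentions managed this way replace, rather than supplement, any intentions created through the CLI for the same destination.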