»Telemetry

The Consul agent collects various runtime metrics about the performance of different libraries and subsystems. These metrics are aggregated on a ten second (10s) interval and are retained for one minute. An interval is the period of time between instances of data being collected and aggregated.

When telemetry is being streamed to an external metrics store, the interval is defined to be that store's flush interval.

| External Store | Interval |
| -------------- | -------- |
| dogstatsd | 10s |
| Prometheus | 60s |
| statsd | 10s |

To view this data, you must send a signal to the Consul process: on Unix, this is USR1 while on Windows it is BREAK. Once Consul receives the signal, it will dump the current telemetry information to the agent's stderr.
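On Unix the dump can also be triggered programmatically. A minimal sketch (the pid is assumed to be known by other means, e.g. a pid file; on Windows you would send a BREAK instead):

```python
import os
import signal

def dump_agent_telemetry(consul_pid: int) -> None:
    """Ask a running Consul agent to dump telemetry to its stderr.

    Equivalent to `kill -USR1 <pid>` from a shell. Unix only; Windows
    uses the BREAK signal instead of USR1.
    """
    os.kill(consul_pid, signal.SIGUSR1)
```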

This telemetry information can be used for debugging or otherwise getting a better view of what Consul is doing. Review the Monitoring and Metrics tutorial to learn how to collect and interpret Consul data.

Additionally, if the telemetry configuration options are provided, the telemetry information will be streamed to a statsite or statsd server where it can be aggregated and flushed to Graphite or any other metrics store. For a configuration example for Telegraf, review the Monitoring with Telegraf tutorial.

This information can also be viewed with the metrics endpoint in JSON format or using Prometheus format.
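For example, the agent's HTTP API serves these metrics at `/v1/agent/metrics`. A small helper to build the request URL (the agent address is an assumption; substitute your own, and note that the Prometheus format requires `prometheus_retention_time` to be set in the agent's telemetry configuration):

```python
def agent_metrics_url(base: str = "http://127.0.0.1:8500",
                      prometheus: bool = False) -> str:
    """Build the URL for the Consul agent metrics endpoint.

    With prometheus=True the agent returns Prometheus text format
    instead of JSON.
    """
    path = "/v1/agent/metrics"
    if prometheus:
        path += "?format=prometheus"
    return base + path
```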

Below is sample output of a telemetry dump:

[2014-01-29 10:56:50 -0800 PST][G] 'consul-agent.runtime.num_goroutines': 19.000
[2014-01-29 10:56:50 -0800 PST][G] 'consul-agent.runtime.alloc_bytes': 755960.000
[2014-01-29 10:56:50 -0800 PST][G] 'consul-agent.runtime.malloc_count': 7550.000
[2014-01-29 10:56:50 -0800 PST][G] 'consul-agent.runtime.free_count': 4387.000
[2014-01-29 10:56:50 -0800 PST][G] 'consul-agent.runtime.heap_objects': 3163.000
[2014-01-29 10:56:50 -0800 PST][G] 'consul-agent.runtime.total_gc_pause_ns': 1151002.000
[2014-01-29 10:56:50 -0800 PST][G] 'consul-agent.runtime.total_gc_runs': 4.000
[2014-01-29 10:56:50 -0800 PST][C] 'consul-agent.agent.ipc.accept': Count: 5 Sum: 5.000
[2014-01-29 10:56:50 -0800 PST][C] 'consul-agent.agent.ipc.command': Count: 10 Sum: 10.000
[2014-01-29 10:56:50 -0800 PST][C] 'consul-agent.serf.events': Count: 5 Sum: 5.000
[2014-01-29 10:56:50 -0800 PST][C] 'consul-agent.serf.events.foo': Count: 4 Sum: 4.000
[2014-01-29 10:56:50 -0800 PST][C] 'consul-agent.serf.events.baz': Count: 1 Sum: 1.000
[2014-01-29 10:56:50 -0800 PST][S] 'consul-agent.memberlist.gossip': Count: 50 Min: 0.007 Mean: 0.020 Max: 0.041 Stddev: 0.007 Sum: 0.989
[2014-01-29 10:56:50 -0800 PST][S] 'consul-agent.serf.queue.Intent': Count: 10 Sum: 0.000
[2014-01-29 10:56:50 -0800 PST][S] 'consul-agent.serf.queue.Event': Count: 10 Min: 0.000 Mean: 2.500 Max: 5.000 Stddev: 2.121 Sum: 25.000

»Key Metrics

These are some emitted metrics that can help you understand the health of your cluster at a glance. For a full list of metrics emitted by Consul, see the Metrics Reference below.

»Transaction timing

| Metric Name | Description | Unit | Type |
| ----------- | ----------- | ---- | ---- |
| consul.kvs.apply | Measures the time it takes to complete an update to the KV store. | ms | timer |
| consul.txn.apply | Measures the time spent applying a transaction operation. | ms | timer |
| consul.raft.apply | Counts the number of Raft transactions applied during the interval. This metric is only reported on the leader. | raft transactions / interval | counter |
| consul.raft.commitTime | Measures the time it takes to commit a new entry to the Raft log on the leader. | ms | timer |

Why they're important: Taken together, these metrics indicate how long it takes to complete write operations in various parts of the Consul cluster. Generally these should all be fairly consistent and no more than a few milliseconds. Sudden changes in any of the timing values could be due to unexpected load on the Consul servers, or due to problems on the servers themselves.

What to look for: Deviations (in any of these metrics) of more than 50% from baseline over the previous hour.
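One way to encode that rule, assuming you already track a rolling one-hour baseline for each timer (the function name and 50% default are illustrative):

```python
def deviates_from_baseline(current_ms: float, baseline_ms: float,
                           threshold: float = 0.5) -> bool:
    """Return True when a timing value deviates more than `threshold`
    (50% by default) from its baseline over the previous hour."""
    if baseline_ms == 0:
        # No meaningful baseline: flag any nonzero reading.
        return current_ms != 0
    return abs(current_ms - baseline_ms) / baseline_ms > threshold
```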

»Leadership changes

| Metric Name | Description | Unit | Type |
| ----------- | ----------- | ---- | ---- |
| consul.raft.leader.lastContact | Measures the time since the leader was last able to contact the follower nodes when checking its leader lease. | ms | timer |
| consul.raft.state.candidate | Increments whenever a Consul server starts an election. | elections | counter |
| consul.raft.state.leader | Increments whenever a Consul server becomes a leader. | leaders | counter |

Why they're important: Normally, your Consul cluster should have a stable leader. If there are frequent elections or leadership changes, it would likely indicate network issues between the Consul servers, or that the Consul servers themselves are unable to keep up with the load.

What to look for: For a healthy cluster, you're looking for a lastContact lower than 200ms, leader > 0 and candidate == 0. Deviations from this might indicate flapping leadership.
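This rule of thumb can be sketched as a single check (a simplified illustration over one sampling interval; names are hypothetical):

```python
def raft_leadership_healthy(last_contact_ms: float,
                            leader_count: int,
                            candidate_count: int) -> bool:
    """Apply the healthy-cluster heuristic from the text:
    lastContact below 200ms, leader > 0, and candidate == 0."""
    return (last_contact_ms < 200
            and leader_count > 0
            and candidate_count == 0)
```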

»Autopilot

| Metric Name | Description | Unit | Type |
| ----------- | ----------- | ---- | ---- |
| consul.autopilot.healthy | Tracks the overall health of the local server cluster. If all servers are considered healthy by Autopilot, this will be set to 1. If any are unhealthy, this will be 0. All non-leader servers will report NaN. | health state | gauge |

Why it's important: Autopilot can expose the overall health of your cluster with a simple boolean.

What to look for: Alert if healthy is 0. Some other indicators of an unhealthy cluster would be:

  • consul.raft.commitTime - This can help reflect the speed of state store changes being performed by the agent. If this number is rising, the server may be experiencing an issue due to degraded resources on the host.
  • Leadership change metrics - Check for deviation from the recommended values. This can indicate failed leadership elections or flapping nodes.

»Memory usage

| Metric Name | Description | Unit | Type |
| ----------- | ----------- | ---- | ---- |
| consul.runtime.alloc_bytes | Measures the number of bytes allocated by the Consul process. | bytes | gauge |
| consul.runtime.sys_bytes | Measures the total number of bytes of memory obtained from the OS. | bytes | gauge |

Why they're important: Consul keeps all of its data in memory. If Consul consumes all available memory, it will crash.

What to look for: If consul.runtime.sys_bytes exceeds 90% of total available system memory.
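A direct encoding of that threshold (the 90% figure comes from the text; how you obtain total system memory is environment-specific and left as a parameter):

```python
def memory_alert(sys_bytes: int, total_system_bytes: int,
                 threshold: float = 0.9) -> bool:
    """True when consul.runtime.sys_bytes exceeds `threshold` (90%)
    of total available system memory."""
    return sys_bytes > threshold * total_system_bytes
```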

NOTE: This metric is calculated using Go's runtime package MemStats. This will have a different output than using information gathered from top. For more information, see GH-4734.

»Garbage collection

| Metric Name | Description | Unit | Type |
| ----------- | ----------- | ---- | ---- |
| consul.runtime.total_gc_pause_ns | Number of nanoseconds consumed by stop-the-world garbage collection (GC) pauses since Consul started. | ns | gauge |

Why it's important: GC pause is a "stop-the-world" event, meaning that all runtime threads are blocked until GC completes. Normally these pauses last only a few nanoseconds. But if memory usage is high, the Go runtime may GC so frequently that it starts to slow down Consul.

What to look for: Warning if total_gc_pause_ns exceeds 2 seconds/minute, critical if it exceeds 5 seconds/minute.

NOTE: total_gc_pause_ns is a cumulative counter, so in order to calculate rates (such as GC/minute), you will need to apply a function such as InfluxDB's non_negative_difference().
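The non-negative-difference logic can be sketched in a few lines: take two samples of the cumulative counter, clamp negative deltas (which occur on agent restarts) to zero, and normalize to seconds per minute before applying the thresholds above. Names and the sampling scheme are illustrative:

```python
def gc_pause_seconds_per_minute(prev_ns: int, curr_ns: int,
                                interval_s: float) -> float:
    """Convert two samples of the cumulative total_gc_pause_ns counter,
    taken `interval_s` seconds apart, into seconds of GC pause per minute."""
    delta_ns = max(curr_ns - prev_ns, 0)   # non-negative difference
    return (delta_ns / 1e9) * (60.0 / interval_s)

def gc_pause_status(pause_s_per_min: float) -> str:
    """Apply the warning (2 s/min) and critical (5 s/min) thresholds."""
    if pause_s_per_min > 5:
        return "critical"
    if pause_s_per_min > 2:
        return "warning"
    return "ok"
```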

»Network activity - RPC Count

| Metric Name | Description | Unit | Type |
| ----------- | ----------- | ---- | ---- |
| consul.client.rpc | Increments whenever a Consul agent in client mode makes an RPC request to a Consul server. | requests | counter |
| consul.client.rpc.exceeded | Increments whenever a Consul agent in client mode makes an RPC request to a Consul server that gets rate limited by that agent's limits configuration. | requests | counter |
| consul.client.rpc.failed | Increments whenever a Consul agent in client mode makes an RPC request to a Consul server and fails. | requests | counter |

Why they're important: These measurements indicate the current load created by a Consul agent, including when the load becomes high enough to be rate limited. A high RPC count, especially from consul.client.rpc.exceeded (meaning the requests are being rate-limited), could imply a misconfigured Consul agent.

What to look for: Sudden large changes to the consul.client.rpc metrics (greater than 50% deviation from baseline), or a consul.client.rpc.exceeded or consul.client.rpc.failed count > 0, as either implies that an agent is being rate-limited or failing to make RPC requests to a Consul server.
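Both conditions can be combined into one alert predicate (a sketch; the 50% threshold is the one suggested above, and how you compute the baseline rate is up to your metrics pipeline):

```python
def rpc_alert(rpc_rate: float, baseline_rate: float,
              exceeded: int, failed: int) -> bool:
    """Alert on a >50% deviation of consul.client.rpc from baseline,
    or on any rate-limited (exceeded) or failed client RPCs."""
    deviated = (baseline_rate > 0
                and abs(rpc_rate - baseline_rate) / baseline_rate > 0.5)
    return deviated or exceeded > 0 or failed > 0
```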

»Raft Replication Capacity Issues

| Metric Name | Description | Unit | Type |
| ----------- | ----------- | ---- | ---- |
| consul.raft.fsm.lastRestoreDuration | Measures the time taken to restore the FSM from a snapshot on an agent restart or from the leader calling installSnapshot. This is a gauge that holds its value, since most servers only restore during restarts, which are typically infrequent. | ms | gauge |
| consul.raft.leader.oldestLogAge | The number of milliseconds since the oldest log in the leader's log store was written. This can be important for replication health where write rate is high and the snapshot is large, as followers may be unable to recover from a restart if restoring takes longer than the minimum value for the current leader. Compare this with consul.raft.fsm.lastRestoreDuration and consul.raft.rpc.installSnapshot to monitor. In normal usage this gauge value will grow linearly over time until a snapshot completes on the leader and the log is truncated. | ms | gauge |
| consul.raft.rpc.installSnapshot | Measures the time taken to process the installSnapshot RPC call. This metric should only be seen on agents which are currently in the follower state. | ms | timer |

Why they're important: These metrics allow operators to monitor the health and capacity of raft replication on servers. When Consul is handling large amounts of data and high write throughput it is possible for the cluster to get into the following state:

  • Write throughput is high (say 500 commits per second or more) and constant
  • The leader is writing out a large snapshot every minute or so
  • The snapshot is large enough that it takes considerable time to restore from disk on a restart or from the leader if a follower gets behind
  • Disk IO available allows the leader to write a snapshot faster than it can be restored from disk on a follower

Under these conditions, a follower after a restart may be unable to catch up on replication and become a voter again since it takes longer to restore from disk or the leader than the leader takes to write a new snapshot and truncate its logs. Servers retain raft_trailing_logs (default 10240) log entries even if their snapshot was more recent. On a leader processing 500 commits/second, that is only about 20 seconds worth of logs. Assuming the leader is able to write out a snapshot and truncate the logs in less than 20 seconds, there will only be 20 seconds worth of "recent" logs available on the leader right after the leader has taken a snapshot and never more than about 80 seconds worth assuming it is taking a snapshot and truncating logs every 60 seconds.
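The arithmetic above can be reproduced directly (numbers from the example in the text: 10240 trailing logs, 500 commits/second, a snapshot roughly every 60 seconds):

```python
def log_buffer_seconds(trailing_logs: int = 10240,
                       commits_per_sec: float = 500.0) -> float:
    """Seconds of writes covered by the retained Raft logs right after
    the leader truncates: raft_trailing_logs / write rate."""
    return trailing_logs / commits_per_sec

def catch_up_window_seconds(trailing_logs: int = 10240,
                            commits_per_sec: float = 500.0,
                            snapshot_interval_s: float = 60.0) -> float:
    """Worst-case window a follower has to restore and catch up: the
    retained log buffer plus the time until the next leader snapshot
    truncates the logs again."""
    return trailing_logs / commits_per_sec + snapshot_interval_s
```

With the defaults this yields about 20 seconds of retained logs and a worst-case window of about 80 seconds, matching the figures in the paragraph above.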

In this state, followers must be able to restore a snapshot into memory and resume replication in under 80 seconds; otherwise they will never be able to rejoin the cluster until write rates reduce. If they take more than 20 seconds, there is a chance that they are unlucky with timing when they restart and have to download a snapshot again from the servers one or more times. If they take 50 seconds or more, they will likely fail to catch up more often than they succeed, and will remain non-voters for some time until they happen to complete the restore just before the leader truncates its logs.

In the worst case, the follower will be left continually downloading snapshots from the leader which are always too old to use by the time they are restored. This can put additional strain on the leader transferring large snapshots repeatedly as well as reduce the fault tolerance and serving capacity of the cluster.

Since Consul 1.5.3 raft_trailing_logs has been configurable. Increasing it allows the leader to retain more logs and gives followers more time to restore and catch up. The tradeoff is potentially slower appends, which might eventually affect write throughput and latency negatively, so setting it arbitrarily high is not recommended. Before Consul 1.10.0, changing this configuration required a rolling restart, and since no follower could restart without losing health, this could mean losing cluster availability and needing to recover the cluster from a loss of quorum.

Since Consul 1.10.0 raft_trailing_logs is reloadable with consul reload or SIGHUP, allowing operators to increase it without the leader restarting or losing leadership, so the cluster can be recovered gracefully.

Monitoring these metrics can help avoid or diagnose this state.

What to look for:

consul.raft.leader.oldestLogAge should look like a saw-tooth wave increasing linearly with time until the leader takes a snapshot and then jumping down as the oldest logs are truncated. The lowest point on that line should remain comfortably higher (i.e. 2x or more) than the time it takes to restore a snapshot.
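That headroom rule can be written down as a simple predicate (a sketch; the 2x factor is the one suggested above):

```python
def restore_headroom_ok(oldest_log_age_min_ms: float,
                        last_restore_ms: float,
                        factor: float = 2.0) -> bool:
    """True when the lowest point of the oldestLogAge saw-tooth stays at
    least `factor` times above the last observed snapshot restore time."""
    return oldest_log_age_min_ms >= factor * last_restore_ms
```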

There are two ways a snapshot can be restored on a follower: from disk on startup or from the leader during an installSnapshot RPC. The leader only sends an installSnapshot RPC if the follower is new and has no state, or if its state is too old for it to catch up with the leader's logs.

consul.raft.fsm.lastRestoreDuration shows the time it took to restore from either source the last time it happened. Most of the time this is when the server was started. It's a gauge that will always show the last restore duration (in Consul 1.10.0 and later) however long ago that was.

consul.raft.rpc.installSnapshot is the timing information from the leader's perspective when it installs a new snapshot on a follower. It includes the time spent transferring the data as well as the follower restoring it. Since these events are typically infrequent, you may need to graph the last value observed, for example using max_over_time with a large range in Prometheus. While the restore part will also be reflected in lastRestoreDuration, it can be useful to observe this too since the logs need to be able to cover this entire operation including the snapshot delivery to ensure followers can always catch up safely.

Graphing consul.raft.leader.oldestLogAge on the same axes as the other two metrics here can help see at a glance if restore times are creeping dangerously close to the limit of what the leader is retaining at the current write rate.

Note that if servers don't restart often, then the snapshot could have grown significantly since the last restore happened so last restore times might not reflect what would happen if an agent restarts now.

»License Expiration
Enterprise

| Metric Name | Description | Unit | Type |
| ----------- | ----------- | ---- | ---- |
| consul.system.licenseExpiration | Number of hours until the Consul Enterprise license will expire. | hours | gauge |

Why it's important:

This measurement indicates how many hours are left before the Consul Enterprise license expires. When the license expires some Consul Enterprise features will cease to work. An example of this is that after expiration, it is no longer possible to create or modify resources in non-default namespaces or to manage namespace definitions themselves even though reads of namespaced resources will still work.

What to look for:

This metric should be monitored to ensure that the license doesn't expire to prevent degradation of functionality.
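A trivial alert on this gauge might look like the following (the 30-day warning threshold is an assumption; tune it to your renewal process):

```python
def license_expiring_soon(hours_remaining: float,
                          warn_hours: float = 30 * 24) -> bool:
    """True when consul.system.licenseExpiration drops below the warning
    threshold (30 days by default, a hypothetical choice)."""
    return hours_remaining < warn_hours
```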

»Metrics Reference

This is a full list of metrics emitted by Consul.

| Metric | Description | Unit | Type |
| ------ | ----------- | ---- | ---- |
| consul.acl.blocked.{check,service}.deregistration | Increments whenever deregistration of an entity (check or service) is blocked by an ACL. | requests | counter |
| consul.acl.blocked.{check,node,service}.registration | Increments whenever registration of an entity (check, node or service) is blocked by an ACL. | requests | counter |
| consul.api.http | Migrated from the deprecated consul.http metric; samples how long it takes to service the given HTTP request for the given verb and path. Includes labels for path and method. path does not include details like service or key names; for these an underscore will be present as a placeholder (e.g. path=v1.kv._). | ms | timer |
| consul.client.rpc | Increments whenever a Consul agent in client mode makes an RPC request to a Consul server. This gives a measure of how much a given agent is loading the Consul servers. Currently, this is only generated by agents in client mode, not Consul servers. | requests | counter |
| consul.client.rpc.exceeded | Increments whenever a Consul agent in client mode makes an RPC request to a Consul server that gets rate limited by that agent's limits configuration. This gives an indication that there's an abusive application making too many requests on the agent, or that the rate limit needs to be increased. Currently, this only applies to agents in client mode, not Consul servers. | rejected requests | counter |
| consul.client.rpc.failed | Increments whenever a Consul agent in client mode makes an RPC request to a Consul server and fails. | requests | counter |
| consul.client.api.catalog_register. | Increments whenever a Consul agent receives a catalog register request. | requests | counter |
| consul.client.api.success.catalog_register. | Increments whenever a Consul agent successfully responds to a catalog register request. | requests | counter |
| consul.client.rpc.error.catalog_register. | Increments whenever a Consul agent receives an RPC error for a catalog register request. | errors | counter |
| consul.client.api.catalog_deregister. | Increments whenever a Consul agent receives a catalog deregister request. | requests | counter |
| consul.client.api.success.catalog_deregister. | Increments whenever a Consul agent successfully responds to a catalog deregister request. | requests | counter |
| consul.client.rpc.error.catalog_deregister. | Increments whenever a Consul agent receives an RPC error for a catalog deregister request. | errors | counter |
| consul.client.api.catalog_datacenters. | Increments whenever a Consul agent receives a request to list datacenters in the catalog. | requests | counter |
| consul.client.api.success.catalog_datacenters. | Increments whenever a Consul agent successfully responds to a request to list datacenters. | requests | counter |
| consul.client.rpc.error.catalog_datacenters. | Increments whenever a Consul agent receives an RPC error for a request to list datacenters. | errors | counter |
| consul.client.api.catalog_nodes. | Increments whenever a Consul agent receives a request to list nodes from the catalog. | requests | counter |
| consul.client.api.success.catalog_nodes. | Increments whenever a Consul agent successfully responds to a request to list nodes. | requests | counter |
| consul.client.rpc.error.catalog_nodes. | Increments whenever a Consul agent receives an RPC error for a request to list nodes. | errors | counter |
| consul.client.api.catalog_services. | Increments whenever a Consul agent receives a request to list services from the catalog. | requests | counter |
| consul.client.api.success.catalog_services. | Increments whenever a Consul agent successfully responds to a request to list services. | requests | counter |
| consul.client.rpc.error.catalog_services. | Increments whenever a Consul agent receives an RPC error for a request to list services. | errors | counter |
| consul.client.api.catalog_service_nodes. | Increments whenever a Consul agent receives a request to list nodes offering a service. | requests | counter |
| consul.client.api.success.catalog_service_nodes. | Increments whenever a Consul agent successfully responds to a request to list nodes offering a service. | requests | counter |
| consul.client.api.error.catalog_service_nodes. | Increments whenever a Consul agent receives an RPC error for a request to list nodes offering a service. | requests | counter |
| consul.client.rpc.error.catalog_service_nodes. | Increments whenever a Consul agent receives an RPC error for a request to list nodes offering a service. | errors | counter |
| consul.client.api.catalog_node_services. | Increments whenever a Consul agent receives a request to list services registered in a node. | requests | counter |
| consul.client.api.success.catalog_node_services. | Increments whenever a Consul agent successfully responds to a request to list services in a node. | requests | counter |
| consul.client.rpc.error.catalog_node_services. | Increments whenever a Consul agent receives an RPC error for a request to list services in a node. | errors | counter |
| consul.client.api.catalog_node_service_list | Increments whenever a Consul agent receives a request to list a node's registered services. | requests | counter |
| consul.client.rpc.error.catalog_node_service_list | Increments whenever a Consul agent receives an RPC error for a request to list a node's registered services. | errors | counter |
| consul.client.api.success.catalog_node_service_list | Increments whenever a Consul agent successfully responds to a request to list a node's registered services. | requests | counter |
| consul.client.api.catalog_gateway_services. | Increments whenever a Consul agent receives a request to list services associated with a gateway. | requests | counter |
| consul.client.api.success.catalog_gateway_services. | Increments whenever a Consul agent successfully responds to a request to list services associated with a gateway. | requests | counter |
| consul.client.rpc.error.catalog_gateway_services. | Increments whenever a Consul agent receives an RPC error for a request to list services associated with a gateway. | errors | counter |
| consul.runtime.num_goroutines | Tracks the number of running goroutines and is a general load pressure indicator. This may burst from time to time but should return to a steady state value. | number of goroutines | gauge |
| consul.runtime.alloc_bytes | Measures the number of bytes allocated by the Consul process. This may burst from time to time but should return to a steady state value. | bytes | gauge |
| consul.runtime.heap_objects | Measures the number of objects allocated on the heap and is a general memory pressure indicator. This may burst from time to time but should return to a steady state value. | number of objects | gauge |
| consul.state.nodes | Measures the current number of nodes registered with Consul. It is only emitted by Consul servers. Added in v1.9.0. | number of objects | gauge |
| consul.state.services | Measures the current number of unique services registered with Consul, based on service name. It is only emitted by Consul servers. Added in v1.9.0. | number of objects | gauge |
| consul.state.service_instances | Measures the current number of unique service instances registered with Consul. It is only emitted by Consul servers. Added in v1.9.0. | number of objects | gauge |
| consul.members.clients | Measures the current number of client agents registered with Consul. It is only emitted by Consul servers. Added in v1.9.6. | number of clients | gauge |
| consul.members.servers | Measures the current number of server agents registered with Consul. It is only emitted by Consul servers. Added in v1.9.6. | number of servers | gauge |
| consul.dns.stale_queries | Increments when an agent serves a query within the allowed stale threshold. | queries | counter |
| consul.dns.ptr_query. | Measures the time spent handling a reverse DNS query for the given node. | ms | timer |
| consul.dns.domain_query. | Measures the time spent handling a domain query for the given node. | ms | timer |
| consul.http.&lt;verb&gt;.&lt;path&gt; | DEPRECATED IN 1.9: Tracks how long it takes to service the given HTTP request for the given verb and path. Paths do not include details like service or key names; for these an underscore will be present as a placeholder (e.g. consul.http.GET.v1.kv._). | ms | timer |
| consul.system.licenseExpiration (Enterprise) | Measures the number of hours remaining on the agent's license. | hours | gauge |
| consul.version | Measures the count of running agents. | agents | gauge |

»Server Health

These metrics are used to monitor the health of the Consul servers.

MetricDescriptionUnitType
consul.acl.applyMeasures the time it takes to complete an update to the ACL store.mstimer
consul.acl.resolveTokenLegacyMeasures the time it takes to resolve an ACL token using the legacy ACL system.mstimer
consul.acl.ResolveTokenMeasures the time it takes to resolve an ACL token.mstimer
consul.acl.ResolveTokenToIdentityMeasures the time it takes to resolve an ACL token to an Identity.mstimer
consul.acl.token.cache_hitIncrements if Consul is able to resolve a token's identity, or a legacy token, from the cache.cache read opcounter
consul.acl.token.cache_missIncrements if Consul cannot resolve a token's identity, or a legacy token, from the cache.cache read opcounter
consul.cache.bypassCounts how many times a request bypassed the cache because no cache-key was provided.countercounter
consul.cache.fetch_successCounts the number of successful fetches by the cache.countercounter
consul.cache.fetch_errorCounts the number of failed fetches by the cache.countercounter
consul.cache.evict_expiredCounts the number of expired entries that are evicted.countercounter
consul.raft.applied_indexRepresents the raft applied index.indexgauge
consul.raft.applyCounts the number of Raft transactions occurring over the interval, which is a general indicator of the write load on the Consul servers.raft transactions / intervalcounter
consul.raft.barrierCounts the number of times the agent has started the barrier i.e the number of times it has issued a blocking call, to ensure that the agent has all the pending operations that were queued, to be applied to the agent's FSM.blocks / intervalcounter
consul.raft.commitNumLogsMeasures the count of logs processed for application to the FSM in a single batch.logsgauge
consul.raft.commitTimeMeasures the time it takes to commit a new entry to the Raft log on the leader.mstimer
consul.raft.fsm.lastRestoreDurationMeasures the time taken to restore the FSM from a snapshot on an agent restart or from the leader calling installSnapshot. This is a gauge that holds it's value since most servers only restore during restarts which are typically infrequent.msgauge
consul.raft.fsm.snapshotMeasures the time taken by the FSM to record the current state for the snapshot.mstimer
consul.raft.fsm.applyThe number of logs committed since the last interval.commit logs / intervalcounter
consul.raft.fsm.enqueueMeasures the amount of time to enqueue a batch of logs for the FSM to apply.mstimer
consul.raft.fsm.restoreMeasures the time taken by the FSM to restore its state from a snapshot.mstimer
consul.raft.last_indexRepresents the raft applied index.indexgauge
consul.raft.leader.dispatchLogMeasures the time it takes for the leader to write log entries to disk.mstimer
consul.raft.leader.dispatchNumLogsMeasures the number of logs committed to disk in a batch.logsgauge
consul.raft.leader.lastContactMeasures the time since the leader was last able to contact the follower nodes when checking its leader lease. It can be used as a measure for how stable the Raft timing is and how close the leader is to timing out its lease.The lease timeout is 500 ms times the raft_multiplier configuration, so this telemetry value should not be getting close to that configured value, otherwise the Raft timing is marginal and might need to be tuned, or more powerful servers might be needed. See the Server Performance guide for more details.mstimer
consul.raft.leader.oldestLogAgeThe number of milliseconds since the oldest log in the leader's log store was written. This can be important for replication health where write rate is high and the snapshot is large as followers may be unable to recover from a restart if restoring takes longer than the minimum value for the current leader. Compare this with consul.raft.fsm.lastRestoreDuration and consul.raft.rpc.installSnapshot to monitor. In normal usage this gauge value will grow linearly over time until a snapshot completes on the leader and the log is truncated. Note: this metric won't be emitted until the leader writes a snapshot. After an upgrade to Consul 1.10.0 it won't be emitted until the oldest log was written after the upgrade.msgauge
consul.raft.replication.heartbeatMeasures the time taken to invoke appendEntries on a peer, so that it doesn’t timeout on a periodic basis.mstimer
consul.raft.replication.appendEntriesMeasures the time it takes to replicate log entries to followers. This is a general indicator of the load pressure on the Consul servers, as well as the performance of the communication between the servers.mstimer
consul.raft.replication.appendEntries.rpcMeasures the time taken by the append entries RFC, to replicate the log entries of a leader agent onto its follower agent(s)mstimer
consul.raft.replication.appendEntries.logsMeasures the number of logs replicated to an agent, to bring it up to speed with the leader's logs.logs appended/ intervalcounter
consul.raft.restoreCounts the number of times the restore operation has been performed by the agent. Here, restore refers to the action of raft consuming an external snapshot to restore its state.operation invoked / intervalcounter
consul.raft.restoreUserSnapshotMeasures the time taken by the agent to restore the FSM state from a user's snapshotmstimer
consul.raft.rpc.appendEntriesMeasures the time taken to process an append entries RPC call from an agent.mstimer
consul.raft.rpc.appendEntries.storeLogsMeasures the time taken to add any outstanding logs for an agent, since the last appendEntries was invokedmstimer
consul.raft.rpc.appendEntries.processLogsMeasures the time taken to process the outstanding log entries of an agent.mstimer
consul.raft.rpc.installSnapshotMeasures the time taken to process the installSnapshot RPC call. This metric should only be seen on agents which are currently in the follower state.mstimer
consul.raft.rpc.processHeartBeatMeasures the time taken to process a heartbeat request.mstimer
consul.raft.rpc.requestVoteMeasures the time taken to process the request vote RPC call.mstimer
consul.raft.snapshot.createMeasures the time taken to initialize the snapshot process.mstimer
consul.raft.snapshot.persistMeasures the time taken to dump the current snapshot taken by the Consul agent to the disk.mstimer
consul.raft.snapshot.takeSnapshotMeasures the total time involved in taking the current snapshot (creating one and persisting it) by the Consul agent.mstimer
consul.serf.snapshot.appendLineMeasures the time taken by the Consul agent to append an entry into the existing log.mstimer
consul.serf.snapshot.compactMeasures the time taken by the Consul agent to compact a log. This operation occurs only when the snapshot becomes large enough to justify the compaction .mstimer
| consul.raft.state.candidate | Increments whenever a Consul server starts an election. If this increments without a leadership change occurring, it could indicate that a single server is overloaded or is experiencing network connectivity issues. | election attempts / interval | counter |
| consul.raft.state.leader | Increments whenever a Consul server becomes a leader. Frequent leadership changes may indicate that the servers are overloaded and aren't meeting the soft real-time requirements for Raft, or that there are networking problems between the servers. | leadership transitions / interval | counter |
| consul.raft.state.follower | Counts the number of times an agent has entered the follower mode. This happens when a new agent joins the cluster or after the end of a leader election. | follower state entered / interval | counter |
| consul.raft.transition.heartbeat_timeout | The number of times an agent has transitioned to the candidate state after receiving no heartbeat messages from the last known leader. | timeouts / interval | counter |
| consul.raft.verify_leader | Counts the number of times an agent checks whether it is still the leader. | checks / interval | counter |
| consul.rpc.accept_conn | Increments when a server accepts an RPC connection. | connections | counter |
| consul.catalog.register | Measures the time it takes to complete a catalog register operation. | ms | timer |
| consul.catalog.deregister | Measures the time it takes to complete a catalog deregister operation. | ms | timer |
| consul.fsm.register | Measures the time it takes to apply a catalog register operation to the FSM. | ms | timer |
| consul.fsm.deregister | Measures the time it takes to apply a catalog deregister operation to the FSM. | ms | timer |
| consul.fsm.acl. | Measures the time it takes to apply the given ACL operation to the FSM. | ms | timer |
| consul.fsm.session. | Measures the time it takes to apply the given session operation to the FSM. | ms | timer |
| consul.fsm.kvs. | Measures the time it takes to apply the given KV operation to the FSM. | ms | timer |
| consul.fsm.tombstone. | Measures the time it takes to apply the given tombstone operation to the FSM. | ms | timer |
| consul.fsm.coordinate.batch-update | Measures the time it takes to apply the given batch coordinate update to the FSM. | ms | timer |
| consul.fsm.prepared-query. | Measures the time it takes to apply the given prepared query update operation to the FSM. | ms | timer |
| consul.fsm.txn | Measures the time it takes to apply the given transaction update to the FSM. | ms | timer |
| consul.fsm.autopilot | Measures the time it takes to apply the given autopilot update to the FSM. | ms | timer |
| consul.fsm.persist | Measures the time it takes to persist the FSM to a raft snapshot. | ms | timer |
| consul.fsm.intention | Measures the time it takes to apply an intention operation to the state store. | ms | timer |
| consul.fsm.ca | Measures the time it takes to apply CA configuration operations to the FSM. | ms | timer |
| consul.fsm.ca.leaf | Measures the time it takes to apply an operation while signing a leaf certificate. | ms | timer |
| consul.fsm.acl.token | Measures the time it takes to apply an ACL token operation to the FSM. | ms | timer |
| consul.fsm.acl.policy | Measures the time it takes to apply an ACL policy operation to the FSM. | ms | timer |
| consul.fsm.acl.bindingrule | Measures the time it takes to apply an ACL binding rule operation to the FSM. | ms | timer |
| consul.fsm.acl.authmethod | Measures the time it takes to apply an ACL authmethod operation to the FSM. | ms | timer |
| consul.fsm.system_metadata | Measures the time it takes to apply a system metadata operation to the FSM. | ms | timer |
| consul.kvs.apply | Measures the time it takes to complete an update to the KV store. | ms | timer |
| consul.leader.barrier | Measures the time spent waiting for the raft barrier upon gaining leadership. | ms | timer |
| consul.leader.reconcile | Measures the time spent updating the raft store from the serf member information. | ms | timer |
| consul.leader.reconcileMember | Measures the time spent updating the raft store for a single serf member's information. | ms | timer |
| consul.leader.reapTombstones | Measures the time spent clearing tombstones. | ms | timer |
| consul.leader.replication.acl-policies.status | This will only be emitted by the leader in a secondary datacenter. The value will be 1 if the last round of ACL policy replication was successful or 0 if there was an error. | healthy | gauge |
| consul.leader.replication.acl-policies.index | This will only be emitted by the leader in a secondary datacenter. Increments to the index of ACL policies in the primary datacenter that have been successfully replicated. | index | gauge |
| consul.leader.replication.acl-roles.status | This will only be emitted by the leader in a secondary datacenter. The value will be 1 if the last round of ACL role replication was successful or 0 if there was an error. | healthy | gauge |
| consul.leader.replication.acl-roles.index | This will only be emitted by the leader in a secondary datacenter. Increments to the index of ACL roles in the primary datacenter that have been successfully replicated. | index | gauge |
| consul.leader.replication.acl-tokens.status | This will only be emitted by the leader in a secondary datacenter. The value will be 1 if the last round of ACL token replication was successful or 0 if there was an error. | healthy | gauge |
| consul.leader.replication.acl-tokens.index | This will only be emitted by the leader in a secondary datacenter. Increments to the index of ACL tokens in the primary datacenter that have been successfully replicated. | index | gauge |
| consul.leader.replication.config-entries.status | This will only be emitted by the leader in a secondary datacenter. The value will be 1 if the last round of config entry replication was successful or 0 if there was an error. | healthy | gauge |
| consul.leader.replication.config-entries.index | This will only be emitted by the leader in a secondary datacenter. Increments to the index of config entries in the primary datacenter that have been successfully replicated. | index | gauge |
| consul.leader.replication.federation-state.status | This will only be emitted by the leader in a secondary datacenter. The value will be 1 if the last round of federation state replication was successful or 0 if there was an error. | healthy | gauge |
| consul.leader.replication.federation-state.index | This will only be emitted by the leader in a secondary datacenter. Increments to the index of federation states in the primary datacenter that have been successfully replicated. | index | gauge |
| consul.leader.replication.namespaces.status (Enterprise) | This will only be emitted by the leader in a secondary datacenter. The value will be 1 if the last round of namespace replication was successful or 0 if there was an error. | healthy | gauge |
| consul.leader.replication.namespaces.index (Enterprise) | This will only be emitted by the leader in a secondary datacenter. Increments to the index of namespaces in the primary datacenter that have been successfully replicated. | index | gauge |
| consul.prepared-query.apply | Measures the time it takes to apply a prepared query update. | ms | timer |
| consul.prepared-query.explain | Measures the time it takes to process a prepared query explain request. | ms | timer |
| consul.prepared-query.execute | Measures the time it takes to process a prepared query execute request. | ms | timer |
| consul.prepared-query.execute_remote | Measures the time it takes to process a prepared query execute request that was forwarded to another datacenter. | ms | timer |
| consul.rpc.raft_handoff | Increments when a server accepts a Raft-related RPC connection. | connections | counter |
| consul.rpc.request_error | Increments when a server returns an error from an RPC request. | errors | counter |
| consul.rpc.request | Increments when a server receives a Consul-related RPC request. | requests | counter |
| consul.rpc.query | Increments when a server receives a read RPC request, indicating the rate of new read queries. See consul.rpc.queries_blocking for the current number of in-flight blocking RPC calls. This metric changed in 1.7.0 to increment only at the start of a query. The rate of queries will appear lower, but is more accurate. | queries | counter |
| consul.rpc.queries_blocking | The current number of in-flight blocking queries the server is handling. | queries | gauge |
| consul.rpc.cross-dc | Increments when a server sends a (potentially blocking) cross datacenter RPC query. | queries | counter |
| consul.rpc.consistentRead | Measures the time spent confirming that a consistent read can be performed. | ms | timer |
| consul.session.apply | Measures the time spent applying a session update. | ms | timer |
| consul.session.renew | Measures the time spent renewing a session. | ms | timer |
| consul.session_ttl.invalidate | Measures the time spent invalidating an expired session. | ms | timer |
| consul.txn.apply | Measures the time spent applying a transaction operation. | ms | timer |
| consul.txn.read | Measures the time spent returning a read transaction. | ms | timer |
| consul.grpc.client.request.count | Counts the number of gRPC requests made by the client agent to a Consul server. | requests | counter |
| consul.grpc.client.connection.count | Counts the number of new gRPC connections opened by the client agent to a Consul server. | connections | counter |
| consul.grpc.client.connections | Measures the number of active gRPC connections open from the client agent to any Consul servers. | connections | gauge |
| consul.grpc.server.request.count | Counts the number of gRPC requests received by the server. | requests | counter |
| consul.grpc.server.connection.count | Counts the number of new gRPC connections received by the server. | connections | counter |
| consul.grpc.server.connections | Measures the number of active gRPC connections open on the server. | connections | gauge |
| consul.grpc.server.stream.count | Counts the number of new gRPC streams received by the server. | streams | counter |
| consul.grpc.server.streams | Measures the number of active gRPC streams handled by the server. | streams | gauge |
| consul.xds.server.streams | Measures the number of active xDS streams handled by the server, split by protocol version. | streams | gauge |
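The Raft counters above suggest a simple health heuristic: `consul.raft.state.candidate` rising without a matching rise in `consul.raft.state.leader` points at an overloaded server or network issues. A minimal sketch in Python, assuming hypothetical snapshot dicts mapping metric names to cumulative counter values (this is an illustration, not a Consul API):

```python
def election_without_leadership_change(prev, curr):
    """Flag the pattern described for consul.raft.state.candidate above:
    elections are being started (candidate counter rising) but no server
    became leader (leader counter flat). Per the table, this can indicate
    an overloaded server or network connectivity issues.

    `prev` and `curr` are hypothetical snapshots of cumulative counter
    values taken one interval apart, e.g.
    {"consul.raft.state.candidate": 3, "consul.raft.state.leader": 1}.
    """
    candidate_delta = (curr.get("consul.raft.state.candidate", 0)
                       - prev.get("consul.raft.state.candidate", 0))
    leader_delta = (curr.get("consul.raft.state.leader", 0)
                    - prev.get("consul.raft.state.leader", 0))
    return candidate_delta > 0 and leader_delta == 0
```

The same delta-between-snapshots approach works for any of the "X / interval" counters in these tables.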

»Cluster Health

These metrics give insight into the health of the cluster as a whole.

| Metric | Description | Unit | Type |
| ------ | ----------- | ---- | ---- |
| consul.memberlist.degraded.probe | Counts the number of times the agent has performed failure detection on another agent at a slower probe rate. The agent uses its own health metric as an indicator to perform this action. (A low health score means the node is healthy, and vice versa.) | probes / interval | counter |
| consul.memberlist.degraded.timeout | Counts the number of times an agent was marked as a dead node without getting enough confirmations from a randomly selected list of agent nodes in the agent's membership. | occurrences / interval | counter |
| consul.memberlist.msg.dead | Counts the number of times an agent has marked another agent as a dead node. | messages / interval | counter |
| consul.memberlist.health.score | Describes a node's perception of its own health based on how well it is meeting the soft real-time requirements of the protocol. This metric ranges from 0 to 8, where 0 indicates "totally healthy". The health score is used to scale the time between outgoing probes; higher scores translate into longer probing intervals. For more details, see section IV of the Lifeguard paper: https://arxiv.org/pdf/1707.00788.pdf | score | gauge |
| consul.memberlist.msg.suspect | Increments when an agent suspects another as failed while executing random probes as part of the gossip protocol. This can be an indicator of overloaded agents, network problems, or configuration errors where agents cannot connect to each other on the required ports. | suspect messages received / interval | counter |
| consul.memberlist.tcp.accept | Counts the number of times an agent has accepted an incoming TCP stream connection. | connections accepted / interval | counter |
| consul.memberlist.udp.sent/received | Measures the total number of bytes sent/received by an agent through the UDP protocol. | bytes sent or bytes received / interval | counter |
| consul.memberlist.tcp.connect | Counts the number of times an agent has initiated a push/pull sync with another agent. | push/pull initiated / interval | counter |
| consul.memberlist.tcp.sent | Measures the total number of bytes sent by an agent through the TCP protocol. | bytes sent / interval | counter |
| consul.memberlist.gossip | Measures the time taken for gossip messages to be broadcast to a set of randomly selected nodes. | ms | timer |
| consul.memberlist.msg_alive | Counts the number of alive messages that the agent has processed so far, based on the message information given by the network layer. | messages / interval | counter |
| consul.memberlist.msg_dead | The number of dead messages that the agent has processed so far, based on the message information given by the network layer. | messages / interval | counter |
| consul.memberlist.msg_suspect | The number of suspect messages that the agent has processed so far, based on the message information given by the network layer. | messages / interval | counter |
| consul.memberlist.probeNode | Measures the time taken to perform a single round of failure detection on a selected agent. | nodes / interval | counter |
| consul.memberlist.pushPullNode | Measures the number of agents that have exchanged state with this agent. | nodes / interval | counter |
| consul.serf.member.failed | Increments when an agent is marked dead. This can be an indicator of overloaded agents, network problems, or configuration errors where agents cannot connect to each other on the required ports. | failures / interval | counter |
| consul.serf.member.flap | Available in Consul 0.7 and later, this increments when an agent is marked dead and then recovers within a short time period. This can be an indicator of overloaded agents, network problems, or configuration errors where agents cannot connect to each other on the required ports. | flaps / interval | counter |
| consul.serf.member.join | Increments when an agent joins the cluster. If an agent flapped or failed, this counter also increments when it re-joins. | joins / interval | counter |
| consul.serf.member.left | Increments when an agent leaves the cluster. | leaves / interval | counter |
| consul.serf.events | Increments when an agent processes an event. Consul uses events internally, so there may be additional events showing in telemetry. There are also per-event counters emitted as consul.serf.events.. | events / interval | counter |
| consul.serf.msgs.sent | This metric samples the number of bytes of messages broadcast to the cluster. In a given time interval, the sum of this metric is the total number of bytes sent and the count is the number of messages sent. | message bytes / interval | counter |
| consul.autopilot.failure_tolerance | Tracks the number of voting servers that the cluster can lose while continuing to function. | servers | gauge |
| consul.autopilot.healthy | Tracks the overall health of the local server cluster. If all servers are considered healthy by Autopilot, this will be set to 1. If any are unhealthy, this will be 0. All non-leader servers will report NaN. | boolean | gauge |
| consul.session_ttl.active | Tracks the active number of sessions being tracked. | sessions | gauge |
| consul.catalog.service.query. | Increments for each catalog query for the given service. | queries | counter |
| consul.catalog.service.query-tag.. | Increments for each catalog query for the given service with the given tag. | queries | counter |
| consul.catalog.service.query-tags.. | Increments for each catalog query for the given service with the given tags. | queries | counter |
| consul.catalog.service.not-found. | Increments for each catalog query where the given service could not be found. | queries | counter |
| consul.catalog.connect.query. | Increments for each connect-based catalog query for the given service. | queries | counter |
| consul.catalog.connect.query-tag.. | Increments for each connect-based catalog query for the given service with the given tag. | queries | counter |
| consul.catalog.connect.query-tags.. | Increments for each connect-based catalog query for the given service with the given tags. | queries | counter |
| consul.catalog.connect.not-found. | Increments for each connect-based catalog query where the given service could not be found. | queries | counter |
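Two of the gauges above give a quick go/no-go signal for the cluster: `consul.autopilot.healthy` (0 means some server is unhealthy; emitted by the leader) and `consul.memberlist.health.score` (anything above 0 means the node is not "totally healthy"). A minimal sketch of scanning a metrics snapshot for these values; the `Gauges` list-of-dicts shape mirrors the JSON metrics endpoint, but treat the exact field names here as an assumption rather than a contract:

```python
def cluster_warnings(snapshot):
    """Scan a metrics snapshot for the unhealthy values described in the
    table above. `snapshot` is assumed (not guaranteed) to look like the
    agent metrics endpoint's JSON output:
    {"Gauges": [{"Name": "...", "Value": 1.0}, ...]}
    """
    warnings = []
    for gauge in snapshot.get("Gauges", []):
        name, value = gauge["Name"], gauge["Value"]
        if name == "consul.autopilot.healthy" and value == 0:
            warnings.append("autopilot reports an unhealthy server")
        if name == "consul.memberlist.health.score" and value > 0:
            warnings.append(
                f"memberlist health score is {value} (0 is healthy)")
    return warnings
```

Note that `consul.autopilot.healthy` is NaN on non-leader servers, so this check is only meaningful against the leader's telemetry.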

»Connect Built-in Proxy Metrics

Consul Connect's built-in proxy is by default configured to log metrics to the same sink as the agent that starts it.

When running in this mode it emits some basic metrics. These will be expanded upon in the future.

All metrics are prefixed with consul.proxy.<proxied-service-id> to distinguish between multiple proxies on a given host. The table below uses web as an example service name for brevity.

»Labels

Most of these metrics have a dst label, and some also have a src label. When using metrics sinks and timeseries stores that support labels or tags, these allow aggregating the connections by service name.

Assuming all services are using a managed built-in proxy, you can get a complete overview of both the number of open connections and the bytes sent and received between all services by aggregating over these metrics.

For example, by aggregating over all upstream (i.e. outbound) connections, which have both src and dst labels, you can get the sum of all the bandwidth in and out of a given service or the total number of connections between two services.
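That aggregation can be sketched as follows. The list-of-samples input, where each sample carries a metric name, its labels, and a value, is a hypothetical neutral representation of what a label-aware sink would store, not a Consul API:

```python
from collections import defaultdict

def bandwidth_by_service_pair(samples):
    """Sum upstream rx/tx byte counters by (src, dst) label pair, giving
    the total bytes exchanged between each pair of services.

    Each sample is a hypothetical dict such as:
      {"name": "consul.proxy.web.upstream.tx_bytes",
       "labels": {"src": "web", "dst": "db"}, "value": 1024}
    """
    totals = defaultdict(int)
    for s in samples:
        labels = s.get("labels", {})
        # Only upstream byte counters carry both src and dst labels.
        if "src" in labels and "dst" in labels and (
                s["name"].endswith(".rx_bytes")
                or s["name"].endswith(".tx_bytes")):
            totals[(labels["src"], labels["dst"])] += s["value"]
    return dict(totals)
```

In practice you would express the same sum as a query in your metrics store (e.g. grouping by the src and dst tags); the sketch just makes the grouping explicit.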

»Metrics Reference

The standard Go runtime metrics are exported by go-metrics, just as they are for the Consul agent. The table below describes the additional metrics exported by the proxy.

| Metric | Description | Unit | Type |
| ------ | ----------- | ---- | ---- |
| consul.proxy.web.runtime.* | The same Go runtime metrics as documented for the agent above. | mixed | mixed |
| consul.proxy.web.inbound.conns | Shows the current number of connections open from inbound requests to the proxy. Where supported, a dst label is added indicating the service name the proxy represents. | connections | gauge |
| consul.proxy.web.inbound.rx_bytes | Increments by the number of bytes received from an inbound client connection. Where supported, a dst label is added indicating the service name the proxy represents. | bytes | counter |
| consul.proxy.web.inbound.tx_bytes | Increments by the number of bytes transferred to an inbound client connection. Where supported, a dst label is added indicating the service name the proxy represents. | bytes | counter |
| consul.proxy.web.upstream.conns | Shows the current number of connections open from a proxy instance to an upstream. Where supported, a src label is added indicating the service name the proxy represents, and a dst label is added indicating the service name the upstream is connecting to. | connections | gauge |
| consul.proxy.web.upstream.rx_bytes | Increments by the number of bytes received from an upstream connection. Where supported, a src label is added indicating the service name the proxy represents, and a dst label is added indicating the service name the upstream is connecting to. | bytes | counter |
| consul.proxy.web.upstream.tx_bytes | Increments by the number of bytes transferred to an upstream connection. Where supported, a src label is added indicating the service name the proxy represents, and a dst label is added indicating the service name the upstream is connecting to. | bytes | counter |