» Telemetry

The Consul agent collects various runtime metrics about the performance of different libraries and subsystems. These metrics are aggregated on a ten second interval and are retained for one minute.

To view this data, you must send a signal to the Consul process: on Unix, this is USR1 while on Windows it is BREAK. Once Consul receives the signal, it will dump the current telemetry information to the agent's stderr.

This telemetry information can be used for debugging or otherwise getting a better view of what Consul is doing.

Additionally, if the telemetry configuration options are provided, the telemetry information will be streamed to a statsite or statsd server where it can be aggregated and flushed to Graphite or any other metrics store. This information can also be viewed with the metrics endpoint in JSON format or using Prometheus format.
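The JSON form of the metrics endpoint (/v1/agent/metrics) returns the same gauges, counters, and samples shown in the dump below. As a sketch only (it assumes an agent reachable at the default localhost:8500 address, and the payload here is abbreviated sample data rather than real agent output), you could fetch and pick out a gauge like this:

```python
import json
# For a live agent you would fetch the payload with, e.g.:
#   urllib.request.urlopen("http://localhost:8500/v1/agent/metrics").read()

# Abbreviated sample in the shape returned by /v1/agent/metrics;
# a real agent returns many more entries under each key.
SAMPLE = json.loads("""
{
  "Gauges": [
    {"Name": "consul.runtime.num_goroutines", "Value": 19, "Labels": {}},
    {"Name": "consul.runtime.alloc_bytes", "Value": 755960, "Labels": {}}
  ],
  "Counters": [],
  "Samples": []
}
""")

def gauge(payload, name):
    """Return the value of a named gauge, or None if it is absent."""
    for g in payload["Gauges"]:
        if g["Name"] == name:
            return g["Value"]
    return None

print(gauge(SAMPLE, "consul.runtime.num_goroutines"))  # 19
```

Appending ?format=prometheus to the endpoint instead returns the Prometheus text format, which most scrapers can consume directly.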

Below is sample output of a telemetry dump:

[2014-01-29 10:56:50 -0800 PST][G] 'consul-agent.runtime.num_goroutines': 19.000
[2014-01-29 10:56:50 -0800 PST][G] 'consul-agent.runtime.alloc_bytes': 755960.000
[2014-01-29 10:56:50 -0800 PST][G] 'consul-agent.runtime.malloc_count': 7550.000
[2014-01-29 10:56:50 -0800 PST][G] 'consul-agent.runtime.free_count': 4387.000
[2014-01-29 10:56:50 -0800 PST][G] 'consul-agent.runtime.heap_objects': 3163.000
[2014-01-29 10:56:50 -0800 PST][G] 'consul-agent.runtime.total_gc_pause_ns': 1151002.000
[2014-01-29 10:56:50 -0800 PST][G] 'consul-agent.runtime.total_gc_runs': 4.000
[2014-01-29 10:56:50 -0800 PST][C] 'consul-agent.agent.ipc.accept': Count: 5 Sum: 5.000
[2014-01-29 10:56:50 -0800 PST][C] 'consul-agent.agent.ipc.command': Count: 10 Sum: 10.000
[2014-01-29 10:56:50 -0800 PST][C] 'consul-agent.serf.events': Count: 5 Sum: 5.000
[2014-01-29 10:56:50 -0800 PST][C] 'consul-agent.serf.events.foo': Count: 4 Sum: 4.000
[2014-01-29 10:56:50 -0800 PST][C] 'consul-agent.serf.events.baz': Count: 1 Sum: 1.000
[2014-01-29 10:56:50 -0800 PST][S] 'consul-agent.memberlist.gossip': Count: 50 Min: 0.007 Mean: 0.020 Max: 0.041 Stddev: 0.007 Sum: 0.989
[2014-01-29 10:56:50 -0800 PST][S] 'consul-agent.serf.queue.Intent': Count: 10 Sum: 0.000
[2014-01-29 10:56:50 -0800 PST][S] 'consul-agent.serf.queue.Event': Count: 10 Min: 0.000 Mean: 2.500 Max: 5.000 Stddev: 2.121 Sum: 25.000

» Key Metrics

These are some of the metrics emitted by Consul that can help you understand the health of your cluster at a glance. For a full list of metrics emitted by Consul, see the Metrics Reference below.

» Transaction timing

Metric Name Description
consul.kvs.apply This measures the time it takes to complete an update to the KV store.
consul.txn.apply This measures the time spent applying a transaction operation.
consul.raft.apply This counts the number of Raft transactions occurring over the interval.
consul.raft.commitTime This measures the time it takes to commit a new entry to the Raft log on the leader.

Why they're important: Taken together, these metrics indicate how long it takes to complete write operations in various parts of the Consul cluster. Generally these should all be fairly consistent and no more than a few milliseconds. Sudden changes in any of the timing values could be due to unexpected load on the Consul servers, or due to problems on the servers themselves.

What to look for: Deviations (in any of these metrics) of more than 50% from baseline over the previous hour.
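A "deviation of more than 50% from baseline" check is simple to express in whatever alerting tool you use; as an illustrative sketch (the function name and thresholds here are our own, not part of Consul):

```python
def deviates(current, baseline, threshold=0.5):
    """True if current deviates from baseline by more than the
    threshold fraction (default 50%)."""
    if baseline == 0:
        return current != 0
    return abs(current - baseline) / baseline > threshold

# e.g. mean consul.kvs.apply over the last interval vs. the prior hour:
print(deviates(3.2, 2.0))  # True  (60% above baseline)
print(deviates(2.4, 2.0))  # False (20% above baseline)
```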

» Leadership changes

Metric Name Description
consul.raft.leader.lastContact Measures the time since the leader was last able to contact the follower nodes when checking its leader lease.
consul.raft.state.candidate This increments whenever a Consul server starts an election.
consul.raft.state.leader This increments whenever a Consul server becomes a leader.

Why they're important: Normally, your Consul cluster should have a stable leader. If there are frequent elections or leadership changes, it would likely indicate network issues between the Consul servers, or that the Consul servers themselves are unable to keep up with the load.

What to look for: For a healthy cluster, you're looking for a lastContact lower than 200ms, leader > 0 and candidate == 0. Deviations from this might indicate flapping leadership.
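Those three conditions combine into a single health predicate; the following is a hypothetical sketch (the function and its argument names are ours, fed from whatever store you ship these metrics to):

```python
def leadership_healthy(last_contact_ms, leader_count, candidate_count):
    """Rough cluster-leadership check using the thresholds above:
    lastContact < 200 ms, leader > 0, candidate == 0."""
    return last_contact_ms < 200 and leader_count > 0 and candidate_count == 0

print(leadership_healthy(35.0, 1, 0))   # True
print(leadership_healthy(450.0, 1, 2))  # False: slow contact, active election
```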

» Autopilot

Metric Name Description
consul.autopilot.healthy This tracks the overall health of the local server cluster. If all servers are considered healthy by Autopilot, this will be set to 1. If any are unhealthy, this will be 0.

Why it's important: Obviously, you want your cluster to be healthy.

What to look for: Alert if healthy is 0.

» Memory usage

Metric Name Description
consul.runtime.alloc_bytes This measures the number of bytes allocated by the Consul process.
consul.runtime.sys_bytes This is the total number of bytes of memory obtained from the OS.

Why they're important: Consul keeps all of its data in memory. If Consul consumes all available memory, it will crash.

What to look for: If consul.runtime.sys_bytes exceeds 90% of total available system memory.
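As a sketch of that alert condition (the function name is ours; the total system memory would come from your host monitoring, not from Consul):

```python
def memory_alert(sys_bytes, total_system_bytes, threshold=0.9):
    """True when consul.runtime.sys_bytes exceeds the threshold
    fraction (default 90%) of total system memory."""
    return sys_bytes > threshold * total_system_bytes

# 15 GiB used of 16 GiB total is ~94%, so this should fire:
print(memory_alert(15 * 1024**3, 16 * 1024**3))  # True
```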

» Garbage collection

Metric Name Description
consul.runtime.total_gc_pause_ns Number of nanoseconds consumed by stop-the-world garbage collection (GC) pauses since Consul started.

Why it's important: GC pause is a "stop-the-world" event, meaning that all runtime threads are blocked until GC completes. Normally these pauses last only a few nanoseconds. But if memory usage is high, the Go runtime may GC so frequently that it starts to slow down Consul.

What to look for: Warning if total_gc_pause_ns exceeds 2 seconds/minute, critical if it exceeds 5 seconds/minute.

NOTE: total_gc_pause_ns is a cumulative counter, so in order to calculate rates (such as GC/minute), you will need to apply a function such as InfluxDB's non_negative_difference().
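The rate calculation amounts to differencing successive readings of the cumulative counter, discarding negative differences (which occur when the counter resets, e.g. after an agent restart), and scaling to a per-minute figure. A hypothetical sketch, mirroring what non_negative_difference() does:

```python
def gc_seconds_per_minute(samples):
    """samples: time-ordered list of (t_seconds, total_gc_pause_ns)
    cumulative readings. Returns GC pause seconds per minute over the
    window, skipping counter resets like non_negative_difference()."""
    delta_ns = 0
    for (t0, v0), (t1, v1) in zip(samples, samples[1:]):
        if v1 >= v0:  # ignore resets of the cumulative counter
            delta_ns += v1 - v0
    window_s = samples[-1][0] - samples[0][0]
    return (delta_ns / 1e9) * (60.0 / window_s)

# Two readings 30 s apart with 1.5e9 ns of pause accumulated between
# them works out to 3 seconds of GC pause per minute (critical range):
print(gc_seconds_per_minute([(0, 1_000_000), (30, 1_501_000_000)]))  # 3.0
```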

» Network activity - RPC Count

Metric Name Description
consul.client.rpc Increments whenever a Consul agent in client mode makes an RPC request to a Consul server.
consul.client.rpc.exceeded Increments whenever a Consul agent in client mode makes an RPC request to a Consul server and gets rate limited by that agent's limits configuration.
consul.client.rpc.failed Increments whenever a Consul agent in client mode makes an RPC request to a Consul server and fails.

Why they're important: These measurements indicate the current load created from a Consul agent, including when the load becomes high enough to be rate limited. A high RPC count, especially from consul.client.rpc.exceeded (meaning that requests are being rate limited), could imply a misconfigured Consul agent.

What to look for: Sudden large changes to the consul.client.rpc metrics (greater than 50% deviation from baseline), or a consul.client.rpc.exceeded or consul.client.rpc.failed count greater than 0, as either implies that an agent is being rate limited or is failing to make RPC requests to a Consul server.

When telemetry is being streamed to an external metrics store, the interval is defined to be that store's flush interval. Otherwise, the interval can be assumed to be 10 seconds when retrieving metrics from the built-in store using the signals described above.

» Metrics Reference

This is a full list of metrics emitted by Consul.

Metric Description Unit Type
consul.acl.blocked.service.registration This increments whenever a service registration fails because it is blocked by an ACL requests counter
consul.acl.blocked.<check|node|service>.registration This increments whenever a registration fails for an entity (check, node or service) because it is blocked by an ACL requests counter
consul.client.rpc This increments whenever a Consul agent in client mode makes an RPC request to a Consul server. This gives a measure of how much a given agent is loading the Consul servers. Currently, this is only generated by agents in client mode, not Consul servers. requests counter
consul.client.rpc.exceeded This increments whenever a Consul agent in client mode makes an RPC request to a Consul server and gets rate limited by that agent's limits configuration. This gives an indication that there's an abusive application making too many requests on the agent, or that the rate limit needs to be increased. Currently, this only applies to agents in client mode, not Consul servers. rejected requests counter
consul.client.rpc.failed This increments whenever a Consul agent in client mode makes an RPC request to a Consul server and fails. requests counter
consul.client.api.catalog_register.<node> This increments whenever a Consul agent receives a catalog register request. requests counter
consul.client.api.success.catalog_register.<node> This increments whenever a Consul agent successfully responds to a catalog register request. requests counter
consul.client.rpc.error.catalog_register.<node> This increments whenever a Consul agent receives an RPC error for a catalog register request. errors counter
consul.client.api.catalog_deregister.<node> This increments whenever a Consul agent receives a catalog de-register request. requests counter
consul.client.api.success.catalog_deregister.<node> This increments whenever a Consul agent successfully responds to a catalog de-register request. requests counter
consul.client.rpc.error.catalog_deregister.<node> This increments whenever a Consul agent receives an RPC error for a catalog de-register request. errors counter
consul.client.api.catalog_datacenters.<node> This increments whenever a Consul agent receives a request to list datacenters in the catalog. requests counter
consul.client.api.success.catalog_datacenters.<node> This increments whenever a Consul agent successfully responds to a request to list datacenters. requests counter
consul.client.rpc.error.catalog_datacenters.<node> This increments whenever a Consul agent receives an RPC error for a request to list datacenters. errors counter
consul.client.api.catalog_nodes.<node> This increments whenever a Consul agent receives a request to list nodes from the catalog. requests counter
consul.client.api.success.catalog_nodes.<node> This increments whenever a Consul agent successfully responds to a request to list nodes. requests counter
consul.client.rpc.error.catalog_nodes.<node> This increments whenever a Consul agent receives an RPC error for a request to list nodes. errors counter
consul.client.api.catalog_services.<node> This increments whenever a Consul agent receives a request to list services from the catalog. requests counter
consul.client.api.success.catalog_services.<node> This increments whenever a Consul agent successfully responds to a request to list services. requests counter
consul.client.rpc.error.catalog_services.<node> This increments whenever a Consul agent receives an RPC error for a request to list services. errors counter
consul.client.api.catalog_service_nodes.<node> This increments whenever a Consul agent receives a request to list nodes offering a service. requests counter
consul.client.api.success.catalog_service_nodes.<node> This increments whenever a Consul agent successfully responds to a request to list nodes offering a service. requests counter
consul.client.rpc.error.catalog_service_nodes.<node> This increments whenever a Consul agent receives an RPC error for a request to list nodes offering a service. errors counter
consul.client.api.catalog_node_services.<node> This increments whenever a Consul agent receives a request to list services registered in a node. requests counter
consul.client.api.success.catalog_node_services.<node> This increments whenever a Consul agent successfully responds to a request to list services registered in a node. requests counter
consul.client.rpc.error.catalog_node_services.<node> This increments whenever a Consul agent receives an RPC error for a request to list services registered in a node. errors counter
consul.runtime.num_goroutines This tracks the number of running goroutines and is a general load pressure indicator. This may burst from time to time but should return to a steady state value. number of goroutines gauge
consul.runtime.alloc_bytes This measures the number of bytes allocated by the Consul process. This may burst from time to time but should return to a steady state value. bytes gauge
consul.runtime.heap_objects This measures the number of objects allocated on the heap and is a general memory pressure indicator. This may burst from time to time but should return to a steady state value. number of objects gauge
consul.acl.cache_hit The number of ACL cache hits. hits counter
consul.acl.cache_miss The number of ACL cache misses. misses counter
consul.acl.replication_hit The number of ACL replication cache hits (when not running in the ACL datacenter). hits counter
consul.dns.stale_queries This increments when an agent serves a query within the allowed stale threshold. queries counter
consul.dns.ptr_query.<node> This measures the time spent handling a reverse DNS query for the given node. ms timer
consul.dns.domain_query.<node> This measures the time spent handling a domain query for the given node. ms timer
consul.http.<verb>.<path> This tracks how long it takes to service the given HTTP request for the given verb and path. Paths do not include details like service or key names, for these an underscore will be present as a placeholder (eg. consul.http.GET.v1.kv._) ms timer

» Server Health

These metrics are used to monitor the health of the Consul servers.


Metric Description Unit Type
consul.raft.fsm.snapshot This metric measures the time taken by the FSM to record the current state for the snapshot. ms timer
consul.raft.fsm.apply This metric gives the number of logs committed since the last interval. commit logs / interval counter
consul.raft.fsm.restore This metric measures the time taken by the FSM to restore its state from a snapshot. ms timer
consul.raft.snapshot.create This metric measures the time taken to initialize the snapshot process. ms timer
consul.raft.snapshot.persist This metric measures the time taken to dump the current snapshot taken by the Consul agent to the disk. ms timer
consul.raft.snapshot.takeSnapshot This metric measures the total time involved in taking the current snapshot (creating one and persisting it) by the Consul agent. ms timer
consul.raft.replication.heartbeat This metric measures the time taken to invoke appendEntries on a peer, so that it doesn't time out on a periodic basis. ms timer
consul.serf.snapshot.appendLine This metric measures the time taken by the Consul agent to append an entry into the existing log. ms timer
consul.serf.snapshot.compact This metric measures the time taken by the Consul agent to compact a log. This operation occurs only when the snapshot becomes large enough to justify the compaction. ms timer
consul.raft.state.leader This increments whenever a Consul server becomes a leader. If there are frequent leadership changes this may be indication that the servers are overloaded and aren't meeting the soft real-time requirements for Raft, or that there are networking problems between the servers. leadership transitions / interval counter
consul.raft.state.candidate This increments whenever a Consul server starts an election. If this increments without a leadership change occurring it could indicate that a single server is overloaded or is experiencing network connectivity issues. election attempts / interval counter
consul.raft.apply This counts the number of Raft transactions occurring over the interval, which is a general indicator of the write load on the Consul servers. raft transactions / interval counter
consul.raft.barrier This metric counts the number of times the agent has started the barrier, i.e. the number of times it has issued a blocking call to ensure that all pending operations queued for the agent's FSM have been applied. blocks / interval counter
consul.raft.verify_leader This metric counts the number of times an agent checks whether it is still the leader. checks / interval counter
consul.raft.restore This metric counts the number of times the restore operation has been performed by the agent. Here, restore refers to the action of raft consuming an external snapshot to restore its state. operation invoked / interval counter
consul.raft.commitTime This measures the time it takes to commit a new entry to the Raft log on the leader. ms timer
consul.raft.leader.dispatchLog This measures the time it takes for the leader to write log entries to disk. ms timer
consul.raft.replication.appendEntries This measures the time it takes to replicate log entries to followers. This is a general indicator of the load pressure on the Consul servers, as well as the performance of the communication between the servers. ms timer
consul.raft.state.follower This metric counts the number of times an agent has entered the follower mode. This happens when a new agent joins the cluster or after the end of a leader election. follower state entered / interval counter
consul.raft.transition.heartbeat_timeout This metric gives the number of times an agent has transitioned to the Candidate state after receiving no heartbeat messages from the last known leader. timeouts / interval counter
consul.raft.restoreUserSnapshot This metric measures the time taken by the agent to restore the FSM state from a user's snapshot. ms timer
consul.raft.rpc.processHeartBeat This metric measures the time taken to process a heartbeat request. ms timer
consul.raft.rpc.appendEntries This metric measures the time taken to process an append entries RPC call from an agent. ms timer
consul.raft.rpc.appendEntries.storeLogs This metric measures the time taken to add any outstanding logs for an agent since the last appendEntries was invoked. ms timer
consul.raft.rpc.appendEntries.processLogs This metric measures the time taken to process the outstanding log entries of an agent. ms timer
consul.raft.rpc.requestVote This metric measures the time taken to process the request vote RPC call. ms timer
consul.raft.rpc.installSnapshot This metric measures the time taken to process the installSnapshot RPC call. This metric should only be seen on agents which are currently in the follower state. ms timer
consul.raft.replication.appendEntries.rpc This metric measures the time taken by the appendEntries RPC to replicate the log entries of a leader agent onto its follower agent(s). ms timer
consul.raft.replication.appendEntries.logs This metric measures the number of logs replicated to an agent to bring it up to date with the leader's logs. logs appended / interval counter
consul.raft.leader.lastContact This will only be emitted by the Raft leader and measures the time since the leader was last able to contact the follower nodes when checking its leader lease. It can be used as a measure of how stable the Raft timing is and how close the leader is to timing out its lease. The lease timeout is 500 ms times the raft_multiplier configuration, so this telemetry value should not get close to that configured value; otherwise the Raft timing is marginal and might need to be tuned, or more powerful servers might be needed. See the Server Performance guide for more details. ms timer
consul.acl.apply This measures the time it takes to complete an update to the ACL store. ms timer
consul.acl.fault This measures the time it takes to fault in the rules for an ACL during a cache miss. ms timer
consul.acl.fetchRemoteACLs This measures the time it takes to fetch remote ACLs during replication. ms timer
consul.acl.updateLocalACLs This measures the time it takes to apply replication changes to the local ACL store. ms timer
consul.acl.replicateACLs This measures the time it takes to do one pass of the ACL replication algorithm. ms timer
consul.acl.resolveToken This measures the time it takes to resolve an ACL token. ms timer
consul.rpc.accept_conn This increments when a server accepts an RPC connection. connections counter
consul.catalog.register This measures the time it takes to complete a catalog register operation. ms timer
consul.catalog.deregister This measures the time it takes to complete a catalog deregister operation. ms timer
consul.fsm.register This measures the time it takes to apply a catalog register operation to the FSM. ms timer
consul.fsm.deregister This measures the time it takes to apply a catalog deregister operation to the FSM. ms timer
consul.fsm.acl.<op> This measures the time it takes to apply the given ACL operation to the FSM. ms timer
consul.fsm.session.<op> This measures the time it takes to apply the given session operation to the FSM. ms timer
consul.fsm.kvs.<op> This measures the time it takes to apply the given KV operation to the FSM. ms timer
consul.fsm.tombstone.<op> This measures the time it takes to apply the given tombstone operation to the FSM. ms timer
consul.fsm.coordinate.batch-update This measures the time it takes to apply the given batch coordinate update to the FSM. ms timer
consul.fsm.prepared-query.<op> This measures the time it takes to apply the given prepared query update operation to the FSM. ms timer
consul.fsm.txn This measures the time it takes to apply the given transaction update to the FSM. ms timer
consul.fsm.autopilot This measures the time it takes to apply the given autopilot update to the FSM. ms timer
consul.fsm.persist This measures the time it takes to persist the FSM to a raft snapshot. ms timer
consul.kvs.apply This measures the time it takes to complete an update to the KV store. ms timer
consul.leader.barrier This measures the time spent waiting for the raft barrier upon gaining leadership. ms timer
consul.leader.reconcile This measures the time spent updating the raft store from the serf member information. ms timer
consul.leader.reconcileMember This measures the time spent updating the raft store for a single serf member's information. ms timer
consul.leader.reapTombstones This measures the time spent clearing tombstones. ms timer
consul.prepared-query.apply This measures the time it takes to apply a prepared query update. ms timer
consul.prepared-query.explain This measures the time it takes to process a prepared query explain request. ms timer
consul.prepared-query.execute This measures the time it takes to process a prepared query execute request. ms timer
consul.prepared-query.execute_remote This measures the time it takes to process a prepared query execute request that was forwarded to another datacenter. ms timer
consul.rpc.raft_handoff This increments when a server accepts a Raft-related RPC connection. connections counter
consul.rpc.request_error This increments when a server returns an error from an RPC request. errors counter
consul.rpc.request This increments when a server receives a Consul-related RPC request. requests counter
consul.rpc.query This increments when a server receives a (potentially blocking) RPC query. queries counter
consul.rpc.cross-dc This increments when a server receives a (potentially blocking) cross datacenter RPC query. queries counter
consul.rpc.consistentRead This measures the time spent confirming that a consistent read can be performed. ms timer
consul.session.apply This measures the time spent applying a session update. ms timer
consul.session.renew This measures the time spent renewing a session. ms timer
consul.session_ttl.invalidate This measures the time spent invalidating an expired session. ms timer
consul.txn.apply This measures the time spent applying a transaction operation. ms timer
consul.txn.read This measures the time spent returning a read transaction. ms timer

» Cluster Health

These metrics give insight into the health of the cluster as a whole.


Metric Description Unit Type
consul.memberlist.degraded.probe This metric counts the number of times the agent has performed failure detection on another agent at a slower probe rate. The agent uses its own health metric as an indicator to perform this action. (A low health score means the node is healthy, and vice versa.) probes / interval counter
consul.memberlist.degraded.timeout This metric counts the number of times an agent was marked as a dead node while not getting enough confirmations from a randomly selected list of agent nodes in its membership. occurrences / interval counter
consul.memberlist.msg.dead This metric counts the number of times an agent has marked another agent to be a dead node. messages / interval counter
consul.memberlist.health.score This metric describes a node's perception of its own health based on how well it is meeting the soft real-time requirements of the protocol. This metric ranges from 0 to 8, where 0 indicates "totally healthy". This health score is used to scale the time between outgoing probes, and higher scores translate into longer probing intervals. For more details see section IV of the Lifeguard paper: https://arxiv.org/pdf/1707.00788.pdf score gauge
consul.memberlist.msg.suspect This increments when an agent suspects another as failed when executing random probes as part of the gossip protocol. These can be an indicator of overloaded agents, network problems, or configuration errors where agents can not connect to each other on the required ports. suspect messages received / interval counter
consul.memberlist.tcp.accept This metric counts the number of times an agent has accepted an incoming TCP stream connection. connections accepted / interval counter
consul.memberlist.udp.sent/received This metric measures the total number of bytes sent/received by an agent through the UDP protocol. bytes sent or bytes received / interval counter
consul.memberlist.tcp.connect This metric counts the number of times an agent has initiated a push/pull sync with another agent. push/pull initiated / interval counter
consul.memberlist.tcp.sent This metric measures the total number of bytes sent by an agent through the TCP protocol bytes sent / interval counter
consul.memberlist.gossip This metric gives the number of gossip messages broadcast to a set of randomly selected nodes. messages / interval counter
consul.memberlist.msg_alive This metric counts the number of alive agents that the agent has mapped out so far, based on the message information given by the network layer. nodes / interval counter
consul.memberlist.msg_dead This metric gives the number of dead agents that the agent has mapped out so far, based on the message information given by the network layer. nodes / interval counter
consul.memberlist.msg_suspect This metric gives the number of suspect nodes that the agent has mapped out so far, based on the message information given by the network layer. nodes / interval counter
consul.memberlist.probeNode This metric measures the time taken to perform a single round of failure detection on a selected agent. ms timer
consul.memberlist.pushPullNode This metric counts the number of agents that have exchanged state with this agent. nodes / interval counter
consul.serf.member.flap Available in Consul 0.7 and later, this increments when an agent is marked dead and then recovers within a short time period. This can be an indicator of overloaded agents, network problems, or configuration errors where agents can not connect to each other on the required ports. flaps / interval counter
consul.serf.events This increments when an agent processes an event. Consul uses events internally so there may be additional events showing in telemetry. There are also per-event counters emitted as consul.serf.events.<event name>. events / interval counter
consul.autopilot.failure_tolerance This tracks the number of voting servers that the cluster can lose while continuing to function. servers gauge
consul.autopilot.healthy This tracks the overall health of the local server cluster. If all servers are considered healthy by Autopilot, this will be set to 1. If any are unhealthy, this will be 0. boolean gauge
consul.session_ttl.active This tracks the active number of sessions being tracked. sessions gauge
consul.catalog.service.query.<service> This increments for each catalog query for the given service. queries counter
consul.catalog.service.query-tag.<service>.<tag> This increments for each catalog query for the given service with the given tag. queries counter
consul.catalog.service.not-found.<service> This increments for each catalog query where the given service could not be found. queries counter
consul.health.service.query.<service> This increments for each health query for the given service. queries counter
consul.health.service.query-tag.<service>.<tag> This increments for each health query for the given service with the given tag. queries counter
consul.health.service.not-found.<service> This increments for each health query where the given service could not be found. queries counter

» Connect Built-in Proxy Metrics

When running as a managed proxy, Consul Connect's built-in proxy is configured by default to log metrics to the same sink as the agent that starts it.

When running in this mode it emits some basic metrics. These will be expanded upon in the future.

All metrics are prefixed with consul.proxy.<proxied-service-id> to distinguish between multiple proxies on a given host. The table below uses web as an example service name for brevity.

» Labels

Most metrics have a dst label and some have a src label. When using metrics sinks and timeseries stores that support labels or tags, these allow aggregating the connections by service name.

Assuming all services are using a managed built-in proxy, you can get a complete overview of both the number of open connections and the bytes sent and received between all services by aggregating over these metrics.

For example, by aggregating over all upstream (i.e. outbound) connections, which have both src and dst labels, you can get the sum of all bandwidth in and out of a given service or the total number of connections between two services.
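A label-aware store would do this grouping for you; the sketch below just illustrates the idea on hypothetical samples (the sample records, service names, and values are invented for this example, not real proxy output):

```python
from collections import defaultdict

# Hypothetical samples of consul.proxy.<id>.upstream.{rx,tx}_bytes with
# src/dst labels, as they might appear in a label-aware metrics store.
samples = [
    {"metric": "upstream.rx_bytes", "src": "web", "dst": "db",    "value": 1200},
    {"metric": "upstream.tx_bytes", "src": "web", "dst": "db",    "value": 800},
    {"metric": "upstream.rx_bytes", "src": "web", "dst": "cache", "value": 300},
]

# Total bandwidth (rx + tx) between each (src, dst) service pair.
by_pair = defaultdict(int)
for s in samples:
    by_pair[(s["src"], s["dst"])] += s["value"]

print(by_pair[("web", "db")])  # 2000
```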

» Metrics Reference

The standard Go runtime metrics are exported by go-metrics, as with the Consul agent. The table below describes the additional metrics exported by the proxy.

Metric Description Unit Type
consul.proxy.web.runtime.* The same go runtime metrics as documented for the agent above. mixed mixed
consul.proxy.web.inbound.conns Shows the current number of connections open from inbound requests to the proxy. Where supported a dst label is added indicating the service name the proxy represents. connections gauge
consul.proxy.web.inbound.rx_bytes This increments by the number of bytes received from an inbound client connection. Where supported a dst label is added indicating the service name the proxy represents. bytes counter
consul.proxy.web.inbound.tx_bytes This increments by the number of bytes transferred to an inbound client connection. Where supported a dst label is added indicating the service name the proxy represents. bytes counter
consul.proxy.web.upstream.conns Shows the current number of connections open from a proxy instance to an upstream. Where supported a src label is added indicating the service name the proxy represents, and a dst label is added indicating the service name the upstream is connecting to. connections gauge
consul.proxy.web.upstream.rx_bytes This increments by the number of bytes received from an upstream connection. Where supported a src label is added indicating the service name the proxy represents, and a dst label is added indicating the service name the upstream is connecting to. bytes counter
consul.proxy.web.upstream.tx_bytes This increments by the number of bytes transferred to an upstream connection. Where supported a src label is added indicating the service name the proxy represents, and a dst label is added indicating the service name the upstream is connecting to. bytes counter