...
The log files from that node’s log directory.
The heap profile dumps from that node’s log directory.
The goroutine dumps from that node’s log directory.
The contents of the following tables. The tables must be scraped using a SQL client connected to that node's SQL address & port.
They should also be scraped using a binary output format so that special characters are properly preserved.
crdb_internal.feature_usage
crdb_internal.gossip_alerts
crdb_internal.gossip_liveness
crdb_internal.gossip_network
crdb_internal.gossip_nodes
crdb_internal.leases
crdb_internal.node_build_info
crdb_internal.node_metrics
crdb_internal.node_queries
crdb_internal.node_runtime_info
crdb_internal.node_sessions
crdb_internal.node_statement_statistics
crdb_internal.node_transactions
crdb_internal.node_txn_stats
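As a sketch, the tables above can be collected with the `cockroach sql` client; the host, port, and certificates directory below are placeholders you would adjust for your cluster, and the `--format=raw` flag is assumed here as the binary-safe output format:

```shell
# Placeholder address and certs directory: adjust for your deployment.
HOST="cockroach-node1.example.com:26257"   # this node's SQL address & port (assumed)
CERTS="./certs"                            # path to client certificates (assumed)

# Scrape each table into its own file. --format=raw emits values in a
# binary-safe form so special characters are preserved.
for t in crdb_internal.gossip_liveness crdb_internal.node_metrics; do  # extend with the other tables
  cockroach sql \
    --host="${HOST}" \
    --certs-dir="${CERTS}" \
    --format=raw \
    -e "SELECT * FROM ${t}" > "${t}.raw"
done
```

Each output file then corresponds to one internal table from this node.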
The data from the following HTTP endpoints, preferably using that node's address & HTTP port number (you may need to use
cockroach auth-session login
to obtain a valid authentication cookie from a user with admin credentials).
They must be scraped using that node's node ID:
/_status/enginestats/{nodeID}
/_status/gossip/{nodeID}
/_status/nodes/{nodeID}
/_status/details/{nodeID}
/_status/profile/{nodeID}
/_status/stacks/{nodeID}?stacks_type=0
/_status/stacks/{nodeID}?stacks_type=1
/_status/ranges/{node_id}
For every range ID reported by /_status/ranges/{node_id}, the content of
/_status/range/{range_id}
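The endpoint scrape above can be sketched with curl; the hostname, port, node ID, and cookie value are illustrative assumptions, and the session cookie would come from a prior `cockroach auth-session login` run:

```shell
# Assumed HTTP address and node ID; adjust for your cluster.
HOST="cockroach-node1.example.com:8080"
NODE_ID=1
# Placeholder: paste the cookie printed by `cockroach auth-session login <admin-user>`.
COOKIE="session=<value from auth-session login>"

# Per-node status endpoints, keyed by this node's node ID.
for path in enginestats gossip nodes details profile; do
  curl -sk -H "Cookie: ${COOKIE}" \
    "https://${HOST}/_status/${path}/${NODE_ID}" > "${path}.json"
done

# Goroutine stacks, in both stack types.
curl -sk -H "Cookie: ${COOKIE}" \
  "https://${HOST}/_status/stacks/${NODE_ID}?stacks_type=0" > stacks_type0.txt
curl -sk -H "Cookie: ${COOKIE}" \
  "https://${HOST}/_status/stacks/${NODE_ID}?stacks_type=1" > stacks_type1.txt

# Ranges on this node; each range ID listed in the response would then be
# fetched individually from /_status/range/{range_id}.
curl -sk -H "Cookie: ${COOKIE}" \
  "https://${HOST}/_status/ranges/${NODE_ID}" > ranges.json
```

The per-range follow-up requests would iterate over the range IDs found in `ranges.json`.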
...