WARNING: The 2.x versions of Elasticsearch have passed their EOL dates. If you are running a 2.x version, we strongly advise you to upgrade.
This documentation is no longer maintained and may be removed. For the latest information, see the current Elasticsearch documentation.
Monitoring Individual Nodes
Cluster-health
is at one end of the spectrum—a very high-level overview of
everything in your cluster. The node-stats
API is at the other end. It provides
a bewildering array of statistics about each node in your cluster.
Node-stats
provides so many stats that, until you are accustomed to the output,
you may be unsure which metrics are most important to keep an eye on. We’ll
highlight the most important metrics to monitor (but we encourage you to
log all the metrics provided—or use Marvel—because you’ll never know when
you need one stat or another).
The node-stats
API can be executed with the following:
GET _nodes/stats
Starting at the top of the output, we see the cluster name and our first node:
{
   "cluster_name": "elasticsearch_zach",
   "nodes": {
      "UNr6ZMf5Qk-YCPA_L18BOQ": {
         "timestamp": 1408474151742,
         "name": "Zach",
         "transport_address": "inet[zacharys-air/192.168.1.131:9300]",
         "host": "zacharys-air",
         "ip": [
            "inet[zacharys-air/192.168.1.131:9300]",
            "NONE"
         ],
...
The nodes are listed in a hash, with the key being the UUID of the node. Some information about the node’s network properties is displayed (such as transport address and host). These values are useful for debugging discovery problems, where nodes won’t join the cluster. Often you’ll see that the port being used is wrong, or that the node is binding to the wrong IP address/interface.
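If you would rather pull these stats programmatically than eyeball the JSON, the sketch below lists each node’s identity and network properties, which is handy when chasing the discovery problems just described. It is a minimal example, assuming a node reachable at http://localhost:9200 and the Python requests library; neither assumption comes from the book.

import requests

stats = requests.get("http://localhost:9200/_nodes/stats").json()

print("cluster:", stats["cluster_name"])
for node_id, node in stats["nodes"].items():
    # The hash key is the node UUID; host and transport_address help spot
    # wrong ports or bind addresses.
    print(node_id, node["name"], node.get("host"), node.get("transport_address"))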
indices Section
The indices
section lists aggregate statistics for all the indices that reside
on this particular node:
"indices": {
   "docs": {
      "count": 6163666,
      "deleted": 0
   },
   "store": {
      "size_in_bytes": 2301398179,
      "throttle_time_in_millis": 122850
   },
The returned statistics are grouped into the following sections:
- docs shows how many documents reside on this node, as well as the number of deleted docs that haven’t been purged from segments yet.
- The store portion indicates how much physical storage is consumed by the node. This metric includes both primary and replica shards. If the throttle time is large, it may be an indicator that your disk throttling is set too low (discussed in Segments and Merging).
"indexing": {
   "index_total": 803441,
   "index_time_in_millis": 367654,
   "index_current": 99,
   "delete_total": 0,
   "delete_time_in_millis": 0,
   "delete_current": 0
},
"get": {
   "total": 6,
   "time_in_millis": 2,
   "exists_total": 5,
   "exists_time_in_millis": 2,
   "missing_total": 1,
   "missing_time_in_millis": 0,
   "current": 0
},
"search": {
   "open_contexts": 0,
   "query_total": 123,
   "query_time_in_millis": 531,
   "query_current": 0,
   "fetch_total": 3,
   "fetch_time_in_millis": 55,
   "fetch_current": 0
},
"merges": {
   "current": 0,
   "current_docs": 0,
   "current_size_in_bytes": 0,
   "total": 1128,
   "total_time_in_millis": 21338523,
   "total_docs": 7241313,
   "total_size_in_bytes": 5724869463
},
- indexing shows the number of docs that have been indexed. This value is a monotonically increasing counter; it doesn’t decrease when docs are deleted. Also note that it is incremented anytime an index operation happens internally, which includes things like updates. Also listed are times for indexing, the number of docs currently being indexed, and similar statistics for deletes.
- get shows statistics about get-by-ID operations. This includes GET and HEAD requests for a single document.
- search describes the number of active searches (open_contexts), the total number of queries, and the amount of time spent on queries since the node was started. The ratio query_time_in_millis / query_total can be used as a rough indicator of how efficient your queries are. The larger the ratio, the more time each query is taking, and you should consider tuning or optimization. The fetch statistics detail the second half of the query process (the fetch in query-then-fetch). If more time is spent in fetch than query, this can be an indicator of slow disks or very large documents being fetched, or potentially of search requests with paginations that are too large (for example, size: 10000). A sketch just below this list shows one way to derive these timing ratios.
- merges contains information about Lucene segment merges. It will tell you the number of merges that are currently active, the number of docs involved, the cumulative size of segments being merged, and the amount of time spent on merges in total. Merge statistics can be important if your cluster is write heavy. Merging consumes a large amount of disk I/O and CPU resources. If your index is write heavy and you see large merge numbers, be sure to read Indexing Performance Tips.
Note: updates and deletes will contribute to large merge numbers too, since they cause segment fragmentation that needs to be merged out eventually.
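As flagged in the search entry above, the average query and fetch times are the numbers worth deriving. The following is a minimal sketch, assuming a node reachable at http://localhost:9200 and the Python requests library (neither is part of the book’s examples); it simply divides the cumulative times by the cumulative counts for each node.

import requests

stats = requests.get("http://localhost:9200/_nodes/stats/indices").json()

for node in stats["nodes"].values():
    s = node["indices"]["search"]
    # Guard against division by zero on idle nodes.
    avg_query = float(s["query_time_in_millis"]) / max(s["query_total"], 1)
    avg_fetch = float(s["fetch_time_in_millis"]) / max(s["fetch_total"], 1)
    print("%s: %.1f ms/query, %.1f ms/fetch" % (node["name"], avg_query, avg_fetch))
    if avg_fetch > avg_query:
        # Possible slow disks, very large documents, or oversized pagination.
        print("  fetch is slower than query: investigate")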
"filter_cache": { "memory_size_in_bytes": 48, "evictions": 0 }, "fielddata": { "memory_size_in_bytes": 0, "evictions": 0 }, "segments": { "count": 319, "memory_in_bytes": 65812120 }, ...
- filter_cache indicates the amount of memory used by the cached filter bitsets, and the number of times a filter has been evicted. A large number of evictions could indicate that you need to increase the filter cache size, or that your filters are not caching well (for example, they are churning heavily because of high cardinality, such as caching now date expressions). However, evictions are a difficult metric to evaluate. Filters are cached on a per-segment basis, and evicting a filter from a small segment is much less expensive than evicting a filter from a large segment. It’s possible that you have many evictions, but they all occur on small segments, which means they have little impact on query performance.
Use the eviction metric as a rough guideline. If you see a large number, investigate your filters to make sure they are caching well. Filters that constantly evict, even on small segments, will be much less effective than properly cached filters.
- fielddata displays the memory used by fielddata, which is used for aggregations, sorting, and more. There is also an eviction count. Unlike filter_cache, the eviction count here is useful: it should be zero or very close. Since fielddata is not a cache, any eviction is costly and should be avoided. If you see evictions here, you need to reevaluate your memory situation, fielddata limits, queries, or all three (the sketch after this list shows one way to check for evictions).
- segments will tell you the number of Lucene segments this node currently serves. This can be an important number. Most indices should have around 50–150 segments, even if they are terabytes in size with billions of documents. Large numbers of segments can indicate a problem with merging (for example, merging is not keeping up with segment creation). Note that this statistic is the aggregate total of all indices on the node, so keep that in mind. The memory statistic gives you an idea of the amount of memory being used by the Lucene segments themselves. This includes low-level data structures such as posting lists, dictionaries, and bloom filters. A very large number of segments will increase the amount of overhead lost to these data structures, and the memory usage can be a handy metric to gauge that overhead.
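Both red flags called out above, fielddata evictions and a runaway segment count, are easy to check across every node. This is a rough sketch under the same assumptions as the earlier snippets (localhost:9200, the requests library), reading only fields shown in the output above.

import requests

stats = requests.get("http://localhost:9200/_nodes/stats/indices").json()

for node in stats["nodes"].values():
    fielddata = node["indices"]["fielddata"]
    segments = node["indices"]["segments"]
    print("%s: %d fielddata evictions, %d segments, %.1f MB segment memory" % (
        node["name"],
        fielddata["evictions"],
        segments["count"],
        segments["memory_in_bytes"] / 1024.0 ** 2,
    ))
    if fielddata["evictions"] > 0:
        # Fielddata is not a cache; any eviction is costly.
        print("  revisit heap size, fielddata limits, or the offending queries")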
OS and Process Sections
The OS and Process sections are fairly self-explanatory and won’t be covered in great detail. They list basic resource statistics such as CPU and load. The OS section describes these metrics for the entire OS, while the Process section shows just what the Elasticsearch JVM process is using.
These are obviously useful metrics, but are often being measured elsewhere in your monitoring stack. Some stats include the following:
- CPU
- Load
- Memory usage
- Swap usage
- Open file descriptors
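The node-stats API lets you ask for just the sections you care about by appending their names to the URL. The sketch below uses that to spot-check open file descriptors; it is only an illustration (same localhost:9200 and requests assumptions as before), and fields such as max_file_descriptors may not be present on every version or platform.

import requests

# Restrict the response to the os and process sections.
stats = requests.get("http://localhost:9200/_nodes/stats/os,process").json()

for node in stats["nodes"].values():
    process = node.get("process", {})
    print("%s: %s open file descriptors (max: %s)" % (
        node["name"],
        process.get("open_file_descriptors"),
        process.get("max_file_descriptors", "unknown"),
    ))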
JVM Section
The jvm
section contains some critical information about the JVM process that
is running Elasticsearch. Most important, it contains garbage collection details,
which have a large impact on the stability of your Elasticsearch cluster.
Because garbage collection is so critical to Elasticsearch, you should become intimately
familiar with this section of the node-stats
API:
"jvm": {
   "timestamp": 1408556438203,
   "uptime_in_millis": 14457,
   "mem": {
      "heap_used_in_bytes": 457252160,
      "heap_used_percent": 44,
      "heap_committed_in_bytes": 1038876672,
      "heap_max_in_bytes": 1038876672,
      "non_heap_used_in_bytes": 38680680,
      "non_heap_committed_in_bytes": 38993920,
- The jvm section first lists some general stats about heap memory usage. You can see how much of the heap is being used, how much is committed (actually allocated to the process), and the max size the heap is allowed to grow to. Ideally, heap_committed_in_bytes should be identical to heap_max_in_bytes. If the committed size is smaller, the JVM will have to resize the heap eventually—and this is a very expensive process. If your numbers are not identical, see Heap: Sizing and Swapping for how to configure it correctly.
The heap_used_percent metric is a useful number to keep an eye on. Elasticsearch is configured to initiate GCs when the heap reaches 75% full. If your node is consistently >= 75%, your node is experiencing memory pressure. This is a warning sign that slow GCs may be in your near future.
If the heap usage is consistently >= 85%, you are in trouble. Heaps over 90–95% are at risk of horrible performance, with long 10–30s GCs at best and out-of-memory (OOM) exceptions at worst.
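If you are not using Marvel, those 75% and 85% thresholds are simple enough to alert on yourself. A minimal sketch, again assuming a node at http://localhost:9200 and the requests library:

import requests

stats = requests.get("http://localhost:9200/_nodes/stats/jvm").json()

for node in stats["nodes"].values():
    heap_pct = node["jvm"]["mem"]["heap_used_percent"]
    if heap_pct >= 85:
        status = "trouble: expect long GCs or OOM exceptions"
    elif heap_pct >= 75:
        status = "memory pressure: GCs are kicking in"
    else:
        status = "ok"
    print("%s: heap %d%% used (%s)" % (node["name"], heap_pct, status))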
"pools": { "young": { "used_in_bytes": 138467752, "max_in_bytes": 279183360, "peak_used_in_bytes": 279183360, "peak_max_in_bytes": 279183360 }, "survivor": { "used_in_bytes": 34865152, "max_in_bytes": 34865152, "peak_used_in_bytes": 34865152, "peak_max_in_bytes": 34865152 }, "old": { "used_in_bytes": 283919256, "max_in_bytes": 724828160, "peak_used_in_bytes": 283919256, "peak_max_in_bytes": 724828160 } } },
- The young, survivor, and old sections will give you a breakdown of memory usage of each generation in the GC. These stats are handy for keeping an eye on relative sizes, but are often not overly important when debugging problems.
"gc": {
   "collectors": {
      "young": {
         "collection_count": 13,
         "collection_time_in_millis": 923
      },
      "old": {
         "collection_count": 0,
         "collection_time_in_millis": 0
      }
   }
}
}
- The gc section shows the garbage collection counts and cumulative time for both young and old generations. You can safely ignore the young generation counts for the most part: this number will usually be large. That is perfectly normal.
In contrast, the old generation collection count should remain small, and have a small collection_time_in_millis. These are cumulative counts, so it is hard to give an exact number when you should start worrying (for example, a node with a one-year uptime will have a large count even if it is healthy). This is one of the reasons that tools such as Marvel are so helpful. GC counts over time are the important consideration.
Time spent GC’ing is also important. For example, a certain amount of garbage is generated while indexing documents. This is normal and causes a GC every now and then. These GCs are almost always fast and have little effect on the node: young generation takes a millisecond or two, and old generation takes a few hundred milliseconds. This is much different from 10-second GCs.
Our best advice is to collect collection counts and duration periodically (or use Marvel) and keep an eye out for frequent GCs. You can also enable slow-GC logging, discussed in Logging.
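Because the counters are cumulative, it is the rate of change that matters. The sketch below samples the old-generation collector twice and reports the delta; the 60-second interval is an arbitrary choice, and in practice you would feed these deltas into whatever graphing or alerting system you already run (or let Marvel collect them for you). Same localhost:9200 and requests assumptions as before.

import time
import requests

def old_gen_stats():
    # Returns {node name: old-generation collector stats} for every node.
    stats = requests.get("http://localhost:9200/_nodes/stats/jvm").json()
    return {
        node["name"]: node["jvm"]["gc"]["collectors"]["old"]
        for node in stats["nodes"].values()
    }

before = old_gen_stats()
time.sleep(60)  # sample interval; tune to taste
after = old_gen_stats()

for name, gc in after.items():
    prev = before.get(name, gc)
    count = gc["collection_count"] - prev["collection_count"]
    millis = gc["collection_time_in_millis"] - prev["collection_time_in_millis"]
    if count:
        print("%s: %d old-gen GCs in the last minute (%d ms total)" % (name, count, millis))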
Threadpool Section
Elasticsearch maintains threadpools internally. These threadpools cooperate to get work done, passing work between each other as necessary. In general, you don’t need to configure or tune the threadpools, but it is sometimes useful to see their stats so you can gain insight into how your cluster is behaving.
There are about a dozen threadpools, but they all share the same format:
"index": {
   "threads": 1,
   "queue": 0,
   "active": 0,
   "rejected": 0,
   "largest": 1,
   "completed": 1
}
Each threadpool lists the number of threads that are configured (threads), how many of those threads are actively processing some work (active), and how many work units are sitting in a queue (queue).
If the queue fills up to its limit, new work units will begin to be rejected, and
you will see that reflected in the rejected
statistic. This is often a sign
that your cluster is starting to bottleneck on some resources, since a full
queue means your node/cluster is processing at maximum speed but unable to keep
up with the influx of work.
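Rejections are the statistic worth alerting on. Here is a hedged sketch that walks every threadpool on every node and flags nonzero rejected counts, under the same localhost:9200 and requests assumptions as the earlier snippets:

import requests

stats = requests.get("http://localhost:9200/_nodes/stats/thread_pool").json()

for node in stats["nodes"].values():
    for pool_name, pool in node["thread_pool"].items():
        if pool.get("rejected", 0) > 0:
            print("%s: threadpool '%s' rejected %d work units (queue: %d)" % (
                node["name"], pool_name, pool["rejected"], pool.get("queue", 0)))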
There are a dozen threadpools. Most you can safely ignore, but a few are good to keep an eye on:
- indexing: Threadpool for normal indexing requests
- bulk: Bulk requests, which are distinct from the nonbulk indexing requests
- get: Get-by-ID operations
- search: All search and query requests
- merging: Threadpool dedicated to managing Lucene merges
FS and Network Sections
Continuing down the node-stats
API, you’ll see a bunch of statistics about your
filesystem: free space, data directory paths, disk I/O stats, and more. If you are
not monitoring free disk space, you can get those stats here. The disk I/O stats
are also handy, but often more specialized command-line tools (iostat
, for example)
are more useful.
Obviously, Elasticsearch has a difficult time functioning if you run out of disk space—so make sure you don’t.
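A quick way to make sure you don’t run out is to watch the fs section directly. This sketch assumes the fs.total.available_in_bytes and fs.total.total_in_bytes fields reported by this generation of Elasticsearch, plus the usual localhost:9200 and requests assumptions; the 15% threshold is purely illustrative.

import requests

stats = requests.get("http://localhost:9200/_nodes/stats/fs").json()

for node in stats["nodes"].values():
    total = node["fs"]["total"]
    free_pct = 100.0 * total["available_in_bytes"] / total["total_in_bytes"]
    print("%s: %.1f%% disk free" % (node["name"], free_pct))
    if free_pct < 15:  # illustrative threshold, not an Elasticsearch default
        print("  low disk space: merging and shard allocation will suffer")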
There are also two sections on network statistics:
"transport": {
   "server_open": 13,
   "rx_count": 11696,
   "rx_size_in_bytes": 1525774,
   "tx_count": 10282,
   "tx_size_in_bytes": 1440101928
},
"http": {
   "current_open": 4,
   "total_opened": 23
},
- transport shows some basic stats about the transport address. This relates to inter-node communication (often on port 9300) and any transport client or node client connections. Don’t worry if you see many connections here; Elasticsearch maintains a large number of connections between nodes.
- http represents stats about the HTTP port (often 9200). If you see a very large total_opened number that is constantly increasing, that is a sure sign that one of your HTTP clients is not using keep-alive connections. Persistent, keep-alive connections are important for performance, since building up and tearing down sockets is expensive (and wastes file descriptors). Make sure your clients are configured appropriately.
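One way to catch a client that is not using keep-alive is to watch how quickly total_opened climbs. The sketch below samples it twice and reports the difference; the one-minute interval and the threshold of 100 new connections are arbitrary illustrative choices, and the usual localhost:9200 and requests assumptions apply.

import time
import requests

def total_opened():
    stats = requests.get("http://localhost:9200/_nodes/stats/http").json()
    return {node["name"]: node["http"]["total_opened"] for node in stats["nodes"].values()}

before = total_opened()
time.sleep(60)
after = total_opened()

for name, total in after.items():
    opened = total - before.get(name, total)
    if opened > 100:  # arbitrary threshold for illustration
        print("%s opened %d HTTP connections in a minute: check client keep-alive" % (name, opened))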
Circuit Breaker
Finally, we come to the last section: stats about the fielddata circuit breaker (introduced in Circuit Breaker):
"fielddata_breaker": {
   "maximum_size_in_bytes": 623326003,
   "maximum_size": "594.4mb",
   "estimated_size_in_bytes": 0,
   "estimated_size": "0b",
   "overhead": 1.03,
   "tripped": 0
}
Here, you can determine the maximum circuit-breaker size (for example, at what size the circuit breaker will trip if a query attempts to use more memory). This section will also let you know the number of times the circuit breaker has been tripped, and the currently configured overhead. The overhead is used to pad estimates, because some queries are more difficult to estimate than others.
The main thing to watch is the tripped
metric. If this number is large or
consistently increasing, it’s a sign that your queries may need to be optimized
or that you may need to obtain more memory (either per box or by adding more
nodes).
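A simple check of the tripped counter can round out the same script, hedged as before (localhost:9200, the requests library). Note that the fielddata_breaker key shown above is the older response layout; more recent releases nest the same numbers under a breakers section, so this sketch looks in both places.

import requests

stats = requests.get("http://localhost:9200/_nodes/stats").json()

for node in stats["nodes"].values():
    # Older releases: top-level "fielddata_breaker"; newer: "breakers" -> "fielddata".
    breaker = node.get("fielddata_breaker") or node.get("breakers", {}).get("fielddata", {})
    tripped = breaker.get("tripped", 0)
    if tripped:
        print("%s: fielddata circuit breaker tripped %d times" % (node["name"], tripped))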