Closing Thoughts
This section covered a lot of ground, and a lot of deeply technical issues. Aggregations bring a power and flexibility to Elasticsearch that is hard to overstate. The ability to nest buckets and metrics, to quickly approximate cardinality and percentiles, to find statistical anomalies in your data, all while operating on near-real-time data and in parallel with full-text search: these capabilities are game-changers for many organizations.
Once you start using aggregations, you'll find dozens of other candidate uses for them. Real-time reporting and analytics are central to many organizations, whether over business intelligence data or server logs.
Elasticsearch has made great strides in becoming more memory-friendly by defaulting to doc values for most fields, but the fact that analyzed string fields still require heap-resident fielddata means you must remain vigilant.
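As an illustration, here is a minimal mapping sketch, assuming a hypothetical logs index with an event type and a status field. In 2.x, a not_analyzed string field uses doc values by default, so aggregations on it read column-oriented data from disk instead of uninverting the field into heap-resident fielddata:

    PUT /logs
    {
      "mappings": {
        "event": {
          "properties": {
            "status": {
              "type": "string",
              "index": "not_analyzed"
            }
          }
        }
      }
    }

Aggregating on status now leverages doc values; an analyzed version of the same field would have to build fielddata on the heap.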
The management of this memory can take several forms, depending on your particular use case:
- During the planning stage, attempt to organize your data so that aggregations are run on not_analyzed strings instead of analyzed strings, so that doc values may be leveraged (as in the mapping sketch above).
- While testing, verify that analysis chains are not creating high-cardinality fields that are later aggregated on.
- At search time, by utilizing approximate aggregations and data filtering (see the cardinality sketch after this list)
- At a node level, by setting hard memory and dynamic circuit-breaker limits (see the settings sketch after this list)
- At an operations level, by monitoring memory usage and controlling slow garbage-collection cycles, potentially by adding more nodes to the cluster (see the stats request after this list)
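For the search-time approach, here is a sketch of an approximate aggregation, assuming a hypothetical visitor_id field. The cardinality metric is based on HyperLogLog++, and lowering precision_threshold trades a little accuracy for a small, fixed memory footprint per bucket:

    GET /logs/_search
    {
      "size": 0,
      "aggs": {
        "distinct_visitors": {
          "cardinality": {
            "field": "visitor_id",
            "precision_threshold": 100
          }
        }
      }
    }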
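For node-level limits, the circuit-breaker settings are dynamic and can be tightened on a live cluster. The percentages below are illustrative values for this sketch, not recommendations; pick limits appropriate to your heap size and workload:

    PUT /_cluster/settings
    {
      "persistent": {
        "indices.breaker.fielddata.limit": "40%",
        "indices.breaker.request.limit": "30%"
      }
    }

Note that the fielddata cache size itself (indices.fielddata.cache.size) is a static setting and must be configured in elasticsearch.yml before a node starts.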
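And at the operations level, heap pressure and breaker activity can be watched through the node stats API. The jvm metric reports heap usage and garbage-collection timings, while the breaker metric reports estimated sizes and trip counts against the limits above:

    GET /_nodes/stats/jvm,breaker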
Most deployments will use one or more of the preceding methods. The exact combination is highly dependent on your particular environment.
Whatever path you take, it is important to evaluate the available options and create both a short-term and a long-term plan. Assess where your memory situation stands today and what (if anything) needs to be done. Then decide what will happen in six months or one year as your data grows. What methods will you use to continue scaling?
It is better to plan out these life cycles of your cluster ahead of time, rather than panicking at 3 a.m. because your cluster is at 90% heap utilization.