Bucket Aggregations

Bucket aggregations don’t calculate metrics over fields like the metrics aggregations do, but instead, they create buckets of documents. Each bucket is associated with a criterion (depending on the aggregation type) which determines whether or not a document in the current context "falls" into it. In other words, the buckets effectively define document sets. In addition to the buckets themselves, the bucket aggregations also compute and return the number of documents that "fell into" each bucket.

Bucket aggregations, as opposed to metrics aggregations, can hold sub-aggregations. These sub-aggregations will be aggregated for the buckets created by their "parent" bucket aggregation.

There are different bucket aggregators, each with a different "bucketing" strategy. Some define a single bucket, some define a fixed number of multiple buckets, and others dynamically create the buckets during the aggregation process.

Children Aggregation

A special single bucket aggregation that enables aggregating from buckets on parent document types to buckets on child documents.

This aggregation relies on the _parent field in the mapping. This aggregation has a single option:

  • type - The child type that the buckets in the parent space should be mapped to.

For example, let’s say we have an index of questions and answers. The answer type has the following _parent field in the mapping:

PUT child_example
{
    "mappings": {
        "answer" : {
            "_parent" : {
                "type" : "question"
            }
        }
    }
}

The question typed documents contain a tags field and the answer typed documents contain an owner field. With the children aggregation the tags buckets can be mapped to the owner buckets in a single request even though the two fields exist in two different kinds of documents.

An example of a question typed document:

PUT child_example/question/1
{
    "body": "<p>I have Windows 2003 server and i bought a new Windows 2008 server...",
    "title": "Whats the best way to file transfer my site from server to a newer one?",
    "tags": [
        "windows-server-2003",
        "windows-server-2008",
        "file-transfer"
    ]
}

Examples of answer typed documents:

PUT child_example/answer/1?parent=1&refresh
{
    "owner": {
        "location": "Norfolk, United Kingdom",
        "display_name": "Sam",
        "id": 48
    },
    "body": "<p>Unfortunately you're pretty much limited to FTP...",
    "creation_date": "2009-05-04T13:45:37.030"
}
PUT child_example/answer/2?parent=1&refresh
{
    "owner": {
        "location": "Norfolk, United Kingdom",
        "display_name": "Troll",
        "id": 49
    },
    "body": "<p>Use Linux...",
    "creation_date": "2009-05-05T13:45:37.030"
}

The following request can be built that connects the two together:

POST child_example/_search?size=0
{
  "aggs": {
    "top-tags": {
      "terms": {
        "field": "tags.keyword",
        "size": 10
      },
      "aggs": {
        "to-answers": {
          "children": {
            "type" : "answer" 
          },
          "aggs": {
            "top-names": {
              "terms": {
                "field": "owner.display_name.keyword",
                "size": 10
              }
            }
          }
        }
      }
    }
  }
}

The type points to the type / mapping with the name answer.

The above example returns the top question tags and per tag the top answer owners.

Possible response:

{
  "timed_out": false,
  "took": 25,
  "_shards": { "total": 5, "successful": 5, "failed": 0 },
  "hits": { "total": 3, "max_score": 0.0, "hits": [] },
  "aggregations": {
    "top-tags": {
      "doc_count_error_upper_bound": 0,
      "sum_other_doc_count": 0,
      "buckets": [
        {
          "key": "file-transfer",
          "doc_count": 1, 
          "to-answers": {
            "doc_count": 2, 
            "top-names": {
              "doc_count_error_upper_bound": 0,
              "sum_other_doc_count": 0,
              "buckets": [
                {
                  "key": "Sam",
                  "doc_count": 1
                },
                {
                  "key": "Troll",
                  "doc_count": 1
                }
              ]
            }
          }
        },
        {
          "key": "windows-server-2003",
          "doc_count": 1, 
          "to-answers": {
            "doc_count": 2, 
            "top-names": {
              "doc_count_error_upper_bound": 0,
              "sum_other_doc_count": 0,
              "buckets": [
                {
                  "key": "Sam",
                  "doc_count": 1
                },
                {
                  "key": "Troll",
                  "doc_count": 1
                }
              ]
            }
          }
        },
        {
          "key": "windows-server-2008",
          "doc_count": 1, 
          "to-answers": {
            "doc_count": 2, 
            "top-names": {
              "doc_count_error_upper_bound": 0,
              "sum_other_doc_count": 0,
              "buckets": [
                {
                  "key": "Sam",
                  "doc_count": 1
                },
                {
                  "key": "Troll",
                  "doc_count": 1
                }
              ]
            }
          }
        }
      ]
    }
  }
}

The number of question documents with the tag file-transfer, windows-server-2003, etc.

The number of answer documents that are related to question documents with the tag file-transfer, windows-server-2003, etc.

Date Histogram Aggregation

A multi-bucket aggregation similar to the histogram except it can only be applied to date values. Since dates are represented internally in Elasticsearch as long values, it is possible to use the normal histogram on dates as well, though accuracy will be compromised. The reason is that time-based intervals are not fixed (think of leap years and the varying number of days in a month). For this reason, we need special support for time-based data. From a functionality perspective, this histogram supports the same features as the normal histogram. The main difference is that the interval can be specified by date/time expressions.

Requesting bucket intervals of a month.

{
    "aggs" : {
        "articles_over_time" : {
            "date_histogram" : {
                "field" : "date",
                "interval" : "month"
            }
        }
    }
}

Available expressions for interval: year, quarter, month, week, day, hour, minute, second

Time values can also be specified via abbreviations supported by time units parsing. Note that fractional time values are not supported, but you can address this by shifting to another time unit (e.g., 1.5h could instead be specified as 90m).

{
    "aggs" : {
        "articles_over_time" : {
            "date_histogram" : {
                "field" : "date",
                "interval" : "90m"
            }
        }
    }
}

Keys

Internally, a date is represented as a 64 bit number representing a timestamp in milliseconds-since-the-epoch. These timestamps are returned as the bucket keys. The key_as_string is the same timestamp converted to a formatted date string using the format specified with the format parameter:

Tip

If no format is specified, then it will use the first date format specified in the field mapping.

{
    "aggs" : {
        "articles_over_time" : {
            "date_histogram" : {
                "field" : "date",
                "interval" : "1M",
                "format" : "yyyy-MM-dd" 
            }
        }
    }
}

Supports expressive date format pattern

Response:

{
    "aggregations": {
        "articles_over_time": {
            "buckets": [
                {
                    "key_as_string": "2013-02-02",
                    "key": 1328140800000,
                    "doc_count": 1
                },
                {
                    "key_as_string": "2013-03-02",
                    "key": 1330646400000,
                    "doc_count": 2
                },
                ...
            ]
        }
    }
}

Time Zone

Date-times are stored in Elasticsearch in UTC. By default, all bucketing and rounding is also done in UTC. The time_zone parameter can be used to indicate that bucketing should use a different time zone.

Time zones may either be specified as an ISO 8601 UTC offset (e.g. +01:00 or -08:00) or as a timezone id, an identifier used in the TZ database like America/Los_Angeles.

Consider the following example:

PUT my_index/log/1
{
  "date": "2015-10-01T00:30:00Z"
}

PUT my_index/log/2
{
  "date": "2015-10-01T01:30:00Z"
}

GET my_index/_search?size=0
{
  "aggs": {
    "by_day": {
      "date_histogram": {
        "field":     "date",
        "interval":  "day"
      }
    }
  }
}

UTC is used if no time zone is specified, which would result in both of these documents being placed into the same day bucket, which starts at midnight UTC on 1 October 2015:

"aggregations": {
  "by_day": {
    "buckets": [
      {
        "key_as_string": "2015-10-01T00:00:00.000Z",
        "key":           1443657600000,
        "doc_count":     2
      }
    ]
  }
}

If a time_zone of -01:00 is specified, then midnight starts at one hour before midnight UTC:

GET my_index/_search?size=0
{
  "aggs": {
    "by_day": {
      "date_histogram": {
        "field":     "date",
        "interval":  "day",
        "time_zone": "-01:00"
      }
    }
  }
}

Now the first document falls into the bucket for 30 September 2015, while the second document falls into the bucket for 1 October 2015:

"aggregations": {
  "by_day": {
    "buckets": [
      {
        "key_as_string": "2015-09-30T00:00:00.000-01:00", 
        "key": 1443571200000,
        "doc_count": 1
      },
      {
        "key_as_string": "2015-10-01T00:00:00.000-01:00", 
        "key": 1443657600000,
        "doc_count": 1
      }
    ]
  }
}

The key_as_string value represents midnight on each day in the specified time zone.

Warning

When using time zones that follow DST (daylight savings time) changes, buckets close to the moment when those changes happen can have slightly different sizes than would be expected from the interval used. For example, consider a DST start in the CET time zone: on 27 March 2016 at 2am, clocks were turned forward 1 hour to 3am local time. When using day as the interval, the bucket covering that day will only hold data for 23 hours instead of the usual 24 hours for other buckets. The same is true for shorter intervals, e.g. 12h: here, there will only be an 11h bucket on the morning of 27 March when the DST shift happens.

Offset

The offset parameter is used to change the start value of each bucket by the specified positive (+) or negative (-) offset duration, such as 1h for an hour, or 1M for a month. See the section called “Time units” for more possible time duration options.

For instance, when using an interval of day, each bucket runs from midnight to midnight. Setting the offset parameter to +6h would change each bucket to run from 6am to 6am:

PUT my_index/log/1
{
  "date": "2015-10-01T05:30:00Z"
}

PUT my_index/log/2
{
  "date": "2015-10-01T06:30:00Z"
}

GET my_index/_search?size=0
{
  "aggs": {
    "by_day": {
      "date_histogram": {
        "field":     "date",
        "interval":  "day",
        "offset":    "+6h"
      }
    }
  }
}

Instead of a single bucket starting at midnight, the above request groups the documents into buckets starting at 6am:

"aggregations": {
  "by_day": {
    "buckets": [
      {
        "key_as_string": "2015-09-30T06:00:00.000Z",
        "key": 1443592800000,
        "doc_count": 1
      },
      {
        "key_as_string": "2015-10-01T06:00:00.000Z",
        "key": 1443679200000,
        "doc_count": 1
      }
    ]
  }
}
Note

The start offset of each bucket is calculated after the time_zone adjustments have been made.

Scripts

Like with the normal histogram, both document level scripts and value level scripts are supported. It is also possible to control the order of the returned buckets using the order settings and filter the returned buckets based on a min_doc_count setting (by default all buckets between the first bucket that matches documents and the last one are returned). This histogram also supports the extended_bounds setting, which enables extending the bounds of the histogram beyond the data itself (to read more on why you’d want to do that please refer to the explanation here).
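
For instance, a minimal sketch that combines min_doc_count with extended_bounds, reusing the articles_over_time example above (the bounds values here are purely illustrative):

{
    "aggs" : {
        "articles_over_time" : {
            "date_histogram" : {
                "field" : "date",
                "interval" : "month",
                "min_doc_count" : 0,
                "extended_bounds" : {
                    "min" : "2013-01-01",
                    "max" : "2013-12-31"
                }
            }
        }
    }
}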

Missing value

The missing parameter defines how documents that are missing a value should be treated. By default they will be ignored but it is also possible to treat them as if they had a value.

{
    "aggs" : {
        "publish_date" : {
             "date_histogram" : {
                 "field" : "publish_date",
                 "interval": "year",
                 "missing": "2000-01-01" 
             }
         }
    }
}

Documents without a value in the publish_date field will fall into the same bucket as documents that have the value 2000-01-01.

Date Range Aggregation

A range aggregation that is dedicated for date values. The main difference between this aggregation and the normal range aggregation is that the from and to values can be expressed in Date Math expressions, and it is also possible to specify a date format by which the from and to response fields will be returned. Note that this aggregation includes the from value and excludes the to value for each range.

Example:

{
    "aggs": {
        "range": {
            "date_range": {
                "field": "date",
                "format": "MM-yyy",
                "ranges": [
                    { "to": "now-10M/M" }, 
                    { "from": "now-10M/M" } 
                ]
            }
        }
    }
}

< now minus 10 months, rounded down to the start of the month.

>= now minus 10 months, rounded down to the start of the month.

In the example above, we created two range buckets: the first will "bucket" all documents dated prior to 10 months ago and the second will "bucket" all documents dated since 10 months ago.

Response:

{
    ...

    "aggregations": {
        "range": {
            "buckets": [
                {
                    "to": 1.3437792E+12,
                    "to_as_string": "08-2012",
                    "doc_count": 7
                },
                {
                    "from": 1.3437792E+12,
                    "from_as_string": "08-2012",
                    "doc_count": 2
                }
            ]
        }
    }
}

Date Format/Pattern

Note

this information was copied from JodaDate

All ASCII letters are reserved as format pattern letters, which are defined as follows:

Symbol  Meaning                        Presentation  Examples
G       era                            text          AD
C       century of era (>=0)           number        20
Y       year of era (>=0)              year          1996
x       weekyear                       year          1996
w       week of weekyear               number        27
e       day of week                    number        2
E       day of week                    text          Tuesday; Tue
y       year                           year          1996
D       day of year                    number        189
M       month of year                  month         July; Jul; 07
d       day of month                   number        10
a       halfday of day                 text          PM
K       hour of halfday (0~11)         number        0
h       clockhour of halfday (1~12)    number        12
H       hour of day (0~23)             number        0
k       clockhour of day (1~24)        number        24
m       minute of hour                 number        30
s       second of minute               number        55
S       fraction of second             number        978
z       time zone                      text          Pacific Standard Time; PST
Z       time zone offset/id            zone          -0800; -08:00; America/Los_Angeles
'       escape for text                delimiter     ''

The count of pattern letters determines the format.

Text
If the number of pattern letters is 4 or more, the full form is used; otherwise a short or abbreviated form is used if available.
Number
The minimum number of digits. Shorter numbers are zero-padded to this amount.
Year
Numeric presentation for year and weekyear fields are handled specially. For example, if the count of y is 2, the year will be displayed as the zero-based year of the century, which is two digits.
Month
3 or over, use text, otherwise use number.
Zone
Z outputs offset without a colon, ZZ outputs the offset with a colon, ZZZ or more outputs the zone id.
Zone names
Time zone names (z) cannot be parsed.

Any characters in the pattern that are not in the ranges of [a..z] and [A..Z] will be treated as quoted text. For instance, characters like :, ., ', # and ? will appear in the resulting time text even though they are not enclosed within single quotes.
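
For instance, formatting the instant 10 July 1996 with a few patterns illustrates these rules (the rendered values shown here are for illustration only):

yy         -> 96          (two-letter year: zero-based year of the century)
yyyy       -> 1996
MM         -> 07          (numeric month, zero-padded)
MMM        -> Jul         (3 letters: abbreviated text form)
MMMM       -> July        (4 or more letters: full text form)
dd-MM-yyyy -> 10-07-1996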

Time zone in date range aggregations

Dates can be converted from another time zone to UTC by specifying the time_zone parameter.

Time zones may either be specified as an ISO 8601 UTC offset (e.g. +01:00 or -08:00) or as one of the time zone ids from the TZ database.

The time_zone parameter is also applied to rounding in date math expressions. As an example, to round to the beginning of the day in the CET time zone, you can do the following:

{
   "aggs": {
           "range": {
               "date_range": {
                   "field": "date",
                   "time_zone": "CET",
                   "ranges": [
                      { "to": "2016-02-15/d" }, 
                      { "from": "2016-02-15/d", "to" : "now/d" },
                      { "from": "now/d" }
                  ]
              }
          }
      }
  }

This date will be converted to 2016-02-15T00:00:00.000+01:00.

now/d will be rounded to the beginning of the day in the CET time zone.

Diversified Sampler Aggregation

Warning

This functionality is experimental and may be changed or removed completely in a future release.

A filtering aggregation used to limit any sub aggregations' processing to a sample of the top-scoring documents. Diversity settings are used to limit the number of matches that share a common value such as an "author".

Example use cases:

  • Tightening the focus of analytics to high-relevance matches rather than the potentially very long tail of low-quality matches
  • Removing bias from analytics by ensuring fair representation of content from different sources
  • Reducing the running cost of aggregations that can produce useful results using only samples e.g. significant_terms

Example:

{
    "query": {
        "match": {
            "text": "iphone"
        }
    },
    "aggs": {
        "sample": {
            "diversified_sampler": {
                "shard_size": 200,
                "field" : "user.id"
            },
            "aggs": {
                "keywords": {
                    "significant_terms": {
                        "field": "text"
                    }
                }
            }
        }
    }
}

Response:

{
    ...
        "aggregations": {
        "sample": {
            "doc_count": 1000,
            "keywords": {
                "doc_count": 1000,
                "buckets": [
                    ...
                    {
                        "key": "bend",
                        "doc_count": 58,
                        "score": 37.982536582524276,
                        "bg_count": 103
                    },
                    ....
}

1000 documents were sampled in total because we asked for a maximum of 200 from an index with 5 shards. The cost of performing the nested significant_terms aggregation was therefore limited rather than unbounded.

The results of the significant_terms aggregation are not skewed by any single over-active Twitter user because we asked for a maximum of one tweet from any one user in our sample.

shard_size

The shard_size parameter limits how many top-scoring documents are collected in the sample processed on each shard. The default value is 100.

Controlling diversity

The field or script and max_docs_per_value settings are used to control the maximum number of documents collected on any one shard which share a common value. The choice of value (e.g. author) is loaded from a regular field or derived dynamically by a script.

The aggregation will throw an error if the choice of field or script produces multiple values for a document. It is currently not possible to offer this form of de-duplication using many values, primarily due to concerns over efficiency.

Note

Any good market researcher will tell you that when working with samples of data it is important that the sample represents a healthy variety of opinions rather than being skewed by any single voice. The same is true with aggregations and sampling with these diversify settings can offer a way to remove the bias in your content (an over-populated geography, a large spike in a timeline or an over-active forum spammer).

Field

Controlling diversity using a field:

{
    "aggs" : {
        "sample" : {
            "diversified_sampler" : {
                "field" : "author",
                "max_docs_per_value" : 3
            }
        }
    }
}

Note that the max_docs_per_value setting applies on a per-shard basis only for the purposes of shard-local sampling. It is not intended as a way of providing a global de-duplication feature on search results.

Script

Controlling diversity using a script:

{
    "aggs" : {
        "sample" : {
            "diversified_sampler" : {
                "script" : {
                    "lang" : "painless",
                    "inline" : "doc['author'].value + '/' + doc['genre'].value"
                }
            }
        }
    }
}

Note in the above example we chose to use the default max_docs_per_value setting of 1 and combine author and genre fields to ensure each shard sample has, at most, one match for an author/genre pair.

execution_hint

When using the settings to control diversity, the optional execution_hint setting can influence the management of the values used for de-duplication. Each option will hold up to shard_size values in memory while performing de-duplication but the type of value held can be controlled as follows:

  • hold field values directly (map)
  • hold ordinals of the field as determined by the Lucene index (global_ordinals)
  • hold hashes of the field values - with potential for hash collisions (bytes_hash)

The default setting is to use global_ordinals if this information is available from the Lucene index, reverting to map if not. The bytes_hash setting may prove faster in some cases but introduces the possibility of false positives in the de-duplication logic due to hash collisions. Please note that Elasticsearch will ignore the choice of execution hint if it is not applicable, and that there is no backward compatibility guarantee on these hints.
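
For example, a minimal sketch that forces the map strategy while de-duplicating on the author field used in the earlier examples (the max_docs_per_value value is illustrative):

{
    "aggs" : {
        "sample" : {
            "diversified_sampler" : {
                "field" : "author",
                "max_docs_per_value" : 3,
                "execution_hint" : "map"
            }
        }
    }
}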

Limitations

Cannot be nested under breadth_first aggregations

Being a quality-based filter the sampler aggregation needs access to the relevance score produced for each document. It therefore cannot be nested under a terms aggregation which has the collect_mode switched from the default depth_first mode to breadth_first as this discards scores. In this situation an error will be thrown.

Limited de-dup logic.

The de-duplication logic in the diversify settings applies only at a shard level so will not apply across shards.

No specialized syntax for geo/date fields

Currently the syntax for defining the diversifying values is defined by a choice of field or script - there is no added syntactical sugar for expressing geo or date units such as "7d" (7 days). This support may be added in a later release and users will currently have to create these sorts of values using a script.

Filter Aggregation

Defines a single bucket of all the documents in the current document set context that match a specified filter. Often this will be used to narrow down the current aggregation context to a specific set of documents.

Example:

{
    "aggs" : {
        "red_products" : {
            "filter" : { "term": { "color": "red" } },
            "aggs" : {
                "avg_price" : { "avg" : { "field" : "price" } }
            }
        }
    }
}

In the above example, we calculate the average price of all the products that are red.

Response:

{
    ...

    "aggs" : {
        "red_products" : {
            "doc_count" : 100,
            "avg_price" : { "value" : 56.3 }
        }
    }
}

Filters Aggregation

Defines a multi bucket aggregation where each bucket is associated with a filter. Each bucket will collect all documents that match its associated filter.

Example:

PUT /logs/message/_bulk?refresh
{ "index" : { "_id" : 1 } }
{ "body" : "warning: page could not be rendered" }
{ "index" : { "_id" : 2 } }
{ "body" : "authentication error" }
{ "index" : { "_id" : 3 } }
{ "body" : "warning: connection timed out" }

GET logs/_search
{
  "size": 0,
  "aggs" : {
    "messages" : {
      "filters" : {
        "filters" : {
          "errors" :   { "match" : { "body" : "error"   }},
          "warnings" : { "match" : { "body" : "warning" }}
        }
      }
    }
  }
}

In the above example, we analyze log messages. The aggregation will build two collections (buckets) of log messages - one for all those containing an error, and another for all those containing a warning.

Response:

{
  "took": 9,
  "timed_out": false,
  "_shards": ...,
  "hits": ...,
  "aggregations": {
    "messages": {
      "buckets": {
        "errors": {
          "doc_count": 1
        },
        "warnings": {
          "doc_count": 2
        }
      }
    }
  }
}

Anonymous filters

The filters field can also be provided as an array of filters, as in the following request:

GET logs/_search
{
  "size": 0,
  "aggs" : {
    "messages" : {
      "filters" : {
        "filters" : [
          { "match" : { "body" : "error"   }},
          { "match" : { "body" : "warning" }}
        ]
      }
    }
  }
}

The filtered buckets are returned in the same order as provided in the request. The response for this example would be:

{
  "took": 4,
  "timed_out": false,
  "_shards": ...,
  "hits": ...,
  "aggregations": {
    "messages": {
      "buckets": [
        {
          "doc_count": 1
        },
        {
          "doc_count": 2
        }
      ]
    }
  }
}

Other Bucket

The other_bucket parameter can be set to add a bucket to the response which will contain all documents that do not match any of the given filters. The value of this parameter can be as follows:

false
Does not compute the other bucket
true
Returns the other bucket either as an additional bucket (named _other_ by default) if named filters are being used, or as the last bucket if anonymous filters are being used

The other_bucket_key parameter can be used to set the key for the other bucket to a value other than the default _other_. Setting this parameter will implicitly set the other_bucket parameter to true.
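
For instance, a minimal sketch that combines anonymous filters with other_bucket; documents matching neither filter would be counted in a last, unnamed bucket:

GET logs/_search
{
  "size": 0,
  "aggs" : {
    "messages" : {
      "filters" : {
        "other_bucket": true,
        "filters" : [
          { "match" : { "body" : "error"   }},
          { "match" : { "body" : "warning" }}
        ]
      }
    }
  }
}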

The following snippet shows a response where the other bucket is requested to be named other_messages.

PUT logs/message/4?refresh
{
  "body": "info: user Bob logged out"
}

GET logs/_search
{
  "size": 0,
  "aggs" : {
    "messages" : {
      "filters" : {
        "other_bucket_key": "other_messages",
        "filters" : {
          "errors" :   { "match" : { "body" : "error"   }},
          "warnings" : { "match" : { "body" : "warning" }}
        }
      }
    }
  }
}

The response would be something like the following:

{
  "took": 3,
  "timed_out": false,
  "_shards": ...,
  "hits": ...,
  "aggregations": {
    "messages": {
      "buckets": {
        "errors": {
          "doc_count": 1
        },
        "warnings": {
          "doc_count": 2
        },
        "other_messages": {
          "doc_count": 1
        }
      }
    }
  }
}

Geo Distance Aggregation

A multi-bucket aggregation that works on geo_point fields and conceptually works very similarly to the range aggregation. The user can define a point of origin and a set of distance range buckets. The aggregation evaluates the distance of each document value from the origin point and determines the bucket it belongs to based on the ranges (a document belongs to a bucket if the distance between the document and the origin falls within the distance range of the bucket).

{
    "aggs" : {
        "rings_around_amsterdam" : {
            "geo_distance" : {
                "field" : "location",
                "origin" : "52.3760, 4.894",
                "ranges" : [
                    { "to" : 100 },
                    { "from" : 100, "to" : 300 },
                    { "from" : 300 }
                ]
            }
        }
    }
}

Response:

{
    "aggregations": {
        "rings" : {
            "buckets": [
                {
                    "key": "*-100.0",
                    "from": 0,
                    "to": 100.0,
                    "doc_count": 3
                },
                {
                    "key": "100.0-300.0",
                    "from": 100.0,
                    "to": 300.0,
                    "doc_count": 1
                },
                {
                    "key": "300.0-*",
                    "from": 300.0,
                    "doc_count": 7
                }
            ]
        }
    }
}

The specified field must be of type geo_point (which can only be set explicitly in the mappings). It can also hold an array of geo_point values, in which case all of them will be taken into account during aggregation. The origin point can accept all formats supported by the geo_point type (the object format is shown in the sketch after this list):

  • Object format: { "lat" : 52.3760, "lon" : 4.894 } - this is the safest format as it is the most explicit about the lat & lon values
  • String format: "52.3760, 4.894" - where the first number is the lat and the second is the lon
  • Array format: [4.894, 52.3760] - which is based on the GeoJson standard and where the first number is the lon and the second one is the lat
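
For example, the rings_around_amsterdam request above could equivalently specify the origin in object format:

{
    "aggs" : {
        "rings_around_amsterdam" : {
            "geo_distance" : {
                "field" : "location",
                "origin" : { "lat" : 52.3760, "lon" : 4.894 },
                "ranges" : [
                    { "to" : 100 },
                    { "from" : 100, "to" : 300 },
                    { "from" : 300 }
                ]
            }
        }
    }
}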

By default, the distance unit is m (metres) but it can also accept: mi (miles), in (inches), yd (yards), km (kilometers), cm (centimeters), mm (millimeters).

{
    "aggs" : {
        "rings" : {
            "geo_distance" : {
                "field" : "location",
                "origin" : "52.3760, 4.894",
                "unit" : "mi", 
                "ranges" : [
                    { "to" : 100 },
                    { "from" : 100, "to" : 300 },
                    { "from" : 300 }
                ]
            }
        }
    }
}

The distances will be computed as miles

There are three distance calculation modes: sloppy_arc (the default), arc (most accurate) and plane (fastest). The arc calculation is the most accurate one but also the most expensive one in terms of performance. The sloppy_arc is faster but less accurate. The plane is the fastest but least accurate distance function. Consider using plane when your search context is "narrow" and spans smaller geographical areas (like cities or even countries). plane may return higher error margins for searches across very large areas (e.g. cross continent search). The distance calculation type can be set using the distance_type parameter:

{
    "aggs" : {
        "rings" : {
            "geo_distance" : {
                "field" : "location",
                "origin" : "52.3760, 4.894",
                "distance_type" : "plane",
                "ranges" : [
                    { "to" : 100 },
                    { "from" : 100, "to" : 300 },
                    { "from" : 300 }
                ]
            }
        }
    }
}

GeoHash grid Aggregation

A multi-bucket aggregation that works on geo_point fields and groups points into buckets that represent cells in a grid. The resulting grid can be sparse and only contains cells that have matching data. Each cell is labeled using a geohash which is of user-definable precision.

  • High precision geohashes have a long string length and represent cells that cover only a small area.
  • Low precision geohashes have a short string length and represent cells that each cover a large area.

Geohashes used in this aggregation can have a choice of precision between 1 and 12.

Warning

The highest-precision geohash of length 12 produces cells that cover less than a square metre of land and so high-precision requests can be very costly in terms of RAM and result sizes. Please see the example below on how to first filter the aggregation to a smaller geographic area before requesting high-levels of detail.

The specified field must be of type geo_point (which can only be set explicitly in the mappings) and it can also hold an array of geo_point fields, in which case all points will be taken into account during aggregation.

Simple low-precision request

{
    "aggregations" : {
        "myLarge-GrainGeoHashGrid" : {
            "geohash_grid" : {
                "field" : "location",
                "precision" : 3
            }
        }
    }
}

Response:

{
    "aggregations": {
        "myLarge-GrainGeoHashGrid": {
            "buckets": [
                {
                    "key": "svz",
                    "doc_count": 10964
                },
                {
                    "key": "sv8",
                    "doc_count": 3198
                }
            ]
        }
    }
}

High-precision requests

When requesting detailed buckets (typically for displaying a "zoomed in" map) a filter like geo_bounding_box should be applied to narrow the subject area otherwise potentially millions of buckets will be created and returned.

{
    "aggregations" : {
        "zoomedInView" : {
            "filter" : {
                "geo_bounding_box" : {
                    "location" : {
                        "top_left" : "51.73, 0.9",
                        "bottom_right" : "51.55, 1.1"
                    }
                }
            },
            "aggregations":{
                "zoom1":{
                    "geohash_grid" : {
                        "field":"location",
                        "precision":8
                    }
                }
            }
        }
    }
 }

Cell dimensions at the equator

The table below shows the metric dimensions for cells covered by various string lengths of geohash. Cell dimensions vary with latitude and so the table is for the worst-case scenario at the equator.

GeoHash length  Area width x height
1               5,009.4km x 4,992.6km
2               1,252.3km x 624.1km
3               156.5km x 156km
4               39.1km x 19.5km
5               4.9km x 4.9km
6               1.2km x 609.4m
7               152.9m x 152.4m
8               38.2m x 19m
9               4.8m x 4.8m
10              1.2m x 59.5cm
11              14.9cm x 14.9cm
12              3.7cm x 1.9cm

Options

field

Mandatory. The name of the field indexed with GeoPoints.

precision

Optional. The string length of the geohashes used to define cells/buckets in the results. Defaults to 5.

size

Optional. The maximum number of geohash buckets to return (defaults to 10,000). When results are trimmed, buckets are prioritised based on the volumes of documents they contain.

shard_size

Optional. To allow for more accurate counting of the top cells returned in the final result, the aggregation defaults to returning max(10,(size x number-of-shards)) buckets from each shard. If this heuristic is undesirable, the number considered from each shard can be overridden using this parameter, as in the sketch below.
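
A minimal sketch combining these options (the aggregation name and the size / shard_size values here are purely illustrative):

{
    "aggregations" : {
        "myGeoHashGrid" : {
            "geohash_grid" : {
                "field" : "location",
                "precision" : 3,
                "size" : 100,
                "shard_size" : 500
            }
        }
    }
}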

Global Aggregation

Defines a single bucket of all the documents within the search execution context. This context is defined by the indices and the document types you’re searching on, but is not influenced by the search query itself.

Note

Global aggregators can only be placed as top level aggregators (it makes no sense to embed a global aggregator within another bucket aggregator)

Example:

{
    "query" : {
        "match" : { "title" : "shirt" }
    },
    "aggs" : {
        "all_products" : {
            "global" : {}, 
            "aggs" : { 
                "avg_price" : { "avg" : { "field" : "price" } }
            }
        }
    }
}

The global aggregation has an empty body

The sub-aggregations that are registered for this global aggregation

The above aggregation demonstrates how one would compute aggregations (avg_price in this example) on all the documents in the search context, regardless of the query (in our example, it will compute the average price over all products in our catalog, not just on the "shirts").

The response for the above aggregation:

{
    ...

    "aggregations" : {
        "all_products" : {
            "doc_count" : 100, 
            "avg_price" : {
                "value" : 56.3
            }
        }
    }
}

The number of documents that were aggregated (in our case, all documents within the search context)

Histogram Aggregation

A multi-bucket values source based aggregation that can be applied on numeric values extracted from the documents. It dynamically builds fixed size (a.k.a. interval) buckets over the values. For example, if the documents have a field that holds a price (numeric), we can configure this aggregation to dynamically build buckets with interval 5 (in case of price it may represent $5). When the aggregation executes, the price field of every document will be evaluated and will be rounded down to its closest bucket - for example, if the price is 32 and the bucket size is 5 then the rounding will yield 30 and thus the document will "fall" into the bucket that is associated with the key 30. To make this more formal, here is the rounding function that is used:

bucket_key = Math.floor((value - offset) / interval) * interval + offset

The interval must be a positive decimal, while the offset must be a decimal in [0, interval[.
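
For example, with an interval of 5 and an offset of 3 (illustrative values), a price of 32 would land in the bucket keyed 28:

bucket_key = Math.floor((32 - 3) / 5) * 5 + 3 = 5 * 5 + 3 = 28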

The following snippet "buckets" the products based on their price by interval of 50:

{
    "aggs" : {
        "prices" : {
            "histogram" : {
                "field" : "price",
                "interval" : 50
            }
        }
    }
}

And the following may be the response:

{
    "aggregations": {
        "prices" : {
            "buckets": [
                {
                    "key": 0,
                    "doc_count": 2
                },
                {
                    "key": 50,
                    "doc_count": 4
                },
                {
                    "key": 100,
                    "doc_count": 0
                },
                {
                    "key": 150,
                    "doc_count": 3
                }
            ]
        }
    }
}

Minimum document count

The response above shows that no document has a price that falls within the range of [100, 150). By default the response will fill gaps in the histogram with empty buckets. It is possible to change that and request buckets with a higher minimum count using the min_doc_count setting:

{
    "aggs" : {
        "prices" : {
            "histogram" : {
                "field" : "price",
                "interval" : 50,
                "min_doc_count" : 1
            }
        }
    }
}

Response:

{
    "aggregations": {
        "prices" : {
            "buckets": [
                {
                    "key": 0,
                    "doc_count": 2
                },
                {
                    "key": 50,
                    "doc_count": 4
                },
                {
                    "key": 150,
                    "doc_count": 3
                }
            ]
        }
    }
}

By default the histogram returns all the buckets within the range of the data itself, that is, the documents with the smallest values (for the field on which the histogram is run) will determine the min bucket (the bucket with the smallest key) and the documents with the highest values will determine the max bucket (the bucket with the highest key). Often, when requesting empty buckets, this causes confusion, specifically when the data is also filtered.

To understand why, let’s look at an example:

Let's say you're filtering your request to get all docs with values between 0 and 500; in addition you'd like to slice the data per price using a histogram with an interval of 50. You also specify "min_doc_count" : 0 as you'd like to get all buckets, even the empty ones. If it happens that all products (documents) have prices higher than 100, the first bucket you'll get will be the one with 100 as its key. This is confusing, as many times you'd also like to get the buckets between 0 and 100.

With the extended_bounds setting, you can now "force" the histogram aggregation to start building buckets at a specific min value and also keep on building buckets up to a max value (even if there are no documents anymore). Using extended_bounds only makes sense when min_doc_count is 0 (the empty buckets will never be returned if min_doc_count is greater than 0).

Note that (as the name suggests) extended_bounds is not filtering buckets. This means that if extended_bounds.min is higher than the values extracted from the documents, the documents will still dictate what the first bucket will be (and the same goes for extended_bounds.max and the last bucket). For filtering buckets, one should nest the histogram aggregation under a range filter aggregation with the appropriate from/to settings (a sketch of this follows the example below).

Example:

{
    "query" : {
        "constant_score" : { "filter": { "range" : { "price" : { "to" : "500" } } } }
    },
    "aggs" : {
        "prices" : {
            "histogram" : {
                "field" : "price",
                "interval" : 50,
                "extended_bounds" : {
                    "min" : 0,
                    "max" : 500
                }
            }
        }
    }
}
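
As noted above, to actually filter the buckets one would instead nest the histogram under a filter aggregation; a minimal sketch (the price_window name is illustrative):

{
    "aggs" : {
        "price_window" : {
            "filter" : { "range" : { "price" : { "from" : 0, "to" : 500 } } },
            "aggs" : {
                "prices" : {
                    "histogram" : {
                        "field" : "price",
                        "interval" : 50
                    }
                }
            }
        }
    }
}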

Order

By default the returned buckets are sorted by their key ascending, though the order behaviour can be controlled using the order setting.

Ordering the buckets by their key - descending:

{
    "aggs" : {
        "prices" : {
            "histogram" : {
                "field" : "price",
                "interval" : 50,
                "order" : { "_key" : "desc" }
            }
        }
    }
}

Ordering the buckets by their doc_count - ascending:

{
    "aggs" : {
        "prices" : {
            "histogram" : {
                "field" : "price",
                "interval" : 50,
                "order" : { "_count" : "asc" }
            }
        }
    }
}

If the histogram aggregation has a direct metrics sub-aggregation, the latter can determine the order of the buckets:

{
    "aggs" : {
        "prices" : {
            "histogram" : {
                "field" : "price",
                "interval" : 50,
                "order" : { "price_stats.min" : "asc" } 
            },
            "aggs" : {
                "price_stats" : { "stats" : {} } 
            }
        }
    }
}

The { "price_stats.min" : "asc" } will sort the buckets based on the min value of their price_stats sub-aggregation.

There is no need to configure the price field for the price_stats aggregation as it will inherit it by default from its parent histogram aggregation.

It is also possible to order the buckets based on a "deeper" aggregation in the hierarchy. This is supported as long as the aggregations in the path are of a single-bucket type, where the last aggregation in the path may either be a single-bucket one or a metrics one. If it's a single-bucket type, the order will be defined by the number of docs in the bucket (i.e. doc_count); in case it's a metrics one, the same rules as above apply (the path must indicate the metric name to sort by in case of a multi-value metrics aggregation, and in case of a single-value metrics aggregation the sort will be applied to that value).

The path must be defined in the following form:

AGG_SEPARATOR       =  '>' ;
METRIC_SEPARATOR    =  '.' ;
AGG_NAME            =  <the name of the aggregation> ;
METRIC              =  <the name of the metric (in case of multi-value metrics aggregation)> ;
PATH                =  <AGG_NAME> [ <AGG_SEPARATOR>, <AGG_NAME> ]* [ <METRIC_SEPARATOR>, <METRIC> ] ;
{
    "aggs" : {
        "prices" : {
            "histogram" : {
                "field" : "price",
                "interval" : 50,
                "order" : { "promoted_products>rating_stats.avg" : "desc" } 
            },
            "aggs" : {
                "promoted_products" : {
                    "filter" : { "term" : { "promoted" : true }},
                    "aggs" : {
                        "rating_stats" : { "stats" : { "field" : "rating" }}
                    }
                }
            }
        }
    }
}

The above will sort the buckets based on the avg rating among the promoted products

Offset

By default the bucket keys start with 0 and then continue in even spaced steps of interval, e.g. if the interval is 10 the first buckets (assuming there is data inside them) will be [0 - 9], [10-19], [20-29]. The bucket boundaries can be shifted by using the offset option.

This can be best illustrated with an example. If there are 10 documents with values ranging from 5 to 14, using interval 10 will result in two buckets with 5 documents each. If an additional offset 5 is used, there will be only one single bucket [5-14] containing all the 10 documents.
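
A minimal sketch of such a request, reusing the price field from the earlier examples (the offset value is illustrative):

{
    "aggs" : {
        "prices" : {
            "histogram" : {
                "field" : "price",
                "interval" : 10,
                "offset" : 5
            }
        }
    }
}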

Response Format

By default, the buckets are returned as an ordered array. It is also possible to request the response as a hash instead, keyed by the buckets' keys:

{
    "aggs" : {
        "prices" : {
            "histogram" : {
                "field" : "price",
                "interval" : 50,
                "keyed" : true
            }
        }
    }
}

Response:

{
    "aggregations": {
        "prices": {
            "buckets": {
                "0": {
                    "key": 0,
                    "doc_count": 2
                },
                "50": {
                    "key": 50,
                    "doc_count": 4
                },
                "150": {
                    "key": 150,
                    "doc_count": 3
                }
            }
        }
    }
}

Missing value

The missing parameter defines how documents that are missing a value should be treated. By default they will be ignored but it is also possible to treat them as if they had a value.

{
    "aggs" : {
        "quantity" : {
             "histogram" : {
                 "field" : "quantity",
                 "interval": 10,
                 "missing": 0 
             }
         }
    }
}

Documents without a value in the quantity field will fall into the same bucket as documents that have the value 0.

IP Range Aggregation

Just like the dedicated date range aggregation, there is also a dedicated range aggregation for IP typed fields:

Example:

{
    "aggs" : {
        "ip_ranges" : {
            "ip_range" : {
                "field" : "ip",
                "ranges" : [
                    { "to" : "10.0.0.5" },
                    { "from" : "10.0.0.5" }
                ]
            }
        }
    }
}

Response:

{
    ...

    "aggregations": {
        "ip_ranges": {
            "buckets" : [
                {
                    "to": "10.0.0.5",
                    "doc_count": 4
                },
                {
                    "from": "10.0.0.5",
                    "doc_count": 6
                }
            ]
        }
    }
}

IP ranges can also be defined as CIDR masks:

{
    "aggs" : {
        "ip_ranges" : {
            "ip_range" : {
                "field" : "ip",
                "ranges" : [
                    { "mask" : "10.0.0.0/25" },
                    { "mask" : "10.0.0.127/25" }
                ]
            }
        }
    }
}

Response:

{
    "aggregations": {
        "ip_ranges": {
            "buckets": [
                {
                    "key": "10.0.0.0/25",
                    "from": "10.0.0.0",
                    "to": "10.0.0.127",
                    "doc_count": 127
                },
                {
                    "key": "10.0.0.127/25",
                    "from": "10.0.0.0",
                    "to": "10.0.0.127",
                    "doc_count": 127
                }
            ]
        }
    }
}

Missing Aggregation

A field data based single bucket aggregation that creates a bucket of all documents in the current document set context that are missing a field value (effectively, missing a field or having the configured NULL value set). This aggregator will often be used in conjunction with other field data bucket aggregators (such as ranges) to return information for all the documents that could not be placed in any of the other buckets due to missing field data values.

Example:

{
    "aggs" : {
        "products_without_a_price" : {
            "missing" : { "field" : "price" }
        }
    }
}

In the above example, we get the total number of products that do not have a price.

Response:

{
    ...

    "aggs" : {
        "products_without_a_price" : {
            "doc_count" : 10
        }
    }
}

Nested Aggregation

A special single bucket aggregation that enables aggregating nested documents.

For example, let's say we have an index of products, and each product holds the list of resellers - each having its own price for the product. The mapping could look like:

{
    ...

    "product" : {
        "properties" : {
            "resellers" : { 
                "type" : "nested",
                "properties" : {
                    "name" : { "type" : "text" },
                    "price" : { "type" : "double" }
                }
            }
        }
    }
}

The resellers is an array that holds nested documents under the product object.

The following aggregation will return the minimum price a product can be purchased for:

{
    "query" : {
        "match" : { "name" : "led tv" }
    },
    "aggs" : {
        "resellers" : {
            "nested" : {
                "path" : "resellers"
            },
            "aggs" : {
                "min_price" : { "min" : { "field" : "resellers.price" } }
            }
        }
    }
}

As you can see above, the nested aggregation requires the path of the nested documents within the top level documents. Then one can define any type of aggregation over these nested documents.

Response:

{
    "aggregations": {
        "resellers": {
            "min_price": {
                "value" : 350
            }
        }
    }
}

Range Aggregation

A multi-bucket value source based aggregation that enables the user to define a set of ranges - each representing a bucket. During the aggregation process, the values extracted from each document will be checked against each bucket range, and the relevant/matching documents will be "bucketed" accordingly. Note that this aggregation includes the from value and excludes the to value for each range.

Example:

{
    "aggs" : {
        "price_ranges" : {
            "range" : {
                "field" : "price",
                "ranges" : [
                    { "to" : 50 },
                    { "from" : 50, "to" : 100 },
                    { "from" : 100 }
                ]
            }
        }
    }
}

Response:

{
    ...

    "aggregations": {
        "price_ranges" : {
            "buckets": [
                {
                    "to": 50,
                    "doc_count": 2
                },
                {
                    "from": 50,
                    "to": 100,
                    "doc_count": 4
                },
                {
                    "from": 100,
                    "doc_count": 4
                }
            ]
        }
    }
}

Keyed Response

Setting the keyed flag to true will associate a unique string key with each bucket and return the ranges as a hash rather than an array:

{
    "aggs" : {
        "price_ranges" : {
            "range" : {
                "field" : "price",
                "keyed" : true,
                "ranges" : [
                    { "to" : 50 },
                    { "from" : 50, "to" : 100 },
                    { "from" : 100 }
                ]
            }
        }
    }
}

Response:

{
    ...

    "aggregations": {
        "price_ranges" : {
            "buckets": {
                "*-50.0": {
                    "to": 50,
                    "doc_count": 2
                },
                "50.0-100.0": {
                    "from": 50,
                    "to": 100,
                    "doc_count": 4
                },
                "100.0-*": {
                    "from": 100,
                    "doc_count": 4
                }
            }
        }
    }
}

It is also possible to customize the key for each range:

{
    "aggs" : {
        "price_ranges" : {
            "range" : {
                "field" : "price",
                "keyed" : true,
                "ranges" : [
                    { "key" : "cheap", "to" : 50 },
                    { "key" : "average", "from" : 50, "to" : 100 },
                    { "key" : "expensive", "from" : 100 }
                ]
            }
        }
    }
}

Script

{
    "aggs" : {
        "price_ranges" : {
            "range" : {
                "script" : {
                    "lang": "painless",
                    "inline": "doc['price'].value"
                },
                "ranges" : [
                    { "to" : 50 },
                    { "from" : 50, "to" : 100 },
                    { "from" : 100 }
                ]
            }
        }
    }
}

This will interpret the script parameter as an inline script with the painless script language and no script parameters. To use a file script use the following syntax:

{
    "aggs" : {
        "price_ranges" : {
            "range" : {
                "script" : {
                    "file": "my_script",
                    "params": {
                        "field": "price"
                    }
                },
                "ranges" : [
                    { "to" : 50 },
                    { "from" : 50, "to" : 100 },
                    { "from" : 100 }
                ]
            }
        }
    }
}
Tip

For indexed scripts, replace the file parameter with an id parameter.
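
A sketch of that variant, assuming a script has already been indexed under the id my_script:

{
    "aggs" : {
        "price_ranges" : {
            "range" : {
                "script" : {
                    "id": "my_script",
                    "params": {
                        "field": "price"
                    }
                },
                "ranges" : [
                    { "to" : 50 },
                    { "from" : 50, "to" : 100 },
                    { "from" : 100 }
                ]
            }
        }
    }
}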

Value Script

Let's say the product prices are in USD but we would like to get the price ranges in EUR. We can use a value script to convert the prices prior to the aggregation (assuming a conversion rate of 0.8):

{
    "aggs" : {
        "price_ranges" : {
            "range" : {
                "field" : "price",
                "script" : {
                    "lang": "painless",
                    "inline": "_value * params.conversion_rate",
                    "params" : {
                        "conversion_rate" : 0.8
                    }
                },
                "ranges" : [
                    { "to" : 35 },
                    { "from" : 35, "to" : 70 },
                    { "from" : 70 }
                ]
            }
        }
    }
}

Sub Aggregations

The following example not only "buckets" the documents into the different ranges but also computes statistics over the prices in each price range:

{
    "aggs" : {
        "price_ranges" : {
            "range" : {
                "field" : "price",
                "ranges" : [
                    { "to" : 50 },
                    { "from" : 50, "to" : 100 },
                    { "from" : 100 }
                ]
            },
            "aggs" : {
                "price_stats" : {
                    "stats" : { "field" : "price" }
                }
            }
        }
    }
}

Response:

{
    "aggregations": {
        "price_ranges" : {
            "buckets": [
                {
                    "to": 50,
                    "doc_count": 2,
                    "price_stats": {
                        "count": 2,
                        "min": 20,
                        "max": 47,
                        "avg": 33.5,
                        "sum": 67
                    }
                },
                {
                    "from": 50,
                    "to": 100,
                    "doc_count": 4,
                    "price_stats": {
                        "count": 4,
                        "min": 60,
                        "max": 98,
                        "avg": 82.5,
                        "sum": 330
                    }
                },
                {
                    "from": 100,
                    "doc_count": 4,
                    "price_stats": {
                        "count": 4,
                        "min": 134,
                        "max": 367,
                        "avg": 216,
                        "sum": 864
                    }
                }
            ]
        }
    }
}

If a sub aggregation is also based on the same value source as the range aggregation (like the stats aggregation in the example above) it is possible to leave out the value source definition for it. The following will return the same response as above:

{
    "aggs" : {
        "price_ranges" : {
            "range" : {
                "field" : "price",
                "ranges" : [
                    { "to" : 50 },
                    { "from" : 50, "to" : 100 },
                    { "from" : 100 }
                ]
            },
            "aggs" : {
                "price_stats" : {
                    "stats" : {} 
                }
            }
        }
    }
}

We don’t need to specify the price field here as it is "inherited" by default from the parent range aggregation.

Reverse nested Aggregation

A special single bucket aggregation that enables aggregating on parent docs from nested documents. Effectively this aggregation can break out of the nested block structure and link to other nested structures or the root document, which allows nesting other aggregations that aren’t part of the nested object in a nested aggregation.

The reverse_nested aggregation must be defined inside a nested aggregation.

Options:

  • path - Defines the nested object field to join back to. The default is empty, which means that it joins back to the root / main document level. The path cannot contain a reference to a nested object field that falls outside the nested structure of the nested aggregation the reverse_nested is placed in.

For example, let’s say we have an index for a ticket system with issues and comments. The comments are inlined into the issue documents as nested documents. The mapping could look like:

{
    ...

    "issue" : {
        "properties" : {
            "tags" : { "type" : "text" }
            "comments" : { 
                "type" : "nested"
                "properties" : {
                    "username" : { "type" : "keyword" },
                    "comment" : { "type" : "text" }
                }
            }
        }
    }
}

The comments field is an array that holds nested documents under the issue object.

The following request will return the usernames of the top commenters and, per top commenter, the top tags of the issues the user has commented on:

{
  "query": {
    "match": {
      "name": "led tv"
    }
  },
  "aggs": {
    "comments": {
      "nested": {
        "path": "comments"
      },
      "aggs": {
        "top_usernames": {
          "terms": {
            "field": "comments.username"
          },
          "aggs": {
            "comment_to_issue": {
              "reverse_nested": {}, 
              "aggs": {
                "top_tags_per_comment": {
                  "terms": {
                    "field": "tags"
                  }
                }
              }
            }
          }
        }
      }
    }
  }
}

As you can see above, the reverse_nested aggregation is put inside a nested aggregation as this is the only place in the DSL where the reverse_nested aggregation can be used. Its sole purpose is to join back to a parent doc higher up in the nested structure.

Here the reverse_nested aggregation joins back to the root / main document level, because no path has been defined. Via the path option the reverse_nested aggregation can join back to a different level, if multiple layered nested object fields have been defined in the mapping.

Possible response snippet:

{
  "aggregations": {
    "comments": {
      "top_usernames": {
        "buckets": [
          {
            "key": "username_1",
            "doc_count": 12,
            "comment_to_issue": {
              "top_tags_per_comment": {
                "buckets": [
                  {
                    "key": "tag1",
                    "doc_count": 9
                  },
                  ...
                ]
              }
            }
          },
          ...
        ]
      }
    }
  }
}
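If the mapping contains multiple layers of nested objects, the path option selects the level to join back to. The following is a minimal sketch only; the comments.replies field is hypothetical and not part of the example mapping above:

{
  "aggs": {
    "replies": {
      "nested": {
        "path": "comments.replies"
      },
      "aggs": {
        "reply_to_comment": {
          "reverse_nested": {
            "path": "comments"
          },
          "aggs": {
            "top_usernames": {
              "terms": {
                "field": "comments.username"
              }
            }
          }
        }
      }
    }
  }
}

Here the reverse_nested aggregation joins back only one level, to the comments objects, rather than all the way to the root issue document.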

Sampler Aggregation

Warning

This functionality is experimental and may be changed or removed completely in a future release.

A filtering aggregation used to limit any sub aggregations' processing to a sample of the top-scoring documents.

Example use cases:

  • Tightening the focus of analytics to high-relevance matches rather than the potentially very long tail of low-quality matches
  • Reducing the running cost of aggregations that can produce useful results using only samples e.g. significant_terms

Example:

{
    "query": {
        "match": {
            "text": "iphone"
        }
    },
    "aggs": {
        "sample": {
            "sampler": {
                "shard_size": 200
            },
            "aggs": {
                "keywords": {
                    "significant_terms": {
                        "field": "text"
                    }
                }
            }
        }
    }
}

Response:

{
    ...
        "aggregations": {
        "sample": {
            "doc_count": 1000,
            "keywords": {
                "doc_count": 1000,
                "buckets": [
                    ...
                    {
                        "key": "bend",
                        "doc_count": 58,
                        "score": 37.982536582524276,
                        "bg_count": 103
                    },
                    ....
}

1000 documents were sampled in total because we asked for a maximum of 200 from an index with 5 shards. The cost of performing the nested significant_terms aggregation was therefore limited rather than unbounded.

shard_size

The shard_size parameter limits how many top-scoring documents are collected in the sample processed on each shard. The default value is 100.

Limitations

Cannot be nested under breadth_first aggregations

Being a quality-based filter the sampler aggregation needs access to the relevance score produced for each document. It therefore cannot be nested under a terms aggregation which has the collect_mode switched from the default depth_first mode to breadth_first as this discards scores. In this situation an error will be thrown.

Significant Terms Aggregation

An aggregation that returns interesting or unusual occurrences of terms in a set.

Warning

The significant_terms aggregation can be very heavy when run on large indices. Work is in progress to provide more lightweight sampling techniques. As a result, the API for this feature may change in non-backwards compatible ways.

Example use cases:

  • Suggesting "H5N1" when users search for "bird flu" in text
  • Identifying the merchant that is the "common point of compromise" from the transaction history of credit card owners reporting loss
  • Suggesting keywords relating to stock symbol $ATI for an automated news classifier
  • Spotting the fraudulent doctor who is diagnosing more than his fair share of whiplash injuries
  • Spotting the tire manufacturer who has a disproportionate number of blow-outs

In all these cases the terms being selected are not simply the most popular terms in a set. They are the terms that have undergone a significant change in popularity measured between a foreground and background set. If the term "H5N1" only exists in 5 documents in a 10 million document index and yet is found in 4 of the 100 documents that make up a user’s search results that is significant and probably very relevant to their search. 5/10,000,000 vs 4/100 is a big swing in frequency.

Single-set analysis

In the simplest case, the foreground set of interest is the search results matched by a query and the background set used for statistical comparisons is the index or indices from which the results were gathered.

Example:

{
    "query" : {
        "terms" : {"force" : [ "British Transport Police" ]}
    },
    "aggregations" : {
        "significantCrimeTypes" : {
            "significant_terms" : { "field" : "crime_type" }
        }
    }
}

Response:

{
    ...

    "aggregations" : {
        "significantCrimeTypes" : {
            "doc_count": 47347,
            "buckets" : [
                {
                    "key": "Bicycle theft",
                    "doc_count": 3640,
                    "score": 0.371235374214817,
                    "bg_count": 66799
                }
                ...
            ]
        }
    }
}

When querying an index of all crimes from all police forces, what these results show is that the British Transport Police force stand out as a force dealing with a disproportionately large number of bicycle thefts. Ordinarily, bicycle thefts represent only 1% of crimes (66799/5064554) but for the British Transport Police, who handle crime on railways and stations, 7% of crimes (3640/47347) are bicycle thefts. This is a significant seven-fold increase in frequency and so this anomaly was highlighted as the top crime type.

The problem with using a query to spot anomalies is it only gives us one subset to use for comparisons. To discover all the other police forces' anomalies we would have to repeat the query for each of the different forces.

This can be a tedious way to look for unusual patterns in an index.

Multi-set analysis

A simpler way to perform analysis across multiple categories is to use a parent-level aggregation to segment the data ready for analysis.

Example using a parent aggregation for segmentation:

{
    "aggregations": {
        "forces": {
            "terms": {"field": "force"},
            "aggregations": {
                "significantCrimeTypes": {
                    "significant_terms": {"field": "crime_type"}
                }
            }
        }
    }
}

Response:

{
 ...

 "aggregations": {
    "forces": {
        "buckets": [
            {
                "key": "Metropolitan Police Service",
                "doc_count": 894038,
                "significantCrimeTypes": {
                    "doc_count": 894038,
                    "buckets": [
                        {
                            "key": "Robbery",
                            "doc_count": 27617,
                            "score": 0.0599,
                            "bg_count": 53182
                        },
                        ...
                    ]
                }
            },
            {
                "key": "British Transport Police",
                "doc_count": 47347,
                "significantCrimeTypes": {
                    "doc_count": 47347,
                    "buckets": [
                        {
                            "key": "Bicycle theft",
                            "doc_count": 3640,
                            "score": 0.371,
                            "bg_count": 66799
                        },
                        ...
                    ]
                }
            }
        ]
    }
}

Now we have anomaly detection for each of the police forces using a single request.

We can use other forms of top-level aggregations to segment our data, for example segmenting by geographic area to identify unusual hot-spots of a particular crime type:

{
    "aggs": {
        "hotspots": {
            "geohash_grid" : {
                "field":"location",
                "precision":5,
            },
            "aggs": {
                "significantCrimeTypes": {
                    "significant_terms": {"field": "crime_type"}
                }
            }
        }
    }
}

This example uses the geohash_grid aggregation to create result buckets that represent geographic areas, and inside each bucket we can identify anomalous levels of a crime type in these tightly-focused areas e.g.

  • Airports exhibit unusual numbers of weapon confiscations
  • Universities show uplifts of bicycle thefts

At a higher geohash_grid zoom-level with larger coverage areas we would start to see where an entire police-force may be tackling an unusual volume of a particular crime type.

Obviously a time-based top-level segmentation would help identify current trends for each point in time, whereas a simple terms aggregation would typically show the very popular "constants" that persist across all time slots.
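As a sketch of such a time-based segmentation (the date field name and the monthly interval are assumptions for illustration, not part of the crime dataset described above):

{
    "aggs": {
        "crimes_over_time": {
            "date_histogram": {
                "field": "date",
                "interval": "month"
            },
            "aggs": {
                "significantCrimeTypes": {
                    "significant_terms": {"field": "crime_type"}
                }
            }
        }
    }
}

Each monthly bucket would then surface the crime types that were unusually frequent in that month compared to the index as a whole.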

Use on free-text fields

The significant_terms aggregation can be used effectively on tokenized free-text fields to suggest:

  • keywords for refining end-user searches
  • keywords for use in percolator queries
Warning

Picking a free-text field as the subject of a significant terms analysis can be expensive! It will attempt to load every unique word into RAM. It is recommended to only use this on smaller indices.

Tip

Show significant_terms in context. Free-text significant_terms are much more easily understood when viewed in context. Take the results of significant_terms suggestions from a free-text field and use them in a terms query on the same field with a highlight clause to present users with example snippets of documents. When the terms are presented unstemmed, highlighted, with the right case, in the right order and with some context, their significance/meaning is more readily apparent.

Custom background sets

Ordinarily, the foreground set of documents is "diffed" against a background set of all the documents in your index. However, sometimes it may prove useful to use a narrower background set as the basis for comparisons. For example, a query on documents relating to "Madrid" in an index with content from all over the world might reveal that "Spanish" was a significant term. This may be true but if you want some more focused terms you could use a background_filter on the term spain to establish a narrower set of documents as context. With this as a background "Spanish" would now be seen as commonplace and therefore not as significant as words like "capital" that relate more strongly with Madrid. Note that using a background filter will slow things down - each term’s background frequency must now be derived on-the-fly from filtering posting lists rather than reading the index’s pre-computed count for a term.

Limitations

Significant terms must be indexed values

Unlike the terms aggregation it is currently not possible to use script-generated terms for counting purposes. Because of the way the significant_terms aggregation must consider both foreground and background frequencies it would be prohibitively expensive to use a script on the entire index to obtain background frequencies for comparisons. Also DocValues are not supported as sources of term data for similar reasons.

No analysis of floating point fields

Floating point fields are currently not supported as the subject of significant_terms analysis. While integer or long fields can be used to represent concepts like bank account numbers or category numbers which can be interesting to track, floating point fields are usually used to represent quantities of something. As such, individual floating point terms are not useful for this form of frequency analysis.

Use as a parent aggregation

If there is the equivalent of a match_all query or no query criteria providing a subset of the index, the significant_terms aggregation should not be used as the top-most aggregation - in this scenario the foreground set is exactly the same as the background set, so there is no difference in document frequencies to observe and from which to make sensible suggestions.

Another consideration is that the significant_terms aggregation produces many candidate results at shard level that are only later pruned on the reducing node once all statistics from all shards are merged. As a result, it can be inefficient and costly in terms of RAM to embed large child aggregations under a significant_terms aggregation that later discards many candidate terms. It is advisable in these cases to perform two searches - the first to provide a rationalized list of significant_terms and then add this shortlist of terms to a second query to go back and fetch the required child aggregations.

Approximate counts

The counts of how many documents contain a term provided in results are based on summing the samples returned from each shard and as such may be:

  • low if certain shards did not provide figures for a given term in their top sample
  • high when considering the background frequency as it may count occurrences found in deleted documents

Like most design decisions, this is the basis of a trade-off in which we have chosen to provide fast performance at the cost of some (typically small) inaccuracies. However, the size and shard size settings covered in the next section provide tools to help control the accuracy levels.

Parameters

JLH score

The scores are derived from the doc frequencies in foreground and background sets. The absolute change in popularity (foregroundPercent - backgroundPercent) would favor common terms whereas the relative change in popularity (foregroundPercent/ backgroundPercent) would favor rare terms. Rare vs common is essentially a precision vs recall balance and so the absolute and relative changes are multiplied to provide a sweet spot between precision and recall.
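Read as an informal sketch of the description above (not the exact internal formula), the two factors are combined multiplicatively:

\text{score} \propto (\text{foregroundPercent} - \text{backgroundPercent}) \times \frac{\text{foregroundPercent}}{\text{backgroundPercent}}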

mutual information

Mutual information as described in "Information Retrieval", Manning et al., Chapter 13.5.1 can be used as significance score by adding the parameter

         "mutual_information": {
              "include_negatives": true
         }

Mutual information does not differentiate between terms that are descriptive for the subset and terms that are descriptive for documents outside the subset. The significant terms can therefore contain terms that appear more or less frequently in the subset than outside of it. To filter out terms that appear less often in the subset than in documents outside the subset, include_negatives can be set to false.

Per default, the assumption is that the documents in the bucket are also contained in the background. If instead you defined a custom background filter that represents a different set of documents that you want to compare to, set

"background_is_superset": false

Chi square

Chi square as described in "Information Retrieval", Manning et al., Chapter 13.5.2 can be used as significance score by adding the parameter

         "chi_square": {
         }

Chi square behaves like mutual information and can be configured with the same parameters include_negatives and background_is_superset.

google normalized distance

Google normalized distance as described in "The Google Similarity Distance", Cilibrasi and Vitanyi, 2007 (http://arxiv.org/pdf/cs/0412098v3.pdf) can be used as significance score by adding the parameter

         "gnd": {
         }

gnd also accepts the background_is_superset parameter.

Percentage

A simple calculation of the number of documents in the foreground sample with a term divided by the number of documents in the background with the term. By default this produces a score greater than zero and less than one.

The benefit of this heuristic is that the scoring logic is simple to explain to anyone familiar with a "per capita" statistic. However, for fields with high cardinality there is a tendency for this heuristic to select the rarest terms such as typos that occur only once because they score 1/1 = 100%.

It would be hard for a seasoned boxer to win a championship if the prize was awarded purely on the basis of percentage of fights won - by these rules a newcomer with only one fight under his belt would be impossible to beat. Multiple observations are typically required to reinforce a view so it is recommended in these cases to set both min_doc_count and shard_min_doc_count to a higher value such as 10 in order to filter out the low-frequency terms that otherwise take precedence.

         "percentage": {
         }
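A sketch of the percentage heuristic together with the recommended count thresholds (the field name reuses the tag examples elsewhere in this document; the threshold values are illustrative):

{
    "aggs" : {
        "tags" : {
            "significant_terms" : {
                "field" : "tag",
                "percentage": {},
                "min_doc_count": 10,
                "shard_min_doc_count": 10
            }
        }
    }
}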

Which one is best?

Roughly, mutual_information prefers high-frequency terms even if they also occur frequently in the background. For example, in an analysis of natural language text this might lead to selection of stop words. mutual_information is unlikely to select very rare terms like misspellings. gnd prefers terms with a high co-occurrence and avoids selection of stopwords. It might be better suited for synonym detection. However, gnd has a tendency to select very rare terms that are, for example, a result of misspelling. chi_square and jlh are somewhat in-between.

It is hard to say which one of the different heuristics will be the best choice as it depends on what the significant terms are used for (see for example [Yang and Pedersen, "A Comparative Study on Feature Selection in Text Categorization", 1997](http://courses.ischool.berkeley.edu/i256/f06/papers/yang97comparative.pdf) for a study on using significant terms for feature selection for text classification).

If none of the above measures suits your use case, then another option is to implement a custom significance measure:

scripted

Customized scores can be implemented via a script:

            "script_heuristic": {
              "script": {
                "lang": "painless",
                "inline": "params._subset_freq/(params._superset_freq - params._subset_freq + 1)"
              }
            }

Scripts can be inline (as in the example above), indexed or stored on disk. For details on the options, see the script documentation.

Available parameters in the script are:

  • _subset_freq - Number of documents the term appears in, in the subset.
  • _superset_freq - Number of documents the term appears in, in the superset.
  • _subset_size - Number of documents in the subset.
  • _superset_size - Number of documents in the superset.
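
As a rough sketch of how the custom heuristic plugs into a full request (the query and field reuse the crime example above):

{
    "query" : {
        "terms" : {"force" : [ "British Transport Police" ]}
    },
    "aggregations" : {
        "significantCrimeTypes" : {
            "significant_terms" : {
                "field" : "crime_type",
                "script_heuristic": {
                    "script": {
                        "lang": "painless",
                        "inline": "params._subset_freq/(params._superset_freq - params._subset_freq + 1)"
                    }
                }
            }
        }
    }
}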

Size & Shard Size

The size parameter can be set to define how many term buckets should be returned out of the overall terms list. By default, the node coordinating the search process will request each shard to provide its own top term buckets and once all shards respond, it will reduce the results to the final list that will then be returned to the client. If the number of unique terms is greater than size, the returned list can be slightly off and not accurate (it could be that the term counts are slightly off and it could even be that a term that should have been in the top size buckets was not returned).

To ensure better accuracy a multiple of the final size is used as the number of terms to request from each shard using a heuristic based on the number of shards. To take manual control of this setting the shard_size parameter can be used to control the volumes of candidate terms produced by each shard.

Low-frequency terms can turn out to be the most interesting ones once all results are combined so the significant_terms aggregation can produce higher-quality results when the shard_size parameter is set to values significantly higher than the size setting. This ensures that a bigger volume of promising candidate terms are given a consolidated review by the reducing node before the final selection. Obviously large candidate term lists will cause extra network traffic and RAM usage so this is a quality/cost trade-off that needs to be balanced. If shard_size is set to -1 (the default) then shard_size will be automatically estimated based on the number of shards and the size parameter.
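A minimal sketch showing the two parameters together on the crime example (the values are illustrative):

{
    "aggs" : {
        "significantCrimeTypes" : {
            "significant_terms" : {
                "field" : "crime_type",
                "size": 10,
                "shard_size": 100
            }
        }
    }
}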

Note

shard_size cannot be smaller than size (as it doesn’t make much sense). When it is, Elasticsearch will override it and reset it to be equal to size.

Minimum document count

It is possible to only return terms that match more than a configured number of hits using the min_doc_count option:

{
    "aggs" : {
        "tags" : {
            "significant_terms" : {
                "field" : "tag",
                "min_doc_count": 10
            }
        }
    }
}

The above aggregation would only return tags which have been found in 10 hits or more. Default value is 3.

Terms that score highly will be collected on a shard level and merged with the terms collected from other shards in a second step. However, the shard does not have the information about the global term frequencies available. The decision of whether a term is added to a candidate list depends only on the score computed on the shard using local shard frequencies, not the global frequencies of the word. The min_doc_count criterion is only applied after merging the local terms statistics of all shards. In a way the decision to add the term as a candidate is made without being very certain about whether the term will actually reach the required min_doc_count. This might cause many (globally) high-frequency terms to be missing in the final result if low-frequency but high-scoring terms populated the candidate lists. To avoid this, the shard_size parameter can be increased to allow more candidate terms on the shards. However, this increases memory consumption and network traffic.

shard_min_doc_count parameter

The parameter shard_min_doc_count regulates the certainty a shard has about whether a term should actually be added to the candidate list with respect to the min_doc_count. Terms will only be considered if their local shard frequency within the set is higher than the shard_min_doc_count. If your dictionary contains many low-frequency words and you are not interested in these (for example misspellings), then you can set the shard_min_doc_count parameter to filter out candidate terms on a shard level that will, with a reasonable certainty, not reach the required min_doc_count even after merging the local frequencies. shard_min_doc_count is set to 1 per default and has no effect unless you explicitly set it.

Warning

Setting min_doc_count to 1 is generally not advised as it tends to return terms that are typos or other bizarre curiosities. Finding more than one instance of a term helps reinforce that, while still rare, the term was not the result of a one-off accident. The default value of 3 is used to provide a minimum weight-of-evidence. Setting shard_min_doc_count too high will cause significant candidate terms to be filtered out on a shard level. This value should be set much lower than min_doc_count/#shards.

Custom background context

The default source of statistical information for background term frequencies is the entire index and this scope can be narrowed through the use of a background_filter to focus in on significant terms within a narrower context:

{
    "query" : {
        "match" : "madrid"
    },
    "aggs" : {
        "tags" : {
            "significant_terms" : {
                "field" : "tag",
                "background_filter": {
                        "term" : { "text" : "spain"}
                }
            }
        }
    }
}

The above filter would help focus in on terms that were peculiar to the city of Madrid rather than revealing terms like "Spanish" that are unusual in the full index’s worldwide context but commonplace in the subset of documents containing the word "Spain".

Warning

Use of background filters will slow the query as each term’s postings must be filtered to determine a frequency

Filtering Values

It is possible (although rarely required) to filter the values for which buckets will be created. This can be done using the include and exclude parameters which are based on a regular expression string or arrays of exact terms. This functionality mirrors the features described in the terms aggregation documentation.
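For example, a sketch that only considers crime types matching a pattern (the regular expression is illustrative):

{
    "aggs" : {
        "significantCrimeTypes" : {
            "significant_terms" : {
                "field" : "crime_type",
                "include" : ".*theft.*"
            }
        }
    }
}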

Execution hint

There are different mechanisms by which terms aggregations can be executed:

  • by using field values directly in order to aggregate data per-bucket (map)
  • by using ordinals of the field and preemptively allocating one bucket per ordinal value (global_ordinals)
  • by using ordinals of the field and dynamically allocating one bucket per ordinal value (global_ordinals_hash)

Elasticsearch tries to have sensible defaults so this is something that generally doesn’t need to be configured.

map should only be considered when very few documents match a query. Otherwise the ordinals-based execution modes are significantly faster. By default, map is only used when running an aggregation on scripts, since they don’t have ordinals.

global_ordinals is the second fastest option, but the fact that it preemptively allocates buckets can be memory-intensive, especially if you have one or more sub aggregations. It is used by default on top-level terms aggregations.

global_ordinals_hash, in contrast to global_ordinals and global_ordinals_low_cardinality, allocates buckets dynamically, so memory usage is linear in the number of values of the documents that are part of the aggregation scope. It is used by default in inner aggregations.

{
    "aggs" : {
        "tags" : {
             "significant_terms" : {
                 "field" : "tags",
                 "execution_hint": "map" 
             }
         }
    }
}

the possible values are map, global_ordinals and global_ordinals_hash

Please note that Elasticsearch will ignore this execution hint if it is not applicable.

Terms Aggregation

A multi-bucket value source based aggregation where buckets are dynamically built - one per unique value.

Example:

{
    "aggs" : {
        "genres" : {
            "terms" : { "field" : "genre" }
        }
    }
}

Response:

{
    ...

    "aggregations" : {
        "genres" : {
            "doc_count_error_upper_bound": 0, 
            "sum_other_doc_count": 0, 
            "buckets" : [ 
                {
                    "key" : "jazz",
                    "doc_count" : 10
                },
                {
                    "key" : "rock",
                    "doc_count" : 10
                },
                {
                    "key" : "electronic",
                    "doc_count" : 10
                }
            ]
        }
    }
}

an upper bound of the error on the document counts for each term, see below

when there are lots of unique terms, elasticsearch only returns the top terms; this number is the sum of the document counts for all buckets that are not part of the response

the list of the top buckets, the meaning of top being defined by the order

By default, the terms aggregation will return the buckets for the top ten terms ordered by the doc_count. One can change this default behaviour by setting the size parameter.

Size

The size parameter can be set to define how many term buckets should be returned out of the overall terms list. By default, the node coordinating the search process will request each shard to provide its own top size term buckets and once all shards respond, it will reduce the results to the final list that will then be returned to the client. This means that if the number of unique terms is greater than size, the returned list is slightly off and not accurate (it could be that the term counts are slightly off and it could even be that a term that should have been in the top size buckets was not returned).

Document counts are approximate

As described above, the document counts (and the results of any sub aggregations) in the terms aggregation are not always accurate. This is because each shard provides its own view of what the ordered list of terms should be and these are combined to give a final view. Consider the following scenario:

A request is made to obtain the top 5 terms in the field product, ordered by descending document count from an index with 3 shards. In this case each shard is asked to give its top 5 terms.

{
    "aggs" : {
        "products" : {
            "terms" : {
                "field" : "product",
                "size" : 5
            }
        }
    }
}

The terms for each of the three shards are shown below with their respective document counts in brackets:

     Shard A            Shard B            Shard C

 1   Product A (25)     Product A (30)     Product A (45)
 2   Product B (18)     Product B (25)     Product C (44)
 3   Product C (6)      Product F (17)     Product Z (36)
 4   Product D (3)      Product Z (16)     Product G (30)
 5   Product E (2)      Product G (15)     Product E (29)
 6   Product F (2)      Product H (14)     Product H (28)
 7   Product G (2)      Product I (10)     Product Q (2)
 8   Product H (2)      Product Q (6)      Product D (1)
 9   Product I (1)      Product J (8)
10   Product J (1)      Product C (4)

The shards will return their top 5 terms so the results from the shards will be:

     Shard A            Shard B            Shard C

 1   Product A (25)     Product A (30)     Product A (45)
 2   Product B (18)     Product B (25)     Product C (44)
 3   Product C (6)      Product F (17)     Product Z (36)
 4   Product D (3)      Product Z (16)     Product G (30)
 5   Product E (2)      Product G (15)     Product E (29)

Taking the top 5 results from each of the shards (as requested) and combining them to make a final top 5 list produces the following:

 1   Product A (100)
 2   Product Z (52)
 3   Product C (50)
 4   Product G (45)
 5   Product B (43)

Because Product A was returned from all shards we know that its document count value is accurate. Product C was only returned by shards A and C so its document count is shown as 50 but this is not an accurate count. Product C exists on shard B, but its count of 4 was not high enough to put Product C into the top 5 list for that shard. Product Z was also returned only by 2 shards but the third shard does not contain the term. There is no way of knowing, at the point of combining the results to produce the final list of terms, that there is an error in the document count for Product C and not for Product Z. Product H has a document count of 44 across all 3 shards but was not included in the final list of terms because it did not make it into the top five terms on any of the shards.

Shard Size

The higher the requested size is, the more accurate the results will be, but also, the more expensive it will be to compute the final results (both due to bigger priority queues that are managed on a shard level and due to bigger data transfers between the nodes and the client).

The shard_size parameter can be used to minimize the extra work that comes with bigger requested size. When defined, it will determine how many terms the coordinating node will request from each shard. Once all the shards responded, the coordinating node will then reduce them to a final result which will be based on the size parameter - this way, one can increase the accuracy of the returned terms and avoid the overhead of streaming a big list of buckets back to the client.

Note

shard_size cannot be smaller than size (as it doesn’t make much sense). When it is, Elasticsearch will override it and reset it to be equal to size.

The default shard_size will be size if the search request needs to go to a single shard, and (size * 1.5 + 10) otherwise.
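For instance, a sketch that keeps the returned list at 5 terms but asks each shard for a larger candidate list (the shard_size value is illustrative):

{
    "aggs" : {
        "products" : {
            "terms" : {
                "field" : "product",
                "size" : 5,
                "shard_size" : 10
            }
        }
    }
}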

Calculating Document Count Error

There are two error values which can be shown on the terms aggregation. The first gives a value for the aggregation as a whole which represents the maximum potential document count for a term which did not make it into the final list of terms. This is calculated as the sum of the document count from the last term returned from each shard. For the example given above the value would be 46 (2 + 15 + 29). This means that in the worst case scenario a term which was not returned could have the 4th highest document count.

{
    ...

    "aggregations" : {
        "products" : {
            "doc_count_error_upper_bound" : 46,
            "buckets" : [
                {
                    "key" : "Product A",
                    "doc_count" : 100
                },
                {
                    "key" : "Product Z",
                    "doc_count" : 52
                },
                ...
            ]
        }
    }
}

Per bucket document count error

Warning

This functionality is experimental and may be changed or removed completely in a future release.

The second error value can be enabled by setting the show_term_doc_count_error parameter to true. This shows an error value for each term returned by the aggregation which represents the worst case error in the document count and can be useful when deciding on a value for the shard_size parameter. This is calculated by summing the document counts for the last term returned by all shards which did not return the term. In the example above the error in the document count for Product C would be 15 as Shard B was the only shard not to return the term and the document count of the last term it did return was 15. The actual document count of Product C was 54 so the document count was only actually off by 4 even though the worst case was that it would be off by 15. Product A, however, has an error of 0 for its document count; since every shard returned it, we can be confident that the count returned is accurate.
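
A request sketch enabling this on the product example above:

{
    "aggs" : {
        "products" : {
            "terms" : {
                "field" : "product",
                "size" : 5,
                "show_term_doc_count_error": true
            }
        }
    }
}

The response then carries a doc_count_error_upper_bound per bucket in addition to the aggregation-level value: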

{
    ...

    "aggregations" : {
        "products" : {
            "doc_count_error_upper_bound" : 46,
            "buckets" : [
                {
                    "key" : "Product A",
                    "doc_count" : 100,
                    "doc_count_error_upper_bound" : 0
                },
                {
                    "key" : "Product Z",
                    "doc_count" : 52,
                    "doc_count_error_upper_bound" : 2
                },
                ...
            ]
        }
    }
}

These errors can only be calculated in this way when the terms are ordered by descending document count. When the aggregation is ordered by the terms values themselves (either ascending or descending) there is no error in the document count since if a shard does not return a particular term which appears in the results from another shard, it must not have that term in its index. When the aggregation is either sorted by a sub aggregation or in order of ascending document count, the error in the document counts cannot be determined and is given a value of -1 to indicate this.

Order

The order of the buckets can be customized by setting the order parameter. By default, the buckets are ordered by their doc_count descending. It is possible to change this behaviour as documented below:

Warning

Sorting by ascending _count or by sub aggregation is discouraged as it increases the error on document counts. It is fine when a single shard is queried, or when the field that is being aggregated was used as a routing key at index time: in these cases results will be accurate since shards have disjoint values. However otherwise, errors are unbounded. One particular case that could still be useful is sorting by min or max aggregation: counts will not be accurate but at least the top buckets will be correctly picked.

Ordering the buckets by their doc_count in an ascending manner:

{
    "aggs" : {
        "genres" : {
            "terms" : {
                "field" : "genre",
                "order" : { "_count" : "asc" }
            }
        }
    }
}

Ordering the buckets alphabetically by their terms in an ascending manner:

{
    "aggs" : {
        "genres" : {
            "terms" : {
                "field" : "genre",
                "order" : { "_term" : "asc" }
            }
        }
    }
}

Ordering the buckets by single value metrics sub-aggregation (identified by the aggregation name):

{
    "aggs" : {
        "genres" : {
            "terms" : {
                "field" : "genre",
                "order" : { "avg_play_count" : "desc" }
            },
            "aggs" : {
                "avg_play_count" : { "avg" : { "field" : "play_count" } }
            }
        }
    }
}

Ordering the buckets by multi value metrics sub-aggregation (identified by the aggregation name):

{
    "aggs" : {
        "genres" : {
            "terms" : {
                "field" : "genre",
                "order" : { "playback_stats.avg" : "desc" }
            },
            "aggs" : {
                "playback_stats" : { "stats" : { "field" : "play_count" } }
            }
        }
    }
}
Note

Pipeline aggs cannot be used for sorting

Pipeline aggregations are run during the reduce phase after all other aggregations have already completed. For this reason, they cannot be used for ordering.

It is also possible to order the buckets based on a "deeper" aggregation in the hierarchy. This is supported as long as the aggregation path is made of single-bucket aggregations, where the last aggregation in the path may either be a single-bucket one or a metrics one. If it’s a single-bucket type, the order will be defined by the number of docs in the bucket (i.e. doc_count); in case it’s a metrics one, the same rules as above apply (the path must indicate the metric name to sort by in case of a multi-value metrics aggregation, and in case of a single-value metrics aggregation the sort will be applied on that value).

The path must be defined in the following form:

AGG_SEPARATOR       =  '>' ;
METRIC_SEPARATOR    =  '.' ;
AGG_NAME            =  <the name of the aggregation> ;
METRIC              =  <the name of the metric (in case of multi-value metrics aggregation)> ;
PATH                =  <AGG_NAME> [ <AGG_SEPARATOR>, <AGG_NAME> ]* [ <METRIC_SEPARATOR>, <METRIC> ] ;
{
    "aggs" : {
        "countries" : {
            "terms" : {
                "field" : "artist.country",
                "order" : { "rock>playback_stats.avg" : "desc" }
            },
            "aggs" : {
                "rock" : {
                    "filter" : { "term" : { "genre" :  "rock" }},
                    "aggs" : {
                        "playback_stats" : { "stats" : { "field" : "play_count" }}
                    }
                }
            }
        }
    }
}

The above will sort the artist’s countries buckets based on the average play count among the rock songs.

Multiple criteria can be used to order the buckets by providing an array of order criteria such as the following:

{
    "aggs" : {
        "countries" : {
            "terms" : {
                "field" : "artist.country",
                "order" : [ { "rock>playback_stats.avg" : "desc" }, { "_count" : "desc" } ]
            },
            "aggs" : {
                "rock" : {
                    "filter" : { "term" : { "genre" : { "rock" }}},
                    "aggs" : {
                        "playback_stats" : { "stats" : { "field" : "play_count" }}
                    }
                }
            }
        }
    }
}

The above will sort the artist’s countries buckets based on the average play count among the rock songs and then by their doc_count in descending order.

Note

In the event that two buckets share the same values for all order criteria the bucket’s term value is used as a tie-breaker in ascending alphabetical order to prevent non-deterministic ordering of buckets.

Minimum document count

It is possible to only return terms that match more than a configured number of hits using the min_doc_count option:

{
    "aggs" : {
        "tags" : {
            "terms" : {
                "field" : "tags",
                "min_doc_count": 10
            }
        }
    }
}

The above aggregation would only return tags which have been found in 10 hits or more. Default value is 1.

Terms are collected and ordered on a shard level and merged with the terms collected from other shards in a second step. However, the shard does not have the information about the global document count available. The decision of whether a term is added to a candidate list depends only on the order computed on the shard using local shard frequencies. The min_doc_count criterion is only applied after merging the local terms statistics of all shards. In a way the decision to add the term as a candidate is made without being very certain about whether the term will actually reach the required min_doc_count. This might cause many (globally) high-frequency terms to be missing in the final result if low-frequency terms populated the candidate lists. To avoid this, the shard_size parameter can be increased to allow more candidate terms on the shards. However, this increases memory consumption and network traffic.

shard_min_doc_count parameter

The parameter shard_min_doc_count regulates the certainty a shard has about whether a term should actually be added to the candidate list with respect to the min_doc_count. Terms will only be considered if their local shard frequency within the set is higher than the shard_min_doc_count. If your dictionary contains many low-frequency terms and you are not interested in those (for example misspellings), then you can set the shard_min_doc_count parameter to filter out candidate terms on a shard level that will, with a reasonable certainty, not reach the required min_doc_count even after merging the local counts. shard_min_doc_count is set to 0 per default and has no effect unless you explicitly set it.

Note

Setting min_doc_count=0 will also return buckets for terms that didn’t match any hit. However, some of the returned terms which have a document count of zero might only belong to deleted documents or documents from other types, so there is no guarantee that a match_all query would find a positive document count for those terms.

Warning

When NOT sorting on doc_count descending, high values of min_doc_count may return a number of buckets which is less than size because not enough data was gathered from the shards. Missing buckets can be brought back by increasing shard_size. Setting shard_min_doc_count too high will cause terms to be filtered out on a shard level. This value should be set much lower than min_doc_count/#shards.

Script

Generating the terms using a script:

{
    "aggs" : {
        "genres" : {
            "terms" : {
                "script" : {
                    "inline": "doc['genre'].value",
                    "lang": "painless"
                }
            }
        }
    }
}

This will interpret the script parameter as an inline script with the default script language and no script parameters. To use a file script use the following syntax:

{
    "aggs" : {
        "genres" : {
            "terms" : {
                "script" : {
                    "file": "my_script",
                    "params": {
                        "field": "genre"
                    }
                }
            }
        }
    }
}
Tip

For indexed scripts, replace the file parameter with an id parameter.

Value Script

Using a value script to modify the field values before they are bucketed:

{
    "aggs" : {
        "genres" : {
            "terms" : {
                "field" : "genre",
                "script" : {
                    "inline" : "'Genre: ' +_value",
                    "lang" : "painless"
                }
            }
        }
    }
}

Filtering Values

It is possible to filter the values for which buckets will be created. This can be done using the include and exclude parameters which are based on regular expression strings or arrays of exact values.

{
    "aggs" : {
        "tags" : {
            "terms" : {
                "field" : "tags",
                "include" : ".*sport.*",
                "exclude" : "water_.*"
            }
        }
    }
}

In the above example, buckets will be created for all the tags that have the word sport in them, except those starting with water_ (so the tag water_sports will not be aggregated). The include regular expression will determine what values are "allowed" to be aggregated, while the exclude determines the values that should not be aggregated. When both are defined, the exclude has precedence, meaning the include is evaluated first and only then the exclude.

The syntax is the same as regexp queries.

For matching based on exact values the include and exclude parameters can simply take an array of strings that represent the terms as they are found in the index:

{
    "aggs" : {
        "JapaneseCars" : {
             "terms" : {
                 "field" : "make",
                 "include" : ["mazda", "honda"]
             }
         },
        "ActiveCarManufacturers" : {
             "terms" : {
                 "field" : "make",
                 "exclude" : ["rover", "jensen"]
             }
         }
    }
}

Multi-field terms aggregation

The terms aggregation does not support collecting terms from multiple fields in the same document. The reason is that the terms agg doesn’t collect the string term values themselves, but rather uses global ordinals to produce a list of all of the unique values in the field. Global ordinals results in an important performance boost which would not be possible across multiple fields.

There are two approaches that you can use to perform a terms agg across multiple fields:

Script
Use a script to retrieve terms from multiple fields. This disables the global ordinals optimization and will be slower than collecting terms from a single field, but it gives you the flexibility to implement this option at search time.
copy_to field
If you know ahead of time that you want to collect the terms from two or more fields, then use copy_to in your mapping to create a new dedicated field at index time which contains the values from both fields. You can aggregate on this single field, which will benefit from the global ordinals optimization.
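
A rough sketch of the copy_to approach, assuming two hypothetical keyword fields first_name and last_name that should be aggregated together (the index, type and field names are illustrative only):

PUT my_index
{
    "mappings": {
        "my_type": {
            "properties": {
                "first_name": {
                    "type": "keyword",
                    "copy_to": "full_name"
                },
                "last_name": {
                    "type": "keyword",
                    "copy_to": "full_name"
                },
                "full_name": {
                    "type": "keyword"
                }
            }
        }
    }
}

A terms aggregation on full_name then sees the values of both source fields while still benefiting from the global ordinals optimization:

{
    "aggs" : {
        "names" : {
            "terms" : { "field" : "full_name" }
        }
    }
}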

Collect mode

Deferring calculation of child aggregations

For fields with many unique terms and a small number of required results it can be more efficient to delay the calculation of child aggregations until the top parent-level aggs have been pruned. Ordinarily, all branches of the aggregation tree are expanded in one depth-first pass and only then any pruning occurs. In some scenarios this can be very wasteful and can hit memory constraints. An example problem scenario is querying a movie database for the 10 most popular actors and their 5 most common co-stars:

{
    "aggs" : {
        "actors" : {
             "terms" : {
                 "field" : "actors",
                 "size" : 10
             },
            "aggs" : {
                "costars" : {
                     "terms" : {
                         "field" : "actors",
                         "size" : 5
                     }
                 }
            }
         }
    }
}

Even though the number of actors may be comparatively small and we want only 50 result buckets there is a combinatorial explosion of buckets during calculation - a single actor can produce n² buckets where n is the number of actors. The sane option would be to first determine the 10 most popular actors and only then examine the top co-stars for these 10 actors. This alternative strategy is what we call the breadth_first collection mode as opposed to the depth_first mode.

Note

The breadth_first is the default mode for fields with a cardinality bigger than the requested size or when the cardinality is unknown (numeric fields or scripts for instance). It is possible to override the default heuristic and to provide a collect mode directly in the request:

{
    "aggs" : {
        "actors" : {
             "terms" : {
                 "field" : "actors",
                 "size" : 10,
                 "collect_mode" : "breadth_first" 
             },
            "aggs" : {
                "costars" : {
                     "terms" : {
                         "field" : "actors",
                         "size" : 5
                     }
                 }
            }
         }
    }
}

the possible values are breadth_first and depth_first

When using breadth_first mode the set of documents that fall into the uppermost buckets are cached for subsequent replay so there is a memory overhead in doing this which is linear with the number of matching documents. Note that the order parameter can still be used to refer to data from a child aggregation when using the breadth_first setting - the parent aggregation understands that this child aggregation will need to be called first before any of the other child aggregations.
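
A minimal sketch of that combination, ordering the top actors by a metric from a sub aggregation while still collecting breadth-first (the rating field is hypothetical and only used for illustration):

{
    "aggs" : {
        "actors" : {
             "terms" : {
                 "field" : "actors",
                 "size" : 10,
                 "collect_mode" : "breadth_first",
                 "order" : { "max_rating" : "desc" }
             },
            "aggs" : {
                "max_rating" : { "max" : { "field" : "rating" } }
            }
         }
    }
}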

Warning

Nested aggregations such as top_hits which require access to score information under an aggregation that uses the breadth_first collection mode need to replay the query on the second pass but only for the documents belonging to the top buckets.

Execution hint

Warning

The automated execution optimization is experimental, so this parameter is provided temporarily as a way to override the default behaviour.

There are different mechanisms by which terms aggregations can be executed:

  • by using field values directly in order to aggregate data per-bucket (map)
  • by using ordinals of the field and preemptively allocating one bucket per ordinal value (global_ordinals)
  • by using ordinals of the field and dynamically allocating one bucket per ordinal value (global_ordinals_hash)
  • by using per-segment ordinals to compute counts and remap these counts to global counts using global ordinals (global_ordinals_low_cardinality)

Elasticsearch tries to have sensible defaults so this is something that generally doesn’t need to be configured.

map should only be considered when very few documents match a query. Otherwise the ordinals-based execution modes are significantly faster. By default, map is only used when running an aggregation on scripts, since they don’t have ordinals.

global_ordinals_low_cardinality only works for leaf terms aggregations but is usually the fastest execution mode. Memory usage is linear with the number of unique values in the field, so it is only enabled by default on low-cardinality fields.

global_ordinals is the second fastest option, but the fact that it preemptively allocates buckets can be memory-intensive, especially if you have one or more sub aggregations. It is used by default on top-level terms aggregations.

global_ordinals_hash, in contrast to global_ordinals and global_ordinals_low_cardinality, allocates buckets dynamically, so memory usage is linear in the number of values of the documents that are part of the aggregation scope. It is used by default in inner aggregations.

{
    "aggs" : {
        "tags" : {
             "terms" : {
                 "field" : "tags",
                 "execution_hint": "map" 
             }
         }
    }
}

[experimental] This functionality is experimental and may be changed or removed completely in a future release. the possible values are map, global_ordinals, global_ordinals_hash and global_ordinals_low_cardinality

Please note that Elasticsearch will ignore this execution hint if it is not applicable and that there is no backward compatibility guarantee on these hints.

Missing value

The missing parameter defines how documents that are missing a value should be treated. By default they will be ignored but it is also possible to treat them as if they had a value.

{
    "aggs" : {
        "tags" : {
             "terms" : {
                 "field" : "tags",
                 "missing": "N/A" 
             }
         }
    }
}

Documents without a value in the tags field will fall into the same bucket as documents that have the value N/A.