Request Body Search

The search request can be executed with a search DSL, which includes the Query DSL, within its body. Here is an example:

GET /twitter/tweet/_search
{
    "query" : {
        "term" : { "user" : "kimchy" }
    }
}

And here is a sample response:

{
    "took": 1,
    "timed_out": false,
    "_shards":{
        "total" : 1,
        "successful" : 1,
        "failed" : 0
    },
    "hits":{
        "total" : 1,
        "max_score": 1.3862944,
        "hits" : [
            {
                "_index" : "twitter",
                "_type" : "tweet",
                "_id" : "0",
                "_score": 1.3862944,
                "_source" : {
                    "user" : "kimchy",
                    "message": "trying out Elasticsearch",
                    "date" : "2009-11-15T14:12:12",
                    "likes" : 0
                }
            }
        ]
    }
}

Parameters

timeout

A search timeout, bounding the search request to be executed within the specified time value and bailing out with the hits accumulated up to that point when the timeout expires. Defaults to no timeout. See the section called “Time units”.
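
For example, a minimal sketch bounding a request to two seconds, reusing the query from the example above:

GET /_search
{
    "timeout": "2s",
    "query" : {
        "term" : { "user" : "kimchy" }
    }
}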

from

To retrieve hits from a certain offset. Defaults to 0.

size

The number of hits to return. Defaults to 10. If you do not care about getting some hits back but only about the number of matches and/or aggregations, setting the value to 0 will help performance.

search_type

The type of the search operation to perform. Can be dfs_query_then_fetch or query_then_fetch. Defaults to query_then_fetch. See Search Type for more.

request_cache

Set to true or false to enable or disable the caching of search results for requests where size is 0, i.e. aggregations and suggestions (no top hits returned). See Shard request cache.
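
For example, a size 0 aggregation-only request with caching enabled; the colors aggregation and the color field are hypothetical:

GET /_search?request_cache=true
{
    "size": 0,
    "aggs": {
        "colors": {
            "terms": { "field": "color" }
        }
    }
}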

terminate_after

The maximum number of documents to collect for each shard, upon reaching which the query execution will terminate early. If set, the response will have a boolean field terminated_early to indicate whether the query execution actually terminated early. Defaults to no terminate_after.

Out of the above, the search_type and the request_cache must be passed as query-string parameters. The rest of the search request should be passed within the body itself. The body content can also be passed as a REST parameter named source.

Both HTTP GET and HTTP POST can be used to execute search with body. Since not all clients support GET with body, POST is allowed as well.
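
For example, the term query from above could be passed as the source parameter instead of a body (a sketch; in practice the JSON must be URL-encoded, and recent versions also expect a source_content_type parameter):

GET /_search?source={"query":{"term":{"user":"kimchy"}}}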

Fast check for any matching docs

In case we only want to know if there are any documents matching a specific query, we can set size to 0 to indicate that we are not interested in the search results. We can also set terminate_after to 1 to indicate that the query execution can be terminated as soon as the first matching document is found (per shard).

GET /_search?q=message:elasticsearch&size=0&terminate_after=1

The response will not contain any hits, as size was set to 0. The hits.total will either be equal to 0, indicating that there were no matching documents, or greater than 0, meaning that there were at least that many documents matching the query when it was terminated early. Also, if the query was terminated early, the terminated_early flag will be set to true in the response.

{
  "took": 3,
  "timed_out": false,
  "terminated_early": true,
  "_shards": {
    "total": 1,
    "successful": 1,
    "failed": 0
  },
  "hits": {
    "total": 1,
    "max_score": 0.0,
    "hits": []
  }
}

Query

The query element within the search request body allows you to define a query using the Query DSL.

GET /_search
{
    "query" : {
        "term" : { "user" : "kimchy" }
    }
}

From / Size

Pagination of results can be done by using the from and size parameters. The from parameter defines the offset from the first result you want to fetch. The size parameter allows you to configure the maximum number of hits to be returned.

Though from and size can be set as request parameters, they can also be set within the search body. from defaults to 0, and size defaults to 10.

GET /_search
{
    "from" : 0, "size" : 10,
    "query" : {
        "term" : { "user" : "kimchy" }
    }
}

Note that from + size cannot be more than the index.max_result_window index setting, which defaults to 10,000. See the Scroll or Search After API for more efficient ways to do deep scrolling.

Sort

Allows you to add one or more sorts on specific fields. Each sort can be reversed as well. Sorting is defined on a per-field level, with the special field names _score to sort by score and _doc to sort by index order.

Assuming the following index mapping:

PUT /my_index
{
    "mappings": {
        "my_type": {
            "properties": {
                "post_date": { "type": "date" },
                "user": {
                    "type": "keyword"
                },
                "name": {
                    "type": "keyword"
                },
                "age": { "type": "integer" }
            }
        }
    }
}

GET /my_index/my_type/_search
{
    "sort" : [
        { "post_date" : {"order" : "asc"}},
        "user",
        { "name" : "desc" },
        { "age" : "desc" },
        "_score"
    ],
    "query" : {
        "term" : { "user" : "kimchy" }
    }
}
Note

_doc has no real use-case besides being the most efficient sort order. So if you don’t care about the order in which documents are returned, then you should sort by _doc. This especially helps when scrolling.

Sort Values

The sort values for each document returned are also returned as part of the response.
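
For example, a hit sorted on post_date and then user might include a sort array like this (an illustrative snippet of a single hit, not a full response; date sort values come back as epoch milliseconds):

{
    "_index" : "my_index",
    "_type" : "my_type",
    "_id" : "1",
    "_score" : null,
    "sort" : [1258294332000, "kimchy"]
}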

Sort Order

The order option can have the following values:

asc

Sort in ascending order

desc

Sort in descending order

The order defaults to desc when sorting on the _score, and defaults to asc when sorting on anything else.

Sort mode option

Elasticsearch supports sorting by array or multi-valued fields. The mode option controls what array value is picked for sorting the document it belongs to. The mode option can have the following values:

min

Pick the lowest value.

max

Pick the highest value.

sum

Use the sum of all values as the sort value. Only applicable for number-based array fields.

avg

Use the average of all values as the sort value. Only applicable for number-based array fields.

median

Use the median of all values as the sort value. Only applicable for number-based array fields.

Sort mode example usage

In the example below, the field price has multiple prices per document. In this case the result hits will be sorted by price in ascending order, based on the average price per document.

PUT /my_index/my_type/1?refresh
{
   "product": "chocolate",
    "price": [20, 4]
}

POST /_search
{
   "query" : {
      "term" : { "product" : "chocolate" }
   },
   "sort" : [
      {"price" : {"order" : "asc", "mode" : "avg"}}
   ]
}

Sorting within nested objects

Elasticsearch also supports sorting by fields that are inside one or more nested objects. Sorting by nested fields supports the following parameters on top of the already existing sort options:

nested_path
Defines on which nested object to sort. The actual sort field must be a direct field inside this nested object. When sorting by a nested field, this field is mandatory.
nested_filter
A filter that the inner objects inside the nested path must match in order for their field values to be taken into account by sorting. A common case is to repeat the query/filter inside the nested filter. By default no nested_filter is active.

Nested sorting example

In the example below, offer is a field of type nested. The nested_path needs to be specified; otherwise, Elasticsearch doesn’t know on what nested level sort values need to be captured.

POST /_search
{
   "query" : {
      "term" : { "product" : "chocolate" }
   },
   "sort" : [
       {
          "offer.price" : {
             "mode" :  "avg",
             "order" : "asc",
             "nested_path" : "offer",
             "nested_filter" : {
                "term" : { "offer.color" : "blue" }
             }
          }
       }
    ]
}

Nested sorting is also supported when sorting by scripts and sorting by geo distance.

Missing Values

The missing parameter specifies how docs which are missing the field should be treated: the missing value can be set to _last, _first, or a custom value (that will be used as the sort value for missing docs). For example:

GET /_search
{
    "sort" : [
        { "price" : {"missing" : "_last"} }
    ],
    "query" : {
        "term" : { "product" : "chocolate" }
    }
}
Note

If a nested inner object doesn’t match the nested_filter then a missing value is used.

Ignoring Unmapped Fields

By default, the search request will fail if there is no mapping associated with a field. The unmapped_type option allows you to ignore fields that have no mapping and not sort by them. The value of this parameter is used to determine what sort values to emit. Here is an example of how it can be used:

GET /_search
{
    "sort" : [
        { "price" : {"unmapped_type" : "long"} }
    ],
    "query" : {
        "term" : { "product" : "chocolate" }
    }
}

If any of the indices that are queried doesn’t have a mapping for price, then Elasticsearch will handle it as if there were a mapping of type long, with all documents in this index having no value for this field.

Geo Distance Sorting

Allows sorting by _geo_distance. Here is an example, assuming pin.location is a field of type geo_point:

GET /_search
{
    "sort" : [
        {
            "_geo_distance" : {
                "pin.location" : [-70, 40],
                "order" : "asc",
                "unit" : "km",
                "mode" : "min",
                "distance_type" : "sloppy_arc"
            }
        }
    ],
    "query" : {
        "term" : { "user" : "kimchy" }
    }
}
distance_type
How to compute the distance. Can either be sloppy_arc (default), arc (slightly more precise but significantly slower) or plane (faster, but inaccurate on long distances and close to the poles).
mode
What to do in case a field has several geo points. By default, the shortest distance is taken into account when sorting in ascending order and the longest distance when sorting in descending order. Supported values are min, max, median and avg.
unit
The unit to use when computing sort values. The default is m (meters).
Note

Geo distance sorting does not support configurable missing values: the distance will always be considered equal to Infinity when a document does not have values for the field that is used for distance computation.

The following formats are supported in providing the coordinates:

Lat Lon as Properties

GET /_search
{
    "sort" : [
        {
            "_geo_distance" : {
                "pin.location" : {
                    "lat" : 40,
                    "lon" : -70
                },
                "order" : "asc",
                "unit" : "km"
            }
        }
    ],
    "query" : {
        "term" : { "user" : "kimchy" }
    }
}

Lat Lon as String

Format: lat,lon.

GET /_search
{
    "sort" : [
        {
            "_geo_distance" : {
                "pin.location" : "40,-70",
                "order" : "asc",
                "unit" : "km"
            }
        }
    ],
    "query" : {
        "term" : { "user" : "kimchy" }
    }
}

Geohash

GET /_search
{
    "sort" : [
        {
            "_geo_distance" : {
                "pin.location" : "drm3btev3e86",
                "order" : "asc",
                "unit" : "km"
            }
        }
    ],
    "query" : {
        "term" : { "user" : "kimchy" }
    }
}

Lat Lon as Array

Format: [lon, lat]. Note that lon comes before lat here, to conform with GeoJSON.

GET /_search
{
    "sort" : [
        {
            "_geo_distance" : {
                "pin.location" : [-70, 40],
                "order" : "asc",
                "unit" : "km"
            }
        }
    ],
    "query" : {
        "term" : { "user" : "kimchy" }
    }
}

Multiple reference points

Multiple geo points can be passed as an array containing any geo_point format, for example:

GET /_search
{
    "sort" : [
        {
            "_geo_distance" : {
                "pin.location" : [[-70, 40], [-71, 42]],
                "order" : "asc",
                "unit" : "km"
            }
        }
    ],
    "query" : {
        "term" : { "user" : "kimchy" }
    }
}

The final distance for a document will then be the min/max/avg (defined via mode) distance of all points contained in the document to all points given in the sort request.

Script Based Sorting

Allows sorting based on custom scripts. Here is an example:

GET /_search
{
    "query" : {
        "term" : { "user" : "kimchy" }
    },
    "sort" : {
        "_script" : {
            "type" : "number",
            "script" : {
                "lang": "painless",
                "inline": "doc['field_name'].value * params.factor",
                "params" : {
                    "factor" : 1.1
                }
            },
            "order" : "asc"
        }
    }
}

Track Scores

When sorting on a field, scores are not computed. By setting track_scores to true, scores will still be computed and tracked.

GET /_search
{
    "track_scores": true,
    "sort" : [
        { "post_date" : {"order" : "desc"} },
        { "name" : "desc" },
        { "age" : "desc" }
    ],
    "query" : {
        "term" : { "user" : "kimchy" }
    }
}

Memory Considerations

When sorting, the relevant sorted field values are loaded into memory. This means that, per shard, there should be enough memory to contain them. For string-based types, the field sorted on should not be analyzed/tokenized. For numeric types, if possible, it is recommended to explicitly set the type to narrower types (like short, integer and float).
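
With text fields, you would typically sort on a non-analyzed keyword sub-field instead. The mapping below is a sketch of that pattern; the index name my_other_index and the title field are hypothetical:

PUT /my_other_index
{
    "mappings": {
        "my_type": {
            "properties": {
                "title": {
                    "type": "text",
                    "fields": {
                        "keyword": { "type": "keyword" }
                    }
                }
            }
        }
    }
}

A sort would then target title.keyword rather than the analyzed title field.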

Source filtering

Allows you to control how the _source field is returned with every hit.

By default operations return the contents of the _source field unless you have used the stored_fields parameter or if the _source field is disabled.

To disable _source retrieval, set the _source parameter to false:

GET /_search
{
    "_source": false,
    "query" : {
        "term" : { "user" : "kimchy" }
    }
}

The _source parameter also accepts one or more wildcard patterns to control what parts of the _source should be returned:

For example:

GET /_search
{
    "_source": "obj.*",
    "query" : {
        "term" : { "user" : "kimchy" }
    }
}

Or

GET /_search
{
    "_source": [ "obj1.*", "obj2.*" ],
    "query" : {
        "term" : { "user" : "kimchy" }
    }
}

Finally, for complete control, you can specify both includes and excludes patterns:

GET /_search
{
    "_source": {
        "includes": [ "obj1.*", "obj2.*" ],
        "excludes": [ "*.description" ]
    },
    "query" : {
        "term" : { "user" : "kimchy" }
    }
}

Fields

Warning

The stored_fields parameter is about fields that are explicitly marked as stored in the mapping, which is off by default and generally not recommended. Use source filtering instead to select subsets of the original source document to be returned.

Allows you to selectively load specific stored fields for each document represented by a search hit.

GET /_search
{
    "stored_fields" : ["user", "postDate"],
    "query" : {
        "term" : { "user" : "kimchy" }
    }
}

* can be used to load all stored fields from the document.

An empty array will cause only the _id and _type for each hit to be returned, for example:

GET /_search
{
    "stored_fields" : [],
    "query" : {
        "term" : { "user" : "kimchy" }
    }
}

If the requested fields are not stored (store mapping set to false), they will be ignored.

Stored field values fetched from the document itself are always returned as an array. By contrast, metadata fields like _routing and _parent are never returned as an array.

Also, only leaf fields can be returned via the stored_fields option: object fields can’t be returned, and such requests will fail.

Script fields can also be automatically detected and used as fields, so things like _source.obj1.field1 can be used, though not recommended, as obj1.field1 will work as well.

Disable stored fields entirely

To disable the stored fields (and metadata fields) entirely, use _none_:

GET /_search
{
    "stored_fields": "_none_",
    "query" : {
        "term" : { "user" : "kimchy" }
    }
}
Note

The _source and version parameters cannot be activated if _none_ is used.

Script Fields

Allows you to return a script evaluation (based on different fields) for each hit, for example:

GET /_search
{
    "query" : {
        "match_all": {}
    },
    "script_fields" : {
        "test1" : {
            "script" : {
                "lang": "painless",
                "inline": "doc['my_field_name'].value * 2"
            }
        },
        "test2" : {
            "script" : {
                "lang": "painless",
                "inline": "doc['my_field_name'].value * factor",
                "params" : {
                    "factor"  : 2.0
                }
            }
        }
    }
}

Script fields can work on fields that are not stored (my_field_name in the above case), and allow custom values to be returned (the evaluated value of the script).

Script fields can also access the actual _source document that was indexed and extract specific elements from it to be returned (this can even be an "object" type). Here is an example:

GET /_search
{
    "query" : {
        "match_all": {}
    },
    "script_fields" : {
        "test1" : {
            "script" : "_source.obj1.obj2"
        }
    }
}

Note the _source keyword here to navigate the json-like model.

It’s important to understand the difference between doc['my_field'].value and _source.my_field. The first, using the doc keyword, will cause the terms for that field to be loaded into memory (cached), which will result in faster execution but more memory consumption. Also, the doc[...] notation only allows for simple valued fields (you can’t return a json object from it) and makes sense only on non-analyzed or single-term-based fields.

The _source on the other hand causes the source to be loaded, parsed, and then only the relevant part of the json is returned.
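
With the painless language used elsewhere in this document, the same source access is typically written through the params map; a minimal sketch:

GET /_search
{
    "query" : {
        "match_all": {}
    },
    "script_fields" : {
        "test1" : {
            "script" : {
                "lang": "painless",
                "inline": "params['_source']['obj1']['obj2']"
            }
        }
    }
}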

Doc Value Fields

Allows you to return the doc value representation of a field for each hit, for example:

GET /_search
{
    "query" : {
        "match_all": {}
    },
    "docvalue_fields" : ["test1", "test2"]
}

Doc value fields can work on fields that are not stored.

Note that if the docvalue_fields parameter specifies fields without doc values, it will try to load the value from the fielddata cache, causing the terms for that field to be loaded into memory (cached), which will result in more memory consumption.

Post filter

The post_filter is applied to the search hits at the very end of a search request, after aggregations have already been calculated. Its purpose is best explained by example:

Imagine that you are selling shirts that have the following properties:

PUT /shirts
{
    "mappings": {
        "item": {
            "properties": {
                "brand": { "type": "keyword"},
                "color": { "type": "keyword"},
                "model": { "type": "keyword"}
            }
        }
    }
}

PUT /shirts/item/1?refresh
{
    "brand": "gucci",
    "color": "red",
    "model": "slim"
}

Imagine a user has specified two filters: color:red and brand:gucci. You only want to show them red shirts made by Gucci in the search results. Normally you would do this with a bool query:

GET /shirts/_search
{
  "query": {
    "bool": {
      "filter": [
        { "term": { "color": "red"   }},
        { "term": { "brand": "gucci" }}
      ]
    }
  }
}

However, you would also like to use faceted navigation to display a list of other options that the user could click on. Perhaps you have a model field that would allow the user to limit their search results to red Gucci t-shirts or dress-shirts.

This can be done with a terms aggregation:

GET /shirts/_search
{
  "query": {
    "bool": {
      "filter": [
        { "term": { "color": "red"   }},
        { "term": { "brand": "gucci" }}
      ]
    }
  },
  "aggs": {
    "models": {
      "terms": { "field": "model" } 
    }
  }
}

Returns the most popular models of red shirts by Gucci.

But perhaps you would also like to tell the user how many Gucci shirts are available in other colors. If you just add a terms aggregation on the color field, you will only get back the color red, because your query returns only red shirts by Gucci.

Instead, you want to include shirts of all colors during aggregation, then apply the colors filter only to the search results. This is the purpose of the post_filter:

GET /shirts/_search
{
  "query": {
    "bool": {
      "filter": {
        "term": { "brand": "gucci" } 
      }
    }
  },
  "aggs": {
    "colors": {
      "terms": { "field": "color" } 
    },
    "color_red": {
      "filter": {
        "term": { "color": "red" } 
      },
      "aggs": {
        "models": {
          "terms": { "field": "model" } 
        }
      }
    }
  },
  "post_filter": { 
    "term": { "color": "red" }
  }
}

The main query now finds all shirts by Gucci, regardless of color.

The colors agg returns popular colors for shirts by Gucci.

The color_red agg limits the models sub-aggregation to red Gucci shirts.

Finally, the post_filter removes colors other than red from the search hits.

Highlighting

Allows highlighting of search results on one or more fields. The implementation uses either the Lucene plain highlighter, the fast vector highlighter (fvh), or the postings highlighter. The following is an example of the search request body:

GET /_search
{
    "query" : {
        "match": { "user": "kimchy" }
    },
    "highlight" : {
        "fields" : {
            "content" : {}
        }
    }
}

In the above case, the content field will be highlighted for each search hit (there will be another element in each search hit, called highlight, which includes the highlighted fields and the highlighted fragments).

Note

In order to perform highlighting, the actual content of the field is required. If the field in question is stored (has store set to true in the mapping) it will be used; otherwise, the actual _source will be loaded and the relevant field will be extracted from it.

The _all field cannot be extracted from _source, so it can only be used for highlighting if it is mapped with store set to true.

The field name supports wildcard notation. For example, using comment_* will cause all text and keyword fields (and string fields from versions before 5.0) that match the expression to be highlighted. Note that no other fields will be highlighted. If you use a custom mapper and want to highlight on a field anyway, you have to provide the field name explicitly.
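
For example, a sketch that highlights every field matching the comment_* pattern (the comment_* fields are hypothetical):

GET /_search
{
    "query" : {
        "match": { "user": "kimchy" }
    },
    "highlight" : {
        "fields" : {
            "comment_*" : {}
        }
    }
}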

Plain highlighter

The default choice of highlighter is of type plain and uses the Lucene highlighter. It tries hard to reflect the query matching logic in terms of understanding word importance and any word positioning criteria in phrase queries.

Warning

If you want to highlight a lot of fields in a lot of documents with complex queries this highlighter will not be fast. In its efforts to accurately reflect query logic it creates a tiny in-memory index and re-runs the original query criteria through Lucene’s query execution planner to get access to low-level match information on the current document. This is repeated for every field and every document that needs highlighting. If this presents a performance issue in your system consider using an alternative highlighter.

Postings highlighter

If index_options is set to offsets in the mapping the postings highlighter will be used instead of the plain highlighter. The postings highlighter:

  • Is faster since it doesn’t require reanalyzing the text to be highlighted: the larger the documents, the better the performance gain should be
  • Requires less disk space than term_vectors, needed for the fast vector highlighter
  • Breaks the text into sentences and highlights them. Plays really well with natural languages, not as well with fields containing for instance html markup
  • Treats the document as the whole corpus, and scores individual sentences as if they were documents in this corpus, using the BM25 algorithm

Here is an example of setting the content field in the index mapping to allow for highlighting using the postings highlighter on it:

{
    "type_name" : {
        "content" : {"index_options" : "offsets"}
    }
}
Note

Note that the postings highlighter is meant to perform simple query terms highlighting, regardless of their positions. That means that when used for instance in combination with a phrase query, it will highlight all the terms that the query is composed of, regardless of whether they are actually part of a query match, effectively ignoring their positions.

Warning

The postings highlighter doesn’t support highlighting some complex queries, like a match query with type set to match_phrase_prefix. No highlighted snippets will be returned in that case.

Fast vector highlighter

If term_vector information is provided by setting term_vector to with_positions_offsets in the mapping then the fast vector highlighter will be used instead of the plain highlighter. The fast vector highlighter:

  • Is faster especially for large fields (> 1MB)
  • Can be customized with boundary_chars, boundary_max_scan, and fragment_offset (see below)
  • Requires setting term_vector to with_positions_offsets which increases the size of the index
  • Can combine matches from multiple fields into one result. See matched_fields
  • Can assign different weights to matches at different positions allowing for things like phrase matches being sorted above term matches when highlighting a Boosting Query that boosts phrase matches over term matches

Here is an example of setting the content field to allow for highlighting using the fast vector highlighter on it (this will cause the index to be bigger):

{
    "type_name" : {
        "content" : {"term_vector" : "with_positions_offsets"}
    }
}

Force highlighter type

The type field allows you to force a specific highlighter type. This is useful, for instance, when you need to use the plain highlighter on a field that has term_vectors enabled. The allowed values are: plain, postings and fvh. The following is an example that forces the use of the plain highlighter:

GET /_search
{
    "query" : {
        "match": { "user": "kimchy" }
    },
    "highlight" : {
        "fields" : {
            "content" : {"type" : "plain"}
        }
    }
}

Force highlighting on source

Forces the highlighting to highlight fields based on the source even if fields are stored separately. Defaults to false.

GET /_search
{
    "query" : {
        "match": { "user": "kimchy" }
    },
    "highlight" : {
        "fields" : {
            "content" : {"force_source" : true}
        }
    }
}

Highlighting Tags

By default, the highlighting will wrap highlighted text in <em> and </em>. This can be controlled by setting pre_tags and post_tags, for example:

GET /_search
{
    "query" : {
        "match": { "user": "kimchy" }
    },
    "highlight" : {
        "pre_tags" : ["<tag1>"],
        "post_tags" : ["</tag1>"],
        "fields" : {
            "_all" : {}
        }
    }
}

When using the fast vector highlighter, more tags can be supplied, and the "importance" is ordered.

GET /_search
{
    "query" : {
        "match": { "user": "kimchy" }
    },
    "highlight" : {
        "pre_tags" : ["<tag1>", "<tag2>"],
        "post_tags" : ["</tag1>", "</tag2>"],
        "fields" : {
            "_all" : {}
        }
    }
}

There are also built-in "tag" schemas, currently with a single schema called styled with the following pre_tags:

<em class="hlt1">, <em class="hlt2">, <em class="hlt3">,
<em class="hlt4">, <em class="hlt5">, <em class="hlt6">,
<em class="hlt7">, <em class="hlt8">, <em class="hlt9">,
<em class="hlt10">

and </em> as post_tags. If you think of other nice-to-have built-in tag schemas, just send an email to the mailing list or open an issue. Here is an example of switching tag schemas:

GET /_search
{
    "query" : {
        "match": { "user": "kimchy" }
    },
    "highlight" : {
        "tags_schema" : "styled",
        "fields" : {
            "content" : {}
        }
    }
}

Encoder

An encoder parameter can be used to define how highlighted text will be encoded. It can be either default (no encoding) or html (which will escape HTML, if you use HTML highlighting tags).
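
For example, a sketch that escapes HTML in the highlighted content field:

GET /_search
{
    "query" : {
        "match": { "user": "kimchy" }
    },
    "highlight" : {
        "encoder": "html",
        "fields" : {
            "content" : {}
        }
    }
}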

Highlighted Fragments

Each field highlighted can control the size of the highlighted fragment in characters (defaults to 100), and the maximum number of fragments to return (defaults to 5). For example:

GET /_search
{
    "query" : {
        "match": { "user": "kimchy" }
    },
    "highlight" : {
        "fields" : {
            "content" : {"fragment_size" : 150, "number_of_fragments" : 3}
        }
    }
}

The fragment_size is ignored when using the postings highlighter, as it outputs sentences regardless of their length.

On top of this, it is possible to specify that highlighted fragments should be sorted by score:

GET /_search
{
    "query" : {
        "match": { "user": "kimchy" }
    },
    "highlight" : {
        "order" : "score",
        "fields" : {
            "content" : {"fragment_size" : 150, "number_of_fragments" : 3}
        }
    }
}

If the number_of_fragments value is set to 0, then no fragments are produced; instead the whole content of the field is returned, and of course it is highlighted. This can be very handy if short texts (like a document title or address) need to be highlighted but no fragmentation is required. Note that fragment_size is ignored in this case.

GET /_search
{
    "query" : {
        "match": { "user": "kimchy" }
    },
    "highlight" : {
        "fields" : {
            "_all" : {},
            "bio.title" : {"number_of_fragments" : 0}
        }
    }
}

When using the fvh, the fragment_offset parameter can be used to control the margin from which to start highlighting.

In the case where there is no matching fragment to highlight, the default is to not return anything. Instead, we can return a snippet of text from the beginning of the field by setting no_match_size (default 0) to the length of the text that you want returned. The actual length may be shorter than specified as it tries to break on a word boundary. When using the postings highlighter it is not possible to control the actual size of the snippet, therefore the first sentence gets returned whenever no_match_size is greater than 0.

GET /_search
{
    "query" : {
        "match": { "user": "kimchy" }
    },
    "highlight" : {
        "fields" : {
            "content" : {
                "fragment_size" : 150,
                "number_of_fragments" : 3,
                "no_match_size": 150
            }
        }
    }
}

Highlight query

It is also possible to highlight against a query other than the search query by setting highlight_query. This is especially useful if you use a rescore query, because those are not taken into account by highlighting by default. Elasticsearch does not validate that highlight_query contains the search query in any way, so it is possible to define it so that legitimate query results aren’t highlighted at all. Generally it is better to include the search query in the highlight_query. Here is an example of including both the search query and the rescore query in highlight_query.

GET /_search
{
    "stored_fields": [ "_id" ],
    "query" : {
        "match": {
            "content": {
                "query": "foo bar"
            }
        }
    },
    "rescore": {
        "window_size": 50,
        "query": {
            "rescore_query" : {
                "match_phrase": {
                    "content": {
                        "query": "foo bar",
                        "slop": 1
                    }
                }
            },
            "rescore_query_weight" : 10
        }
    },
    "highlight" : {
        "order" : "score",
        "fields" : {
            "content" : {
                "fragment_size" : 150,
                "number_of_fragments" : 3,
                "highlight_query": {
                    "bool": {
                        "must": {
                            "match": {
                                "content": {
                                    "query": "foo bar"
                                }
                            }
                        },
                        "should": {
                            "match_phrase": {
                                "content": {
                                    "query": "foo bar",
                                    "slop": 1,
                                    "boost": 10.0
                                }
                            }
                        },
                        "minimum_should_match": 0
                    }
                }
            }
        }
    }
}

Note that the score of a text fragment in this case is calculated by the Lucene highlighting framework. For implementation details you can check the ScoreOrderFragmentsBuilder.java class. On the other hand, when using the postings highlighter the fragments are scored using, as mentioned above, the BM25 algorithm.

Global Settings

Highlighting settings can be set on a global level and then overridden at the field level.

GET /_search
{
    "query" : {
        "match": { "user": "kimchy" }
    },
    "highlight" : {
        "number_of_fragments" : 3,
        "fragment_size" : 150,
        "fields" : {
            "_all" : { "pre_tags" : ["<em>"], "post_tags" : ["</em>"] },
            "bio.title" : { "number_of_fragments" : 0 },
            "bio.author" : { "number_of_fragments" : 0 },
            "bio.content" : { "number_of_fragments" : 5, "order" : "score" }
        }
    }
}

Require Field Match

require_field_match can be set to false, which will cause any field to be highlighted regardless of whether the query matched specifically on it. The default behaviour is true, meaning that only fields that hold a query match will be highlighted.

GET /_search
{
    "query" : {
        "match": { "user": "kimchy" }
    },
    "highlight" : {
        "require_field_match": false,
        "fields": {
                "_all" : { "pre_tags" : ["<em>"], "post_tags" : ["</em>"] }
        }
    }
}

Boundary Characters

When highlighting a field using the fast vector highlighter, boundary_chars can be configured to define what constitutes a boundary for highlighting. It’s a single string with each boundary character defined in it. It defaults to .,!? \t\n.

The boundary_max_scan parameter allows you to control how far to look for boundary characters; it defaults to 20.
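
For example, a sketch that narrows the boundary characters to sentence-ending punctuation and shortens the scan (the values are illustrative):

GET /_search
{
    "query" : {
        "match": { "user": "kimchy" }
    },
    "highlight" : {
        "fields" : {
            "content" : {
                "type" : "fvh",
                "boundary_chars" : ".!?",
                "boundary_max_scan" : 10
            }
        }
    }
}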

Matched Fields

The fast vector highlighter can combine matches on multiple fields to highlight a single field using matched_fields. This is most intuitive for multifields that analyze the same string in different ways. All matched_fields must have term_vector set to with_positions_offsets, but only the field to which the matches are combined is loaded, so only that field benefits from having store set to true.

In the following examples content is analyzed by the english analyzer and content.plain is analyzed by the standard analyzer.

GET /_search
{
    "query": {
        "query_string": {
            "query": "content.plain:running scissors",
            "fields": ["content"]
        }
    },
    "highlight": {
        "order": "score",
        "fields": {
            "content": {
                "matched_fields": ["content", "content.plain"],
                "type" : "fvh"
            }
        }
    }
}

The above matches both "run with scissors" and "running with scissors" and would highlight "running" and "scissors" but not "run". If both phrases appear in a large document then "running with scissors" is sorted above "run with scissors" in the fragments list because there are more matches in that fragment.

GET /_search
{
    "query": {
        "query_string": {
            "query": "running scissors",
            "fields": ["content", "content.plain^10"]
        }
    },
    "highlight": {
        "order": "score",
        "fields": {
            "content": {
                "matched_fields": ["content", "content.plain"],
                "type" : "fvh"
            }
        }
    }
}

The above highlights "run" as well as "running" and "scissors" but still sorts "running with scissors" above "run with scissors" because the plain match ("running") is boosted.

GET /_search
{
    "query": {
        "query_string": {
            "query": "running scissors",
            "fields": ["content", "content.plain^10"]
        }
    },
    "highlight": {
        "order": "score",
        "fields": {
            "content": {
                "matched_fields": ["content.plain"],
                "type" : "fvh"
            }
        }
    }
}

The above query wouldn’t highlight "run" or "scissor", but it shows that it is just fine not to list the field to which the matches are combined (content) in the matched fields.

Note

Technically it is also fine to add fields to matched_fields that don’t share the same underlying string as the field to which the matches are combined. The results might not make much sense and if one of the matches is off the end of the text then the whole query will fail.

Note

There is a small amount of overhead involved with setting matched_fields to a non-empty array so always prefer

    "highlight": {
        "fields": {
            "content": {}
        }
    }

to

    "highlight": {
        "fields": {
            "content": {
                "matched_fields": ["content"],
                "type" : "fvh"
            }
        }
    }

Phrase Limit

The fast vector highlighter has a phrase_limit parameter that prevents it from analyzing too many phrases and eating tons of memory. It defaults to 256, so only the first 256 matching phrases in the document are considered. You can raise the limit with the phrase_limit parameter, but keep in mind that scoring more phrases consumes more time and memory.

If using matched_fields, keep in mind that phrase_limit phrases per matched field are considered.
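
For example, a sketch raising the limit for a single request (512 is an arbitrary illustrative value):

GET /_search
{
    "query" : {
        "match_phrase": { "content": "quick brown fox" }
    },
    "highlight" : {
        "phrase_limit": 512,
        "fields" : {
            "content" : { "type" : "fvh" }
        }
    }
}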

Field Highlight Order

Elasticsearch highlights the fields in the order that they are sent. Per the JSON spec, objects are unordered, but if you need to be explicit about the order in which fields are highlighted, you can use an array for fields like this:

    "highlight": {
        "fields": [
            {"title":{ /*params*/ }},
            {"text":{ /*params*/ }}
        ]
    }

None of the highlighters built into Elasticsearch care about the order in which the fields are highlighted, but a plugin might.

Rescoring

Rescoring can help to improve precision by reordering just the top (e.g. 100-500) documents returned by the query and post_filter phases, using a secondary (usually more costly) algorithm, instead of applying the costly algorithm to all documents in the index.

A rescore request is executed on each shard before it returns its results to be sorted by the node handling the overall search request.

Currently the rescore API has only one implementation: the query rescorer, which uses a query to tweak the scoring. In the future, alternative rescorers may be made available, for example, a pair-wise rescorer.

Note

The rescore phase is not executed when sort is used.

Note

When exposing pagination to your users, you should not change window_size as you step through each page (by passing different from values), since that can alter the top hits, causing results to confusingly shift as the user steps through pages.

Query rescorer

The query rescorer executes a second query only on the Top-K results returned by the query and post_filter phases. The number of docs which will be examined on each shard can be controlled by the window_size parameter, which defaults to from and size.

By default the scores from the original query and the rescore query are combined linearly to produce the final _score for each document. The relative importance of the original query and of the rescore query can be controlled with the query_weight and rescore_query_weight respectively. Both default to 1.

For example:

POST /_search
{
   "query" : {
      "match" : {
         "field1" : {
            "operator" : "or",
            "query" : "the quick brown",
            "type" : "boolean"
         }
      }
   },
   "rescore" : {
      "window_size" : 50,
      "query" : {
         "rescore_query" : {
            "match" : {
               "field1" : {
                  "query" : "the quick brown",
                  "type" : "phrase",
                  "slop" : 2
               }
            }
         },
         "query_weight" : 0.7,
         "rescore_query_weight" : 1.2
      }
   }
}

The way the scores are combined can be controlled with the score_mode:

total

Add the original score and the rescore query score. The default.

multiply

Multiply the original score by the rescore query score. Useful for function query rescores.

avg

Average the original score and the rescore query score.

max

Take the max of original score and the rescore query score.

min

Take the min of the original score and the rescore query score.

Multiple Rescores

It is also possible to execute multiple rescores in sequence:

POST /_search
{
   "query" : {
      "match" : {
         "field1" : {
            "operator" : "or",
            "query" : "the quick brown",
            "type" : "boolean"
         }
      }
   },
   "rescore" : [ {
      "window_size" : 100,
      "query" : {
         "rescore_query" : {
            "match" : {
               "field1" : {
                  "query" : "the quick brown",
                  "type" : "phrase",
                  "slop" : 2
               }
            }
         },
         "query_weight" : 0.7,
         "rescore_query_weight" : 1.2
      }
   }, {
      "window_size" : 10,
      "query" : {
         "score_mode": "multiply",
         "rescore_query" : {
            "function_score" : {
               "script_score": {
                  "script": {
                    "lang": "painless",
                    "inline": "Math.log10(doc['numeric'].value + 2)"
                  }
               }
            }
         }
      }
   } ]
}

The first one gets the results of the query then the second one gets the results of the first, etc. The second rescore will "see" the sorting done by the first rescore so it is possible to use a large window on the first rescore to pull documents into a smaller window for the second rescore.

Search Type

When executing a distributed search, there are different execution paths that can be taken. The distributed search operation needs to be scattered to all the relevant shards, and then all the results must be gathered back. When doing this scatter/gather type of execution, there are several ways to go about it, particularly with search engines.

One of the questions when executing a distributed search is how many results to retrieve from each shard. For example, if we have 10 shards, the 1st shard might hold the most relevant results from 0 to 10, with the results from other shards ranking below it. For this reason, when executing a request, we need to get results from 0 to 10 from all shards, sort them, and then return the results if we want to ensure correct results.

Another question, which relates to the search engine, is the fact that each shard stands on its own. When a query is executed on a specific shard, it does not take into account term frequencies and other search engine information from the other shards. If we want to support accurate ranking, we need to first gather the term frequencies from all shards to calculate global term frequencies, and then execute the query on each shard using these global frequencies.

Also, because of the need to sort the results, getting back a large document set, or even scrolling it, while maintaining the correct sorting behavior can be a very expensive operation. For large result set scrolling, it is best to sort by _doc if the order in which documents are returned is not important.

Elasticsearch is very flexible and allows you to control the type of search to execute on a per-request basis. The type can be configured by setting the search_type parameter in the query string. The types are:

Query Then Fetch

Parameter value: query_then_fetch.

The request is processed in two phases. In the first phase, the query is forwarded to all involved shards. Each shard executes the search and generates a sorted list of results, local to that shard. Each shard returns just enough information to the coordinating node to allow it to merge and re-sort the shard-level results into a globally sorted set of results, of maximum length size.

During the second phase, the coordinating node requests the document content (and highlighted snippets, if any) from only the relevant shards.

Note

This is the default setting if you do not specify a search_type in your request.

Dfs, Query Then Fetch

Parameter value: dfs_query_then_fetch.

Same as "Query Then Fetch", except for an initial scatter phase which goes and computes the distributed term frequencies for more accurate scoring.

Scroll

While a search request returns a single “page” of results, the scroll API can be used to retrieve large numbers of results (or even all results) from a single search request, in much the same way as you would use a cursor on a traditional database.

Scrolling is not intended for real-time user requests, but rather for processing large amounts of data, e.g. in order to reindex the contents of one index into a new index with a different configuration.

Note

The results that are returned from a scroll request reflect the state of the index at the time that the initial search request was made, like a snapshot in time. Subsequent changes to documents (index, update or delete) will only affect later search requests.

In order to use scrolling, the initial search request should specify the scroll parameter in the query string, which tells Elasticsearch how long it should keep the “search context” alive (see Keeping the search context alive), e.g. ?scroll=1m.

POST /twitter/tweet/_search?scroll=1m
{
    "size": 100,
    "query": {
        "match" : {
            "title" : "elasticsearch"
        }
    }
}

The result from the above request includes a _scroll_id, which should be passed to the scroll API in order to retrieve the next batch of results.

POST  /_search/scroll 
{
    "scroll" : "1m", 
    "scroll_id" : "DXF1ZXJ5QW5kRmV0Y2gBAAAAAAAAAD4WYm9laVYtZndUQlNsdDcwakFMNjU1QQ==" 
}

GET or POST can be used.

The URL should not include the index or type name — these are specified in the original search request instead.

The scroll parameter tells Elasticsearch to keep the search context open for another 1m.

The scroll_id parameter holds the _scroll_id returned by the previous request.

The size parameter allows you to configure the maximum number of hits to be returned with each batch of results. Each call to the scroll API returns the next batch of results until there are no more results left to return, i.e. the hits array is empty.

Important

The initial search request and each subsequent scroll request returns a new _scroll_id — only the most recent _scroll_id should be used.

Note

If the request specifies aggregations, only the initial search response will contain the aggregations results.

Note

Scroll requests have optimizations that make them faster when the sort order is _doc. If you want to iterate over all documents regardless of the order, this is the most efficient option:

GET /_search?scroll=1m
{
  "sort": [
    "_doc"
  ]
}

Keeping the search context alive

The scroll parameter (passed to the search request and to every scroll request) tells Elasticsearch how long it should keep the search context alive. Its value (e.g. 1m, see the section called “Time units”) does not need to be long enough to process all data — it just needs to be long enough to process the previous batch of results. Each scroll request (with the scroll parameter) sets a new expiry time.

Normally, the background merge process optimizes the index by merging together smaller segments to create new bigger segments, at which time the smaller segments are deleted. This process continues during scrolling, but an open search context prevents the old segments from being deleted while they are still in use. This is how Elasticsearch is able to return the results of the initial search request, regardless of subsequent changes to documents.

Tip

Keeping older segments alive means that more file handles are needed. Ensure that you have configured your nodes to have ample free file handles. See File Descriptors.

You can check how many search contexts are open with the nodes stats API:

GET /_nodes/stats/indices/search

Clear scroll API

Search contexts are automatically removed when the scroll timeout has been exceeded. However, keeping scrolls open has a cost (as discussed in the previous section), so scrolls should be explicitly cleared as soon as they are no longer being used, using the clear-scroll API:

DELETE /_search/scroll
{
    "scroll_id" : ["DXF1ZXJ5QW5kRmV0Y2gBAAAAAAAAAD4WYm9laVYtZndUQlNsdDcwakFMNjU1QQ=="]
}

Multiple scroll IDs can be passed as an array:

DELETE /_search/scroll
{
    "scroll_id" : [
      "DXF1ZXJ5QW5kRmV0Y2gBAAAAAAAAAD4WYm9laVYtZndUQlNsdDcwakFMNjU1QQ==",
      "DnF1ZXJ5VGhlbkZldGNoBQAAAAAAAAABFmtSWWRRWUJrU2o2ZExpSGJCVmQxYUEAAAAAAAAAAxZrUllkUVlCa1NqNmRMaUhiQlZkMWFBAAAAAAAAAAIWa1JZZFFZQmtTajZkTGlIYkJWZDFhQQAAAAAAAAAFFmtSWWRRWUJrU2o2ZExpSGJCVmQxYUEAAAAAAAAABBZrUllkUVlCa1NqNmRMaUhiQlZkMWFB"
    ]
}

All search contexts can be cleared with the _all parameter:

DELETE /_search/scroll/_all

The scroll_id can also be passed as a query string parameter or in the request body. Multiple scroll IDs can be passed as comma separated values:

DELETE /_search/scroll/DXF1ZXJ5QW5kRmV0Y2gBAAAAAAAAAD4WYm9laVYtZndUQlNsdDcwakFMNjU1QQ==,DnF1ZXJ5VGhlbkZldGNoBQAAAAAAAAABFmtSWWRRWUJrU2o2ZExpSGJCVmQxYUEAAAAAAAAAAxZrUllkUVlCa1NqNmRMaUhiQlZkMWFBAAAAAAAAAAIWa1JZZFFZQmtTajZkTGlIYkJWZDFhQQAAAAAAAAAFFmtSWWRRWUJrU2o2ZExpSGJCVmQxYUEAAAAAAAAABBZrUllkUVlCa1NqNmRMaUhiQlZkMWFB

Sliced Scroll

For scroll queries that return a lot of documents it is possible to split the scroll into multiple slices which can be consumed independently:

GET /twitter/tweet/_search?scroll=1m
{
    "slice": {
        "id": 0, 
        "max": 2 
    },
    "query": {
        "match" : {
            "title" : "elasticsearch"
        }
    }
}
GET /twitter/tweet/_search?scroll=1m
{
    "slice": {
        "id": 1,
        "max": 2
    },
    "query": {
        "match" : {
            "title" : "elasticsearch"
        }
    }
}

The id of the slice

The maximum number of slices

The result from the first request returned documents that belong to the first slice (id: 0) and the result from the second request returned documents that belong to the second slice. Since the maximum number of slices is set to 2, the union of the results of the two requests is equivalent to the results of a scroll query without slicing. By default the splitting is done first on the shards and then locally on each shard using the _uid field, with the following formula: slice(doc) = floorMod(hashCode(doc._uid), max). For instance, if the number of shards is equal to 2 and the user requested 4 slices, then slices 0 and 2 are assigned to the first shard and slices 1 and 3 are assigned to the second shard.

Each scroll is independent and can be processed in parallel like any scroll request.

Note

If the number of slices is bigger than the number of shards, the slice filter is very slow on the first calls: it has a complexity of O(N) and a memory cost equal to N bits per slice, where N is the total number of documents in the shard. After a few calls the filter should be cached and subsequent calls should be faster, but you should limit the number of sliced queries you perform in parallel to avoid the memory explosion.

To avoid this cost entirely, it is possible to use the doc_values of another field to do the slicing, but the user must ensure that the field has the following properties:

  • The field is numeric.
  • doc_values are enabled on that field.
  • Every document should contain a single value. If a document has multiple values for the specified field, the first value is used.
  • The value for each document should be set once when the document is created and never updated. This ensures that each slice gets deterministic results.
  • The cardinality of the field should be high. This ensures that each slice gets approximately the same number of documents.

GET /twitter/tweet/_search?scroll=1m
{
    "slice": {
        "field": "date",
        "id": 0,
        "max": 10
    },
    "query": {
        "match" : {
            "title" : "elasticsearch"
        }
    }
}

For append only time-based indices, the timestamp field can be used safely.

Note

By default the maximum number of slices allowed per scroll is limited to 1024. You can update the index.max_slices_per_scroll index setting to bypass this limit.

Preference

Controls a preference for which shard replicas to execute the search request on. By default, the operation is randomized between the shard replicas.

The preference is a query string parameter which can be set to:

_primary

The operation will be executed only on primary shards.

_primary_first

The operation will be executed on primary shards, and if they are not available (failover), will execute on other shards.

_replica

The operation will be executed only on a replica shard.

_replica_first

The operation will be executed only on a replica shard, and if it is not available (failover), will execute on other shards.

_local

The operation will prefer to be executed on a locally allocated shard if possible.

_prefer_nodes:abc,xyz

Prefers execution on the nodes with the provided node ids (abc or xyz in this case) if applicable.

_shards:2,3

Restricts the operation to the specified shards (2 and 3 in this case). This preference can be combined with other preferences, but it has to appear first: _shards:2,3|_primary

_only_nodes

Restricts the operation to nodes matching the provided node specification (see https://www.elastic.co/guide/en/elasticsearch/reference/current/cluster.html).

Custom (string) value

A custom value will be used to guarantee that the same shards will be used for the same custom value. This can help with "jumping values" when hitting different shards in different refresh states. A sample value can be something like the web session id, or the user name.

For instance, use the user’s session ID to ensure consistent ordering of results for the user:

GET /_search?preference=xyzabc123
{
    "query": {
        "match": {
            "title": "elasticsearch"
        }
    }
}

Explain

Enables explanation for each hit on how its score was computed.

GET /_search
{
    "explain": true,
    "query" : {
        "term" : { "user" : "kimchy" }
    }
}

Version

Returns a version for each search hit.

GET /_search
{
    "version": true,
    "query" : {
        "term" : { "user" : "kimchy" }
    }
}

Index Boost

Allows you to configure a different boost level per index when searching across more than one index. This is very handy when hits coming from one index matter more than hits coming from another index (think social graph where each user has an index).

GET /_search
{
    "indices_boost" : {
        "index1" : 1.4,
        "index2" : 1.3
    }
}

min_score

Exclude documents which have a _score less than the minimum specified in min_score:

GET /_search
{
    "min_score": 0.5,
    "query" : {
        "term" : { "user" : "kimchy" }
    }
}

Note that most of the time this does not make much sense, but it is provided for advanced use cases.

Named Queries

Each filter and query can accept a _name in its top level definition.

GET /_search
{
    "query": {
        "bool" : {
            "should" : [
                {"match" : { "name.first" : {"query" : "shay", "_name" : "first"} }},
                {"match" : { "name.last" : {"query" : "banon", "_name" : "last"} }}
            ],
            "filter" : {
                "terms" : {
                    "name.last" : ["banon", "kimchy"],
                    "_name" : "test"
                }
            }
        }
    }
}

The search response will include, for each hit, the matched_queries it matched on. The tagging of queries and filters only makes sense for the bool query.
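
An illustrative hit from the request above might carry the matched query names like this (snippet of a single hit; values are hypothetical):

{
    "_index" : "test",
    "_type" : "type1",
    "_id" : "1",
    "_score" : 1.0,
    "_source" : { ... },
    "matched_queries" : ["first", "test"]
}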

Inner hits

The parent/child and nested features allow the return of documents that have matches in a different scope. In the parent/child case, parent documents are returned based on matches in child documents or child documents are returned based on matches in parent documents. In the nested case, documents are returned based on matches in nested inner objects.

In both cases, the actual matches in the different scopes that caused a document to be returned are hidden. In many cases, it’s very useful to know which inner nested objects (in the case of nested) or children/parent documents (in the case of parent/child) caused certain information to be returned. The inner hits feature can be used for this: for each search hit in the search response, it returns the additional nested hits that caused the search hit to match in a different scope.

Inner hits can be used by defining an inner_hits definition on a nested, has_child or has_parent query and filter. The structure looks like this:

"<query>" : {
    "inner_hits" : {
        <inner_hits_options>
    }
}

If inner_hits is defined on a query that supports it, then each search hit will contain an inner_hits JSON object with the following structure:

"hits": [
     {
        "_index": ...,
        "_type": ...,
        "_id": ...,
        "inner_hits": {
           "<inner_hits_name>": {
              "hits": {
                 "total": ...,
                 "hits": [
                    {
                       "_type": ...,
                       "_id": ...,
                       ...
                    },
                    ...
                 ]
              }
           }
        },
        ...
     },
     ...
]

Options

Inner hits support the following options:

from

The offset from which to fetch the first hit for each inner_hits block in the returned regular search hits.

size

The maximum number of hits to return per inner_hits. By default the top three matching hits are returned.

sort

How the inner hits should be sorted per inner_hits block. By default the hits are sorted by score.

name

The name to be used for the particular inner hit definition in the response. Useful when multiple inner hits have been defined in a single search request. The default depends on the query in which the inner hit is defined: for the has_child query and filter it is the child type, for the has_parent query and filter it is the parent type, and for the nested query and filter it is the nested path.
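
As a sketch of how these options combine, the following inner_hits definition (the comments.date field is an assumption) would return the five oldest matching comments per hit under the key top_comments:

"inner_hits" : {
    "name" : "top_comments",
    "from" : 0,
    "size" : 5,
    "sort" : [
        {"comments.date" : "asc"}
    ]
}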

Inner hits also support the following per-document features: highlighting, explain, source filtering, script fields, docvalue fields, and returning versions.

Nested inner hits

The nested inner_hits can be used to include nested inner objects as inner hits to a search hit.

The example below assumes that there is a nested object field defined with the name comments:

{
    "query" : {
        "nested" : {
            "path" : "comments",
            "query" : {
                "match" : {"comments.message" : "[actual query]"}
            },
            "inner_hits" : {} 
        }
    }
}

The empty inner_hits definition inside the nested query is all that is required; no other options need to be defined.

An example of a response snippet that could be generated from the above search request:

...
"hits": {
  ...
  "hits": [
     {
        "_index": "my-index",
        "_type": "question",
        "_id": "1",
        "_source": ...,
        "inner_hits": {
           "comments": { 
              "hits": {
                 "total": ...,
                 "hits": [
                    {
                       "_nested": {
                          "field": "comments",
                          "offset": 2
                       },
                       "_source": ...
                    },
                    ...
                 ]
              }
           }
        }
     },
     ...

The key comments is the name used in the inner hit definition in the search request; a custom key can be set via the name option.

The _nested metadata is crucial in the above example, because it identifies which inner nested object this inner hit came from. The field defines the object array field the nested hit is from, and the offset defines its location within that array in the _source. Due to sorting and scoring, the actual location of the hit objects in inner_hits usually differs from the location at which the nested inner objects were defined.

By default the _source is also returned for the hit objects in inner_hits, but this can be changed: via the _source filtering feature, part of the source can be returned or it can be disabled altogether. If stored fields are defined on the nested level, these can also be returned via the fields feature.

An important default is that the _source returned in hits inside inner_hits is relative to the _nested metadata. So in the above example only the comment part is returned per nested hit and not the entire source of the top level document that contained the comment.
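
For instance, to drop the source of the nested hits entirely and rely on other per-document features instead (a minimal sketch, reusing the comments query from above):

"inner_hits" : {
    "_source" : false
}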

Hierarchical levels of nested object fields and inner hits

If a mapping has multiple levels of hierarchical nested object fields, each level can be accessed via a dot-notated path. For example, if there is a comments nested field that contains a votes nested field, and votes should be returned directly with the root hits, then the following path can be defined:

{
   "query" : {
      "nested" : {
         "path" : "comments.votes",
         "query" : { ... },
         "inner_hits" : {}
      }
    }
}

This indirect referencing is only supported for nested inner hits.

Parent/child inner hits

The parent/child inner_hits can be used to include parent or child documents as inner hits to a search hit.

The example below assumes that there is a _parent field mapping in the comment type:

{
    "query" : {
        "has_child" : {
            "type" : "comment",
            "query" : {
                "match" : {"message" : "[actual query]"}
            },
            "inner_hits" : {} 
        }
    }
}

The inner hit definition, just like in the nested example; an empty definition suffices.

An example of a response snippet that could be generated from the above search request:

...
"hits": {
  ...
  "hits": [
     {
        "_index": "my-index",
        "_type": "question",
        "_id": "1",
        "_source": ...,
        "inner_hits": {
           "comment": {
              "hits": {
                 "total": ...,
                 "hits": [
                    {
                       "_type": "comment",
                       "_id": "5",
                       "_source": ...
                    },
                    ...
                 ]
              }
           }
        }
     },
     ...

Search After

Pagination of results can be done by using from and size, but the cost becomes prohibitive when deep pagination is reached. The index.max_result_window setting, which defaults to 10,000, is a safeguard: search requests take heap memory and time proportional to from + size. The scroll API is recommended for efficient deep scrolling, but scroll contexts are costly and using them for real-time user requests is not recommended. The search_after parameter circumvents this problem by providing a live cursor. The idea is to use the results from the previous page to help retrieve the next page.

Suppose that the query to retrieve the first page looks like this:

GET twitter/tweet/_search
{
    "size": 10,
    "query": {
        "match" : {
            "title" : "elasticsearch"
        }
    },
    "sort": [
        {"date": "asc"},
        {"_uid": "desc"}
    ]
}
Note

A field with one unique value per document should be used as the tiebreaker of the sort specification. Otherwise the sort order for documents that have the same sort values would be undefined. The recommended way is to use the _uid field, which is certain to contain one unique value for each document.
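
The result from the above request includes an array of sort values for each document. For instance, the last hit of the first page might look like this (values chosen to line up with the next request below):

{
    "_index" : "twitter",
    "_type" : "tweet",
    "_id" : "654323",
    "_score" : null,
    "_source" : ...,
    "sort" : [
        1463538857,
        "tweet#654323"
    ]
}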

These sort values can be used in conjunction with the search_after parameter to start returning results "after" any document in the result list. For instance, we can take the sort values of the last document and pass them to search_after to retrieve the next page of results:

GET twitter/tweet/_search
{
    "size": 10,
    "query": {
        "match" : {
            "title" : "elasticsearch"
        }
    },
    "search_after": [1463538857, "tweet#654323"],
    "sort": [
        {"date": "asc"},
        {"_uid": "desc"}
    ]
}
Note

The parameter from must be set to 0 (or -1) when search_after is used.

search_after is not a solution for jumping freely to a random page, but rather for scrolling many queries in parallel. It is very similar to the scroll API, but unlike it, the search_after parameter is stateless: it is always resolved against the latest version of the searcher. For this reason the sort order may change during a walk, depending on the updates and deletes to your index.