Edge n-gram tokenizer
The edge_ngram tokenizer first breaks text down into words whenever it encounters one of a list of specified characters, then it emits N-grams of each word where the start of the N-gram is anchored to the beginning of the word.
Edge N-Grams are useful for search-as-you-type queries.
When you need search-as-you-type for text which has a widely known order, such as movie or song titles, the completion suggester is a much more efficient choice than edge N-grams. Edge N-grams have the advantage when trying to autocomplete words that can appear in any order.
Example output
With the default settings, the edge_ngram tokenizer treats the initial text as a single token and produces N-grams with minimum length 1 and maximum length 2:
POST _analyze
{
  "tokenizer": "edge_ngram",
  "text": "Quick Fox"
}
The above sentence would produce the following terms:
[ Q, Qu ]
These default gram lengths are almost entirely useless. You need to configure the edge_ngram tokenizer before using it.
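To see the effect of larger gram lengths, here is a quick sketch using the _analyze API with an inline tokenizer definition (the gram lengths below are chosen purely for illustration). Because the default token_chars of [] keeps all characters, the input is still treated as a single token and every gram is anchored at the leading Q:

POST _analyze
{
  "tokenizer": {
    "type": "edge_ngram",
    "min_gram": 1,
    "max_gram": 5
  },
  "text": "Quick Fox"
}

This request should produce terms along the lines of [ Q, Qu, Qui, Quic, Quick ].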
Configuration
The edge_ngram tokenizer accepts the following parameters:

- min_gram
  Minimum length of characters in a gram. Defaults to 1.
- max_gram
  Maximum length of characters in a gram. Defaults to 2.
- token_chars
  Character classes that should be included in a token. Elasticsearch will split on characters that don’t belong to the classes specified. Defaults to [] (keep all characters).
  Character classes may be any of the following:
  - letter — for example a, b, ï or 京
  - digit — for example 3 or 7
  - whitespace — for example " " or "\n"
  - punctuation — for example ! or "
  - symbol — for example $ or √
  - custom — custom characters which need to be set using the custom_token_chars setting.
- custom_token_chars
  Custom characters that should be treated as part of a token. For example, setting this to +-_ will make the tokenizer treat the plus, minus and underscore signs as part of a token.
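As a sketch of how token_chars and custom_token_chars combine (the sample text and gram lengths here are purely illustrative), an _analyze request with an inline tokenizer definition might look like this:

POST _analyze
{
  "tokenizer": {
    "type": "edge_ngram",
    "min_gram": 2,
    "max_gram": 5,
    "token_chars": [ "letter", "custom" ],
    "custom_token_chars": "+-_"
  },
  "text": "c++ foo_bar"
}

Because + and _ are declared as custom token characters, the tokenizer keeps them inside their tokens instead of splitting on them, so the request should return terms such as [ c+, c++, fo, foo, foo_, foo_b ].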
Limitations of the max_gram parameter
The edge_ngram tokenizer’s max_gram value limits the character length of tokens. When the edge_ngram tokenizer is used with an index analyzer, this means search terms longer than the max_gram length may not match any indexed terms.

For example, if the max_gram is 3, searches for apple won’t match the indexed term app.
To account for this, you can use the truncate token filter with a search analyzer to shorten search terms to the max_gram character length. However, this could return irrelevant results.

For example, if the max_gram is 3 and search terms are truncated to three characters, the search term apple is shortened to app. This means searches for apple return any indexed terms matching app, such as apply, snapped, and apple.
We recommend testing both approaches to see which best fits your use case and desired search experience.
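For reference, here is a minimal sketch of the truncate approach (the index, tokenizer, filter and analyzer names are made up for illustration). It pairs an edge_ngram index analyzer with a search analyzer whose truncate filter length matches the max_gram of 3 used in the example above:

PUT my_truncated_index
{
  "settings": {
    "analysis": {
      "filter": {
        "truncate_to_max_gram": {
          "type": "truncate",
          "length": 3
        }
      },
      "tokenizer": {
        "three_gram_tokenizer": {
          "type": "edge_ngram",
          "min_gram": 1,
          "max_gram": 3,
          "token_chars": [ "letter" ]
        }
      },
      "analyzer": {
        "index_edge_ngrams": {
          "tokenizer": "three_gram_tokenizer",
          "filter": [ "lowercase" ]
        },
        "search_truncated": {
          "tokenizer": "lowercase",
          "filter": [ "truncate_to_max_gram" ]
        }
      }
    }
  },
  "mappings": {
    "properties": {
      "title": {
        "type": "text",
        "analyzer": "index_edge_ngrams",
        "search_analyzer": "search_truncated"
      }
    }
  }
}

With this setup a search for apple is analyzed as app, which matches the indexed edge n-grams but, as noted above, also matches terms such as apply and snapped.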
Example configuration
In this example, we configure the edge_ngram tokenizer to treat letters and digits as tokens, and to produce grams with minimum length 2 and maximum length 10:
PUT my_index
{
  "settings": {
    "analysis": {
      "analyzer": {
        "my_analyzer": {
          "tokenizer": "my_tokenizer"
        }
      },
      "tokenizer": {
        "my_tokenizer": {
          "type": "edge_ngram",
          "min_gram": 2,
          "max_gram": 10,
          "token_chars": [
            "letter",
            "digit"
          ]
        }
      }
    }
  }
}

POST my_index/_analyze
{
  "analyzer": "my_analyzer",
  "text": "2 Quick Foxes."
}
The above example produces the following terms:
[ Qu, Qui, Quic, Quick, Fo, Fox, Foxe, Foxes ]
Usually we recommend using the same analyzer at index time and at search time. In the case of the edge_ngram tokenizer, the advice is different. It only makes sense to use the edge_ngram tokenizer at index time, to ensure that partial words are available for matching in the index. At search time, just search for the terms the user has typed in, for instance: Quick Fo.
Below is an example of how to set up a field for search-as-you-type.
Note that the max_gram value for the index analyzer is 10, which limits indexed terms to 10 characters. Search terms are not truncated, meaning that search terms longer than 10 characters may not match any indexed terms.
PUT my_index
{
  "settings": {
    "analysis": {
      "analyzer": {
        "autocomplete": {
          "tokenizer": "autocomplete",
          "filter": [
            "lowercase"
          ]
        },
        "autocomplete_search": {
          "tokenizer": "lowercase"
        }
      },
      "tokenizer": {
        "autocomplete": {
          "type": "edge_ngram",
          "min_gram": 2,
          "max_gram": 10,
          "token_chars": [
            "letter"
          ]
        }
      }
    }
  },
  "mappings": {
    "properties": {
      "title": {
        "type": "text",
        "analyzer": "autocomplete",
        "search_analyzer": "autocomplete_search"
      }
    }
  }
}

PUT my_index/_doc/1
{
  "title": "Quick Foxes"
}

POST my_index/_refresh

GET my_index/_search
{
  "query": {
    "match": {
      "title": {
        "query": "Quick Fo",
        "operator": "and"
      }
    }
  }
}
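The autocomplete index analyzer turns Quick Foxes into the terms [ qu, qui, quic, quick, fo, fox, foxe, foxes ], while the autocomplete_search analyzer turns the query Quick Fo into [ quick, fo ], both of which appear in the index, so the match query above finds the document. As a quick check, you can run _analyze against each analyzer and compare the term lists:

POST my_index/_analyze
{
  "analyzer": "autocomplete",
  "text": "Quick Foxes"
}

POST my_index/_analyze
{
  "analyzer": "autocomplete_search",
  "text": "Quick Fo"
}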