Source: https://www.elastic.co/guide/en/elasticsearch/reference/7.7/analysis-standard-tokenizer.html; the original document is copyrighted by www.elastic.co.
IMPORTANT: No additional bug fixes or documentation updates
will be released for this version. For the latest information, see the
current release documentation.
Standard Tokenizer
The standard tokenizer provides grammar-based tokenization (based on the Unicode Text Segmentation algorithm, as specified in Unicode Standard Annex #29) and works well for most languages.
Example output
POST _analyze
{
  "tokenizer": "standard",
  "text": "The 2 QUICK Brown-Foxes jumped over the lazy dog's bone."
}
The above sentence would produce the following terms:
[ The, 2, QUICK, Brown, Foxes, jumped, over, the, lazy, dog's, bone ]
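The same Annex #29 word-boundary rules apply to non-English text. As an illustration (this request and its expected output are not from the original page; they follow the behavior shown above, where an apostrophe between letters does not split a word, just as dog's stayed intact):

POST _analyze
{
  "tokenizer": "standard",
  "text": "L'été à Besançon"
}

This should produce the terms:

[ L'été, à, Besançon ]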
Configuration
The standard tokenizer accepts the following parameters:

max_token_length
    The maximum token length. If a token is seen that exceeds this length then it is split at max_token_length intervals. Defaults to 255.
Example configuration
In this example, we configure the standard tokenizer to have a max_token_length of 5 (for demonstration purposes):
PUT my_index
{
  "settings": {
    "analysis": {
      "analyzer": {
        "my_analyzer": {
          "tokenizer": "my_tokenizer"
        }
      },
      "tokenizer": {
        "my_tokenizer": {
          "type": "standard",
          "max_token_length": 5
        }
      }
    }
  }
}

POST my_index/_analyze
{
  "analyzer": "my_analyzer",
  "text": "The 2 QUICK Brown-Foxes jumped over the lazy dog's bone."
}
The above example produces the following terms:
[ The, 2, QUICK, Brown, Foxes, jumpe, d, over, the, lazy, dog's, bone ]
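Note that jumped, the only token longer than five characters, is split at the five-character mark into jumpe and d. In practice, a custom analyzer like this is attached to a field when the index is created. The following sketch (the index name my_articles and field name title are illustrative, not from the original page) combines the analysis settings above with a field mapping:

PUT my_articles
{
  "settings": {
    "analysis": {
      "analyzer": {
        "my_analyzer": {
          "tokenizer": "my_tokenizer"
        }
      },
      "tokenizer": {
        "my_tokenizer": {
          "type": "standard",
          "max_token_length": 5
        }
      }
    }
  },
  "mappings": {
    "properties": {
      "title": {
        "type": "text",
        "analyzer": "my_analyzer"
      }
    }
  }
}

Text indexed into the title field is then tokenized with the five-character limit, and the same analyzer is applied to query text at search time unless a separate search_analyzer is configured.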