Grok Processor
Extracts structured fields out of a single text field within a document. You choose which field to extract matched fields from, as well as the grok pattern you expect will match. A grok pattern is like a regular expression that supports aliased expressions that can be reused.
This tool is perfect for syslog logs, Apache and other web server logs, MySQL logs, and in general any log format that is written for humans rather than for computer consumption. This processor comes packaged with many reusable patterns.
If you need help building patterns to match your logs, you will find the Grok Debugger tool quite useful! The Grok Debugger is an X-Pack feature under the Basic License and is therefore free to use. The Grok Constructor at http://grokconstructor.appspot.com/ is also a useful tool.
Grok Basics
Grok sits on top of regular expressions, so any regular expressions are valid in grok as well. The regular expression library is Oniguruma, and you can see the full supported regexp syntax on the Oniguruma site.
Grok works by leveraging this regular expression language to allow naming existing patterns and combining them into more complex patterns that match your fields.
The syntax for reusing a grok pattern comes in three forms: %{SYNTAX:SEMANTIC}, %{SYNTAX}, or %{SYNTAX:SEMANTIC:TYPE}.
The SYNTAX is the name of the pattern that will match your text. For example, 3.44 will be matched by the NUMBER pattern and 55.3.244.1 will be matched by the IP pattern. The syntax is how you match. NUMBER and IP are both patterns that are provided within the default patterns set.
The SEMANTIC is the identifier you give to the piece of text being matched. For example, 3.44 could be the duration of an event, so you could call it simply duration. Further, a string 55.3.244.1 might identify the client making a request.
The TYPE is the type to which you wish to cast your named field. int, long, double, float and boolean are supported types for coercion.
For example, you might want to match the following text:
3.44 55.3.244.1
You may know that the message in the example is a number followed by an IP address. You can match this text by using the following Grok expression.
%{NUMBER:duration} %{IP:client}
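If you also want duration stored as a number rather than a string, you can add a TYPE to the same expression; a minimal variant of the example above:

%{NUMBER:duration:double} %{IP:client}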
Using the Grok Processor in a Pipeline
Table 48. Grok Options

Name | Required | Default | Description
---|---|---|---
field | yes | - | The field to use for grok expression parsing
patterns | yes | - | An ordered list of grok expressions to match and extract named captures with. Returns on the first expression in the list that matches.
pattern_definitions | no | - | A map of pattern-name and pattern tuples defining custom patterns to be used by the current processor. Patterns matching existing names will override the pre-existing definition.
trace_match | no | false | When true, _ingest._grok_match_index will be inserted into your matched document's metadata with the index of the pattern in patterns that matched.
ignore_missing | no | false | If true and field does not exist or is null, the processor quietly exits without modifying the document.
if | no | - | Conditionally execute this processor.
on_failure | no | - | Handle failures for this processor. See Handling Failures in Pipelines.
ignore_failure | no | false | Ignore failures for this processor. See Handling Failures in Pipelines.
tag | no | - | An identifier for this processor. Useful for debugging and metrics.
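To see a few of these options in context, here is a minimal sketch that stores a pipeline with a tag for debugging (the pipeline id my-grok-pipeline and the tag value are illustrative, not part of the original example):

PUT _ingest/pipeline/my-grok-pipeline
{
  "description": "Parse a number followed by an IP address",
  "processors": [
    {
      "grok": {
        "field": "message",
        "patterns": ["%{NUMBER:duration} %{IP:client}"],
        "tag": "grok-duration-client"
      }
    }
  ]
}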
Here is an example of using the provided patterns to extract and name structured fields from a string field in a document.
POST _ingest/pipeline/_simulate
{
  "pipeline": {
    "description" : "...",
    "processors": [
      {
        "grok": {
          "field": "message",
          "patterns": ["%{IP:client} %{WORD:method} %{URIPATHPARAM:request} %{NUMBER:bytes:int} %{NUMBER:duration:double}"]
        }
      }
    ]
  },
  "docs": [
    {
      "_source": {
        "message": "55.3.244.1 GET /index.html 15824 0.043"
      }
    }
  ]
}
This pipeline will insert these named captures as new fields within the document, like so:
{ "docs": [ { "doc": { "_index": "_index", "_type": "_doc", "_id": "_id", "_source" : { "duration" : 0.043, "request" : "/index.html", "method" : "GET", "bytes" : 15824, "client" : "55.3.244.1", "message" : "55.3.244.1 GET /index.html 15824 0.043" }, "_ingest": { "timestamp": "2016-11-08T19:43:03.850+0000" } } } ] }
Custom Patterns
The Grok processor comes pre-packaged with a base set of patterns. These patterns may not always have what you are looking for. Patterns have a very basic format. Each entry has a name and the pattern itself.
You can add your own patterns to a processor definition under the pattern_definitions option.
Here is an example of a pipeline specifying custom pattern definitions:
{ "description" : "...", "processors": [ { "grok": { "field": "message", "patterns": ["my %{FAVORITE_DOG:dog} is colored %{RGB:color}"], "pattern_definitions" : { "FAVORITE_DOG" : "beagle", "RGB" : "RED|GREEN|BLUE" } } } ] }
Providing Multiple Match Patterns
Sometimes one pattern is not enough to capture the potential structure of a field. Let's assume we want to match all messages that contain your favorite pet breeds of either cats or dogs. One way to accomplish this is to provide two distinct patterns that can be matched, instead of one really complicated expression capturing the same "or" behavior.
Here is an example of such a configuration executed against the simulate API:
POST _ingest/pipeline/_simulate
{
  "pipeline": {
    "description" : "parse multiple patterns",
    "processors": [
      {
        "grok": {
          "field": "message",
          "patterns": ["%{FAVORITE_DOG:pet}", "%{FAVORITE_CAT:pet}"],
          "pattern_definitions" : {
            "FAVORITE_DOG" : "beagle",
            "FAVORITE_CAT" : "burmese"
          }
        }
      }
    ]
  },
  "docs": [
    {
      "_source": {
        "message": "I love burmese cats!"
      }
    }
  ]
}
response:
{ "docs": [ { "doc": { "_type": "_doc", "_index": "_index", "_id": "_id", "_source": { "message": "I love burmese cats!", "pet": "burmese" }, "_ingest": { "timestamp": "2016-11-08T19:43:03.850+0000" } } } ] }
Both patterns will set the field pet with the appropriate match, but what if we want to trace which of our patterns matched and populated our fields? We can do this with the trace_match parameter. Here is the output of that same pipeline, but with "trace_match": true configured:
{ "docs": [ { "doc": { "_type": "_doc", "_index": "_index", "_id": "_id", "_source": { "message": "I love burmese cats!", "pet": "burmese" }, "_ingest": { "_grok_match_index": "1", "timestamp": "2016-11-08T19:43:03.850+0000" } } } ] }
In the above response, you can see that the index of the pattern that matched was "1". That is to say, it was the second (index starts at zero) pattern in patterns to match.
This trace metadata enables debugging which of the patterns matched. This information is stored in the ingest metadata and will not be indexed.
Retrieving patterns from REST endpoint
The Grok Processor exposes its own REST endpoint for retrieving the patterns it comes packaged with.
GET _ingest/processor/grok
The above request will return a response body containing a key-value representation of the built-in patterns dictionary.
{ "patterns" : { "BACULA_CAPACITY" : "%{INT}{1,3}(,%{INT}{3})*", "PATH" : "(?:%{UNIXPATH}|%{WINPATH})", ... }
This can be useful to reference as the built-in patterns change across versions.
Grok watchdog
Grok expressions that take too long to execute are interrupted, and the grok processor then fails with an exception. The grok processor has a watchdog thread that determines when evaluation of a grok expression has taken too long; it is controlled by the following settings:
Table 49. Grok watchdog settings

Name | Default | Description
---|---|---
ingest.grok.watchdog.interval | 1s | How often to check whether there are grok evaluations that take longer than the maximum allowed execution time.
ingest.grok.watchdog.max_execution_time | 1s | The maximum allowed execution time of a grok expression evaluation.
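These are node-level settings; a minimal sketch of adjusting them in elasticsearch.yml (the values shown are illustrative, not recommendations):

# elasticsearch.yml
ingest.grok.watchdog.interval: 1s
ingest.grok.watchdog.max_execution_time: 2s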
Grok debugging
It is advised to use the Grok Debugger to debug grok patterns. From there you can test one or more patterns in the UI against sample data. Under the covers it uses the same engine as the ingest node grok processor.
Additionally, it is recommended to enable debug logging for Grok so that any additional messages may also be seen in the Elasticsearch server log.
PUT _cluster/settings
{
  "transient": {
    "logger.org.elasticsearch.ingest.common.GrokProcessor": "debug"
  }
}
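When you are finished debugging, the same API can restore the default log level by setting the logger back to null:

PUT _cluster/settings
{
  "transient": {
    "logger.org.elasticsearch.ingest.common.GrokProcessor": null
  }
}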