Original English version: https://www.elastic.co/guide/en/elasticsearch/reference/7.7/inference-processor.html; the original documentation is copyright of www.elastic.co
Local English version: ../en/inference-processor.html

Inference Processor

This functionality is experimental and may be changed or removed completely in a future release. Elastic will take a best effort approach to fix any issues, but experimental features are not subject to the support SLA of official GA features.

Uses a pre-trained data frame analytics model to infer against the data that is being ingested in the pipeline.

Table 52. Inference Options

Name             | Required | Default                      | Description
model_id         | yes      | -                            | (String) The ID of the model to load and infer against.
target_field     | no       | ml.inference.<processor_tag> | (String) Field added to incoming documents to contain results objects.
field_map        | yes      | -                            | (Object) Maps the document field names to the known field names of the model. This mapping takes precedence over any default mappings provided in the model configuration.
inference_config | yes      | -                            | (Object) Contains the inference type and its options. There are two types: regression and classification.
if               | no       | -                            | Conditionally execute this processor.
on_failure       | no       | -                            | Handle failures for this processor. See Handling Failures in Pipelines.
ignore_failure   | no       | false                        | Ignore failures for this processor. See Handling Failures in Pipelines.
tag              | no       | -                            | An identifier for this processor. Useful for debugging and metrics.

{
  "inference": {
    "model_id": "flight_delay_regression-1571767128603",
    "target_field": "FlightDelayMin_prediction_infer",
    "field_map": {},
    "inference_config": { "regression": {} }
  }
}
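
For reference, this is how such a processor definition might be wired into a complete ingest pipeline with the create pipeline API. The following is a minimal sketch: the pipeline name flight_delay_predictions is a hypothetical placeholder, while the model ID is the one from the example above.

PUT _ingest/pipeline/flight_delay_predictions
{
  "description": "Adds a flight delay prediction to incoming documents",
  "processors": [
    {
      "inference": {
        "model_id": "flight_delay_regression-1571767128603",
        "target_field": "FlightDelayMin_prediction_infer",
        "field_map": {},
        "inference_config": { "regression": {} }
      }
    }
  ]
}

Documents sent through this pipeline, for example via the pipeline query parameter on an index request or an index's index.default_pipeline setting, then receive the prediction in the configured target_field.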

Regression configuration options

results_field
(Optional, string) Specifies the field to which the inference prediction is written. Defaults to predicted_value.

num_top_feature_importance_values
(Optional, integer) Specifies the maximum number of feature importance values per document. By default, it is zero and no feature importance calculation occurs.
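
For example, a regression configuration that also requests the top two feature importance values per document might look like the following sketch; the results field name my_results is a hypothetical placeholder:

{
  "inference_config": {
    "regression": {
      "results_field": "my_results",
      "num_top_feature_importance_values": 2
    }
  }
}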

Classification configuration options

results_field
(Optional, string) The field that is added to incoming documents to contain the inference prediction. Defaults to predicted_value.

num_top_classes
(Optional, integer) Specifies the number of top class predictions to return. Defaults to 0.

top_classes_results_field
(Optional, string) Specifies the field to which the top classes are written. Defaults to top_classes.

num_top_feature_importance_values
(Optional, integer) Specifies the maximum number of feature importance values per document. By default, it is zero and no feature importance calculation occurs.

inference_config examples

{
  "inference_config": {
    "regression": {
      "results_field": "my_regression"
    }
  }
}

This configuration specifies a regression inference and the results are written to the my_regression field contained in the target_field results object.

{
  "inference_config": {
    "classification": {
      "num_top_classes": 2,
      "results_field": "prediction",
      "top_classes_results_field": "probabilities"
    }
  }
}

This configuration specifies a classification inference. The number of categories for which the predicted probabilities are reported is 2 (num_top_classes). The result is written to the prediction field and the top classes to the probabilities field. Both fields are contained in the target_field results object.
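
Before attaching such a configuration to a live pipeline, its output can be previewed with the simulate pipeline API. The sketch below is illustrative only: the model ID flight_delay_classification-0001 and the document fields are hypothetical placeholders.

POST _ingest/pipeline/_simulate
{
  "pipeline": {
    "processors": [
      {
        "inference": {
          "model_id": "flight_delay_classification-0001",
          "field_map": {},
          "inference_config": {
            "classification": {
              "num_top_classes": 2,
              "results_field": "prediction",
              "top_classes_results_field": "probabilities"
            }
          }
        }
      }
    ]
  },
  "docs": [
    { "_source": { "FlightDelayType": "Weather Delay", "DistanceMiles": 580.0 } }
  ]
}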

Feature importance object mapping

To get the full benefit of aggregating and searching for feature importance, update the index mapping of the feature importance result field as shown below:

"ml.inference.feature_importance": {
  "type": "nested",
  "dynamic": true,
  "properties": {
    "feature_name": {
      "type": "keyword"
    },
    "importance": {
      "type": "double"
    }
  }
}
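
As a concrete illustration, here is a minimal sketch of applying that mapping with the update mapping API; the index name my-index is a hypothetical placeholder, and the field path assumes the default target_field and no processor tag (see below):

PUT my-index/_mapping
{
  "properties": {
    "ml": {
      "properties": {
        "inference": {
          "properties": {
            "feature_importance": {
              "type": "nested",
              "dynamic": true,
              "properties": {
                "feature_name": { "type": "keyword" },
                "importance": { "type": "double" }
              }
            }
          }
        }
      }
    }
  }
}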

The mapping field name for feature importance is compounded as follows:

<ml.inference.target_field>.<inference.tag>.feature_importance

If inference.tag is not provided in the processor definition, it is not part of the field path. The <ml.inference.target_field> defaults to ml.inference.

For example, if you provide the tag foo in the processor definition, as shown below:

{
  "tag": "foo",
  ...
}

The feature importance value is written to the ml.inference.foo.feature_importance field.

You can also specify a target field as follows:

{
  "tag": "foo",
  "target_field": "my_field"
}

In this case, feature importance is exposed in the my_field.foo.feature_importance field.
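
With the mapping in place, feature importance can be aggregated with nested aggregations. The following sketch averages importance per feature; it assumes the hypothetical index my-index with the default target_field and no processor tag, so the field path is ml.inference.feature_importance:

GET my-index/_search
{
  "size": 0,
  "aggs": {
    "feature_importance": {
      "nested": { "path": "ml.inference.feature_importance" },
      "aggs": {
        "by_feature": {
          "terms": { "field": "ml.inference.feature_importance.feature_name" },
          "aggs": {
            "avg_importance": {
              "avg": { "field": "ml.inference.feature_importance.importance" }
            }
          }
        }
      }
    }
  }
}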