Get inference trained model API
Retrieves configuration information for a trained inference model.
This functionality is experimental and may be changed or removed completely in a future release. Elastic will take a best effort approach to fix any issues, but experimental features are not subject to the support SLA of official GA features.
Request
GET _ml/inference/
GET _ml/inference/<model_id>
GET _ml/inference/_all
GET _ml/inference/<model_id1>,<model_id2>
GET _ml/inference/<model_id_pattern*>
Prerequisites
Required privileges which should be added to a custom role:

- cluster: monitor_ml

For more information, see Security privileges and Built-in roles.
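For example, a minimal custom role granting only this cluster privilege could be created with the create role API; the role name shown here is illustrative:

POST /_security/role/ml_inference_monitor
{
  "cluster": [ "monitor_ml" ]
}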
Description
You can get information for multiple trained models in a single API request by using a comma-separated list of model IDs or a wildcard expression.
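For example, the following requests (the model IDs and the wildcard prefix are hypothetical) retrieve two specific models and all models whose IDs start with regression-, respectively:

GET _ml/inference/regression-model-one,regression-model-two
GET _ml/inference/regression-*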
Path parameters
- <model_id>
  (Optional, string) The unique identifier of the trained inference model.
Query parameters
- allow_no_match
  (Optional, boolean) Specifies what to do when the request:
  - Contains wildcard expressions and there are no trained models that match.
  - Contains the _all string or no identifiers and there are no matches.
  - Contains wildcard expressions and there are only partial matches.
  The default value is true, which returns an empty trained_model_configs array when there
  are no matches and the subset of results when there are partial matches. If this parameter
  is false, the request returns a 404 status code when there are no matches or only partial
  matches.
- decompress_definition
  (Optional, boolean) Specifies whether the included model definition should be returned as
  a JSON map (true) or in a custom compressed format (false). Defaults to true.
- from
  (Optional, integer) Skips the specified number of trained models. The default value is 0.
- include_model_definition
  (Optional, boolean) Specifies if the model definition should be returned in the response.
  Defaults to false. When true, only a single model must match the ID patterns provided,
  otherwise a bad request is returned.
- size
  (Optional, integer) Specifies the maximum number of trained models to obtain. The default
  value is 100.
- tags
  (Optional, string) A comma delimited string of tags. An inference model can have many
  tags, or none. When supplied, only inference models that contain all the supplied tags
  are returned.
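As an illustrative sketch, the following request combines several of these parameters to page through the first 20 models that carry a hypothetical regression tag:

GET _ml/inference/_all?from=0&size=20&tags=regression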
Response body
- trained_model_configs
  (array) An array of trained model resources, which are sorted by the model_id value in
  ascending order.

  Properties of trained model resources:

  - created_by
    (string) Information on the creator of the trained model.
  - create_time
    (time units) The time when the trained model was created.
  - default_field_map
    (object) A string to string object that contains the default field map to use when
    inferring against the model. For example, data frame analytics may train the model on a
    specific multi-field foo.keyword. The analytics job would then supply a default field
    map entry for "foo" : "foo.keyword". Any field map described in the inference
    configuration takes precedence.
  - estimated_heap_memory_usage_bytes
    (integer) The estimated heap usage in bytes to keep the trained model in memory.
  - estimated_operations
    (integer) The estimated number of operations to use the trained model.
  - license_level
    (string) The license level of the trained model.
  - metadata
    (object) An object containing metadata about the trained model. For example, models
    created by data frame analytics contain analysis_config and input objects.
  - model_id
    (string) Identifier for the trained model.
  - tags
    (string) A comma delimited string of tags. An inference model can have many tags, or
    none.
  - version
    (string) The Elasticsearch version number in which the trained model was created.
Response codes
- 400
  If include_model_definition is true, this code indicates that more than one model matches
  the ID pattern.
- 404 (Missing resources)
  If allow_no_match is false, this code indicates that there are no resources that match
  the request or only partial matches for the request.
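For example, assuming the wildcard pattern below matches no existing models, this request returns a 404 because allow_no_match is set to false:

GET _ml/inference/nonexistent-model-*?allow_no_match=false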
Examples
The following example gets configuration information for all the trained models:
GET _ml/inference/
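The response contains a trained_model_configs array with the fields described in the response body section above. A sketch of an abridged response follows; all values (the model ID, timestamps, sizes, and license level) are illustrative rather than actual output:

{
  "trained_model_configs": [
    {
      "model_id": "example-regression-model",
      "created_by": "_xpack",
      "version": "7.6.0",
      "create_time": 1574775307356,
      "estimated_heap_memory_usage_bytes": 1053992,
      "estimated_operations": 39629,
      "license_level": "platinum",
      "metadata": {
        "analysis_config": { ... },
        "input": { ... }
      }
    }
  ]
}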