Get inference trained model statistics API
Retrieves usage information for trained inference models.
This functionality is experimental and may be changed or removed completely in a future release. Elastic will take a best effort approach to fix any issues, but experimental features are not subject to the support SLA of official GA features.
Request
GET _ml/inference/_stats
GET _ml/inference/_all/_stats
GET _ml/inference/<model_id>/_stats
GET _ml/inference/<model_id>,<model_id_2>/_stats
GET _ml/inference/<model_id_pattern*>,<model_id_2>/_stats
Prerequisites
Required privileges which should be added to a custom role:
- cluster: monitor_ml
For more information, see Security privileges and Built-in roles.
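As a sketch, assuming you manage roles with the Elasticsearch security APIs, a custom role carrying only this privilege could be created as follows; the role name ml_stats_reader is an illustrative choice, not something this API requires:

POST /_security/role/ml_stats_reader
{
  "cluster": [ "monitor_ml" ]
}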
Description
You can get usage information for multiple trained models in a single API request by using a comma-separated list of model IDs or a wildcard expression.
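For example, the following request combines one of the model IDs from the example response below with a hypothetical wildcard pattern, returning statistics for that model and for any model whose ID starts with flight-delay:

GET _ml/inference/regression-job-one-1574775307356,flight-delay*/_stats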
Path parameters
- <model_id>
  (Optional, string) The unique identifier of the trained inference model.
Query parameters
- allow_no_match
  (Optional, boolean) Specifies what to do when the request:
  - Contains wildcard expressions and there are no trained models that match.
  - Contains the _all string or no identifiers and there are no matches.
  - Contains wildcard expressions and there are only partial matches.
  The default value is true, which returns an empty trained_model_stats array when there are no matches and the subset of results when there are partial matches. If this parameter is false, the request returns a 404 status code when there are no matches or only partial matches.
- from
  (Optional, integer) Skips the specified number of trained models. The default value is 0.
- size
  (Optional, integer) Specifies the maximum number of trained models to obtain. The default value is 100. A paged request that combines these parameters is shown after this list.
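As an illustration of how these parameters combine, the following request pages through the statistics ten models at a time starting at the twentieth model, and keeps allow_no_match at its default of true so an empty page does not produce an error; the from and size values are arbitrary examples:

GET _ml/inference/_all/_stats?from=20&size=10&allow_no_match=true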
Response codes
- 404 (Missing resources)
  If allow_no_match is false, this code indicates that there are no resources that match the request or only partial matches for the request.
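For instance, assuming the hypothetical pattern missing-model* matches no trained models, the following request returns a 404 status code because allow_no_match is set to false:

GET _ml/inference/missing-model*/_stats?allow_no_match=false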
Examples
The following example gets usage information for all the trained models:
GET _ml/inference/_stats
The API returns the following results:
{ "count": 2, "trained_model_stats": [ { "model_id": "flight-delay-prediction-1574775339910", "pipeline_count": 0 }, { "model_id": "regression-job-one-1574775307356", "pipeline_count": 1, "ingest": { "total": { "count": 178, "time_in_millis": 8, "current": 0, "failed": 0 }, "pipelines": { "flight-delay": { "count": 178, "time_in_millis": 8, "current": 0, "failed": 0, "processors": [ { "inference": { "type": "inference", "stats": { "count": 178, "time_in_millis": 7, "current": 0, "failed": 0 } } } ] } } } } ] }