Original English version: https://www.elastic.co/guide/en/elasticsearch/reference/7.7/indices-synced-flush-api.html. The original document is copyrighted by www.elastic.co.

Local English version: ../en/indices-synced-flush-api.html

Synced flush API

Deprecated in 7.6.

Synced-flush is deprecated and will be removed in 8.0. Use flush instead. A flush has the same effect as a synced flush on Elasticsearch 7.6 or later.

Performs a synced flush on one or more indices.

POST /twitter/_flush/synced
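On Elasticsearch 7.6 or later a plain flush has the same effect, so the request above can be replaced with the regular flush API (shown here for the same twitter index):

POST /twitter/_flush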

Request

POST /<index>/_flush/synced

GET /<index>/_flush/synced

POST /_flush/synced

GET /_flush/synced

Description

Use the synced flush API

Use the synced flush API to manually initiate a synced flush. This can be useful for a planned cluster restart where you can stop indexing but don’t want to wait for 5 minutes until all indices are marked as inactive and automatically sync-flushed.

You can request a synced flush even if there is ongoing indexing activity, and Elasticsearch will perform the synced flush on a "best-effort" basis: shards that do not have any ongoing indexing activity will be successfully sync-flushed, and other shards will fail to sync-flush. The successfully sync-flushed shards will have faster recovery times as long as the sync_id marker is not removed by a subsequent flush.
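For example, ahead of a planned cluster restart you might stop indexing, disable replica allocation, and then request a synced flush of all indices. This is only a sketch of the relevant steps from a typical restart procedure; if the synced flush returns a 409 because of ongoing operations, it can simply be retried:

PUT /_cluster/settings
{
  "persistent": {
    "cluster.routing.allocation.enable": "primaries"
  }
}

POST /_flush/synced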

Synced flush overview

Elasticsearch keeps track of which shards have received indexing activity recently, and considers shards that have not received any indexing operations for 5 minutes to be inactive.

When a shard becomes inactive Elasticsearch performs a special kind of flush known as a synced flush. A synced flush performs a normal flush on each copy of the shard, and then adds a marker known as the sync_id to each copy to indicate that these copies have identical Lucene indices. Comparing the sync_id markers of two copies is a very efficient way to check whether they have identical contents.

When allocating shard replicas, Elasticsearch must ensure that each replica contains the same data as the primary. If the shard copies have been synced-flushed and the replica shares a sync_id with the primary then Elasticsearch knows that the two copies have identical contents. This means there is no need to copy any segment files from the primary to the replica, which saves a good deal of time during recoveries and restarts.

This is particularly useful for clusters having lots of indices which are very rarely updated, such as with time-based indices. Without the synced flush marker, recovery of this kind of cluster would be much slower.

Check for sync_id markers

To check whether a shard has a sync_id marker or not, look for the commit section of the shard stats returned by the indices stats API:

GET /twitter/_stats?filter_path=**.commit&level=shards 

filter_path is used to reduce the verbosity of the response, but is entirely optional

The API returns the following response:

{
   "indices": {
      "twitter": {
         "shards": {
            "0": [
               {
                 "commit" : {
                   "id" : "3M3zkw2GHMo2Y4h4/KFKCg==",
                   "generation" : 3,
                   "user_data" : {
                     "translog_uuid" : "hnOG3xFcTDeoI_kvvvOdNA",
                     "history_uuid" : "XP7KDJGiS1a2fHYiFL5TXQ",
                     "local_checkpoint" : "-1",
                     "translog_generation" : "2",
                     "max_seq_no" : "-1",
                     "sync_id" : "AVvFY-071siAOuFGEO9P", 
                     "max_unsafe_auto_id_timestamp" : "-1",
                     "min_retained_seq_no" : "0"
                   },
                   "num_docs" : 0
                 }
               }
            ]
         }
      }
   }
}

the sync_id marker

The sync_id marker is removed as soon as the shard is flushed again, and Elasticsearch may trigger an automatic flush of a shard at any time if there are unflushed operations in the shard’s translog. In practice this means that one should consider any indexing operation on an index as having removed its sync_id markers.
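For example, indexing a single document into the twitter index from the example above and then flushing will remove the marker, which you can confirm by requesting the shard stats again (the document content here is purely illustrative):

PUT /twitter/_doc/1
{
  "user": "kimchy"
}

POST /twitter/_flush

GET /twitter/_stats?filter_path=**.commit.user_data.sync_id&level=shards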

Path parameters

<index>

(Optional, string) Comma-separated list or wildcard expression of index names used to limit the request.

To sync-flush all indices, omit this parameter or use a value of _all or *.

Query parameters

allow_no_indices

(Optional, boolean) If true, the request does not return an error if a wildcard expression or _all value retrieves only missing or closed indices.

This parameter also applies to index aliases that point to a missing or closed index.

expand_wildcards

(Optional, string) Controls the kinds of indices that wildcard expressions can expand to. Multiple values are accepted when separated by a comma, as in open,hidden. Valid values are:

all
Expand to open and closed indices, including hidden indices.
open
Expand only to open indices.
closed
Expand only to closed indices.
hidden
Expansion of wildcards will include hidden indices. Must be combined with open, closed, or both.
none
Wildcard expressions are not accepted.

Defaults to open.

ignore_unavailable
(Optional, boolean) If true, missing or closed indices are not included in the response. Defaults to false.
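As an illustration, these query parameters can be combined on a single request; the logs-* index pattern below is hypothetical:

POST /logs-*/_flush/synced?expand_wildcards=open,hidden&ignore_unavailable=true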

Response codes

200
All shards successfully sync-flushed.
409
A replica shard failed to sync-flush.

Examples

Sync-flush a specific index

POST /kimchy/_flush/synced

Sync-flush several indices

POST /kimchy,elasticsearch/_flush/synced

Sync-flush all indices

POST /_flush/synced

The response contains details about how many shards were successfully sync-flushed and information about any failure.

The following response indicates that all shard copies were successfully sync-flushed:

{
   "_shards": {
      "total": 2,
      "successful": 2,
      "failed": 0
   },
   "twitter": {
      "total": 2,
      "successful": 2,
      "failed": 0
   }
}

The following response indicates one shard group failed due to pending operations:

{
   "_shards": {
      "total": 4,
      "successful": 2,
      "failed": 2
   },
   "twitter": {
      "total": 4,
      "successful": 2,
      "failed": 2,
      "failures": [
         {
            "shard": 1,
            "reason": "[2] ongoing operations on primary"
         }
      ]
   }
}

Sometimes the failures are specific to a shard copy. The copies that failed will not be eligible for fast recovery, but those that succeeded still will be. This case is reported as follows:

{
   "_shards": {
      "total": 4,
      "successful": 1,
      "failed": 1
   },
   "twitter": {
      "total": 4,
      "successful": 3,
      "failed": 1,
      "failures": [
         {
            "shard": 1,
            "reason": "unexpected error",
            "routing": {
               "state": "STARTED",
               "primary": false,
               "node": "SZNr2J_ORxKTLUCydGX4zA",
               "relocating_node": null,
               "shard": 1,
               "index": "twitter"
            }
         }
      ]
   }
}