Resilience in larger clusters
It is not unusual for nodes to share some common infrastructure, such as a power supply or network router. If so, you should plan for the failure of this infrastructure and ensure that such a failure would not affect too many of your nodes. It is common practice to group all the nodes sharing some infrastructure into zones and to plan for the failure of any whole zone at once.
Your cluster’s zones should all be contained within a single data centre. Elasticsearch expects its node-to-node connections to be reliable and have low latency and high bandwidth. Connections between data centres typically do not meet these expectations. Although Elasticsearch will behave correctly on an unreliable or slow network, it will not necessarily behave optimally. It may take a considerable length of time for a cluster to fully recover from a network partition since it must resynchronize any missing data and rebalance the cluster once the partition heals. If you want your data to be available in multiple data centres, deploy a separate cluster in each data centre and use cross-cluster search or cross-cluster replication to link the clusters together. These features are designed to perform well even if the cluster-to-cluster connections are less reliable or slower than the network within each cluster.
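As a rough illustration, the following elasticsearch.yml sketch registers a remote cluster for cross-cluster search or replication. The alias cluster_two and the seed address are hypothetical placeholders for your own remote cluster's transport endpoints:

# Hypothetical remote cluster registration; the alias "cluster_two" and
# the address are examples. Port 9300 is the default transport port.
cluster:
  remote:
    cluster_two:
      seeds:
        - 192.168.2.1:9300

These cluster.remote settings can also be applied dynamically through the cluster settings API rather than in elasticsearch.yml.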
After losing a whole zone’s worth of nodes, a properly designed cluster may be functional but running with significantly reduced capacity. You may need to provision extra nodes to restore acceptable performance in your cluster when handling such a failure.
For resilience against whole-zone failures, it is important that there is a copy of each shard in more than one zone, which can be achieved by placing data nodes in multiple zones and configuring shard allocation awareness. You should also ensure that client requests are sent to nodes in more than one zone.
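For example, a minimal elasticsearch.yml sketch for zone-aware allocation might look like the following, assuming an arbitrary custom node attribute named zone:

# On each data node, label the node with the zone it runs in. The
# attribute name "zone" and the value "zone1" are example labels.
node.attr.zone: zone1

# Make shard allocation take the "zone" attribute into account, so that
# copies of the same shard are spread across different zones.
cluster.routing.allocation.awareness.attributes: zone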
You should consider all node roles and ensure that each role is split redundantly across two or more zones. For instance, if you are using ingest pipelines or machine learning, you should have ingest or machine learning nodes in two or more zones. However, the placement of master-eligible nodes requires a little more care because a resilient cluster needs at least two of the three master-eligible nodes in order to function. The following sections explore the options for placing master-eligible nodes across multiple zones.
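As a sketch of splitting a role across zones, the two hypothetical node configurations below place the ingest role in two different zones, so that ingest capacity survives the loss of either zone:

# elasticsearch.yml for an ingest-capable node in the first zone:
node.roles: [ data, ingest ]
node.attr.zone: zone1

# elasticsearch.yml for a second ingest-capable node in the other zone:
node.roles: [ data, ingest ]
node.attr.zone: zone2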
Two-zone clusters
If you have two zones, you should have a different number of master-eligible nodes in each zone so that the zone with more nodes will contain a majority of them and will be able to survive the loss of the other zone. For instance, if you have three master-eligible nodes then you may put all of them in one zone or you may put two in one zone and the third in the other zone. You should not place an equal number of master-eligible nodes in each zone. If you place the same number of master-eligible nodes in each zone, neither zone has a majority of its own. Therefore, the cluster may not survive the loss of either zone.
Two-zone clusters with a tiebreaker
The two-zone deployment described above is tolerant to the loss of one of its zones but not to the loss of the other one, because master elections are majority-based. You cannot configure a two-zone cluster so that it can tolerate the loss of either zone because this is theoretically impossible. You might expect that if either zone fails then Elasticsearch can elect a node from the remaining zone as the master, but it is impossible to tell the difference between the failure of a remote zone and a mere loss of connectivity between the zones. If both zones were capable of running independent elections, a loss of connectivity would lead to a split-brain problem and therefore data loss. Elasticsearch avoids this and protects your data by not electing a node from either zone as master until that node can be sure that it has the latest cluster state and that there is no other master in the cluster. This may mean there is no master at all until connectivity is restored.
You can solve this by placing one master-eligible node in each of your two zones and adding a single extra master-eligible node in an independent third zone. The extra master-eligible node acts as a tiebreaker in cases where the two original zones are disconnected from each other. The extra tiebreaker node should be a dedicated voting-only master-eligible node, also known as a dedicated tiebreaker. A dedicated tiebreaker need not be as powerful as the other two nodes since it has no other roles and will not perform any searches nor coordinate any client requests nor be elected as the master of the cluster.
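A minimal elasticsearch.yml sketch for such a tiebreaker, assuming the example zone labels used above, might be:

# Dedicated tiebreaker in the third zone: the voting_only role (which must
# be combined with the master role) lets the node vote in master elections
# without ever being electable as master, and it holds no other roles.
node.roles: [ master, voting_only ]
node.attr.zone: zone3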
You should use shard allocation awareness to ensure that there is a copy of each shard in each zone. This means either zone remains fully available if the other zone fails.
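Forced awareness is one way to express this. Continuing with the hypothetical zone attribute, listing every expected zone value stops Elasticsearch from crowding all copies of a shard into one zone after the other zone fails:

cluster.routing.allocation.awareness.attributes: zone
# With forced awareness, if one zone is lost Elasticsearch leaves the spare
# shard copies unassigned rather than allocating them all to the survivor.
cluster.routing.allocation.awareness.force.zone.values: zone1,zone2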
All master-eligible nodes, including voting-only nodes, are on the critical path for publishing cluster state updates. Because of this, these nodes require reasonably fast persistent storage and a reliable, low-latency network connection to the rest of the cluster. If you add a tiebreaker node in a third independent zone then you must make sure it has adequate resources and good connectivity to the rest of the cluster.
Clusters with three or more zones
If you have three zones then you should have one master-eligible node in each zone. If you have more than three zones then you should choose three of the zones and put a master-eligible node in each of these three zones. This will mean that the cluster can still elect a master even if one of the zones fails.
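For instance, a sketch of three dedicated master-eligible nodes, one per zone, with the same example zone labels as above:

# Each of the three nodes gets its own elasticsearch.yml with a different zone:
node.roles: [ master ]
node.attr.zone: zone1   # zone2 and zone3 on the other two nodes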
As always, your indices should have at least one replica in case a node fails. You should also use shard allocation awareness to limit the number of copies of each shard in each zone. For instance, if you have an index with one or two replicas configured then allocation awareness will ensure that the replicas of each shard are in a different zone from the primary. This means that a copy of every shard will still be available if one zone fails, so the availability of your data will not be affected by such a failure.
Summary
The cluster will be resilient to the loss of any zone as long as:
- The cluster health status is green.
- There are at least two zones containing data nodes.
- Every index has at least one replica of each shard, in addition to the primary.
- Shard allocation awareness is configured to avoid concentrating all copies of a shard within a single zone.
- The cluster has at least three master-eligible nodes. At least two of these nodes are not voting-only master-eligible nodes, and they are spread evenly across at least three zones.
- Clients are configured to send their requests to nodes in more than one zone or are configured to use a load balancer that balances the requests across an appropriate set of nodes. The Elastic Cloud service provides such a load balancer.