Transport
The transport networking layer is used for internal communication between nodes
within the cluster. Each call that goes from one node to another uses
the transport layer (for example, when an HTTP GET request is processed
by one node but should actually be processed by another node that holds
the data). The transport module is also used for the TransportClient
in the Elasticsearch Java API.
The transport mechanism is completely asynchronous, meaning that no thread blocks waiting for a response. Asynchronous communication helps solve the C10k problem, and it is also the ideal fit for scatter (broadcast) / gather operations such as search in Elasticsearch.
Transport settings
The internal transport communicates over TCP. You can configure it with the following settings:
Setting | Description |
---|---|
`transport.port` | A bind port range. Defaults to `9300-9400`. |
`transport.publish_port` | The port that other nodes in the cluster should use when communicating with this node. Useful when a cluster node is behind a proxy or firewall and the `transport.port` is not directly addressable from the outside. Defaults to the actual port assigned via `transport.port`. |
`transport.bind_host` | The host address to bind the transport service to. Defaults to `transport.host` (if set) or `network.bind_host`. |
`transport.publish_host` | The host address to publish for nodes in the cluster to connect to. Defaults to `transport.host` (if set) or `network.publish_host`. |
`transport.host` | Used to set the `transport.bind_host` and the `transport.publish_host`. |
`transport.connect_timeout` | The connect timeout for initiating a new connection (in time setting format). Defaults to `30s`. |
`transport.compress` | Set to `true` to enable compression (`DEFLATE`) between all nodes. Defaults to `false`. |
`transport.ping_schedule` | Schedule a regular application-level ping message to ensure that transport connections between nodes are kept alive. Defaults to `5s` in the transport client and `-1` (disabled) elsewhere. |
It also uses the common network settings.
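For example, a minimal `elasticsearch.yml` sketch combining a few of these settings might look like the following; the host address and timeout are illustrative values, not recommendations:

# Bind and publish the transport layer on a specific address,
# keep the default port range, and allow a longer connect timeout.
transport.host: 10.0.0.5
transport.port: 9300-9400
transport.connect_timeout: 60s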
Transport profiles
Elasticsearch allows you to bind to multiple ports on different interfaces through the use of transport profiles. See this example configuration:
transport.profiles.default.port: 9300-9400
transport.profiles.default.bind_host: 10.0.0.1
transport.profiles.client.port: 9500-9600
transport.profiles.client.bind_host: 192.168.0.1
transport.profiles.dmz.port: 9700-9800
transport.profiles.dmz.bind_host: 172.16.1.2
The `default` profile is special. It is used as a fallback for any other
profiles, if those do not have a specific configuration setting set, and is how
this node connects to other nodes in the cluster.
The following parameters can be configured on each transport profile, as in the example above:
- `port`: The port to bind to
- `bind_host`: The host to bind to
- `publish_host`: The host which is published in informational APIs
- `tcp.no_delay`: Configures the `TCP_NO_DELAY` option for this socket
- `tcp.keep_alive`: Configures the `SO_KEEPALIVE` option for this socket
- `tcp.keep_idle`: Configures the `TCP_KEEPIDLE` option for this socket, which determines the time in seconds that a connection must be idle before starting to send TCP keepalive probes. Only applicable on Linux and Mac, and requires JDK 11 or newer. Defaults to `-1`, which does not set this option at the socket level but uses the system's default configuration instead.
- `tcp.keep_interval`: Configures the `TCP_KEEPINTVL` option for this socket, which determines the time in seconds between sending TCP keepalive probes. Only applicable on Linux and Mac, and requires JDK 11 or newer. Defaults to `-1`, which does not set this option at the socket level but uses the system's default configuration instead.
- `tcp.keep_count`: Configures the `TCP_KEEPCNT` option for this socket, which determines the number of unacknowledged TCP keepalive probes that may be sent on a connection before it is dropped. Only applicable on Linux and Mac, and requires JDK 11 or newer. Defaults to `-1`, which does not set this option at the socket level but uses the system's default configuration instead.
- `tcp.reuse_address`: Configures the `SO_REUSEADDR` option for this socket
- `tcp.send_buffer_size`: Configures the send buffer size of the socket
- `tcp.receive_buffer_size`: Configures the receive buffer size of the socket
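As a sketch of how these per-profile TCP options might be combined in `elasticsearch.yml` (the `client` profile name and all values here are illustrative, not recommendations):

# Hypothetical "client" profile with tuned TCP options.
transport.profiles.client.port: 9500-9600
transport.profiles.client.tcp.no_delay: true
transport.profiles.client.tcp.keep_alive: true
transport.profiles.client.tcp.keep_idle: 300
transport.profiles.client.tcp.send_buffer_size: 1mb
transport.profiles.client.tcp.receive_buffer_size: 1mb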
Long-lived idle connections
Elasticsearch opens a number of long-lived TCP connections between each pair of
nodes in the cluster, and some of these connections may be idle for an extended
period of time. Nonetheless, Elasticsearch requires these connections to remain
open, and it can disrupt the operation of the cluster if any inter-node
connections are closed by an external influence such as a firewall. It is
important to configure your network to preserve long-lived idle connections
between Elasticsearch nodes, for instance by leaving `tcp.keep_alive` enabled
and ensuring that the keepalive interval is shorter than any timeout that might
cause idle connections to be closed, or by setting `transport.ping_schedule` if
keepalives cannot be configured.
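For example, a minimal `elasticsearch.yml` sketch enabling application-level pings; the 15-second interval is an illustrative value, chosen here to be shorter than a typical firewall idle timeout:

# Ping idle transport connections every 15s so intermediate
# firewalls see regular traffic and keep the connections open.
transport.ping_schedule: 15s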
Request compression
By default, the `transport.compress` setting is `false`, and network-level
request compression is disabled between nodes in the cluster. This default
normally makes sense for local cluster communication, as compression has a
noticeable CPU cost and local clusters tend to be set up with fast network
connections between nodes.
The `transport.compress` setting always configures local cluster request
compression and is the fallback setting for remote cluster request compression.
If you want to configure remote request compression differently than local
request compression, you can set it on a per-remote cluster basis using the
`cluster.remote.${cluster_alias}.transport.compress` setting.
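For example, a sketch that keeps local cluster traffic uncompressed while compressing requests to a single remote cluster (the `cluster_two` alias is hypothetical):

# Local cluster requests stay uncompressed (the default).
transport.compress: false
# Compress requests sent to the remote cluster registered as "cluster_two".
cluster.remote.cluster_two.transport.compress: true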
Response compression
The compression settings do not configure compression for responses. Elasticsearch will compress a response if the inbound request was compressed—even when compression is not enabled. Similarly, Elasticsearch will not compress a response if the inbound request was uncompressed—even when compression is enabled.
Transport tracer
The transport layer has a dedicated tracer logger which, when activated, logs incoming and outgoing requests. The tracer can be activated dynamically by setting the level of the `org.elasticsearch.transport.TransportService.tracer` logger to `TRACE`:
PUT _cluster/settings
{
  "transient" : {
    "logger.org.elasticsearch.transport.TransportService.tracer" : "TRACE"
  }
}
You can also control which actions are traced, using a set of include and exclude wildcard patterns. By default, every request is traced except for fault detection pings:
PUT _cluster/settings
{
  "transient" : {
    "transport.tracer.include" : "*",
    "transport.tracer.exclude" : "internal:coordination/fault_detection/*"
  }
}
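When you have finished tracing, you can restore the defaults by setting these values to `null`, following the usual cluster settings semantics (a sketch):

PUT _cluster/settings
{
  "transient" : {
    "logger.org.elasticsearch.transport.TransportService.tracer" : null,
    "transport.tracer.include" : null,
    "transport.tracer.exclude" : null
  }
}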