Common kernel network parameters of ECS Linux instances and FAQ

Last Updated: Dec 15, 2020

Disclaimer: This article may contain information about third-party products. Such information is for reference only. Alibaba Cloud does not make any guarantee, express or implied, with respect to the performance and reliability of third-party products, as well as potential impacts of operations on the products.

Overview

This article describes common Linux kernel network parameters and how to troubleshoot related problems.

Background

Alibaba Cloud reminds you that:

  • Before you perform operations that may cause risks, such as modifying instance configurations or data, we recommend that you check the disaster recovery and fault tolerance capabilities of the instances to ensure data security.
  • If you modify the configurations and data of instances including but not limited to ECS and RDS instances, we recommend that you create snapshots or enable RDS log backup.
  • If you have authorized or submitted security information such as the logon account and password in the Alibaba Cloud Management console, we recommend that you modify such information in a timely manner.

This article covers the following topics. Refer to the sections that match your needs.

Query and modify kernel parameters of a Linux instance

Note the following before you modify kernel parameters:

  • We recommend that you only modify kernel parameters as needed.
  • Before you modify kernel parameters, understand their functions. Note that kernel parameters may differ even between environments of the same type or version.
  • Back up important data in ECS instances. For more information about how to back up data, see Create a snapshot.

This article provides the following methods to modify the kernel parameters of a Linux instance:

Method 1: Query and modify kernel parameters in the /proc/sys/ directory

/proc/sys/ is a pseudo directory generated after the Linux kernel starts. The net folder in this directory stores all network kernel parameters that have taken effect in the system. The directory tree structure is determined by the full parameter names. For example, net.ipv4.tcp_tw_recycle corresponds to the /proc/sys/net/ipv4/tcp_tw_recycle file, and the file content is the parameter value. A value modified with method 1 only takes effect for the current run; the system rolls back to the original value after a restart. Therefore, this method is generally used to temporarily verify a change. To make permanent changes, see method 2.

  • View a kernel parameter: run the cat command to view the content of the corresponding file. For example, run the following command to view the value of net.ipv4.tcp_tw_recycle.
    cat /proc/sys/net/ipv4/tcp_tw_recycle 
  • Modify a kernel parameter: run the echo command to modify the file corresponding to the kernel parameter. For example, run the following command to change the value of net.ipv4.tcp_tw_recycle to 0.
    echo "0" > /proc/sys/net/ipv4/tcp_tw_recycle 

Method 2: Query and modify kernel parameters by using the sysctl.conf file

  • Query kernel parameters: Run the sysctl -a command to query all parameters that have taken effect in the system. A similar output is displayed:
    net.ipv4.tcp_app_win = 31
    net.ipv4.tcp_adv_win_scale = 2
    net.ipv4.tcp_tw_reuse = 0
    net.ipv4.tcp_frto = 2
    net.ipv4.tcp_frto_response = 0
    net.ipv4.tcp_low_latency = 0
    net.ipv4.tcp_no_metrics_save = 0
    net.ipv4.tcp_moderate_rcvbuf = 1
    net.ipv4.tcp_tso_win_divisor = 3
    net.ipv4.tcp_congestion_control = cubic
    net.ipv4.tcp_abc = 0
    net.ipv4.tcp_mtu_probing = 0
    net.ipv4.tcp_base_mss = 512
    net.ipv4.tcp_workaround_signed_windows = 0
    net.ipv4.tcp_challenge_ack_limit = 1000
    net.ipv4.tcp_limit_output_bytes = 262144
    net.ipv4.tcp_dma_copybreak = 4096
    net.ipv4.tcp_slow_start_after_idle = 1
    net.ipv4.cipso_cache_enable = 1
    net.ipv4.cipso_cache_bucket_size = 10
    net.ipv4.cipso_rbm_optfmt = 0
    net.ipv4.cipso_rbm_strictvalid = 1
  • You can use the following methods to modify kernel parameters.
    Note: the kernel may be in an unstable state after you adjust kernel parameters. Make sure to restart the instance.
    • Run the following command to temporarily modify a kernel parameter:
      /sbin/sysctl -w kernel.parameter="[$Example]"
      Note: [$Example] is the parameter value. For example, run sysctl -w net.ipv4.tcp_tw_recycle=0 to change the value of net.ipv4.tcp_tw_recycle to 0.
    • Modify kernel parameters by modifying the configuration file.
      1. Run the following command to modify parameters in the /etc/sysctl.conf file:
        vi /etc/sysctl.conf 
      2. Run the following command to make the configuration take effect.
        /sbin/sysctl -p
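Putting method 2 together, a minimal end-to-end sketch that queries a single parameter and persists a new value. The parameter and value are reused from the earlier examples for illustration:

    sysctl -n net.ipv4.tcp_tw_recycle                          # print only the value
    echo "net.ipv4.tcp_tw_recycle = 0" >> /etc/sysctl.conf     # persist the setting
    /sbin/sysctl -p                                            # reload; applied lines are printed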

Linux kernel parameters FAQ

The following issues are common issues related to Linux kernel parameters:

1. Packet loss occurs on a Linux ECS instance due to a full NAT hash table.
2. The "Time wait bucket table overflow" error message is returned.
3. There are a large number of FIN_WAIT2 TCP connections in a Linux instance.
4. There are a large number of CLOSE_WAIT TCP connections in a Linux instance.
5. The NAT-configured client fails to access ECS or RDS instances.
6. There are a large number of TIME_WAIT connections.
7. After the server closes the connection, the client is still connected.
8. Linux instances cannot be connected by using Secure Shell (SSH) in a local network environment.

1. Packet loss occurs on a Linux ECS instance due to a full NAT hash table.

Note: the kernel parameters involved are as follows.

  • net.netfilter.nf_conntrack_buckets
  • net.nf_conntrack_max
Issue

The Linux instance experiences intermittent packet loss and cannot be connected. Ping tests show packet loss or timeouts, while tracert or mtr tests show no exception on the external network link. In addition, a large number of errors similar to the following appear in the system logs.

Feb  6 16:05:07 i-*** kernel: nf_conntrack: table full, dropping packet.
Feb  6 16:05:07 i-*** kernel: nf_conntrack: table full, dropping packet.
Feb  6 16:05:07 i-*** kernel: nf_conntrack: table full, dropping packet.
Feb  6 16:05:07 i-*** kernel: nf_conntrack: table full, dropping packet.
Cause analysis

ip_conntrack is a module that tracks connection entries for network address translation (NAT) on Linux. The ip_conntrack module uses a hash table to record established TCP connections. When the hash table is full, the "nf_conntrack: table full, dropping packet" error occurs. Linux allocates a space to maintain each TCP connection. The size of this space is subject to the values of nf_conntrack_buckets and nf_conntrack_max. The default value of nf_conntrack_max is four times that of nf_conntrack_buckets. We recommend that you set a greater value for the nf_conntrack_max parameter.

Note: connections maintained by the system consume a large amount of memory. We recommend that you decrease the value of the nf_conntrack_max parameter when the system is idle or has sufficient memory.

Fixes
  1. Log on to the Linux instance. For more information, see Connect to a Linux instance by using a management terminal.
  2. Run the following command to edit the kernel configuration:
    vi /etc/sysctl.conf
  3. Increase the maximum number of tracked connections: set net.netfilter.nf_conntrack_max to 655350 (see the reference sketch after these steps).
  4. Modify the timeout parameter: set net.netfilter.nf_conntrack_tcp_timeout_established to 1200. Default value: 432000. Unit: seconds.
  5. Run the sysctl -p command to make the configurations take effect.
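For reference, the resulting entries in /etc/sysctl.conf look like the following, and you can compare the current connection count against the limit (the /proc paths assume the nf_conntrack module is loaded):

    net.netfilter.nf_conntrack_max = 655350
    net.netfilter.nf_conntrack_tcp_timeout_established = 1200

    cat /proc/sys/net/netfilter/nf_conntrack_count    # connections currently tracked
    cat /proc/sys/net/netfilter/nf_conntrack_max      # current limit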

2. The " Time wait bucket table overflow " error message is returned.

Note: the kernel parameter involved is net.ipv4.tcp_max_tw_buckets.

Issue
  • The /var/log/messages log of a Linux instance contains error messages similar to "kernel: TCP: time wait bucket table overflow", indicating that the time wait bucket table has overflowed.
    Feb 18 12:28:38 i-*** kernel: TCP: time wait bucket table overflow
    Feb 18 12:28:44 i-*** kernel: printk: 227 messages suppressed.
    Feb 18 12:28:44 i-*** kernel: TCP: time wait bucket table overflow
    Feb 18 12:28:52 i-*** kernel: printk: 121 messages suppressed.
    Feb 18 12:28:52 i-*** kernel: TCP: time wait bucket table overflow
    Feb 18 12:28:53 i-*** kernel: printk: 351 messages suppressed.
    Feb 18 12:28:53 i-*** kernel: TCP: time wait bucket table overflow
    Feb 18 12:28:59 i-*** kernel: printk: 319 messages suppressed.
  • Run the following command to count the TCP connections in the TIME_WAIT status. The result shows that a large number of connections are in this state.
    netstat -ant|grep TIME_WAIT|wc -l
Possible causes

You can configure the net.ipv4.tcp_max_tw_buckets parameter to manage the number of TIME_WAIT connections in the kernel. When the sum of the connections already in the TIME_WAIT status and the connections about to enter that status exceeds the value of this parameter, the "time wait bucket table overflow" error is reported in the messages log, and the kernel closes the TCP connections that exceed the parameter value. You can increase the value of the net.ipv4.tcp_max_tw_buckets parameter as needed, and we also recommend that you reduce TIME_WAIT connections by optimizing TCP connection handling at the service level.

Fixes
  1. Run the following command to count the number of TCP connections.
    netstat -anp |grep tcp |wc -l
  2. If the count approaches the value of net.ipv4.tcp_max_tw_buckets, run the following command to edit the kernel configuration:
    vi /etc/sysctl.conf
  3. Increase the value of the net.ipv4.tcp_max_tw_buckets parameter as needed, as shown in the example after these steps.
  4. Run the sysctl -p command to make the configurations take effect.
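For example, to compare the current number of TIME_WAIT connections with the limit before raising it (the value 65535 below is illustrative, not a recommendation):

    sysctl -n net.ipv4.tcp_max_tw_buckets    # current limit
    netstat -ant|grep TIME_WAIT|wc -l        # current TIME_WAIT count
    # in /etc/sysctl.conf, add or modify:
    net.ipv4.tcp_max_tw_buckets = 65535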

3. There are a large number of FIN_WAIT2 TCP connections in a Linux instance.

Note: the kernel parameter involved is net.ipv4.tcp_fin_timeout.

Issue

There are a large number of TCP connections that are in the FIN_WAIT2 state.
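To confirm, you can count the connections in this state; a minimal sketch (either command works):

    netstat -ant|grep FIN_WAIT2|wc -l        # count FIN_WAIT2 connections
    ss -tan state fin-wait-2                 # list them; the first line is a header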

Cause analysis
  • In the HTTP service, the server closes connections in cases such as when keepalive times out. The end that actively closes the connection enters the FIN_WAIT2 state.
  • TCP/IP stacks support half-closed connections, and connections in the FIN_WAIT2 state do not time out by default. If the client side never closes the connection, it remains in the FIN_WAIT2 state until the system restarts. Therefore, an increasing number of FIN_WAIT2 connections can accumulate and eventually overwhelm the kernel.
  • We recommend that you reduce the value of the net.ipv4.tcp_fin_timeout parameter so that the system closes TCP connections in the FIN_WAIT2 state sooner.
Fixes
  1. Run the vi /etc/sysctl.conf command to modify or add the following content:
    net.ipv4.tcp_syncookies = 1
    net.ipv4.tcp_fin_timeout = 30
    net.ipv4.tcp_max_syn_backlog = 8192
    net.ipv4.tcp_max_tw_buckets = 5000
  2. Run the sysctl -p command to make the configurations take effect.

    Note: a TCP connection in the FIN_WAIT2 status will eventually enter the TIME_WAIT status. For details, see issue 2: The "Time wait bucket table overflow" error message is returned.

4. There are a large number of CLOSE_WAIT TCP connections in a Linux instance.

Issue

Run the following command to check whether the current system has a large number of TCP connections in the CLOSE_WAIT status.

netstat -atn|grep CLOSE_WAIT|wc -l
Cause analysis

Determine whether the number of CLOSE_WAIT TCP connections exceeds the normal range based on the traffic on the instance. TCP uses a four-way handshake to close a connection, and either end can initiate the close. If the peer closes the connection but the local end does not, the connection remains in the CLOSE_WAIT state. Although the connection is in this half-closed state, it can no longer communicate with the peer and must be released. We recommend that you close such connections in the program logic after detecting that the peer has closed them.

Fixes

Generally, the read and write functions provided by programming languages can detect TCP connections in the CLOSE_WAIT status. You can run the following command to check the number of connections in the CLOSE_WAIT status on the current instance.

netstat -an|grep CLOSE_WAIT|wc -l

The following describes how to close the connection in the Java and C languages:

Java

  1. Use the read method to check the I/O status. When read returns -1, the end of the stream has been reached.
  2. Close the connection by calling the close method.

C

Check the return value of the read function.

  • If the value is 0, close the connection.
  • If the value is less than 0, check the error code. If the error is not EAGAIN, close the connection.
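Before changing the application logic, it can help to identify which process holds the CLOSE_WAIT connections. A minimal sketch (the -p flag requires root privileges):

    netstat -antp|grep CLOSE_WAIT            # the last column shows PID/program name
    ss -tanp state close-wait                # equivalent query with ss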

5. The NAT-configured client fails to access ECS or RDS instances.

Note: the kernel parameters involved are as follows.

  • net.ipv4.tcp_tw_recycle
  • net.ipv4.tcp_timestamps
Issue

A NAT-configured client fails to access ECS or RDS instances and other cloud services. This includes ECS instances configured with SNAT in VPCs. Captured packets show that the remote ECS and RDS instances do not respond to SYN packets sent from the client.

Cause analysis

If the net.ipv4.tcp_tw_recycle and net.ipv4.tcp_timestamps kernel parameters are both set to 1, the remote server checks the timestamp in each packet. If the timestamps from one source IP address are not incremental, the remote server does not respond to the packet. After NAT is configured, requests from different clients reach the remote server with the same source IP address, but the system times of the clients behind the NAT may differ, which causes the timestamps in their packets to be non-incremental.

Fixes
  • If the remote server is an ECS instance, set the value of the net.ipv4.tcp_tw_recycle parameter to 0.
  • If the remote server is a PaaS service such as an RDS instance, set the values of the net.ipv4.tcp_tw_recycle and net.ipv4.tcp_timestamps parameters to 0 on the client, because you cannot modify kernel parameters for RDS instances in the console. See the sketch after this list.
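On the client, a minimal sketch of the temporary change described above (persist the values in /etc/sysctl.conf as described in method 2 if needed):

    sysctl -w net.ipv4.tcp_tw_recycle=0
    sysctl -w net.ipv4.tcp_timestamps=0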

6. There are a large number of TIME_WAIT connections.

Note: the kernel parameters involved are as follows:

  • net.ipv4.tcp_syncookies
  • net.ipv4.tcp_tw_reuse
  • net.ipv4.tcp_tw_recycle
  • net.ipv4.tcp_fin_timeout
Issue

There are a large number of connections that are in the TIME_WAIT state in an ECS instance.

Possible causes

When an application calls the close() function to close a TCP connection, the end that actively closes the connection enters the TIME_WAIT state after sending the last acknowledge (ACK) message, and returns to the initial state only after the 2MSL time period. MSL (maximum segment lifetime) is the longest period of time that a packet can remain undelivered in the network. As a result, while a TCP connection is in the TIME_WAIT state, the quadruple that defines the connection, including the client IP address and port number as well as the server IP address and port number, cannot be reused during the 2MSL time period.

Fixes

You can run the netstat or ss command to confirm that a large number of connections are in the TIME_WAIT status, and then modify the kernel parameters as follows.

  1. Run the following command to check the number of connections in the TIME_WAIT status.
    netstat -n | awk '/^tcp/ {++y[$NF]} END {for(w in y) print w, y[w]}'
    Note: You can also run the ss -tan state time-wait command to check the TIME_WAIT connection information.
  2. Run the following command to edit the kernel configuration:
    vi /etc/sysctl.conf
    Modify or add the following content:
    net.ipv4.tcp_syncookies = 1 
    net.ipv4.tcp_tw_reuse = 1 
    net.ipv4.tcp_tw_recycle = 1
    net.ipv4.tcp_fin_timeout = 30
    Warning: for a server in a NAT environment, enabling net.ipv4.tcp_tw_recycle = 1 causes timestamp verification of incoming packets, which can drop legitimate requests and affect the service. We recommend that you do not enable this feature. For more information about these four kernel parameters, see the following descriptions:
    • net.ipv4.tcp_syncookies=1: enable SYN cookies. When the SYN wait queue overflows, cookies are used to handle new connection attempts.
    • net.ipv4.tcp_tw_reuse=1: allow sockets in the TIME-WAIT state to be reused for new TCP connections. If the timestamp of a new request is greater than the timestamp stored for a TIME_WAIT connection, the system reallocates that connection to the new request.
    • net.ipv4.tcp_tw_recycle=1: enable fast recycling of sockets in the TIME-WAIT state. Note that this mechanism depends on the timestamp option, which the system (tcp_timestamps) enables by default. When tcp_timestamps and tcp_tw_recycle are enabled at the same time, TCP caches the latest timestamp of each connection. If the timestamp of a subsequent request is smaller than the cached timestamp, the request is considered invalid and the packet is discarded. After requests from different clients pass through NAT or a load balancer, they may appear to the server to belong to the same connection, and because the clients' clocks differ, their timestamps may be inconsistent. As a result, packets may be dropped and the service may be affected.
    • net.ipv4.tcp_fin_timeout=30: determines how long a socket closed by the local end stays in the FIN-WAIT-2 state.
  3. Run the following command to make the configuration take effect.
    /sbin/sysctl -p 
  4. A large number of connections in the TIME_WAIT status can lead to various problems. In addition to directly reducing TIME_WAIT connections, you can also improve system performance by expanding the local port range and scaling up the TIME_WAIT bucket, as shown in the sketch after these steps. For more information, see the discussion summaries of TIME_WAIT and CLOSE_WAIT.
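As an example of the optimizations mentioned in step 4, the local port range and the TIME_WAIT bucket can be enlarged in /etc/sysctl.conf. Both values below are illustrative, not recommendations:

    net.ipv4.ip_local_port_range = 1024 65535
    net.ipv4.tcp_max_tw_buckets = 10000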

7. After the server closes the connection, the client is still connected.

Note: the kernel parameter involved is net.ipv4.tcp_fin_timeout.

Issue

After Server A closes the TCP connection with Client B, the connection still shows as established on Client B.


Possible causes

This issue occurs because the default value of the net.ipv4.tcp_fin_timeout kernel parameter on the server has been modified.

Fixes
  1. Run the following command to modify the configuration and set net.ipv4.tcp_fin_timeout=30, as shown in the sketch after these steps.
     vi /etc/sysctl.conf
  2. Run the following command to make the configuration take effect.
    sysctl -p
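The relevant line in /etc/sysctl.conf, with a verification command to run after sysctl -p:

    net.ipv4.tcp_fin_timeout = 30

    sysctl -n net.ipv4.tcp_fin_timeout       # should print 30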

8. Linux instances cannot be connected by using SSH in a local network environment.

Note: the kernel parameter involved are as follows:

  • net.ipv4.tcp_tw_recycle
  • net.ipv4.tcp_timestamps
Issue

A Linux instance cannot be connected by using SSH in a local network environment, or an exception occurs when you access the HTTP service on the Linux instance. A Telnet test shows that the connection is reset.

Possible causes

If you use NAT to share Internet access in the local network, this issue may be caused by a mismatch between the local NAT environment and the kernel parameter configuration of the target Linux instance. You can try to resolve the problem by modifying the kernel parameters of the target Linux instance.

  1. Connect to the Linux ECS instance.
  2. Run the following command to view the configuration.
    cat /proc/sys/net/ipv4/tcp_tw_recycle
    cat /proc/sys/net/ipv4/tcp_timestamps
  3. Check whether the values of the preceding two configurations are 0. If the values are 1, the requests in the NAT environment may cause the preceding problems.
Fixes

Use the following method to change the values of the preceding parameters to 0:

  1. Run the following command to modify the configuration file:
    vi /etc/sysctl.conf
  2. Add the following configurations to the configuration file:
    net.ipv4.tcp_tw_recycle=0
    net.ipv4.tcp_timestamps=0
  3. Run the following command to make the configurations take effect:
    sysctl -p 
  4. Log on to the instance by using SSH again or test whether the service is available.

Linux kernel parameters described in this article

The Linux kernel parameters covered in this article are described as follows.

Parameter Description
net.core.rmem_default The default size of the socket receive buffer in bytes.
net.core.rmem_max The maximum size of the socket receive buffer in bytes.
net.core.wmem_default The default size of the socket send buffer in bytes.
net.core.wmem_max The maximum size of the socket send buffer in bytes.
net.core.netdev_max_backlog When the kernel processes packets more slowly than the network interface controller (NIC) receives them, the extra packets are stored in the receive queue of the NIC. This parameter specifies the maximum number of packets that can be queued per interface.
net.core.somaxconn This global parameter defines the maximum length of the listening queue for each port in the system, that is, the maximum number of completed (ESTABLISHED) connections waiting to be accepted. It complements the net.ipv4.tcp_max_syn_backlog parameter, which limits the number of half-open connections still in the three-way handshake. If the business load of your ECS instance is high, increase the value of this parameter. The backlog parameter of the listen(2) function also specifies the maximum queue length for a listening port; if the value of backlog is greater than that of net.core.somaxconn, the value of net.core.somaxconn applies.
net.core.optmem_max The maximum buffer size per socket.
net.ipv4.tcp_mem This parameter specifies how the TCP stack manages memory usage. The unit of each value is memory pages. The page size is 4 KB in most cases.
The first value is the lower limit of memory usage.
The second value is the threshold at which the TCP stack starts to moderate its memory usage (memory pressure).
The third value is the upper limit of memory usage, at which point packets can be discarded to reduce memory usage. These values can be increased for a larger BDP (note that the unit is memory pages, not bytes).
net.ipv4.tcp_rmem The memory used by TCP sockets for automatic tuning of the receive buffer.
The first value is the minimum size of the socket receive buffer in bytes.
The second value is the default size, which overrides the value of net.core.rmem_default. This size is used when the business load of the system is low.
The third value is the maximum size of the socket receive buffer in bytes. This value does not override net.core.rmem_max.
net.ipv4.tcp_wmem The memory used by TCP sockets for automatic tuning of the send buffer.
The first value is the minimum size of the socket send buffer in bytes.
The second value is the default size, which overrides the value of net.core.wmem_default. This size is used when the business load of the system is low.
The third value is the maximum size of the socket send buffer in bytes. This value does not override net.core.wmem_max.
net.ipv4.tcp_keepalive_time The interval of keepalive messages sent by TCP that are used to confirm whether the TCP connection is valid. Unit: seconds.
net.ipv4.tcp_keepalive_intvl The interval for resending a keepalive probe when no response is received. Unit: seconds.
net.ipv4.tcp_keepalive_probes The maximum number of keepalive messages to be sent before the TCP connection is considered invalid.
net.ipv4.tcp_sack This parameter specifies whether to enable TCP selective acknowledgment (SACK). 1 indicates that TCP SACK is enabled. The TCP SACK feature improves performance if multiple packets are lost by allowing the server to send only the lost packets. This parameter must be enabled in a wide area network (WAN), but it will increase the CPU utilization.
net.ipv4.tcp_fack This parameter specifies whether to enable forward acknowledgment. When you enable SACK to control network congestion, this parameter must be enabled.
net.ipv4.tcp_timestamps This parameter specifies whether to enable TCP timestamps (see RFC 1323), which add 12 bytes to the TCP header and allow more accurate round-trip time (RTT) calculation. Enable this option for better performance.
net.ipv4.tcp_window_scaling This parameter specifies whether to enable window scaling as defined in RFC 1323. To support a TCP window larger than 64 KB, set this parameter to 1. The maximum size of a TCP window is 1 GB. This parameter takes effect only when it is enabled on both ends of the connection.
net.ipv4.tcp_syncookies This parameter specifies whether to enable TCP SYN cookies (SYN_COOKIES). The kernel must be compiled with CONFIG_SYN_COOKIES. SYN cookies prevent a socket from being overloaded when there are too many connection attempts. The default value is 0, which disables the feature. When this parameter is set to 1 and the SYN_RECV queue is full, the kernel changes how it responds to SYN packets: the initial sequence number of the SYN-ACK packet is calculated based on the source IP address, source port number, destination IP address, destination port number, and time. An attacker cannot produce the expected acknowledgment because it cannot predict this sequence number, while a legitimate requester can respond based on the SYN-ACK packet it received. When net.ipv4.tcp_syncookies is enabled, net.ipv4.tcp_max_syn_backlog is ignored.
net.ipv4.tcp_tw_reuse This parameter specifies whether a socket in the TIME-WAIT state can be reused for new TCP connections.
net.ipv4.tcp_tw_recycle This parameter specifies whether to enable fast recycling of sockets in the TIME-WAIT state.
net.ipv4.tcp_fin_timeout The time period for which a TCP connection remains in the FIN-WAIT-2 state after the local end closes the socket. Unit: seconds. During this period, the peer may fail to close its end, for example because it is disconnected or its process was killed unexpectedly.
net.ipv4.ip_local_port_range The range of local port numbers available for TCP and UDP connections.
net.ipv4.tcp_max_syn_backlog The number of TCP connections that are in the SYN_RECV state. After the server receives a SYN packet from the client, the connection is initialized in the SYN_RECV state until the server receives the last ACK packet in response to the SYN-ACK packet to complete the three-way handshake. This parameter also specifies the maximum number of unconfirmed connection requests that can be stored in the queue. If the server is frequently overloaded, you can increase the value of this parameter. Default value: 1024.
net.ipv4.tcp_low_latency When enabled, this parameter makes the TCP/IP stack favor low latency over high throughput. Disable it when high throughput is preferred.
net.ipv4.tcp_westwood This parameter enables the TCP Westwood congestion control algorithm on the sender side. The algorithm maintains an estimate of throughput and attempts to optimize overall bandwidth usage. We recommend that you enable this parameter for a WAN.
net.ipv4.tcp_bic This parameter enables Binary Increase Congestion control for long-distance networks, so that connections over gigabit-speed links can make full use of the bandwidth. We recommend that you enable this parameter for a WAN.
net.ipv4.tcp_max_tw_buckets The maximum number of TIME_WAIT connections allowed in the system. TIME_WAIT connections that exceed this value are cleared immediately. Default value: 180000.
net.ipv4.tcp_synack_retries The number of times that the SYN-ACK packets are retransmitted when the connection is in the SYN_RECV state.
net.ipv4.tcp_abort_on_overflow When this parameter is set to 1 and the system receives a large number of requests within a short period of time that the relevant application fails to process, the system sends reset packets to directly close these connections. We recommend that you improve processing capability by optimizing the efficiency of the application rather than relying on reset packets. Default value: 0.
net.ipv4.route.max_size The maximum number of routes allowed by the kernel.
net.ipv4.ip_forward This parameter enables packets to be forwarded between the NICs of the local host.
net.ipv4.ip_default_ttl The maximum number of hops through which a packet can pass.
net.netfilter.nf_conntrack_tcp_timeout_established The timeout period for established connections tracked by the conntrack module. Connections that remain inactive for this period are cleared from the tracking table used by iptables.
net.netfilter.nf_conntrack_max The maximum number of entries in the connection tracking table.


For more information on kernel parameters, see the related documentation.

Application scope

  • Elastic Compute Service