Elastic Remote Direct Memory Access (eRDMA) enables high-throughput, low-latency networking. Add eRDMA-capable nodes to your Container Service for Kubernetes (ACK) cluster to accelerate container networking.
Limitations
| Limitation | Description |
|---|---|
| Kubernetes version | The ACK cluster must run Kubernetes 1.24 or later. |
| NVIDIA driver | GPU-accelerated eRDMA nodes require an NVIDIA driver later than version 470.xx.xx. |
| Instance types | All eRDMA-enhanced Elastic Compute Service (ECS) instance types are supported. For the list of supported instance types, see Use eRDMA. |
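The NVIDIA driver requirement above can be checked with a short shell snippet. This is a minimal sketch: the hard-coded version string is an illustrative assumption, and on a real GPU node you would obtain the installed version with `nvidia-smi` instead.

```shell
# Hedged sketch: compare the NVIDIA driver major version against the 470 baseline.
# "version" is hard-coded for illustration; on a real GPU node, query it with:
#   nvidia-smi --query-gpu=driver_version --format=csv,noheader
version="535.129.03"

# Extract the major version (everything before the first dot).
major=${version%%.*}

if [ "$major" -ge 470 ]; then
  echo "driver OK for eRDMA"
else
  echo "driver too old: ${version}"
fi
```

Running this on a node with driver 535.129.03 prints `driver OK for eRDMA`; replace the hard-coded value with the `nvidia-smi` query when checking a real node.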
Prerequisites
Before you begin, make sure that:
- The eRDMA environment is configured on the node.
- An eRDMA interface (ERI) is created and associated with the ECS instance.
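As a quick sanity check for the first prerequisite, you can confirm that the `erdma` kernel module is loaded on the node. This sketch assumes a Linux node where `/proc/modules` is readable; it only verifies module presence and is not a substitute for the full environment setup.

```shell
# Hedged sketch: check whether the erdma kernel module is loaded on the node.
# On a correctly prepared eRDMA node this prints "erdma module loaded".
if grep -q '^erdma' /proc/modules 2>/dev/null; then
  echo "erdma module loaded"
else
  echo "erdma module not loaded"
fi
```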
Add eRDMA nodes to an ACK cluster
You can add eRDMA nodes manually, or add them automatically through a node pool.
Manually add existing nodes
Specify an OS image that meets the requirements described in Prerequisites as the node OS image.
Follow the steps in Add existing ECS instances to an ACK cluster.
Automatically add nodes through a node pool
Create a node pool and specify an OS image that meets the requirements described in Prerequisites.
Follow the steps in Create a node pool.
Next steps
Install eRDMA dependencies on the node
Install and configure ACK eRDMA Controller to accelerate container networking. For detailed steps, see Use eRDMA to accelerate container networking.
Install eRDMA dependencies in containers
Install the eRDMA user-space driver package in each container that requires eRDMA access. For details, see Configure eRDMA in a Docker container.
Include the eRDMA dependency installation steps in your Dockerfile to ensure consistency across environments. For more information, see the Dockerfile reference.
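The following sketch shows where the installation fits in an image build. The base image and the installer step are illustrative placeholders, not the actual package names or download locations; take the real commands from Configure eRDMA in a Docker container.

```dockerfile
# Hedged sketch: install the eRDMA user-space driver during the image build.
# The base image and the commented installer step are placeholders; use the
# actual steps from "Configure eRDMA in a Docker container".
FROM ubuntu:22.04

# Basic tools for fetching and running an installer (assumption).
RUN apt-get update && apt-get install -y --no-install-recommends \
        curl ca-certificates \
    && rm -rf /var/lib/apt/lists/*

# Placeholder for the eRDMA user-space driver installation step.
# Replace with the download and install commands from the official guide:
# RUN curl -fsSL <erdma-driver-package-url> -o /tmp/install_erdma.sh \
#     && sh /tmp/install_erdma.sh
```

Baking the installation into the image, rather than installing at container startup, keeps every replica of the workload on an identical driver version.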