Elastic Remote Direct Memory Access (eRDMA) lets ECS instances bypass the OS kernel to access remote memory directly over the network, delivering low latency and high throughput without the overhead of the traditional TCP/IP stack. To connect eRDMA with your application, choose one of two adaptation solutions: Shared Memory Communications over RDMA (SMC-R) for kernel mode, or Network Accelerator (NetACC) for user mode.
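Before choosing a solution, you may want to confirm that the instance actually exposes an eRDMA device to user space. A quick check with the standard rdma-core utilities (assuming the package is installed; the device name `erdma_0` shown in the comment is typical but not guaranteed) might look like:

```shell
# List RDMA-capable devices visible to user space.
# On an eRDMA-enabled ECS instance this typically shows a device such as erdma_0.
ibv_devices

# Show device details (GUID, port state, link layer) for the devices found.
ibv_devinfo
```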
Choose a solution
The right solution depends on whether the application's communication code can be modified and whether you are able to recompile it.
| Application | Solution | Code changes required | Recompilation required |
|---|---|---|---|
| Redis | SMC-R | No | No |
| HPC workloads (E-HPC NEXT) | SMC-R | No | No |
| Spark | jVerbs | Yes | Yes |
| Kafka | — | — | — |
Transparent RDMA support (SMC-R, NetACC): no source code changes or recompilation needed. SMC-R requires only that you run Alibaba Cloud Linux 3; NetACC requires only that you set the LD_PRELOAD environment variable.
Non-transparent RDMA support (jVerbs for Spark): requires modifying the application's communication code.
Adaptation solutions
Kernel mode: SMC-R
SMC-R is defined in IETF RFC 7609, published by IBM in 2015; IBM contributed its open-source Linux implementation in kernel 4.11 (2017). On Alibaba Cloud Linux 3, SMC-R transparently replaces TCP without any code changes or functionality loss, offloading the network protocol stack to hardware.
For setup instructions, see Enable and configure SMC.
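As a minimal sketch of what transparent SMC-R usage can look like, assuming Alibaba Cloud Linux 3 with the smc-tools package installed (package name and Redis paths are illustrative), the `smc_run` wrapper starts an unmodified TCP application over SMC sockets:

```shell
# Install the SMC user-space tools (illustrative package name).
sudo yum install -y smc-tools

# Start an unmodified TCP application under SMC-R: smc_run preloads a
# library that transparently replaces AF_INET stream sockets with AF_SMC
# sockets. No recompilation of redis-server is needed.
smc_run redis-server /etc/redis.conf

# Verify that connections are using SMC rather than falling back to TCP.
smcss
```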
User mode: NetACC
Network Accelerator (NetACC) is a user-mode network acceleration library. Load it by setting the LD_PRELOAD environment variable — no application code changes required. NetACC exposes socket-compatible interfaces, so existing TCP applications gain kernel bypass, low latency, high throughput, and protocol stack offload without modification.
For setup instructions, see NetACC User Guide.
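Because NetACC is injected through LD_PRELOAD, launching an accelerated process is a one-line change to the start command. A sketch follows; the preload library path below is an assumption for illustration and should be replaced with the path from your actual NetACC installation:

```shell
# Launch an unmodified TCP application with NetACC interposed on its
# socket calls. The library path is illustrative; use the path where
# your NetACC package actually installed the preload library.
LD_PRELOAD=/usr/lib64/libnetacc-preload.so redis-server /etc/redis.conf
```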
Use cases
Redis: Use SMC-R for transparent acceleration with no code changes. See Deploy SMC-R and Redis based on eRDMA.
High-performance computing (HPC): Multi-node parallel jobs in E-HPC NEXT clusters — including climate meteorology, industrial simulation, and molecular dynamics — achieve on-premises-level network performance with eRDMA. See Deploy an Elastic High Performance Computing cluster with eRDMA.
Spark: Use jVerbs to modify Spark's communication code for RDMA support. See Deploy a Spark cluster on eRDMA-enhanced instances.
Kafka: Deploy a Kafka cluster on eRDMA-capable ECS instances to accelerate data transfers between nodes. See Deploy a high-network-performance Kafka cluster on eRDMA-enhanced instances.
Next steps
For more best practices, see Use eRDMA to improve network performance.