
OpenAnolis White Paper: A Software and Hardware Collaborative Protocol Stack for DPU Scenarios

This short article discusses the background, solution, advantages, and application scenarios of Shared Memory Communication (SMC).


By Yang Lu and Wei Shi


As data center network bandwidth rises sharply and latency continues to fall, the traditional Ethernet-based TCP protocol stack faces new challenges: conventional Ethernet network interface controllers and the TCP stack can no longer meet requirements for network throughput, transmission latency, efficiency, and cost reduction. At the same time, cloud and hardware vendors now offer high-performance DPU solutions. What is needed, therefore, is a high-performance network protocol stack in which software collaborates with hardware: one that adapts to DPUs, exploits the full performance of the hardware, and supports large-scale cloud application scenarios, while remaining friendly to development, deployment, and O&M and compatible with mainstream business architectures (such as cloud-native).


Shared Memory Communication (SMC) is a high-performance, software-hardware collaborative protocol stack that was contributed by IBM to the Linux community and is enhanced and maintained by OpenAnolis. SMC provides an all-in-one solution for different scenarios, hardware, and application models:

  • Uses virtual private cloud (VPC) or data center RDMA to deliver high-performance communication across different business scales and scenarios.
  • Is compatible with the RDMA verbs ecosystem. SMC offloads the protocol stack to hardware, improving network performance, reducing CPU resource usage, and supporting a variety of hardware.
  • Transparently replaces TCP for network applications. SMC is fully compatible with TCP socket interfaces and provides fast fallback to TCP.
  • Uses a unified, efficient shared memory model. SMC achieves high-performance shared memory communication with hardware offloading.



Advantages

  1. Transparently accelerates traditional TCP applications. It is non-intrusive to applications, runtime environment images, and deployment methods, and is friendly to DevOps and cloud-native workflows.
  2. A software-hardware collaborative network protocol stack for DPUs, delivering higher network performance with lower resource usage.
  3. A standardized, open-source network protocol stack natively supported by Linux. SMC-R is specified in IETF RFC 7609 and maintained by the community.

Application Scenarios

SMC is a general-purpose, high-performance network protocol stack natively supported by the Linux kernel. It supports socket interfaces and fast fallback to TCP, so SMC can transparently replace TCP for any application. Because the ratio of business logic to network overhead differs from workload to workload, the acceleration benefit varies by application. The following are typical application scenarios and business best practices:

  • For in-memory databases such as Redis and some OLAP databases: Redis QPS increases by up to 50% and latency decreases by 55%.
  • For distributed storage systems: the cloud-native Curve improves performance by 18.5% in a 3-volume, 256-queue-depth random-write scenario.
  • For web services: with NGINX persistent connections, QPS increases by up to 49.6% and latency decreases by 55.48%.

In general, the SMC protocol stack improves the performance of TCP applications, reducing latency and increasing QPS without any changes to application code. The acceleration effect depends on the ratio of business logic to network overhead and therefore varies across applications; in specific scenarios such as high-performance computing and big data, SMC can deliver significant performance improvements.
