
A New Upgrade for Applicable Scenarios! Extensions for Dragonfly2 as a Distributed Cache System Architecture

This article introduces Dragonfly2 and some of its new extensions.


By Eryu Guan

An Introduction to Dragonfly2

As the image acceleration standard solution of OpenAnolis, Dragonfly is a P2P-based intelligent image and file distribution tool that aims to improve the efficiency and speed of large-scale file transfers and maximize the use of network bandwidth. It is widely used in application distribution, cache distribution, log distribution, and image distribution.

The current Dragonfly evolved from Dragonfly 1.x. While retaining the core capabilities of Dragonfly 1.x, it has been comprehensively upgraded in several aspects, such as system architecture design, product capabilities, and usage scenarios.

The Dragonfly architecture is mainly divided into three parts: Manager, Scheduler, and Seed Peer, which together form a P2P download network, each performing its own duties. Dfdaemon can run as either a Seed Peer or a regular Peer. Please see the architecture document at the end of this article for more information. The functions of each module are as follows:

  • Manager: It maintains the relationships between P2P clusters and handles dynamic configuration management, user management, and permission management. A frontend console is also included to facilitate visualized operations on clusters.
  • Scheduler: It selects the optimal parent node from which each peer downloads, and instructs Dfdaemon to fall back to back-to-origin downloads in exceptional cases.
  • Seed Peer: A Dfdaemon running in Seed Peer mode acts as the back-to-origin download node in a P2P cluster, which is the root download node of the entire cluster.
  • Peer: Deployed with Dfdaemon, it provides the dfget command-line download tool based on a client/server architecture, where the dfget daemon runs as a background process providing task download capabilities.


Please see the Dragonfly website at the end of this article for more information.


Although Dragonfly is known as a P2P-based file distribution system, the files it distributes must be downloadable from the network: whether it is an RPM package or container image content, there is ultimately a source address. A user initiates a download request to dfdaemon through the dfget command, and the Dragonfly P2P system performs the download. If the data is not available on other peers, the Peer or Seed Peer falls back to the origin, downloads the data directly from the source, and returns it to the user.

However, in some scenarios, the data we need to distribute is generated on a certain node and has no remote source address, so Dragonfly cannot distribute it. We therefore want Dragonfly to support this scenario as well, turning it into a distributed, P2P-based cache and arbitrary-data distribution system.

Extensions for Dragonfly2

This is what we think the Dragonfly cache system architecture should be:


  • A dfdaemon is deployed on each compute node (such as X-Dragon) as a peer that joins the P2P network and:

    • Accepts requests from the local node
    • Provides upload services for other peers
    • Manages only its own local cache data; the business process is responsible for back-to-origin downloads
  • Each cluster deploys one or more ECS-based scheduler nodes, which:

    • Record the file information of the P2P network
    • Schedule downloads
    • Run as multiple nodes to avoid a single point of failure
    • Map each file in the cache system to a scheduler through ringhash (consistent hashing)
  • One or more managers act as cluster managers, which:

    • Send dynamic configurations to the scheduler and peer nodes
    • Collect metrics and other information
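The ringhash mapping from cache files to schedulers can be sketched with a small consistent-hash ring. The following Python snippet is an illustrative sketch only, not Dragonfly's actual implementation; the hash function, virtual-node count, and all names are assumptions:

```python
import bisect
import hashlib

def _hash(key: str) -> int:
    # Stable 64-bit hash of a key (illustrative choice of hash function)
    return int.from_bytes(hashlib.sha256(key.encode()).digest()[:8], "big")

class Ring:
    """Minimal consistent-hash ring mapping cache-task IDs to scheduler nodes."""
    def __init__(self, schedulers, vnodes=100):
        # Place `vnodes` virtual points per scheduler on the ring
        self._ring = sorted((_hash(f"{s}#{i}"), s)
                            for s in schedulers for i in range(vnodes))
        self._keys = [h for h, _ in self._ring]

    def scheduler_for(self, task_id: str) -> str:
        # The first virtual point clockwise from the task's hash owns the task
        idx = bisect.bisect(self._keys, _hash(task_id)) % len(self._keys)
        return self._ring[idx][1]

ring = Ring(["scheduler-a", "scheduler-b", "scheduler-c"])
owner = ring.scheduler_for("sha256:deadbeef")
```

Virtual nodes keep the assignment roughly balanced, and adding or removing a scheduler only remaps the tasks that hashed to its ring segments.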

Interface Design

dfdaemon Interface

The original daemon interface:

// Daemon Client RPC Service
service Daemon {
  // Trigger client to download file
  rpc Download(DownRequest) returns(stream DownResult);
  // Get piece tasks from other peers
  rpc GetPieceTasks(base.PieceTaskRequest) returns(base.PiecePacket);
  // Check daemon health
  rpc CheckHealth(google.protobuf.Empty) returns(google.protobuf.Empty);
}

Four interfaces are added:

service Daemon {
  // Check if given task exists in P2P cache system
  rpc StatTask(StatTaskRequest) returns(google.protobuf.Empty);
  // Import the given file into P2P cache system
  rpc ImportTask(ImportTaskRequest) returns(google.protobuf.Empty);
  // Export or download file from P2P cache system
  rpc ExportTask(ExportTaskRequest) returns(google.protobuf.Empty);
  // Delete file from P2P cache system
  rpc DeleteTask(DeleteTaskRequest) returns(google.protobuf.Empty);
}
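The intended semantics of the four new dfdaemon RPCs can be sketched with a toy in-memory model. This is purely illustrative; the real dfdaemon serves these calls over gRPC and stores data as pieces on disk, and every name below is hypothetical:

```python
class LocalCachePeer:
    """Toy in-memory model of the four new dfdaemon cache operations."""
    def __init__(self):
        self._tasks = {}  # cid -> file content

    def stat_task(self, cid: str) -> bool:
        # StatTask: does this peer hold the task?
        return cid in self._tasks

    def import_task(self, cid: str, data: bytes) -> None:
        # ImportTask: add local data to the cache (announcing to the
        # scheduler is omitted in this sketch)
        self._tasks[cid] = data

    def export_task(self, cid: str) -> bytes:
        # ExportTask: read the task back; a real peer would try other
        # peers via the scheduler before failing
        if cid not in self._tasks:
            raise KeyError(f"task {cid} not found")
        return self._tasks[cid]

    def delete_task(self, cid: str) -> None:
        # DeleteTask: drop the local copy
        self._tasks.pop(cid, None)
```

The key difference from the original Download flow is that ImportTask seeds data generated locally, with no origin URL involved.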

scheduler Interface

The original scheduler interface:

// Scheduler System RPC Service
service Scheduler {
  // RegisterPeerTask registers a peer into one task.
  rpc RegisterPeerTask(PeerTaskRequest) returns(RegisterResult);
  // ReportPieceResult reports piece results and receives peer packets.
  // When migrating to another scheduler,
  // it will send the last piece result to the new scheduler.
  rpc ReportPieceResult(stream PieceResult) returns(stream PeerPacket);
  // ReportPeerResult reports the downloading result for the peer task.
  rpc ReportPeerResult(PeerResult) returns(google.protobuf.Empty);
  // LeaveTask makes the peer leave the scheduling overlay for the task.
  rpc LeaveTask(PeerTarget) returns(google.protobuf.Empty);
}

Two interfaces are added; in addition, the download operation reuses the original RegisterPeerTask() interface, and the delete operation reuses the original LeaveTask() interface:

// Scheduler System RPC Service
service Scheduler {
  // Checks if any peer has the given task
  rpc StatTask(StatTaskRequest) returns(Task);
  // A peer announces that it has the given task available to other peers
  rpc AnnounceTask(AnnounceTaskRequest) returns(google.protobuf.Empty);
}
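On the scheduler side, StatTask and AnnounceTask essentially maintain an index of which peers currently hold which cache tasks. A minimal sketch of that bookkeeping follows; the class and method names are hypothetical, and gRPC plumbing and scheduling logic are omitted:

```python
class SchedulerIndex:
    """Toy model of the scheduler's view of the cache: task -> holding peers."""
    def __init__(self):
        self._peers_by_task = {}  # task_id -> set of peer ids

    def announce_task(self, task_id: str, peer_id: str) -> None:
        # AnnounceTask: a peer declares it holds the complete task
        self._peers_by_task.setdefault(task_id, set()).add(peer_id)

    def stat_task(self, task_id: str) -> bool:
        # StatTask: does any peer in the cluster hold the task?
        return bool(self._peers_by_task.get(task_id))

    def leave_task(self, task_id: str, peer_id: str) -> None:
        # Reused LeaveTask: the peer withdraws, e.g. after DeleteTask
        peers = self._peers_by_task.get(task_id)
        if peers:
            peers.discard(peer_id)
```

With this index, a download request for a cached task can be scheduled to an announcing peer instead of triggering a back-to-origin download.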

Interface Request Sequence Diagram

Code Implementation

The code has been merged and can be used in Dragonfly v2.0.3.

Upstream PR:


In addition to the new interfaces, we added a command called dfcache for testing. Its usage is listed below:

# add a file into the cache system
dfcache import --cid sha256:xxxxxx --tag testtag /path/to/file

# check if a file exists in the cache system
dfcache stat --cid testid --local # only check the local cache
dfcache stat --cid testid         # check other peers as well

# export/download a file from the cache system
dfcache export --cid testid -O /path/to/output

# delete a file from the cache system, from both the local cache and the P2P network
dfcache delete -i testid -t testtag

Test and Effect

Test Method

Using the new dfcache command, add files of different sizes to the P2P cache system on one node, then query, download, and delete the files on another node. Example:

# dd if=/dev/urandom of=testfile bs=1M count=1024
# dfcache stat -i testid # Check a file that does not exist.
# dfcache import -i testid testfile
# on another node
# dfcache stat -i testid
# dfcache export -i testid testfile.export

Test Effect

With two ECS instances connected over a VPC network with a bandwidth of 3.45 Gbit/s (about 440 MiB/s):


The disk bandwidth of the downloading ECS instance is about 180 MiB/s:



Related Links

  1. Dragonfly1.x Address:
  2. Dragonfly Architecture Documentation:
  3. The Dragonfly Official Website:
  4. OpenAnolis Cloud-Native SIG Address: