JindoFS storage strategy and read/write optimization

Data cache scenario

In the traditional big data analysis field, HDFS is the de facto storage standard. HDFS is typically deployed with computing resources and storage resources in the same cluster, the coupled compute-storage architecture shown on the left of the figure below, which means computing and storage capacity cannot be scaled independently. With the trend toward moving data to the cloud in recent years, an architecture that separates computing from storage has gradually emerged in big data analysis scenarios, and more and more customers choose it when deploying their clusters. It differs from the traditional HDFS-based architecture in that computing resources and storage resources are physically isolated: the computing cluster and the back-end storage cluster are connected over the network, as shown on the right side of the figure below. The computing cluster's data reads and writes therefore translate into large numbers of network requests to the storage cluster, and in this scenario network throughput often becomes the performance bottleneck of the whole job.

Therefore, in this architecture it is very valuable to build a cache layer for the back-end storage on the computing side (inside the computing cluster). Caching data in this layer reduces the computing cluster's network accesses to the storage back end and largely eliminates the bottleneck caused by network throughput.

JindoFS cache acceleration

In the compute-storage-separated scenario, JindoFS acts as a data cache that accelerates access to the back-end storage. Its architecture and position in the system are shown in the following figure:

First of all, JindoFS consists of the following three parts:

1. JindoFS SDK client: all upper-layer computing engines access the JindoFS file system through the client provided by the JindoFS SDK, which implements cache acceleration for the back-end storage.

2. Namespace Service: manages JindoFS metadata and the Storage Services.

3. Storage Service: manages user data, including local data and OSS data.

JindoFS is a cloud-native file system that combines the performance of local storage with the massive capacity of OSS, and it supports diverse back-end storage scenarios:

Cloud data lake scenarios, using object storage as the back end of the data lake

Accelerating remote HDFS

Deploy HDFS across regions

Hybrid cloud scenarios in which the computing cluster accesses an on-premises (offline) HDFS cluster, and so on

Data read and write strategy and optimization

Write strategy

JindoFS data write strategies are divided into two types, as shown in the following figure:

In the cached write strategy, the client writes data to cache blocks of the corresponding Storage Service, and the Storage Service concurrently uploads the cached blocks to the back-end storage using multiple threads.
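The cached write path can be sketched as follows. This is a minimal illustration, not the JindoFS implementation: split_into_blocks, upload_block, and the dict-based back end are all hypothetical stand-ins, and only the idea of a multi-threaded concurrent upload of cache blocks is taken from the text above.

```python
from concurrent.futures import ThreadPoolExecutor

BLOCK_SIZE = 4  # tiny block size, purely for illustration

def split_into_blocks(data: bytes, block_size: int = BLOCK_SIZE):
    """Split the written data into fixed-size cache blocks."""
    return [data[i:i + block_size] for i in range(0, len(data), block_size)]

def upload_block(backend: dict, block_id: int, block: bytes) -> int:
    """Stand-in for the real back-end upload (e.g. to OSS)."""
    backend[block_id] = block
    return block_id

def cached_write(data: bytes, backend: dict, workers: int = 4):
    # 1. The client writes data into local cache blocks.
    blocks = split_into_blocks(data)
    # 2. A thread pool uploads the cached blocks to the back end concurrently.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        futures = [pool.submit(upload_block, backend, i, b)
                   for i, b in enumerate(blocks)]
        return [f.result() for f in futures]

backend = {}
uploaded = cached_write(b"hello jindofs!", backend)
```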

In the pass-through strategy, data is uploaded directly to the back-end storage through the JindoFS SDK, which contains many performance optimizations for this path. This strategy suits data-producer environments that only generate data and have no subsequent computing or reading requirements.

Read strategy

The read strategy is the core of JindoFS. Using the storage capacity of the local cluster, JindoFS builds a distributed cache service that keeps copies of remote data inside the cluster. This "localizes" remote data, so repeated read requests are accelerated and each read takes the fastest available path to the cached data, achieving the best read performance. Based on this principle, the read strategy is as follows:

First, try to read the cached data block from the local node, for example Block1 and Block2.

If the block is not cached on the local node, the client asks the Namespace Service for the block's location. For example, if Block3 resides on Node2, Block3 is read from Node2.

If no node caches the block, the data is read from the remote OSS storage cluster and added to the local Storage Service cache to speed up subsequent reads of the same data.
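The three steps above form a fallback chain, sketched below. Here local_cache, cluster_cache, and oss are hypothetical stand-ins for the local Storage Service cache, the Namespace block-location map, and the OSS back end; none of these names are real JindoFS APIs.

```python
def read_block(block_id, local_cache, cluster_cache, oss):
    # 1. Prefer the block cached on the local node.
    if block_id in local_cache:
        return local_cache[block_id], "local"
    # 2. Otherwise ask the Namespace service which node caches the block.
    if block_id in cluster_cache:
        node, data = cluster_cache[block_id]
        return data, "remote-cache:" + node
    # 3. Fall back to the OSS back end and cache the block locally
    #    to accelerate the next read of the same data.
    data = oss[block_id]
    local_cache[block_id] = data
    return data, "oss"

local = {"Block1": b"a", "Block2": b"b"}   # cached on this node
cluster = {"Block3": ("Node2", b"c")}      # cached on another node
oss = {"Block3": b"c", "Block4": b"d"}     # the back-end store
```

Note that step 3 both serves the read and populates the local cache, which is what makes repeated reads of remote data cheap.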

On top of these basic strategies, JindoFS supports dynamic multi-replica caching. Once the relevant parameters are configured, data read from other nodes can also be cached locally, creating additional replicas in the cache and further accelerating read access to hot data blocks.

Cache Locality

Similar to HDFS data locality, cache locality means the computing layer preferentially pushes a task to the node that holds the task's cached data blocks. Under this strategy, a task reading locally cached data takes the most efficient read path, which yields the best data read performance.

Because the JindoFS Namespace Service maintains the locations of all cached data blocks, it exposes an API that reports block locations to the computing layer. The computing layer can then push tasks to the nodes holding the cached blocks, so that with high probability most data is read from the local cache and only a small portion is fetched over the network. With cache locality, most data is read locally, which keeps the read performance of computing jobs close to optimal.
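The scheduling idea can be sketched in a few lines. place_tasks and its inputs are illustrative names, not the real JindoFS or compute-engine API: the block-location map plays the role of what the Namespace Service reports, and the scheduler simply prefers the node that caches each task's block.

```python
def place_tasks(tasks, block_locations, default_node):
    """tasks: mapping task -> block it reads.
    block_locations: mapping block -> node caching it (as reported
    by the Namespace service). Uncached blocks fall back to
    default_node and will be fetched over the network."""
    placement = {}
    for task, block in tasks.items():
        placement[task] = block_locations.get(block, default_node)
    return placement

tasks = {"t1": "Block1", "t2": "Block3", "t3": "BlockX"}
locations = {"Block1": "Node1", "Block3": "Node2"}
placement = place_tasks(tasks, locations, default_node="Node1")
```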

Use of JindoFS

Basic usage mode:

Block mode: JindoFS is responsible for metadata management, and OSS is used purely as back-end storage for data blocks.

Cache mode: transparent and non-intrusive to users.

2.1. Cache not enabled: suitable when the cluster is small and caching is unnecessary.

2.2. Cache enabled: local cache blocks alleviate insufficient OSS bandwidth. (The configuration item jfs.cache.data-cache.enable controls whether the cache is on: 0 disables it, 1 enables it.)
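For illustration, enabling the cache with the jfs.cache.data-cache.enable item mentioned above might look as follows in a Hadoop-style XML configuration file. Where this file lives depends on the deployment, so treat this as a sketch rather than the definitive location; only the key name and the 0/1 semantics are taken from the text.

```xml
<property>
  <name>jfs.cache.data-cache.enable</name>
  <!-- 0 = cache off, 1 = cache on -->
  <value>1</value>
</property>
```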

Cache data management

As a cache system, JindoFS uses limited local cache resources to cache data from an OSS back end with practically unlimited space. The main functions of cache data management are as follows:

Local cache block management

Lifecycle maintenance of local data blocks

The Storage Service tracks data access information: all reads and writes are registered with the AccessManager. The storage.watermark.high.ratio and storage.watermark.low.ratio configuration items govern cache cleanup. When cache usage on the local disk reaches the storage.watermark.high.ratio warning level, the AccessManager automatically triggers the cleanup mechanism, evicting cold data from the local disk until usage falls to the storage.watermark.low.ratio level, freeing disk space for hot data.
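The watermark mechanism can be sketched as follows. This is a simplified model with hypothetical names, mirroring the storage.watermark.high.ratio and storage.watermark.low.ratio behavior: once usage reaches the high watermark, the coldest blocks (oldest access time first) are evicted until usage drops to the low watermark.

```python
def cleanup(blocks, capacity, high_ratio=0.8, low_ratio=0.6):
    """blocks: mapping block_id -> (size, last_access_time).
    Evict cold blocks when usage reaches the high watermark,
    stopping once usage falls to the low watermark."""
    used = sum(size for size, _ in blocks.values())
    if used < capacity * high_ratio:
        return []                     # below the warning level: nothing to do
    evicted = []
    # Coldest first: sort by last access time (sorted() snapshots the
    # items, so deleting inside the loop is safe).
    for block_id, (size, _) in sorted(blocks.items(), key=lambda kv: kv[1][1]):
        if used <= capacity * low_ratio:
            break
        del blocks[block_id]
        used -= size
        evicted.append(block_id)
    return evicted

# 90 units used out of a capacity of 100: the high watermark (80) is hit,
# so eviction runs until usage is at most 60.
cache = {"b1": (30, 1), "b2": (30, 5), "b3": (30, 9)}
evicted = cleanup(cache, capacity=100)
```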

Automatic cache block cleanup

Currently, cold data blocks are automatically evicted and cleaned up using an LRU (Least Recently Used) elimination strategy.

Cold data is cleaned asynchronously, so reads and writes are not affected.
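The LRU policy itself is standard and can be sketched with an ordered map: every access moves a block to the "recently used" end, and eviction removes blocks from the "least recently used" end. The class below is illustrative only, not JindoFS code.

```python
from collections import OrderedDict

class LruCache:
    def __init__(self, max_blocks: int):
        self.max_blocks = max_blocks
        self.blocks = OrderedDict()   # insertion order = recency order

    def access(self, block_id, data=None):
        if block_id in self.blocks:
            self.blocks.move_to_end(block_id)     # mark as recently used
        elif data is not None:
            self.blocks[block_id] = data
            if len(self.blocks) > self.max_blocks:
                self.blocks.popitem(last=False)   # evict the coldest block
        return self.blocks.get(block_id)

cache = LruCache(max_blocks=2)
cache.access("b1", b"x")
cache.access("b2", b"y")
cache.access("b1")          # b1 becomes the most recently used
cache.access("b3", b"z")    # evicts b2, the least recently used
```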

Explicitly specified cache

In addition to automatic management, explicit caching is provided:

Back-end directories or files can be explicitly cached, and cold data released, through the cache/uncache commands.

Best Practices

How to configure a cluster

Prefer long-running resident nodes for caching

Pay attention to the disks of the cache nodes

Pay attention to the network bandwidth of the cache nodes

The configuration item is


Data locality related configuration

spark.locality.wait.rack: 3s -> 0

spark.locality.wait.node: 3s -> 0
