How to update 100-million-scale video content in real time?

One: Background
As an online service, a search and recommendation system must build the data to be queried into index data and push it to heterogeneous storage media in order to meet online query performance requirements. At this stage, real-time entities, offline preprocessing results, and algorithm-produced data are processed and updated through offline/nearline pipelines; this covers both the offline and online processing of the data by algorithms and the final merging of data (recall, ranking, relevance, etc.) across different business domains. On the platform side, the traditional data warehouse model is adopted: the platform is built around common resources and common capabilities, with a layered strategy that separates the data exposed to upper-layer businesses. This model can no longer meet the needs of agile business iteration, knowledge orientation, and service orientation.

As the core technology for structuring and systematically managing data, the knowledge graph meets these knowledge, business, and service requirements well in real business-facing applications. We therefore built a feature platform on top of the content graph system and, centering on content (videos, shows, users, persons, elements, etc.), constructed a real-time knowledge-fusion data update platform.

Two: Design outline
The data processing link of a search and recommendation system generally includes the following steps: receive the dumped full data and the business side's incremental data from the content production end (media assets, interaction, content intelligence, Baoluo, Granary, Linlang, etc.); process the received data layer by layer according to the business domain; and finally feed it into the engine side by building the index.

Unlike other business scenarios, in the Youku scenario the content production end we receive from is not the original source: a large amount of semi-processed heterogeneous data is mixed in along the way, and data consistency (logical consistency and functional consistency) is a practical problem on the user side. In particular, the real-time output and the full output must keep a consistent structure, and both must match the field structure of the search engine. We therefore designed the index update platform around two aspects: structured data organization and business system management.

1 Structured data organization
We designed an application-oriented middle layer for the Entertainment Brain and introduced the knowledge graph into it, realizing a business-domain-oriented way of organizing data. The knowledge graph is integrated into the data model layer of the middle layer, and its unified view of entities, relationships, events, labels, and indicators is used to define domain-oriented data models. The knowledge graph of the video domain thus serves as the basis of data organization in the middle layer, transforming how data is organized in the business domain.

2 Business system management
The algorithm logic is encapsulated in componentized form, so that the business side only needs to maintain one set of logic: the real-time and full pipelines share one codebase, implemented with unified UDFs. Blink's stream-batch unified architecture is used to realize a full-plus-incremental architecture model; for example, full data cleaning and correction logic can be rerun over the full data when needed (the real-time engine guarantees that messages are not lost, so the full pass does not need to run every day).
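Blink is Alibaba's fork of Apache Flink, so a minimal sketch using the open-source Flink Table API can illustrate such a shared UDF; the function name and cleaning rule below are assumptions, not the platform's actual code:

```java
import org.apache.flink.table.functions.ScalarFunction;

// Illustrative componentized UDF: the same class is registered in both the
// real-time job and the full (batch) job, so only one copy of the rule exists.
public class NormalizeTitle extends ScalarFunction {
    public String eval(String rawTitle) {
        if (rawTitle == null) {
            return "";
        }
        // Collapse whitespace; one cleaning rule serves both execution modes.
        return rawTitle.trim().replaceAll("\\s+", " ");
    }
}
```

Registering it once per environment, e.g. tableEnv.createTemporarySystemFunction("normalize_title", NormalizeTitle.class), lets both the streaming and the batch pipeline call identical logic.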

Three: Key modules
1 Feature library
The feature library consists of two layers. The first layer is full-plus-incremental feature computation, connected to different data sources (both real-time and offline). There is no separate offline full pass in feature-domain computation; cold data and corrected data are simply replayed through the stream processing path. Data is organized into vertex and edge relational tables. During real-time updates, to reduce the performance pressure of reverse lookups against upstream services, changes to different entity attributes are queried directly through the internal graph; a unified DataAPI encapsulates all operations on these data, and different types of vertices are computed by independent Blink jobs.
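The article does not show the DataAPI itself; a hypothetical sketch of its shape, with all names invented for illustration, might be:

```java
import java.util.List;
import java.util.Map;

// Hypothetical shape of the unified DataAPI described above; every name is illustrative.
interface DataApi {
    Vertex getVertex(String vertexType, String id);                     // e.g. ("video", "123")
    List<Edge> getEdges(String vertexType, String id, String edgeType); // graph-side lookup instead of a reverse query upstream
    void updateVertex(String vertexType, String id, Map<String, Object> attrs); // column-level upsert
}

class Vertex { String type; String id; Map<String, Object> attrs; }
class Edge   { String type; String srcId; String dstId; Map<String, Object> attrs; }
```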

In terms of offline data organization: because the search engine's online serving machines do not persist data, a newly added machine must pull a full index file from somewhere to load its data. We assemble a full file with the same structure as the index model. The full file is only a snapshot at some timestamp; the older the snapshot, the more real-time messages must be replayed and the slower failure recovery becomes. A mechanism is therefore needed to produce the latest full file as quickly as possible, reducing the performance pressure of real-time incremental messages.
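As a rough sketch of that recovery flow (all types here are hypothetical), a new node loads the newest snapshot and then replays only the increments behind it:

```java
// Hypothetical recovery flow for a new online machine.
interface SnapshotLoader { long load(String path); }          // loads a full file, returns its timestamp
interface IncrementalChannel { UpdateMsg poll(); void seek(long ts); }

class UpdateMsg { long ts; byte[] payload; }

class IndexRecovery {
    void recover(SnapshotLoader loader, IncrementalChannel channel, String snapshotPath) {
        long snapshotTs = loader.load(snapshotPath); // the older the snapshot, the longer the catch-up
        channel.seek(snapshotTs);                    // rewind the incremental channel to the snapshot point
        UpdateMsg msg;
        while ((msg = channel.poll()) != null) {
            if (msg.ts > snapshotTs) {
                apply(msg);                          // replay only messages newer than the snapshot
            }
        }
    }

    void apply(UpdateMsg msg) { /* write the update into the local index */ }
}
```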

The second layer is algorithm-facing feature computation, covering search relevance, ranking, and recall. This layer directly serves the business domains and directly consumes the data in the first-layer feature library; the main business logic is concentrated here, and the real-time and offline logic is mostly completed through the component library.

2 Component library
Algorithms of different business lines take the same data and process it according to their respective businesses, which invisibly leads to code duplication. The component library establishes open adaptation interfaces so that code with the same function can be reused, reducing repeated development.

The component library abstracts business logic into simple UDF-based arithmetic expressions, which are simple, concise, and easy to maintain; feature users only need to care about the granularity of a feature, not the whole pipeline.
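For illustration, such an expression might combine two registered UDFs into one feature; the table, column, and function names below are assumptions, not the platform's real components:

```java
import org.apache.flink.table.api.TableEnvironment;

// Illustrative only: a feature defined as an arithmetic expression over UDFs.
// Assumes the table video_features and the UDFs log_scale and freshness
// were registered elsewhere.
public class HotScoreFeature {
    public static void define(TableEnvironment tEnv) {
        tEnv.executeSql(
            "SELECT id, "
          + "       0.6 * log_scale(play_count) + 0.4 * freshness(publish_time) AS hot_score "
          + "FROM video_features");
    }
}
```

The feature owner maintains only this expression; the underlying UDFs come from the shared component library.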

3 Trace&Debug module
Each message carries a unique signature (a uuid) that flows with the source data through every computation stage. To help the business track and troubleshoot problems during processing, data from different systems is aggregated by uuid and entity id, so the Trace&Debug service gives a clear view of both business process information and system processing information.
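A trace record aggregated by (uuid, entity id) might carry fields like the following; this structure is an assumption for illustration:

```java
// Hypothetical trace record; the Trace&Debug service groups these by (uuid, entityId).
public class TraceEvent {
    String uuid;      // unique signature that travels with the source message
    String entityId;  // business entity the message belongs to (e.g. a video id)
    String stage;     // which job or module emitted this event
    long timestamp;   // when that stage processed the message
    String detail;    // business process or system processing information
}
```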

Four: Technical details
The overall computing framework adopts Blink, a new-generation real-time computing engine. Its main advantages are: stream-batch unification; business modules split into jobs, so different computing modules can be combined freely; automatically saved consumption offsets, no message loss, and automatic failover recovery for failed processes; and distributed computing that eliminates the single-point bottlenecks of consuming sources and writing output. The storage engine uses Lindorm for entity data, mainly using Lindorm's secondary indexes to store KV and KKV data structures, which form the underlying data of the knowledge graph.

1 Knowledge graph storage and organization
A Labeled Property Graph (LPG) is used for modeling, with Lindorm as the main storage: entity tables (videos, shows, persons, etc.) serve as vertex tables, and relationships between entities use Lindorm's secondary index capability as edge tables.
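Lindorm exposes an HBase-compatible wide-table API, so the vertex/edge layout can be sketched with the HBase client; the row-key design and the table and column names below are assumptions, not the platform's actual schema:

```java
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;

// Sketch of an LPG layout on a wide-table store.
public class GraphWriter {
    private final Connection conn;

    public GraphWriter(Connection conn) { this.conn = conn; }

    // Vertex table: row key = "<entityType>#<id>", attributes stored as columns.
    public void putVideoVertex(String videoId, String title) throws Exception {
        try (Table t = conn.getTable(TableName.valueOf("vertex_video"))) {
            Put put = new Put(Bytes.toBytes("video#" + videoId));
            put.addColumn(Bytes.toBytes("attr"), Bytes.toBytes("title"), Bytes.toBytes(title));
            t.put(put);
        }
    }

    // Edge table: row key = "<srcId>#<edgeType>#<dstId>"; a secondary index on the
    // destination column (Lindorm's secondary-index capability) enables reverse traversal.
    public void putEdge(String srcId, String edgeType, String dstId) throws Exception {
        try (Table t = conn.getTable(TableName.valueOf("edge"))) {
            Put put = new Put(Bytes.toBytes(srcId + "#" + edgeType + "#" + dstId));
            put.addColumn(Bytes.toBytes("e"), Bytes.toBytes("dst"), Bytes.toBytes(dstId));
            t.put(put);
        }
    }
}
```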

For data access, a data-driven layer is implemented and exposed to external users as an interface API; developers manipulate Lindorm through this local API. When the interface layer receives a call, it invokes the data processing layer to complete the specific data operation, shielding the conversion between Java object attributes and Lindorm column values as well as the value mapping of query results. Annotations are used to configure the object-to-storage mapping, solving the problem of serializing Java objects directly into Lindorm's row-column storage.
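A minimal sketch of such annotation-driven mapping (the annotation and class names are invented for illustration):

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

// Hypothetical mapping annotation between Java fields and Lindorm columns.
@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.FIELD)
@interface Column {
    String family();
    String qualifier();
}

class VideoEntity {
    @Column(family = "attr", qualifier = "title")
    String title;

    @Column(family = "attr", qualifier = "duration")
    Long durationSeconds;

    // A reflection-based data layer reads these annotations to serialize fields into
    // row/column values and to map query results back into Java objects.
}
```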

2 Calculation and update strategy
The Blink computing platform implements both feature computation and index updates. Because a full-plus-incremental architecture is used, the pressure of reverse lookups against upstream services is reduced during full updates, and a column-level update strategy is adopted. Updates of different entity attributes or edge table attributes (edge table attributes reduce the overhead of vertex lookups during graph queries) follow a cascading update strategy: after an attribute is updated, a new message is generated and pushed to the bus link, and the entities or relationships that subscribe to that message then update their own attributes as needed.
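The cascade can be sketched as follows; the bus and message types are hypothetical stand-ins for the real link:

```java
// Hypothetical cascading column update: the changed entity publishes, subscribers refresh.
interface MessageBus { void publish(ChangeMessage msg); }

class ChangeMessage {
    final String topic;     // e.g. "person.name.changed"
    final String entityId;  // the entity whose attribute changed
    final String newValue;  // the new attribute value
    ChangeMessage(String topic, String entityId, String newValue) {
        this.topic = topic; this.entityId = entityId; this.newValue = newValue;
    }
}

class PersonJob {
    private final MessageBus bus;
    PersonJob(MessageBus bus) { this.bus = bus; }

    // After writing the person vertex's new name column, emit a change message;
    // the video job subscribes and updates its own denormalized copy, so it never
    // has to reverse-query the upstream service during graph queries.
    void onNameChanged(String personId, String newName) {
        bus.publish(new ChangeMessage("person.name.changed", personId, newName));
    }
}
```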

The core requirement of business updates is consistency, which in essence means not losing messages and keeping them in order. We use MetaQ as the main message channel; MetaQ itself does not lose messages, and most failures occur at the level of external services, storage, and processing links.
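MetaQ is the internal ancestor of Apache RocketMQ; sketched with the open-source RocketMQ client, per-entity ordering can be preserved by routing every message of one entity to the same queue (the wrapper class is illustrative):

```java
import org.apache.rocketmq.client.producer.DefaultMQProducer;
import org.apache.rocketmq.client.producer.SendResult;
import org.apache.rocketmq.common.message.Message;

// Ordering sketch: hash the entity id to pick a queue, so all updates of the
// same entity land on one queue and stay in order.
public class OrderedPublisher {
    private final DefaultMQProducer producer;

    public OrderedPublisher(DefaultMQProducer producer) { this.producer = producer; }

    public SendResult publish(String topic, String entityId, byte[] body) throws Exception {
        Message msg = new Message(topic, body);
        return producer.send(msg, (queues, m, key) -> {
            int idx = (key.hashCode() & Integer.MAX_VALUE) % queues.size();
            return queues.get(idx);
        }, entityId);
    }
}
```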

For a piece of entity data or relational data (usually one job), atomic operations are used, with a built-in retry mechanism. For example, calls to external services have their own retry; to avoid hurting overall link performance we call it a fast try, and it mainly handles network jitter such as timeouts. If it still fails, the current offset is kept, the data is written into a retry queue, and an exception is thrown and caught at the outermost layer; this update is discarded and the next message is accepted. The failed message is retried three times, after 5, 10, and 20 minutes; if it still fails, a notification is sent for human intervention.
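A minimal sketch of that retry schedule (class and method names are assumptions):

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

// Sketch of the delayed-retry policy: three retries at 5, 10 and 20 minutes,
// then a notification for human intervention.
public class RetryQueue {
    private static final long[] DELAY_MINUTES = {5, 10, 20};
    private final ScheduledExecutorService scheduler = Executors.newScheduledThreadPool(1);

    public void submit(Runnable failedUpdate) { schedule(failedUpdate, 0); }

    private void schedule(Runnable task, int attempt) {
        if (attempt >= DELAY_MINUTES.length) {
            alert(task); // all retries exhausted: hand over to a human
            return;
        }
        scheduler.schedule(() -> {
            try {
                task.run();                  // re-apply the failed update
            } catch (Exception e) {
                schedule(task, attempt + 1); // back off to the next delay
            }
        }, DELAY_MINUTES[attempt], TimeUnit.MINUTES);
    }

    private void alert(Runnable task) {
        System.err.println("update failed after 3 retries; manual intervention needed");
    }
}
```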

3 Unified UDFs
The core business logic is solved with UDFs that can be ported between systems; technical means ensure that only one set of business logic is maintained and that every computing platform (offline/real-time) can reuse it, solving the consistency and portability problems of UDF business logic.
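One common way to achieve this, shown here only as an assumed pattern, is to keep the rule in an engine-neutral core class and wrap it thinly per platform:

```java
import org.apache.flink.table.functions.ScalarFunction;

// Engine-neutral core: plain Java, no compute-engine dependency.
class AgeRating {
    static String eval(int age) { return age >= 18 ? "adult" : "minor"; }
}

// Thin Flink/Blink wrapper; an offline (batch) wrapper would delegate to the
// same AgeRating core, so only one copy of the business rule is maintained.
public class AgeRatingUdf extends ScalarFunction {
    public String eval(Integer age) {
        return age == null ? "unknown" : AgeRating.eval(age);
    }
}
```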

Five: Summary & Outlook
Built on the structural characteristics of the content graph, the index update platform breaks away from the traditional data warehouse modeling approach and builds the data platform from the perspectives of knowledge, business, and service, accumulating content, behavior, and relationship graphs. It is currently applied in scenarios such as Youku search, Piaopiao, and Damai.

With the continuous development of graph neural networks and representation learning, we will further focus on deep optimization of OLTP and OLAP in graph storage and graph computing, and use deep algorithmic strategies to supplement real-time fusion and real-time reasoning.

On the index update platform side, facing the challenges brought by multi-party business access and search-recommendation integration, index updates are advancing toward full incrementalization. For business self-service, we will further explore an abstract DSL to improve overall business onboarding efficiency.
