The Data Transmission Service (DTS) module of Data Management (DMS) provides GUI features for data integration, data development, and data services. These features enhance data lifecycle management.
The DTS module supports a variety of compute and storage engines. You can use the DTS module to integrate structured, semi-structured, and unstructured data in real time or offline, develop the data, and provide data services. This can meet various requirements for data processing, integration, development, and services.
The data integration feature supports stream-batch unification. You can use this feature to import online data into data warehouses and process the data. Then, you can use the data development feature to develop data in data warehouses by layer, and use the data service and data visualization features to provide the data to external environments for use and analysis. You can also integrate these features into your own platform as basic capabilities to build a data platform unique to your enterprise.
- Supports stream-batch unification and real-time and offline integration of more than 20 types of data sources.
- Supports low-code development.
- Is fully compatible with Flink and Spark.
- Ensures data security on data chains and during data development.
- Supports minute-level scheduling.
- Supports multi-environment management.
- Data integration
  - O&M: disaster recovery, active geo-redundancy, data archiving, data migration, test data generation, O&M metric monitoring, and business metric monitoring
  - Development: real-time reports, log analysis, offline wide tables, T+1 data snapshots, data aggregation, data cleansing, and data de-identification
- Data development
  - Database development: cross-database development, scheduled tasks, data archiving, data migration, and report development
  - Data warehouse development: data warehousing, data cleansing, data processing, data layering, report development, and wide table development
DMS supports real-time and offline integration of heterogeneous data from multiple sources. Data integration in DMS involves the following features: data synchronization, offline integration, and streaming extract, transform, and load (ETL).
- Data synchronization: You can use the data synchronization feature to synchronize data between data sources in real time. This feature is applicable to the following scenarios: active geo-redundancy, geo-disaster recovery, zone-disaster recovery, cross-border data synchronization, cloud business intelligence (BI) systems, and real-time data warehousing. For more information, see Overview of data synchronization solutions.
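The core idea behind real-time synchronization can be sketched as replaying ordered change events from a source against a replica. The following is a minimal illustration only; the event format and function names are hypothetical, not the actual DMS protocol.

```python
# Minimal sketch of real-time data synchronization: change events
# captured from a source are replayed against a replica in order.
# The event format here is hypothetical, not the actual DMS protocol.

def apply_change(replica: dict, event: dict) -> None:
    """Apply a single insert/update/delete change event to the replica."""
    op, key = event["op"], event["key"]
    if op in ("insert", "update"):
        replica[key] = event["value"]
    elif op == "delete":
        replica.pop(key, None)

source_changes = [
    {"op": "insert", "key": 1, "value": {"name": "Alice"}},
    {"op": "update", "key": 1, "value": {"name": "Alice Chen"}},
    {"op": "insert", "key": 2, "value": {"name": "Bob"}},
    {"op": "delete", "key": 2},
]

replica = {}
for event in source_changes:  # in production this is a continuous stream
    apply_change(replica, event)

print(replica)  # {1: {'name': 'Alice Chen'}}
```

Applying events strictly in source order is what keeps the replica consistent with the source, which is why ordered delivery matters in scenarios such as disaster recovery.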
- Offline integration: The offline integration feature provides a low-code development tool that you can use to develop data processing tasks. You can combine various task nodes to form a task flow and configure periodic scheduling to process or synchronize data. This way, data in online databases and data warehouses can be efficiently processed and synchronized. For more information, see Overview.
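The pattern of combining task nodes into a flow can be sketched as chained functions that a scheduler runs periodically. The node names and data below are illustrative only, not DMS APIs.

```python
# Hypothetical sketch of an offline integration task flow: each task
# node is a function, and one scheduled run chains them together.

def extract(rows):
    """Task node 1: read rows from the online database (stubbed)."""
    return list(rows)

def transform(rows):
    """Task node 2: keep completed orders and normalize amounts."""
    return [
        {"order_id": r["order_id"], "amount": round(r["amount"], 2)}
        for r in rows
        if r["status"] == "completed"
    ]

def load(rows, warehouse):
    """Task node 3: append processed rows to the warehouse table."""
    warehouse.extend(rows)
    return warehouse

def run_task_flow(source_rows, warehouse):
    """One scheduled run of the flow: extract -> transform -> load."""
    return load(transform(extract(source_rows)), warehouse)

warehouse = []
daily_batch = [
    {"order_id": 1, "amount": 19.999, "status": "completed"},
    {"order_id": 2, "amount": 5.00, "status": "canceled"},
]
run_task_flow(daily_batch, warehouse)
print(warehouse)  # [{'order_id': 1, 'amount': 20.0}]
```

In DMS, the equivalent of `run_task_flow` is triggered by the periodic schedule that you configure on the task flow.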
- Streaming ETL: You can use the streaming ETL feature to configure streaming ETL tasks in the GUI or by using SQL statements that are fully compatible with Flink. Then, you can run the streaming ETL tasks to extract, transform, process, and load streaming data. This feature is applicable to real-time data development scenarios such as real-time processing of logs and online data, and real-time statistical reports. For more information, see What is ETL?
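The extract, transform, and load stages of a log-processing task can be illustrated with plain Python generators. Real DMS streaming ETL tasks are configured in the GUI or with Flink-compatible SQL; this sketch only shows the shape of the three stages.

```python
# Illustrative streaming ETL over a (simulated) log stream.
# Stage names and the log format are hypothetical.

def extract(stream):
    """Extract: parse raw log lines into records."""
    for line in stream:
        level, _, message = line.partition(" ")
        yield {"level": level, "message": message}

def transform(records):
    """Transform: keep only error-level records and tag them."""
    for rec in records:
        if rec["level"] == "ERROR":
            rec["alert"] = True
            yield rec

def load(records, sink):
    """Load: write transformed records to the sink."""
    for rec in records:
        sink.append(rec)

log_stream = ["INFO service started", "ERROR disk full", "INFO heartbeat"]
sink = []
load(transform(extract(log_stream)), sink)
print(sink)  # [{'level': 'ERROR', 'message': 'disk full', 'alert': True}]
```

Because the stages are generators, records flow through one at a time, which mirrors how a streaming engine processes unbounded input rather than a finished batch.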
- Task orchestration: You can use the task orchestration feature to orchestrate and schedule various tasks. You can create a task flow that consists of one or more task nodes to implement complex task scheduling and improve data development efficiency. For more information, see Overview.
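A task flow with dependencies is a directed acyclic graph, and a scheduler runs each node only after its upstream nodes finish. The following sketch uses the standard library's topological sorter; the node names are made up for illustration.

```python
# Sketch of task orchestration: the task flow is a DAG, and nodes run
# in an order that respects their dependencies. Not DMS code.
from graphlib import TopologicalSorter

# Each node maps to the set of nodes it depends on.
task_flow = {
    "sync_orders": set(),
    "sync_users": set(),
    "build_wide_table": {"sync_orders", "sync_users"},
    "daily_report": {"build_wide_table"},
}

executed = []
for task in TopologicalSorter(task_flow).static_order():
    executed.append(task)  # a real scheduler would run the node here

print(executed[-1])  # daily_report
```

Both synchronization nodes may run in either order (or in parallel), but `build_wide_table` always precedes `daily_report`, which is the guarantee that makes complex task scheduling safe.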
- Data warehouse development: You can use the data warehouse development feature for immersive data warehouse development. You can create a workspace, and select a data warehouse engine and an environment. Then, you can create, publish, and run multiple data warehouse development tasks in the workspace to implement complex development work. This streamlines development and facilitates data warehouse management. For more information, see Create a workspace.
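Developing a warehouse by layer typically means cleansing raw data in one layer and aggregating it in the next. This toy example uses common layer names (ODS for raw data, DWD for cleansed detail, ADS for application-facing aggregates); the data and logic are hypothetical.

```python
# Illustrative layered warehouse development: ODS -> DWD -> ADS.
# Layer names follow common warehouse conventions; data is made up.

ods_orders = [  # ODS: raw ingested rows, possibly dirty
    {"order_id": "1", "city": " Hangzhou ", "amount": "30"},
    {"order_id": "2", "city": "Beijing", "amount": "20"},
    {"order_id": "3", "city": "Hangzhou", "amount": None},  # invalid row
]

# DWD: cleanse types, trim whitespace, and drop invalid rows.
dwd_orders = [
    {"order_id": int(r["order_id"]), "city": r["city"].strip(),
     "amount": float(r["amount"])}
    for r in ods_orders
    if r["amount"] is not None
]

# ADS: aggregate sales per city for reporting.
ads_sales = {}
for r in dwd_orders:
    ads_sales[r["city"]] = ads_sales.get(r["city"], 0.0) + r["amount"]

print(ads_sales)  # {'Hangzhou': 30.0, 'Beijing': 20.0}
```

Keeping cleansing and aggregation in separate layers is what lets downstream reports be rebuilt from DWD without re-reading raw ODS data.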