Serverless workflow coordinates distributed applications and microservices to build complex, multi-step, stateful, and long-running flows.
Transactional flow orchestration
In complex order-management scenarios, such as e-commerce websites, hotel booking, and flight reservations, applications need to access multiple remote services and have strict requirements on transactional semantics: either all steps succeed or all steps fail, with no intermediate states. In low-traffic applications with centralized data storage, the atomicity, consistency, isolation, and durability (ACID) properties of relational databases guarantee that transactions are processed reliably. However, high-traffic scenarios usually rely on distributed microservices for high availability and scalability. To guarantee reliable processing of multi-step transactions in such an architecture, service providers usually need to introduce message queues, persist messages, and track flow states across the distributed architecture, which brings additional development and O&M costs. Serverless workflow ensures reliable processing of distributed transactions in complex flows, so that users can focus on their own business logic.
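The all-or-nothing semantics described above can be sketched as a saga: each step pairs a forward action with a compensating action, and a failure triggers the compensations in reverse order. This is a minimal Python sketch, not the Serverless workflow API, and the order-flow step names are hypothetical:

```python
def run_saga(steps):
    """Run (action, compensation) pairs in order; if a step fails,
    undo the completed steps in reverse so no intermediate state survives."""
    compensations = []
    try:
        for action, compensation in steps:
            action()
            compensations.append(compensation)
        return True
    except Exception:
        for compensation in reversed(compensations):
            compensation()
        return False

# Hypothetical order flow: reserve inventory, charge the card (fails), ship.
log = []
steps = [
    (lambda: log.append("reserve"), lambda: log.append("release")),
    (lambda: (_ for _ in ()).throw(RuntimeError("payment declined")),
     lambda: log.append("refund")),
    (lambda: log.append("ship"), lambda: log.append("cancel-shipment")),
]
ok = run_saga(steps)
# ok is False; log == ["reserve", "release"]
```

The failed payment step's own compensation never runs, because only steps that completed are rolled back.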
For more information about how to use Serverless workflow to orchestrate transactional flows, see Reliably process distributed multi-step transactions.
Multimedia file processing
Serverless workflow helps you orchestrate multiple tasks, such as transcoding, frame capture, face recognition, speech recognition, review, and upload, into a complete multimedia processing flow. You can use Function Compute to submit an Intelligent Media Management (IMM) task, or use a custom processor, to generate output that meets your business requirements. Steps that encounter errors or exceptions can be reliably retried, which significantly improves throughput for multimedia tasks.
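The retry behavior can be illustrated with a small Python sketch. This is not the Serverless workflow API; the flaky transcoding step is a made-up stand-in for an IMM task that fails transiently:

```python
def with_retries(task, arg, max_attempts=3):
    """Re-run a task on transient errors, up to max_attempts times."""
    for attempt in range(1, max_attempts + 1):
        try:
            return task(arg)
        except Exception:
            if attempt == max_attempts:
                raise

# Hypothetical flaky transcoding step: fails twice, then succeeds.
attempts = {"n": 0}
def transcode(name):
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise RuntimeError("transient error")
    return name + ".mp4"

result = with_retries(transcode, "movie")
# result == "movie.mp4" after 3 attempts
```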
Genetic data processing
Serverless workflow can orchestrate multiple distributed batch computing jobs in sequence or in parallel and reliably supports large-scale computing tasks that require long execution time and high concurrency. For example, in genetic data analysis, gene sequences are aligned, variation analysis is performed on all chromosomes in parallel, and finally all chromosome data is aggregated to produce the results. Based on specified dependencies, Serverless workflow submits batch computing jobs with different CPU, memory, and bandwidth specifications to improve execution reliability and resource utilization and reduce costs.
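The fan-out/fan-in pattern described above, parallel per-chromosome analysis followed by aggregation, can be sketched in Python. The analysis function is a placeholder, and threads stand in for distributed batch computing jobs:

```python
from concurrent.futures import ThreadPoolExecutor

def analyze(chromosome):
    # Placeholder for a batch computing job (e.g. variant analysis).
    return {"chromosome": chromosome, "variants": len(chromosome)}

def run_analysis(chromosomes):
    # Fan out: one parallel task per chromosome.
    with ThreadPoolExecutor() as pool:
        results = list(pool.map(analyze, chromosomes))
    # Fan in: aggregate all per-chromosome results.
    return sum(r["variants"] for r in results)

total = run_analysis(["chr1", "chr2", "chrX"])
# total == 12 (each placeholder name is 4 characters long)
```

In a real flow, each parallel branch could be submitted with different CPU, memory, and bandwidth specifications, as the text describes.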
Data pipeline automation
You can use Serverless workflow to build highly available data pipelines. For example, measurement data from different sources is collected into Log Service. A time-based trigger in Function Compute starts a Serverless workflow execution every hour. The flow uses Function Compute to process the measurement data of multiple shards in parallel and writes the results back to Log Service. The data of all shards is then aggregated and written to Tablestore, and finally a bill is generated for each user. Serverless workflow retries failed steps in a flow to reduce the probability of overall failure, and supports dynamic parallel execution of tasks for highly scalable data processing.
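The per-shard processing with step-level retry and final aggregation can be sketched in Python (parallelism is omitted for brevity; the shard-processing function is a hypothetical stand-in for a Function Compute invocation):

```python
def process_shard(shard):
    # Placeholder for a Function Compute invocation on one shard's data.
    return sum(shard)

def run_pipeline(shards, max_attempts=3):
    """Process each shard, retrying a failed step, then aggregate."""
    results = []
    for shard in shards:
        for attempt in range(max_attempts):
            try:
                results.append(process_shard(shard))
                break
            except Exception:
                if attempt == max_attempts - 1:
                    raise  # step exhausted its retries; fail the flow
    # Aggregate across shards, e.g. before writing to Tablestore.
    return sum(results)

bill_total = run_pipeline([[1, 2], [3, 4], [5]])
# bill_total == 15
```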
Automated operations and maintenance (O&M)
Common challenges in automated O&M include cumbersome steps, widely varying execution times, the low reliability of standalone scripts, complex dependencies, and the inability to visualize progress. The combination of Serverless workflow and Function Compute addresses these challenges. For example, during automated software deployment, you need to build Docker images, upload the container images, pull the new images on all nodes while tracking per-node progress, and restart the containers with the new images. Logs generated by the functions in each step are stored in Log Service, where you can query and analyze them. Compared with standalone O&M scripts, automated tools based on Serverless workflow provide higher availability, built-in error handling, and graphical progress tracking.
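The sequential deployment flow with per-step logging and stop-on-error behavior can be sketched in Python. The step names are hypothetical; in a real flow, the progress log and error handling would come from the Serverless workflow console rather than a local list:

```python
def run_flow(steps):
    """Run named steps in order, recording one log entry per step and
    stopping at the first failure (built-in error handling)."""
    log = []
    for name, step in steps:
        try:
            step()
            log.append((name, "ok"))
        except Exception as exc:
            log.append((name, "failed: " + str(exc)))
            break
    return log

# Hypothetical deployment flow: the image upload step fails.
progress = run_flow([
    ("build image", lambda: None),
    ("push image", lambda: (_ for _ in ()).throw(RuntimeError("registry down"))),
    ("restart containers", lambda: None),
])
# progress == [("build image", "ok"), ("push image", "failed: registry down")]
```

Because the flow stops at the failed step, the "restart containers" step never runs against a stale image.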