Build an automated pipeline that fetches trending financial news on a schedule, analyzes it with the DeepSeek large language model (LLM), and pushes structured summaries to a DingTalk group chat.
How it works
As AI model capabilities grow, many business scenarios benefit from combining scheduled tasks with LLM-powered analysis. Common examples include:
Risk monitoring: Monitor key system metrics on a schedule and use LLM-powered analysis to detect potential threats.
Data analytics: Collect financial data on a schedule and use a large language model to generate intelligent analysis and investment insights.
This guide demonstrates the data analytics scenario. The pipeline combines two components:
XXL-JOB (distributed task scheduler): Triggers tasks on a cron schedule, passes prompts as task parameters, and orchestrates multi-step workflows.
DeepSeek-R1 (large language model): Processes raw news data into structured summaries and investment insights.
The end-to-end flow:
XXL-JOB triggers a scheduled task on your executor application.
The executor fetches raw financial news from external APIs (for example, Sina Finance).
The executor sends the raw data to DeepSeek with a prompt that instructs it to extract, rank, and summarize the top stories.
DeepSeek returns a structured summary.
The executor pushes the summary to a DingTalk group through a webhook.
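Sketched in Java, the executor side of this flow ends with a webhook POST. The helper below shows only that last step; the class and method names are illustrative, while the `{"msgtype":"text",...}` shape follows DingTalk's documented custom-robot text message format.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class NewsPipelineSketch {

    // Build the DingTalk custom-robot text-message payload.
    // Escape the characters that would break the hand-built JSON string.
    static String dingTalkPayload(String summary) {
        String escaped = summary.replace("\\", "\\\\")
                                .replace("\"", "\\\"")
                                .replace("\n", "\\n");
        return "{\"msgtype\":\"text\",\"text\":{\"content\":\"" + escaped + "\"}}";
    }

    // POST the summary to the group robot; the webhook URL is the one
    // configured through -Dwebhook.url.
    static void pushToDingTalk(String webhookUrl, String summary) throws Exception {
        HttpRequest req = HttpRequest.newBuilder(URI.create(webhookUrl))
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(dingTalkPayload(summary)))
                .build();
        HttpClient.newHttpClient().send(req, HttpResponse.BodyHandlers.ofString());
    }
}
```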
For a more advanced scenario that uses sharding broadcast and task dependency orchestration to build a multi-stage financial analysis pipeline, see Scale with sharding and task orchestration.
Prerequisites
Before you begin, make sure that you have:
An activated Alibaba Cloud Model Studio account with an API key
A DingTalk group with a custom robot webhook configured
An ACK serverless cluster with SNAT enabled on the VPC
Set up DeepSeek
Deploy DeepSeek locally with Ollama, or access it through Alibaba Cloud Model Studio.
DeepSeek-R1 is optimized for complex logical reasoning, making it a strong fit for data analysis. Its parent company, High-Flyer Quant, specializes in quantitative trading, which gives the model a practical edge in financial scenarios. DeepSeek is also open source and lightweight, so local deployment is straightforward.
Alternative: QwQ, Alibaba Cloud's open-source reasoning model, delivers comparable inference performance to DeepSeek-R1 and also excels at complex data analysis. The following chart compares QwQ-32B with other leading models in mathematical reasoning, programming, and general capabilities:

Option 1: Deploy locally with Ollama
The steps below use DeepSeek as an example. The same process applies to QwQ or other Ollama-supported models.
Install Ollama from https://ollama.com/download.

Pull the DeepSeek-R1 model. The R1 variant focuses on complex logical reasoning and is best suited for data analysis. Choose a model size based on your hardware; for example, if your machine has 16 GB of RAM, use the 7b model. The following table lists hardware requirements for each model size.

| Model | Size | GPU memory | RAM |
|---|---|---|---|
| deepseek-r1:1.5b | 1.1 GB | 4 GB+ | 8 GB+ |
| deepseek-r1:7b | 4.7 GB | 8 GB+ | 16 GB+ |
| deepseek-r1:8b | 4.9 GB | 10 GB+ | 18 GB+ |
| deepseek-r1:14b | 9.0 GB | 16 GB+ | 32 GB+ |
| deepseek-r1:32b | 20 GB | 24 GB+ | 64 GB+ |

```shell
ollama pull deepseek-r1:7b
```

Verify the deployment by sending a request to the OpenAI-compatible API. Ollama serves on port `11434` by default. The OpenAI-compatible API simplifies integration: the same client code works for both local and cloud-hosted models.

```shell
curl http://localhost:11434/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "deepseek-r1:7b",
    "messages": [{"role": "user", "content": "Hello"}]
  }'
```
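To illustrate that the same client code targets either deployment, the sketch below builds an OpenAI-style chat request with `java.net.http`. Only the base URL, API key, and model name change between a local Ollama server and a hosted endpoint; the class and method names are illustrative.

```java
import java.net.URI;
import java.net.http.HttpRequest;

public class ChatClient {

    // Build a minimal OpenAI-style chat/completions request body.
    static String chatBody(String model, String userContent) {
        return "{\"model\":\"" + model + "\",\"messages\":[{\"role\":\"user\",\"content\":\""
                + userContent.replace("\"", "\\\"") + "\"}]}";
    }

    // The same request works against http://localhost:11434/v1 (Ollama,
    // which ignores the Authorization header) or a hosted OpenAI-compatible
    // endpoint with a real API key.
    static HttpRequest chatRequest(String baseUrl, String apiKey, String model, String prompt) {
        return HttpRequest.newBuilder(URI.create(baseUrl + "/chat/completions"))
                .header("Content-Type", "application/json")
                .header("Authorization", "Bearer " + apiKey)
                .POST(HttpRequest.BodyPublishers.ofString(chatBody(model, prompt)))
                .build();
    }
}
```

To switch from local to hosted, swap the base URL for Model Studio's OpenAI-compatible endpoint and pass your `sk-...` key; no other client code changes.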
Option 2: Use Alibaba Cloud Model Studio
Alibaba Cloud Model Studio provides hosted access to DeepSeek, QwQ, and other models. Activate the service to start making API calls immediately -- no infrastructure setup required. A free quota is included, and you can switch between models at any time.
Set up XXL-JOB
XXL-JOB adds four capabilities that are useful for production AI workloads:
| Capability | Description |
|---|---|
| Scheduled execution | Trigger AI tasks on a cron schedule |
| Dynamic prompts | Pass prompt text and response format as task parameters, allowing updates without redeploying code |
| Sharding broadcast | Split a large task into parallel sub-tasks across multiple executors to speed up data collection |
| Task dependency orchestration | Chain tasks into a multi-step pipeline (for example, collect data, then analyze, then generate a report) |
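The dynamic-prompts capability works because XXL-JOB hands the executor the task parameter as an opaque string, so any encoding can carry the prompt. The sketch below parses a `prompt|||format` convention that is invented purely for illustration; JSON or properties syntax would work just as well.

```java
public class TaskParams {

    record PromptConfig(String prompt, String responseFormat) {}

    // Parse the task parameter string delivered by the scheduler.
    // Editing this value in the XXL-JOB console changes the prompt on the
    // next trigger, with no redeploy of the executor.
    static PromptConfig parse(String taskParam) {
        if (taskParam == null || taskParam.isBlank()) {
            // Fall back to defaults when no parameter is configured.
            return new PromptConfig("Summarize the top financial stories.", "markdown");
        }
        String[] parts = taskParam.split("\\|\\|\\|", 2);
        String format = parts.length > 1 ? parts[1].trim() : "markdown";
        return new PromptConfig(parts[0].trim(), format);
    }
}
```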
Option 1: Deploy locally
For detailed steps, see the XXL-JOB official documentation. The general process:
Prepare a database and initialize the table schema.

Import the XXL-JOB source code into your IDE and configure `xxl-job-admin` in the application configuration file.

Run the `XxlJobAdminApplication` class, then open `http://127.0.0.1:8080/xxl-job-admin` in your browser. Default credentials: username `admin`, password `123456`.
Option 2: Use the managed MSE XXL-JOB service
Alibaba Cloud Microservices Engine (MSE) provides a fully managed XXL-JOB service. To get started, see Create an XXL-JOB instance. A free trial is available.
Push trending financial news
The following steps demonstrate the full workflow using MSE XXL-JOB and DeepSeek-R1 hosted on Alibaba Cloud Model Studio. The same steps apply to a self-hosted setup -- only the connection parameters differ.
For the complete source code, see the xxljob-demo (Spring Boot) repository on GitHub.
Step 1: Deploy the executor application
Log on to the ACK console and create an ACK serverless cluster. To pull the demo image, enable SNAT on the VPC. Skip this step if SNAT is already configured.
In the ACK console, click the cluster name. In the left-side navigation pane, choose Workloads > Stateless. Click Create from YAML and apply the following deployment configuration. Replace these placeholders with your actual values:
| Placeholder | Description | Example |
|---|---|---|
| `<your-xxljob-admin-address>` | MSE XXL-JOB instance endpoint | `http://xxljob-xxxxx.schedulerx.mse.aliyuncs.com` |
| `<your-app-name>` | Executor application name registered in XXL-JOB | `my-news-app` |
| `<your-access-token>` | XXL-JOB access token | `xxxxxxx` |
| `<your-api-key>` | Alibaba Cloud Model Studio API key | `sk-xxx` |
| `<your-dingtalk-webhook-url>` | DingTalk group robot webhook URL | `https://oapi.dingtalk.com/robot/send?access_token=xx` |
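The full deployment manifest ships with the demo repository; the following is only a minimal sketch of its shape, assuming the demo image reads its settings from JVM system properties. The image reference and the `JAVA_OPTS` environment variable name are assumptions for illustration; match them to the actual demo manifest.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: xxljob-news-executor
spec:
  replicas: 1
  selector:
    matchLabels:
      app: xxljob-news-executor
  template:
    metadata:
      labels:
        app: xxljob-news-executor
    spec:
      containers:
        - name: executor
          image: <demo-image>   # image from the xxljob-demo repository
          env:
            - name: JAVA_OPTS   # assumed variable; the entrypoint passes it to the JVM
              value: >-
                -Dxxl.job.admin.addresses=<your-xxljob-admin-address>
                -Dxxl.job.executor.appname=<your-app-name>
                -Dxxl.job.accessToken=<your-access-token>
                -Ddashscope.api.key=<your-api-key>
                -Dwebhook.url=<your-dingtalk-webhook-url>
```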
Step 2: Get the startup parameters
XXL-JOB connection parameters: Log on to the MSE XXL-JOB console and select a region. Click the target instance. On the Application Management page, click Access in the Executor Count column for the target application. Click Copy to copy the connection parameters into your YAML configuration:
```shell
-Dxxl.job.admin.addresses=http://xxljob-xxxxx.schedulerx.mse.aliyuncs.com
-Dxxl.job.executor.appname=xxxxx
-Dxxl.job.accessToken=xxxxxxx
```
Model Studio API key: Log on to Alibaba Cloud Model Studio. Click the profile icon in the upper-right corner and select API-KEY. On the management page, create or copy an API key.
```shell
-Ddashscope.api.key=sk-xxx
```

DingTalk webhook URL: In your DingTalk group settings, add a custom robot and copy the webhook URL.

```shell
-Dwebhook.url=https://oapi.dingtalk.com/robot/send?access_token=xx
```
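Because these startup parameters are plain `-D` JVM system properties, the executor can read them directly with `System.getProperty`; a Spring Boot app could equally bind them with `@Value`, since system properties join the Spring environment. The accessor helpers below are illustrative; the property names match the flags above.

```java
public class StartupParams {

    // DingTalk webhook URL, set via -Dwebhook.url=...
    static String webhookUrl() {
        return System.getProperty("webhook.url", "");
    }

    // Model Studio API key, set via -Ddashscope.api.key=sk-...
    static String apiKey() {
        return System.getProperty("dashscope.api.key", "");
    }
}
```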
Step 3: Create and run the news task
On the MSE XXL-JOB console
Log on to the MSE XXL-JOB console and select a region. Click the target instance, then click Task Management in the left-side navigation pane.
Click Create Task. Set JobHandler Name to `sinaNews`. In the Task Parameter field, enter the following prompt. Keep the default values for all other parameters and save the task.

On the Task Management page, find the `sinaNews` task. In the Actions column, click Run Once. After the task completes, the DingTalk group receives the AI-generated news summary.
On a self-hosted XXL-JOB Admin console
Create a task and set JobHandler to `sinaNews`. Use the same prompt from the sample above as the task parameter.

Run the task once manually. The DingTalk group receives a notification with the AI-analyzed news summary.
Scale with sharding and task orchestration
The single-task approach above pulls news from only one source (Sina Finance). For near-real-time coverage across multiple markets, a single executor is too slow. Combine MSE XXL-JOB sharding broadcast with task dependency orchestration to build a scalable, multi-stage pipeline.
Architecture
Set up a three-task dependency chain in MSE XXL-JOB:
Pull financial data --> Analyze data --> Generate report
How the pipeline runs
Pull financial data (sharding broadcast routing): XXL-JOB dispatches parallel sub-tasks to multiple executors. Each executor pulls data from a different source (equities, forex, commodities, and so on). Results are stored in a shared location such as a database, Redis, or Object Storage Service (OSS).
Analyze data: After all shards complete, this task retrieves the collected data and sends it to DeepSeek for analysis. The results are stored for the next stage.
Generate report: This task compiles the analysis results into a structured investment report and delivers it to stakeholders through DingTalk or email.
This pattern -- parallel data collection, centralized analysis, automated distribution -- applies to any scenario where you need to aggregate data from multiple sources and produce AI-driven insights on a schedule.
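In the collection stage, each executor learns its shard index and the shard total from the scheduler (`XxlJobHelper.getShardIndex()` and `getShardTotal()` in `xxl-job-core`). The assignment logic is extracted below as a pure function so it runs standalone; round-robin by index is one reasonable split, not the only one.

```java
import java.util.ArrayList;
import java.util.List;

public class ShardAssignment {

    // Round-robin split: shard i of n takes every n-th source starting at i,
    // so all shards together cover every source exactly once.
    static List<String> assignedSources(List<String> allSources, int shardIndex, int shardTotal) {
        List<String> mine = new ArrayList<>();
        for (int i = 0; i < allSources.size(); i++) {
            if (i % shardTotal == shardIndex) {
                mine.add(allSources.get(i));
            }
        }
        return mine;
    }
}
```

With four sources and two executors, shard 0 pulls the 1st and 3rd sources while shard 1 pulls the 2nd and 4th, roughly halving collection time.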
What's next
Create an XXL-JOB instance: Set up a managed XXL-JOB instance on MSE.
xxljob-demo source code: Clone and run the complete Spring Boot demo.
Alibaba Cloud Model Studio: Explore available models including DeepSeek-R1 and QwQ.