
Microservices Engine: Use XXL-JOB and DeepSeek to push hot news and analyze financial data

Last Updated: Mar 11, 2026

Build an automated pipeline that fetches trending financial news on a schedule, analyzes it with the DeepSeek large language model (LLM), and pushes structured summaries to a DingTalk group chat.

How it works

As AI model capabilities grow, many business scenarios benefit from combining scheduled tasks with LLM-powered analysis. Common examples include:

  • Risk monitoring: Monitor key system metrics on a schedule and use LLM-powered analysis to detect potential threats.

  • Data analytics: Collect financial data on a schedule and use a large language model to generate intelligent analysis and investment insights.

This guide demonstrates the data analytics scenario. The pipeline combines two components:

  • XXL-JOB (distributed task scheduler): Triggers tasks on a cron schedule, passes prompts as task parameters, and orchestrates multi-step workflows.

  • DeepSeek-R1 (large language model): Processes raw news data into structured summaries and investment insights.

The end-to-end flow:

  1. XXL-JOB triggers a scheduled task on your executor application.

  2. The executor fetches raw financial news from external APIs (for example, Sina Finance).

  3. The executor sends the raw data to DeepSeek with a prompt that instructs it to extract, rank, and summarize the top stories.

  4. DeepSeek returns a structured summary.

  5. The executor pushes the summary to a DingTalk group through a webhook.
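The five steps above can be sketched as a single handler method. The following stdlib-only Java sketch shows the shape of the flow; the class, method names, and news URL are hypothetical placeholders, and in the real demo the method would be registered with the @XxlJob annotation from xxl-job-core:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class NewsJobSketch {

    // Step 3: wrap the prompt (system message) and the raw news (user
    // message) into an OpenAI-compatible chat-completions request body.
    static String buildChatRequest(String model, String prompt, String rawNews) {
        return "{\"model\":\"" + model + "\",\"messages\":["
                + "{\"role\":\"system\",\"content\":\"" + escape(prompt) + "\"},"
                + "{\"role\":\"user\",\"content\":\"" + escape(rawNews) + "\"}]}";
    }

    // Minimal JSON string escaping for the fields built above.
    static String escape(String s) {
        return s.replace("\\", "\\\\").replace("\"", "\\\"").replace("\n", "\\n");
    }

    // Steps 2-5 of the flow. In the real executor this method would carry
    // @XxlJob("sinaNews") and receive the prompt from the task parameter.
    static void runOnce(HttpClient http, String prompt) throws Exception {
        String rawNews = get(http, "https://example.invalid/sina-news");  // step 2 (placeholder URL)
        HttpRequest ask = HttpRequest.newBuilder(
                        URI.create("http://localhost:11434/v1/chat/completions"))  // step 3
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(
                        buildChatRequest("deepseek-r1:7b", prompt, rawNews)))
                .build();
        String summary = http.send(ask, HttpResponse.BodyHandlers.ofString()).body();  // step 4
        System.out.println(summary);  // step 5 would POST this to the DingTalk webhook
    }

    static String get(HttpClient http, String url) throws Exception {
        return http.send(HttpRequest.newBuilder(URI.create(url)).build(),
                HttpResponse.BodyHandlers.ofString()).body();
    }

    public static void main(String[] args) {
        // Only the pure request-building step runs here; no network calls.
        System.out.println(buildChatRequest("deepseek-r1:7b",
                "Summarize the top stories", "headline one"));
    }
}
```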

For a more advanced scenario that uses sharding broadcast and task dependency orchestration to build a multi-stage financial analysis pipeline, see Scale with sharding and task orchestration.

Prerequisites

Before you begin, set up the two components described in this section: a DeepSeek model endpoint and an XXL-JOB scheduler.

Set up DeepSeek

Deploy DeepSeek locally with Ollama, or access it through Alibaba Cloud Model Studio.

DeepSeek-R1 is optimized for complex logical reasoning, making it a strong fit for data analysis. Its parent company, High-Flyer Quant, specializes in quantitative trading, which gives the model a practical edge in financial scenarios. DeepSeek is also open source and lightweight, so local deployment is straightforward.

Alternative: QwQ, Alibaba Cloud's open-source reasoning model, delivers comparable inference performance to DeepSeek-R1 and also excels at complex data analysis. The following chart compares QwQ-32B with other leading models in mathematical reasoning, programming, and general capabilities:

QwQ-32B comparison chart

Option 1: Deploy locally with Ollama

The steps below use DeepSeek as an example. The same process applies to QwQ or other Ollama-supported models.

  1. Install Ollama from https://ollama.com/download.

    Install Ollama

  2. Pull the DeepSeek-R1 model. The R1 variant focuses on complex logical reasoning and is best suited for data analysis. Choose a model size based on your hardware; the following table lists the requirements for each size. For example, if your machine has 16 GB of RAM, pull the 7b model:

    | Model | Size | GPU memory | RAM |
    | --- | --- | --- | --- |
    | deepseek-r1:1.5b | 1.1 GB | 4 GB+ | 8 GB+ |
    | deepseek-r1:7b | 4.7 GB | 8 GB+ | 16 GB+ |
    | deepseek-r1:8b | 4.9 GB | 10 GB+ | 18 GB+ |
    | deepseek-r1:14b | 9.0 GB | 16 GB+ | 32 GB+ |
    | deepseek-r1:32b | 20 GB | 24 GB+ | 64 GB+ |

       ollama pull deepseek-r1:7b

    Select DeepSeek-R1

    Pull model

  3. Verify the deployment by sending a request to the OpenAI-compatible API. Ollama serves on port 11434 by default. Because the API is OpenAI-compatible, the same client code works for both local and cloud-hosted models:

       curl http://localhost:11434/v1/chat/completions \
         -H "Content-Type: application/json" \
         -d '{
           "model": "deepseek-r1:7b",
           "messages": [{"role": "user", "content": "Hello"}]
         }'

    API test

Option 2: Use Alibaba Cloud Model Studio

Alibaba Cloud Model Studio provides hosted access to DeepSeek, QwQ, and other models. Activate the service to start making API calls immediately -- no infrastructure setup required. A free quota is included, and you can switch between models at any time.

Set up XXL-JOB

XXL-JOB adds four capabilities that are useful for production AI workloads:

| Capability | Description |
| --- | --- |
| Scheduled execution | Trigger AI tasks on a cron schedule |
| Dynamic prompts | Pass prompt text and response format as task parameters, allowing updates without redeploying code |
| Sharding broadcast | Split a large task into parallel sub-tasks across multiple executors to speed up data collection |
| Task dependency orchestration | Chain tasks into a multi-step pipeline (for example, collect data, then analyze, then generate a report) |
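The dynamic-prompts capability works because the handler reads its parameter at run time (XxlJobHelper.getJobParam() in xxl-job-core) rather than compiling the prompt in. A minimal stdlib-only sketch of the fallback logic; the class name and default prompt text are placeholders:

```java
public class DynamicPrompt {

    // Fallback used when the task parameter is empty (placeholder text).
    static final String DEFAULT_PROMPT =
            "You are a news assistant. Summarize the top stories.";

    // Returns the prompt configured on the XXL-JOB task, falling back to the
    // default. Because the prompt lives in the scheduler console, editing it
    // there changes the model's behavior without redeploying the executor.
    static String resolvePrompt(String jobParam) {
        return (jobParam == null || jobParam.isBlank())
                ? DEFAULT_PROMPT
                : jobParam.trim();
    }

    public static void main(String[] args) {
        // In a real handler: resolvePrompt(XxlJobHelper.getJobParam())
        System.out.println(resolvePrompt("  Rank today's AI headlines.  "));
        System.out.println(resolvePrompt(null));
    }
}
```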

Option 1: Deploy locally

For detailed steps, see the XXL-JOB official documentation. The general process:

  1. Prepare a database and initialize the table schema.

    Database schema

  2. Import the XXL-JOB source code into your IDE and configure xxl-job-admin in the application configuration file.

    Configure xxl-job-admin

  3. Run the XxlJobAdminApplication class, then open http://127.0.0.1:8080/xxl-job-admin in your browser. Default credentials: username admin, password 123456.
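For step 2, the datasource settings live in xxl-job-admin's application.properties. A minimal sketch, assuming a local MySQL instance that holds the xxl_job schema initialized in step 1 (replace the credentials and connection parameters with your own):

```properties
# Datasource for the xxl-job-admin console (schema initialized in step 1)
spring.datasource.url=jdbc:mysql://127.0.0.1:3306/xxl_job?useUnicode=true&characterEncoding=UTF-8&serverTimezone=UTC
spring.datasource.username=root
spring.datasource.password=<your-password>
spring.datasource.driver-class-name=com.mysql.cj.jdbc.Driver
```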

Option 2: Use the managed MSE XXL-JOB service

Alibaba Cloud Microservices Engine (MSE) provides a fully managed XXL-JOB service. To get started, see Create an XXL-JOB instance. A free trial is available.

Push trending financial news

The following steps demonstrate the full workflow using MSE XXL-JOB and DeepSeek-R1 hosted on Alibaba Cloud Model Studio. The same steps apply to a self-hosted setup -- only the connection parameters differ.

For the complete source code, see the xxljob-demo (Spring Boot) repository on GitHub.

Step 1: Deploy the executor application

  1. Log on to the ACK console and create an ACK serverless cluster. So that the cluster can pull the demo image over the Internet, enable SNAT on the cluster's VPC. Skip the SNAT configuration if it is already enabled.

  2. In the ACK console, click the cluster name. In the left-side navigation pane, choose Workloads > Stateless. Click Create from YAML and apply the deployment configuration below. Replace the following placeholders with your actual values:

    | Placeholder | Description | Example |
    | --- | --- | --- |
    | <your-xxljob-admin-address> | MSE XXL-JOB instance endpoint | http://xxljob-xxxxx.schedulerx.mse.aliyuncs.com |
    | <your-app-name> | Executor application name registered in XXL-JOB | my-news-app |
    | <your-access-token> | XXL-JOB access token | xxxxxxx |
    | <your-api-key> | Alibaba Cloud Model Studio API key | sk-xxx |
    | <your-dingtalk-webhook-url> | DingTalk group robot webhook URL | https://oapi.dingtalk.com/robot/send?access_token=xx |

    The following is a sample YAML configuration for the application deployment:

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: xxljob-demo
      labels:
        app: xxljob-demo
    spec:
      replicas: 2
      selector:
        matchLabels:
          app: xxljob-demo
      template:
        metadata:
          labels:
            app: xxljob-demo
        spec:
          containers:
          - name: xxljob-executor
            image: schedulerx-registry.cn-hangzhou.cr.aliyuncs.com/schedulerx3/xxljob-demo:2.4.1
            ports:
            - containerPort: 9999
            env:
              - name: JAVA_OPTS
                value: >-
                  -Dxxl.job.admin.addresses=<your-xxljob-admin-address>
                  -Dxxl.job.executor.appname=<your-app-name>
                  -Dxxl.job.accessToken=<your-access-token>
                  -Ddashscope.api.key=<your-api-key>
                  -Dwebhook.url=<your-dingtalk-webhook-url>
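The JAVA_OPTS entries above reach the JVM as system properties. A stdlib-only sketch of how the executor can read and sanity-check them at startup (the class and field names are illustrative; the property keys match the manifest):

```java
public class ExecutorConfig {

    final String adminAddresses;
    final String apiKey;
    final String webhookUrl;

    // Reads the -D flags injected through JAVA_OPTS in the deployment YAML.
    // Property names match the sample manifest; missing values default to "".
    ExecutorConfig() {
        adminAddresses = System.getProperty("xxl.job.admin.addresses", "");
        apiKey = System.getProperty("dashscope.api.key", "");
        webhookUrl = System.getProperty("webhook.url", "");
    }

    // True only when every required property was supplied.
    boolean isComplete() {
        return !adminAddresses.isEmpty() && !apiKey.isEmpty() && !webhookUrl.isEmpty();
    }

    public static void main(String[] args) {
        ExecutorConfig cfg = new ExecutorConfig();
        // Failing fast at startup beats failing at the first scheduled run.
        if (!cfg.isComplete()) {
            System.err.println("Missing -D configuration; check JAVA_OPTS in the deployment YAML.");
        }
    }
}
```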

Step 2: Get the startup parameters

  1. XXL-JOB connection parameters: Log on to the MSE XXL-JOB console and select a region. Click the target instance. On the Application Management page, click Access in the Executor Count column for the target application. Click Copy to copy the connection parameters into your YAML configuration:

       -Dxxl.job.admin.addresses=http://xxljob-xxxxx.schedulerx.mse.aliyuncs.com
       -Dxxl.job.executor.appname=xxxxx
       -Dxxl.job.accessToken=xxxxxxx

    Access configuration

  2. Model Studio API key: Log on to Alibaba Cloud Model Studio. Click the profile icon in the upper-right corner and select API-KEY. On the management page, create or copy an API key.

       -Ddashscope.api.key=sk-xxx

  3. DingTalk webhook URL: In your DingTalk group settings, add a custom robot and copy the webhook URL.

       -Dwebhook.url=https://oapi.dingtalk.com/robot/send?access_token=xx
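The body POSTed to that webhook follows DingTalk's custom-robot message format. A minimal Java sketch that builds a markdown message from the model's summary (the class name and title text are placeholders; the HTTP POST itself is left out):

```java
public class DingTalkPush {

    // Builds the JSON body for a DingTalk custom-robot markdown message.
    // The markdown summary returned by DeepSeek goes into the "text" field.
    static String markdownMessage(String title, String markdownText) {
        return "{\"msgtype\":\"markdown\",\"markdown\":{"
                + "\"title\":\"" + escape(title) + "\","
                + "\"text\":\"" + escape(markdownText) + "\"}}";
    }

    // Minimal JSON string escaping for the two fields above.
    static String escape(String s) {
        return s.replace("\\", "\\\\").replace("\"", "\\\"").replace("\n", "\\n");
    }

    public static void main(String[] args) {
        // POST this body to the webhook URL with Content-Type: application/json.
        System.out.println(markdownMessage("Hot financial news",
                "#### 1. Sample headline\n\nPopularity: 99,999"));
    }
}
```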

Step 3: Create and run the news task

On the MSE XXL-JOB console

  1. Log on to the MSE XXL-JOB console and select a region. Click the target instance, then click Task Management in the left-side navigation pane.

  2. Click Create Task. Set JobHandler Name to sinaNews. In the Task Parameter field, enter the following prompt. Keep the default values for all other parameters and save the task.

    Sample Task Parameter prompt configuration:

    You are a news assistant. You need to Unicode-decode the content provided by the user, extract the 5 hottest news articles, and finally summarize the content.
    The output format is as follows:
    
    Today's hot financial news (sorted by popularity):
    
    ---
    
    #### 1. [**title**](url)
    
    Popularity: 99,999
    
    Publisher: publisher
    
    ---
    
    #### **Message Summary**
    
    Analyze if there is any latest news related to AI. Briefly summarize today's news content.

  3. On the Task Management page, find the sinaNews task. In the Actions column, click Run Once. After the task completes, the DingTalk group receives the AI-generated news summary.

On a self-hosted XXL-JOB Admin console

  1. Create a task and set JobHandler to sinaNews. Use the same prompt from the sample above as the task parameter.

  2. Run the task once manually. The DingTalk group receives a notification with the AI-analyzed news summary.

Scale with sharding and task orchestration

The single-task approach above pulls news from only one source (Sina Finance). For near-real-time coverage across multiple markets, a single executor is too slow. Combine MSE XXL-JOB sharding broadcast with task dependency orchestration to build a scalable, multi-stage pipeline.

Architecture

Set up a three-task dependency chain in MSE XXL-JOB:

Pull financial data --> Analyze data --> Generate report

How the pipeline runs

  1. Pull financial data (sharding broadcast routing): XXL-JOB dispatches parallel sub-tasks to multiple executors. Each executor pulls data from a different source (equities, forex, commodities, and so on). Results are stored in a shared location such as a database, Redis, or Object Storage Service (OSS).

  2. Analyze data: After all shards complete, this task retrieves the collected data and sends it to DeepSeek for analysis. The results are stored for the next stage.

  3. Generate report: This task compiles the analysis results into a structured investment report and delivers it to stakeholders through DingTalk or email.
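Stage 1 can be made concrete. In xxl-job-core, a sharding-broadcast handler reads its shard number via XxlJobHelper.getShardIndex() and XxlJobHelper.getShardTotal(); the following stdlib-only sketch shows one way to map data sources onto shards (the source names are illustrative):

```java
import java.util.ArrayList;
import java.util.List;

public class ShardAssignment {

    // Assigns each data source to exactly one shard: shard i takes every
    // source whose position modulo shardTotal equals i. With more sources
    // than executors, each executor handles several sources.
    static List<String> sourcesForShard(List<String> sources, int shardIndex, int shardTotal) {
        List<String> mine = new ArrayList<>();
        for (int i = 0; i < sources.size(); i++) {
            if (i % shardTotal == shardIndex) {
                mine.add(sources.get(i));
            }
        }
        return mine;
    }

    public static void main(String[] args) {
        List<String> sources = List.of("equities", "forex", "commodities", "bonds", "crypto");
        // In a real handler the two values come from
        // XxlJobHelper.getShardIndex() and XxlJobHelper.getShardTotal().
        for (int shard = 0; shard < 2; shard++) {
            System.out.println("shard " + shard + " -> " + sourcesForShard(sources, shard, 2));
        }
    }
}
```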

This pattern -- parallel data collection, centralized analysis, automated distribution -- applies to any scenario where you need to aggregate data from multiple sources and produce AI-driven insights on a schedule.

What's next