
Microservices Engine: Push trending financial news and analyze financial data based on XXL-JOB and DeepSeek

Last Updated: Dec 04, 2025

This topic describes how to use Microservices Engine (MSE) XXL-JOB and DeepSeek to periodically push trending financial news and analyze financial data.

Background information

As the capabilities of AI large language models (LLMs) continue to improve, LLMs are being applied to more business scenarios. In many of these scenarios, jobs can be triggered manually or scheduled automatically in the background, and LLM capabilities can further enhance these jobs. Typical scenarios:

  • Risk monitoring: Periodically monitor key metrics of the system and identify potential risks based on the intelligent analysis capability of LLMs.

  • Data analysis: Periodically collect online financial data, use LLMs to perform intelligent analysis, and then generate investment ideas for investors.

Prerequisites

Environment preparations

Build DeepSeek

DeepSeek is selected as the LLM for this topic for the following reasons:

  • DeepSeek stands out for its reasoning ability, which makes it well suited for data analysis. In addition, DeepSeek's parent company, High-Flyer, specializes in quantitative trading, which suggests that DeepSeek benefits from strong data analysis expertise.

  • DeepSeek is open source, lightweight, and easy to deploy.

You can also choose the open source QwQ-32B model developed by Alibaba Cloud. It is comparable to DeepSeek-R1 in reasoning and also performs strongly in complex data analysis. The following figure compares QwQ-32B with other leading LLMs in terms of mathematical reasoning, programming capabilities, and general-purpose capabilities.

image

Solution 1: On-premises deployment

You can deploy DeepSeek, QwQ, or other models in an on-premises environment by following similar steps. The following steps use DeepSeek as an example.

  1. Download Ollama from https://ollama.com/download and install it.

    image

  2. Install the DeepSeek-R1 model, which specializes in reasoning and is therefore well suited for data analysis.

    image

    Select a model based on your machine's configuration. For example, if your computer has 16 GB of memory, select the 7b model and run the installation command in the CLI.

    image

    The following table describes the hardware requirements for each model:

    | Model name       | Model size | Video memory      | Memory            |
    | ---------------- | ---------- | ----------------- | ----------------- |
    | deepseek-r1:1.5b | 1.1 GB     | Larger than 4 GB  | Larger than 8 GB  |
    | deepseek-r1:7b   | 4.7 GB     | Larger than 8 GB  | Larger than 16 GB |
    | deepseek-r1:8b   | 4.9 GB     | Larger than 10 GB | Larger than 18 GB |
    | deepseek-r1:14b  | 9.0 GB     | Larger than 16 GB | Larger than 32 GB |
    | deepseek-r1:32b  | 20 GB      | Larger than 24 GB | Larger than 64 GB |

  3. After the deployment is complete, test the model through its OpenAI-compatible API (default port: 11434). This facilitates subsequent code integration.

    image
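A quick way to exercise the OpenAI-compatible endpoint from Java is shown below. This is a minimal sketch, assuming Ollama is listening on the default port 11434 and serving the deepseek-r1:7b model; the request body is built by hand to avoid extra dependencies, and the prompt is an example.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class OllamaSmokeTest {
    // Build a minimal OpenAI-style chat completion request body by hand.
    static String buildChatRequest(String model, String prompt) {
        // Escape backslashes and double quotes so the hand-built JSON stays valid.
        String safePrompt = prompt.replace("\\", "\\\\").replace("\"", "\\\"");
        return "{\"model\":\"" + model + "\","
             + "\"messages\":[{\"role\":\"user\",\"content\":\"" + safePrompt + "\"}],"
             + "\"stream\":false}";
    }

    // Send the request to the local Ollama endpoint. Not invoked in main
    // below, because it requires a running Ollama instance.
    static String send(String body) throws Exception {
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://127.0.0.1:11434/v1/chat/completions"))
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(body))
                .build();
        return HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString())
                .body();
    }

    public static void main(String[] args) {
        String body = buildChatRequest("deepseek-r1:7b", "Summarize today's CPI data.");
        System.out.println(body);
    }
}
```

Because the endpoint is OpenAI-compatible, the same request shape later works against Alibaba Cloud Model Studio by changing only the base URL, model name, and API key.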

Solution 2: Use cloud services

You can also use cloud services directly. For example, Alibaba Cloud Model Studio only needs to be activated before use, and activation comes with a large free quota. With cloud services, you can also switch between different models at any time.

Build XXL-JOB

XXL-JOB provides the following features:

  • Periodically initiates AI job requests.

  • Uses job parameters to specify the prompt and response format, which can be modified dynamically.

  • Uses sharding broadcast jobs to split large jobs into small jobs to accelerate the execution of AI jobs.

  • Defines job dependencies and orchestrates jobs to build an AI data analysis workflow.
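As an illustration of the job-parameter feature above, the sketch below models what an XXL-JOB handler method might do with its job parameter before calling the LLM. In a real executor, the method would be annotated with @XxlJob and would read the parameter through XxlJobHelper.getJobParam(); the plain-Java form and the default prompt here are illustrative assumptions, not the demo's actual code.

```java
public class NewsJobSketch {
    // Hypothetical fallback used when the job is scheduled without a parameter.
    static final String DEFAULT_PROMPT =
            "Work as a news assistant and summarize the top trending financial news.";

    // Models what an @XxlJob("sinaNews") handler would do with its job
    // parameter: trim it, fall back to a default prompt when empty, and
    // hand the result to the LLM call (elided here).
    static String resolvePrompt(String jobParam) {
        if (jobParam == null || jobParam.trim().isEmpty()) {
            return DEFAULT_PROMPT;
        }
        return jobParam.trim();
    }

    public static void main(String[] args) {
        System.out.println(resolvePrompt(null));
        System.out.println(resolvePrompt("Summarize today's market moves."));
    }
}
```

Because the prompt lives in the job parameter rather than in code, you can change what the AI job does from the console without redeploying the executor.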

Solution 1: On-premises deployment

XXL-JOB is easy to deploy. For detailed steps, visit the official website. The following content describes the general steps.

  1. Prepare a database and initialize the database table structure.

    image

  2. Import code into your integrated development environment (IDE) and configure parameters in the xxl-job-admin configuration file.

    image

  3. Run the XxlJobAdminApplication class, and enter http://127.0.0.1:8080/xxl-job-admin in the address bar of your browser to log on to the XXL-JOB console. The default username and password for logon are admin and 123456, respectively.

Solution 2: Use cloud services

You can use managed Alibaba Cloud MSE XXL-JOB. For information about how to create an MSE XXL-JOB instance, see Create an instance. A free trial is also available.

Push trending financial news

This section describes how to use MSE XXL-JOB or self-managed XXL-JOB with DeepSeek R1 managed by Alibaba Cloud Model Studio to push trending financial news. For demo details, see xxljob-demo (SpringBoot).

Step 1: Connect your application to XXL-JOB

  1. Log on to the Alibaba Cloud Container Service for Kubernetes (ACK) console, and create an ACK Serverless cluster. On the ACK Serverless tab of the buy page, select Configure SNAT for VPC to facilitate demo image pulling. Ignore this operation if SNAT is already configured for the VPC.

  2. On the Clusters page in the ACK console, click the name of the cluster. In the left-side navigation pane, choose Workloads > Deployments. On the Deployments page, click Create from YAML. Then, enter the following YAML code to connect your application to MSE XXL-JOB. For information about how to configure the parameters -Dxxl.job.admin.addresses, -Dxxl.job.executor.appname, -Dxxl.job.accessToken, -Ddashscope.api.key, and -Dwebhook.url, see Configure startup parameters.

    Sample YAML code for application deployment

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: xxljob-demo
      labels:
        app: xxljob-demo
    spec:
      replicas: 2
      selector:
        matchLabels:
          app: xxljob-demo
      template:
        metadata:
          labels:
            app: xxljob-demo
        spec:
          containers:
          - name: xxljob-executor
            image: registry.cn-hangzhou.aliyuncs.com/schedulerx/xxljob-demo:2.4.2
            ports:
            - containerPort: 9999
            env:
              - name: JAVA_OPTS
                value: >-
                  -Dxxl.job.admin.addresses=http://xxljob-xxxxx.schedulerx.mse.aliyuncs.com
                  -Dxxl.job.executor.appname=xxxxx
                  -Dxxl.job.accessToken=xxxxxxx
                  -Ddashscope.api.key=sk-xxx
                  -Dwebhook.url=https://oapi.dingtalk.com/robot/send?access_token=xx
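The JAVA_OPTS above hand configuration to the application as JVM system properties. A minimal sketch of how the demo application could read them with System.getProperty follows; the property names match the YAML, while the fallback values are assumptions for illustration.

```java
public class StartupConfig {
    // Read a JVM system property set via -D, with an explicit fallback.
    static String get(String key, String fallback) {
        String value = System.getProperty(key);
        return (value == null || value.isEmpty()) ? fallback : value;
    }

    public static void main(String[] args) {
        // Property names match the -D flags in the deployment YAML;
        // the defaults here are illustrative only.
        String admin = get("xxl.job.admin.addresses", "http://127.0.0.1:8080/xxl-job-admin");
        String appName = get("xxl.job.executor.appname", "xxljob-demo");
        System.out.println(admin + " / " + appName);
    }
}
```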

Step 2: Configure startup parameters

  1. Obtain the settings of startup parameters.

    1. Log on to the MSE console, go to the XXL-JOB Version page, and then select a region in the top navigation bar.

    2. Click the ID of the instance. In the left-side navigation pane, click Application Management. Click Access in the Executors column of the target application.

    image

    Replace the placeholder values with those of your instance, and click One-click Copy to copy the settings into the YAML code.

    -Dxxl.job.admin.addresses=http://xxljob-xxxxx.schedulerx.mse.aliyuncs.com
    -Dxxl.job.executor.appname=xxxxx
    -Dxxl.job.accessToken=xxxxxxx
  2. Log on to the Alibaba Cloud Model Studio console, click the API-KEY icon in the upper-right corner to go to the API management page, and then create or copy an API key.

    After you replace the API key, copy the parameter settings to the YAML code.

    -Ddashscope.api.key=sk-xxx
  3. Add a custom chatbot in the DingTalk group settings and obtain the webhook URL of the chatbot.

    After you replace the value of access_token, copy the parameter setting to the YAML code.

    -Dwebhook.url=https://oapi.dingtalk.com/robot/send?access_token=xx
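The summary is delivered by POSTing a markdown message to this webhook URL. The sketch below builds the JSON body in the shape the DingTalk custom robot API expects (msgtype markdown) and shows how it would be sent; the title and text are placeholders, and the hand-rolled JSON escaping is deliberately minimal.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class DingTalkPush {
    // Build a markdown message body for the DingTalk custom robot API.
    static String buildMarkdownPayload(String title, String text) {
        String t = title.replace("\\", "\\\\").replace("\"", "\\\"");
        String x = text.replace("\\", "\\\\").replace("\"", "\\\"").replace("\n", "\\n");
        return "{\"msgtype\":\"markdown\",\"markdown\":{\"title\":\"" + t
             + "\",\"text\":\"" + x + "\"}}";
    }

    // POST the payload to the webhook URL read from -Dwebhook.url.
    // Not invoked in main below, because it needs a real access_token.
    static String push(String webhookUrl, String payload) throws Exception {
        HttpRequest req = HttpRequest.newBuilder()
                .uri(URI.create(webhookUrl))
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(payload))
                .build();
        return HttpClient.newHttpClient()
                .send(req, HttpResponse.BodyHandlers.ofString())
                .body();
    }

    public static void main(String[] args) {
        System.out.println(buildMarkdownPayload("Daily financial news", "#### 1. headline"));
    }
}
```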

Step 3: Create and run AI jobs

MSE XXL-JOB console
  1. Log on to the MSE console, go to the XXL-JOB Version page, and then select a region in the top navigation bar. Click the ID of the instance. In the left-side navigation pane, click Task Management. On the page that appears, click Create Task.

  2. In the Create Task panel, set JobHandler Name to sinaNews, enter the prompt information in the Input field, and then retain the default settings of other parameters.

    Sample prompt information in Input

    Work as a news assistant to decode the provided Unicode content, extract the top 5 trending news, and then summarize the content provided by the user. 
    Sample output format:
    
    Daily trending financial news (sorted by trend ranking)
    
    ---
    
    #### 1. [**title**](url)
    
    Trending value: 99,999
    
    Publisher: publisher
    
    ---
    
    #### **Message summary**
    
    Analyze whether the latest AI-related news exists. Summarize today's news.

  3. On the Task Management page, find the sinaNews job that you created and click Run once in the Operation column. After the job execution is complete, the DingTalk group receives the daily news analyzed and summarized by the LLM.

Self-managed XXL-JOB Admin
  1. In the self-managed XXL-JOB Admin console, set JobHandler to sinaNews. For information about job parameters, see Configure startup parameters.

  2. On the Task Management page, manually run a job once. Then, you can receive a DingTalk notification.

Analyze financial data

In Push trending financial news, only news from Sina Finance is pushed. To pull national and international financial news and data in near real time and support quick decisions, the timeliness of all jobs must be considered. To address this, you can use MSE XXL-JOB sharding broadcast jobs to split a large job into small jobs, each of which pulls different data. You can then use the job orchestration capabilities of MSE XXL-JOB to build a workflow that completes the jobs step by step.

  1. Create three jobs on MSE XXL-JOB and establish dependencies between them: one job pulls financial data, one analyzes the data, and one generates reports. Set the routing policy of the data-pulling job to sharding broadcast.

  2. To run the data-pulling job, use sharding broadcast to distribute subtasks to different executors, which obtain national and international financial news and data and store the results in storage services such as databases, Redis instances, or object storage.

  3. To run the data analysis job, obtain the current financial data, call DeepSeek to analyze the data, and then store the analysis results.

  4. After the data analysis is completed, run the report generation job to generate a report or table for the analyzed data. Then, use DingTalk or emails to deliver the report or table to users to provide investment ideas.
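The sharding broadcast step above can be sketched as follows: XXL-JOB passes each executor its (shardIndex, shardTotal) pair, and each executor keeps only the data sources whose position maps to its index. The data source names below are examples, not part of the demo.

```java
import java.util.ArrayList;
import java.util.List;

public class ShardedFetchSketch {
    // Example data sources; in the real job these would be news and data feeds.
    static final List<String> SOURCES = List.of(
            "domestic-news", "international-news", "stock-quotes", "fx-rates", "bond-yields");

    // Classic XXL-JOB sharding: executor i of n handles the items whose
    // position satisfies position % shardTotal == shardIndex.
    static List<String> sourcesForShard(int shardIndex, int shardTotal) {
        List<String> mine = new ArrayList<>();
        for (int i = 0; i < SOURCES.size(); i++) {
            if (i % shardTotal == shardIndex) {
                mine.add(SOURCES.get(i));
            }
        }
        return mine;
    }

    public static void main(String[] args) {
        // With 2 executors, shard 0 pulls items 0, 2, 4 and shard 1 pulls items 1, 3.
        System.out.println(sourcesForShard(0, 2));
        System.out.println(sourcesForShard(1, 2));
    }
}
```

Every shard covers a disjoint subset and the union covers all sources, so adding executors shortens the pull phase without duplicating work.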