
Microservices Engine: Kubernetes jobs

Last Updated: Feb 28, 2026

Deploy the SchedulerX agent in a Kubernetes cluster to schedule Pods and Jobs with built-in monitoring, alerting, log collection, and diagnostics.

How it works

The following diagram shows the workflow of running Kubernetes jobs in SchedulerX.

Workflow of Kubernetes jobs

When to use Kubernetes jobs vs. script jobs

Infrequent scheduling, high resource consumption
  Script job:      Not recommended. The SchedulerX agent forks a child process for each execution, which occupies machine resources and may cause overload.
  Kubernetes job:  Recommended. The Kubernetes load balancing policy assigns a new Pod per execution, ensuring high stability.

Frequent scheduling, low resource consumption
  Script job:      Recommended. The SchedulerX agent forks a child process, which accelerates execution and improves resource utilization.
  Kubernetes job:  Not recommended. Downloading an image and starting a Pod is slow. Frequent API server calls to schedule Pods or Jobs may cause throttling.

Dependency deployment
  Script job:      Deploy dependencies on Elastic Compute Service (ECS) instances manually in advance.
  Kubernetes job:  Build dependencies into base images. Rebuild images when dependencies change.

Prerequisites

Connect SchedulerX to the target Kubernetes cluster. For more information, see Deploy SchedulerX in a Kubernetes cluster.

Create a Kubernetes job

Go to the Tasks page in the SchedulerX console. Set Task type to K8s, then select a resource type.

Resource type reference

Resource type    Default image         Pod naming pattern
Shell-Script     BusyBox               schedulerx-shell-{JobId}
Python-Script    Python                schedulerx-python-{JobId}
Php-Script       php:7.4-cli           schedulerx-php-{JobId}
Node.js-Script   node:16               schedulerx-node-{JobId}
Job-YAML         User-defined in YAML  User-defined in YAML
Pod-YAML         User-defined in YAML  User-defined in YAML

For script types (Shell-Script, Python-Script, Php-Script, and Node.js-Script), you write scripts directly in the console. No manual image build is required. For Job-YAML and Pod-YAML, you define the full specification in YAML.

After you complete the configuration, click Run once in the Operation column on the Tasks page. The corresponding Pod starts in the cluster. To view the Pod logs, click Historical records in the Operation column.

The following sections show the configuration and results for each resource type.

Shell script

Set resource to Shell-Script. The default image is BusyBox. You can change the image.
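For example, a script pasted into the console might look like the following sketch. The work-list file and its contents are placeholders for your real batch logic; since the default BusyBox image provides a POSIX sh, the script sticks to portable shell features.

```shell
#!/bin/sh
# Hypothetical console script: process a work list and report progress.
# /tmp/worklist.txt is a stand-in for real input data.
WORKLIST=/tmp/worklist.txt
printf 'item-a\nitem-b\nitem-c\n' > "$WORKLIST"

# Count the items to process; wc -l reads from stdin to avoid the filename
# in its output.
COUNT=$(wc -l < "$WORKLIST")
echo "processing $COUNT items"
```

If you switch to a richer image, you can use that image's shell features instead.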

Shell script configuration

A Pod named schedulerx-shell-{JobId} starts in the cluster.

Figures: Shell Pod started; Shell Pod logs

Python script

Set resource to Python-Script. The default image is Python. You can change the image.

Python script configuration

A Pod named schedulerx-python-{JobId} starts in the cluster.

Figures: Python Pod started; Python Pod logs

PHP script

Set resource to Php-Script. The default image is php:7.4-cli. You can change the image.

PHP script configuration

A Pod named schedulerx-php-{JobId} starts in the cluster.

Figures: PHP Pod started; PHP Pod logs

Node.js script

Set resource to Node.js-Script. The default image is node:16. You can change the image.

Node.js script configuration

A Pod named schedulerx-node-{JobId} starts in the cluster.

Figures: Node.js Pod started; Node.js Pod logs

Job-YAML

Set resource to Job-YAML to schedule a native Kubernetes Job. Define the Job specification in YAML.
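For example, a minimal Job specification pasted into the console might look like the following sketch. The name, image, and command are placeholders, not SchedulerX defaults; adjust them to your workload.

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: schedulerx-demo-job   # placeholder name
spec:
  template:
    spec:
      containers:
      - name: demo
        image: busybox
        command: ["sh", "-c", "echo hello from SchedulerX"]
      restartPolicy: Never    # do not restart finished containers
  backoffLimit: 2             # retry a failed Pod at most twice
```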

Job-YAML configuration

The Job and its Pod start in the cluster.

Figures: Job-YAML Pod started; Job-YAML Pod logs
Important

We recommend that you do not set resource to CronJob-YAML. Let SchedulerX handle the scheduling so that it can collect the historical records and operational logs of the Pods.

Pod-YAML

Set resource to Pod-YAML to run a native Kubernetes Pod. Define the Pod specification in YAML.
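For example, a minimal Pod specification might look like the following sketch. The name, image, and command are placeholders; the restart policy follows the recommendation in the note below.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: schedulerx-demo-pod   # placeholder name
spec:
  containers:
  - name: demo
    image: busybox
    command: ["sh", "-c", "echo hello from SchedulerX"]
  restartPolicy: Never        # prevents repeated restarts after the task ends
```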

Pod-YAML configuration

The native Kubernetes Pod starts in the cluster.

Figures: Pod-YAML Pod started; Pod-YAML Pod logs
Important

We recommend that you do not start Pods with a long lifecycle, such as Pods that run web applications and never stop. Set the restart policy to Never to prevent repeated restarts.

Environment variable parameters

SchedulerX passes job parameters to your workloads as environment variables. Regardless of the resource type, scripts, Pods, and native Kubernetes Jobs can read these parameters from the environment.

Note

Requires SchedulerX-Agent version 1.10.14 or later.

Key                                Description
SCHEDULERX_JOB_NAME                Name of the job.
SCHEDULERX_SCHEDULE_TIMESTAMP      Timestamp when the job is scheduled.
SCHEDULERX_DATA_TIMESTAMP          Timestamp when the job data is processed.
SCHEDULERX_WORKFLOW_INSTANCE_ID    ID of the workflow instance.
SCHEDULERX_JOB_PARAMETERS          Parameters of the job.
SCHEDULERX_INSTANCE_PARAMETERS     Parameters of the job instance.
SCHEDULERX_JOB_SHARDING_PARAMETER  Sharding parameters of the job.
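As a sketch, a console shell script could read these variables as follows. In the cluster, the SchedulerX agent injects the values; the defaults shown here only cover the case where a variable is not set.

```shell
#!/bin/sh
# Sketch: read the SchedulerX-injected parameters listed above.
# ${VAR:-unset} expands to "unset" when the agent has not set the variable.
print_schedulerx_params() {
    echo "job name:       ${SCHEDULERX_JOB_NAME:-unset}"
    echo "schedule time:  ${SCHEDULERX_SCHEDULE_TIMESTAMP:-unset}"
    echo "job parameters: ${SCHEDULERX_JOB_PARAMETERS:-unset}"
}
print_schedulerx_params
```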

The following figure shows sample environment variable values read by SchedulerX.

Environment variable parameters

Advantages over native Kubernetes Jobs

Edit scripts online without rebuilding images

Native Kubernetes Jobs require you to package scripts into images and configure commands in YAML files. Every script change requires an image rebuild and redeployment. For example:

apiVersion: batch/v1
kind: Job
metadata:
  name: hello
spec:
  template:
    spec:
      containers:
      - name: hello
        image: busybox
        command: ["sh", "/root/hello.sh"]
      restartPolicy: Never
  backoffLimit: 4

SchedulerX eliminates this workflow. The console supports online editing of Shell, Python, PHP, and Node.js scripts. Save your changes and SchedulerX runs the updated script in a new Pod on the next execution. No image rebuild, no YAML editing, no redeployment.

This approach also hides container details, which makes Kubernetes job management accessible to developers unfamiliar with container services.

Online script editing

Orchestrate jobs visually with drag-and-drop

In the Kubernetes ecosystem, Argo Workflows is a popular workflow orchestration tool that relies on YAML definitions. For example:

# Diamond workflow: A -> B,C -> D
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: dag-diamond-
spec:
  entrypoint: diamond
  templates:
  - name: diamond
    dag:
      tasks:
      - name: A
        template: echo
        arguments:
          parameters: [{name: message, value: A}]
      - name: B
        depends: "A"
        template: echo
        arguments:
          parameters: [{name: message, value: B}]
      - name: C
        depends: "A"
        template: echo
        arguments:
          parameters: [{name: message, value: C}]
      - name: D
        depends: "B && C"
        template: echo
        arguments:
          parameters: [{name: message, value: D}]

  - name: echo
    inputs:
      parameters:
      - name: message
    container:
      image: alpine:3.7
      command: [echo, "{{inputs.parameters.message}}"]

SchedulerX replaces this YAML with a visual drag-and-drop interface.

Visual workflow editor

The visual interface also shows real-time execution progress, making it straightforward to identify blocked steps and troubleshoot failures.

Workflow execution tracking

Monitor jobs and receive alerts

SchedulerX provides built-in alerting for scheduled Pods and Jobs.

Alert channels: SMS, phone call, email, and webhook (DingTalk, WeCom, and Lark).

Alert policies: alert on failure and alert on execution timeout.

Collect logs automatically

SchedulerX collects Pod logs automatically. You do not need to activate additional log services such as Simple Log Service. When a Pod fails, view the failure details and troubleshoot directly from the SchedulerX console.

Automatic log collection

Use the built-in monitoring dashboard

SchedulerX includes a built-in monitoring dashboard for job execution metrics. You do not need to activate Managed Service for Prometheus separately.

Monitoring dashboard

Mix online and offline job deployments

SchedulerX supports both Java and Kubernetes jobs, enabling mixed deployment of online and offline workloads:

  • Online jobs that require high real-time performance (such as order processing) run in-process alongside other online services for seamless integration.

  • Offline jobs that consume heavy resources but tolerate latency (such as financial report generation) run in separate Pods.

This mixed deployment model provides flexible scheduling across diverse business requirements.