
SchedulerX: Kubernetes jobs

Last Updated: Mar 11, 2026

SchedulerX schedules scripts and native Kubernetes workloads with built-in monitoring, alerting, log collection, and diagnostics. Define jobs directly in the SchedulerX console instead of packaging scripts into container images and managing CronJob YAML -- SchedulerX handles pod lifecycle, execution history, and failure notifications.

How it works

Architecture of SchedulerX Kubernetes job scheduling
  1. SchedulerX connects to your Kubernetes cluster through schedulerx-agent.

  2. When a job triggers, SchedulerX creates a pod (or a native Kubernetes Job) in the cluster.

  3. The pod runs the configured script or YAML workload, and SchedulerX collects its logs and exit status.

  4. After execution, SchedulerX records the result and triggers alerts if the job fails or times out.

Choose between Kubernetes jobs and script jobs

| Scenario | Recommended type | Reason |
| --- | --- | --- |
| Infrequent tasks that consume significant resources (for example, nightly data processing or financial report generation) | Kubernetes job | Each execution runs in a dynamically provisioned pod. Kubernetes load balancing distributes the workload, keeping the agent host stable. |
| Frequent, lightweight tasks (for example, a health check every 30 seconds) | Script job | Forking a child process is fast and avoids the overhead of pulling images and starting pods. Frequent pod creation can also trigger API server throttling. |
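As a sketch of the lightweight case, a script job health check might look like the following. The checked directory and the status messages are illustrative; a real check would probe whatever resource matters to your service:

```shell
#!/bin/sh
# Illustrative lightweight health check suited to a script job:
# it forks quickly as a child process, with no image pull or pod startup.
# The directory path below is a hypothetical example.
CHECK_DIR="${CHECK_DIR:-/tmp}"
if touch "$CHECK_DIR/.healthcheck" 2>/dev/null; then
  rm -f "$CHECK_DIR/.healthcheck"
  echo "OK: $CHECK_DIR writable"
else
  echo "ALERT: $CHECK_DIR not writable" >&2
  exit 1
fi
```

Because the script exits nonzero on failure, SchedulerX can record the run as failed and trigger its failure alerts.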

Dependency management:

| Job type | How to manage dependencies |
| --- | --- |
| Script job | Deploy dependencies to ECS instances in advance. |
| Kubernetes job | Package dependencies into a base image. Rebuild the image when dependencies change. |
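For the Kubernetes-job case, packaging dependencies into a base image might look like the following Dockerfile sketch. The base image tag and the pinned package versions are illustrative:

```dockerfile
# Illustrative base image for Python script jobs.
# Point the job's custom image setting at the image built from this
# file; rebuild and push the image whenever dependencies change.
FROM python:3.9-slim
RUN pip install --no-cache-dir pandas==2.0.3 requests==2.31.0
```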

Prerequisites

Connect SchedulerX to the target Kubernetes cluster. For instructions, see Deploy SchedulerX in a Kubernetes cluster.

Create a Kubernetes job

All Kubernetes job types are created in the Task Management module of the SchedulerX console. Set Task type to K8s, then choose a resource type based on what you want to run.

Script jobs (Shell, Python, PHP, Node.js)

For script-based jobs, SchedulerX provides a built-in editor -- write and update scripts directly in the console without building images or editing YAML files.

| Resource type | Default image | Pod naming pattern |
| --- | --- | --- |
| Shell-Script | busybox | schedulerx-shell-{JobId} |
| Python-Script | Python | schedulerx-python-{JobId} |
| Php-Script | php:7.4-cli | schedulerx-php-{JobId} |
| Node.js-Script | node:16 | schedulerx-node-{JobId} |

Replace the default image with a custom one if needed.

Create a script job:

  1. In the SchedulerX console, go to Task Management.

  2. Create a job with Task type set to K8s.

  3. Set resource to the script type you need (for example, Shell-Script).

  4. Write or paste your script in the built-in editor.

  5. Save the job.
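As a minimal sketch of step 4, a Shell-Script job pasted into the built-in editor could look like this. The default busybox image runs it with sh; the report directory and retention period are hypothetical:

```shell
#!/bin/sh
# Illustrative nightly cleanup script for a Shell-Script job.
# The directory path and 7-day retention are example values.
REPORT_DIR="${REPORT_DIR:-/tmp/reports}"
mkdir -p "$REPORT_DIR"
# Delete report files older than 7 days.
find "$REPORT_DIR" -name '*.csv' -mtime +7 -exec rm -f {} \;
echo "cleanup finished for $REPORT_DIR"
```

Editing this script in the console takes effect on the next run, with no image rebuild.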

The following screenshots show the configuration page for each script type:

  • Shell script configuration

  • Python script configuration

  • PHP script configuration

  • Node.js script configuration

Run and verify:

  1. On the Task Management page, find the job and click Run once in the Actions column.

  2. A pod starts in the cluster. For example, a Shell script job creates a pod named schedulerx-shell-{JobId}.

    Pod started in cluster

  3. To view execution logs, click Historical records in the Actions column.

    Operational logs
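The pod can also be checked from the command line. The snippet below is a sketch: the JobId value is hypothetical, and the commented kubectl commands assume access to the connected cluster:

```shell
#!/bin/sh
# Construct the expected pod name from the documented naming pattern
# for Shell-Script jobs: schedulerx-shell-{JobId}.
JOB_ID=12345                        # hypothetical JobId from the console
POD_NAME="schedulerx-shell-${JOB_ID}"
echo "$POD_NAME"                    # prints: schedulerx-shell-12345
# With cluster access, inspect the pod and its logs (not run here):
#   kubectl get pod "$POD_NAME"
#   kubectl logs "$POD_NAME"
```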

Job-YAML

Use Job-YAML to run a native Kubernetes Job defined in YAML.

  1. Create a job with Task type set to K8s and resource set to Job-YAML.

  2. Enter your Job YAML definition.

    Job-YAML configuration

  3. Click Run once in the Actions column. A pod and the corresponding Kubernetes Job start in the cluster.

    Job-YAML pod started

  4. Click Historical records to view execution logs.

    Job-YAML logs
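A minimal definition for step 2 might look like the following sketch; the metadata name, image, and command are illustrative:

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: nightly-report          # illustrative name
spec:
  template:
    spec:
      containers:
      - name: nightly-report
        image: busybox
        command: ["sh", "-c", "echo generating report && sleep 5"]
      restartPolicy: Never      # Jobs require Never or OnFailure
  backoffLimit: 3
```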

Important

Do not set resource to CronJob-YAML. Use SchedulerX scheduling instead so that historical records and operational logs are properly collected.

Pod-YAML

Use Pod-YAML to run a native Kubernetes pod defined in YAML.

  1. Create a job with Task type set to K8s and resource set to Pod-YAML.

  2. Enter your pod YAML definition.

    Pod-YAML configuration

  3. Click Run once in the Actions column. The pod starts in the cluster.

    Pod-YAML pod started

  4. Click Historical records to view execution logs.

    Pod-YAML logs

Important

Do not start pods with a long lifecycle (for example, web applications that run indefinitely). Set the restart policy to Never to prevent repeated restarts.
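Following the note above, a Pod-YAML definition for a short-lived task could look like this sketch; the metadata name, image, and command are illustrative:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: data-sync               # illustrative name
spec:
  containers:
  - name: data-sync
    image: busybox
    command: ["sh", "-c", "echo syncing data && sleep 10"]
  restartPolicy: Never          # prevents repeated restarts after the task exits
```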

Access job parameters through environment variables

SchedulerX injects job metadata into the pod as environment variables. Read these variables in scripts or applications to access scheduling context -- no additional configuration required.

Note

Requires schedulerx-agent version 1.10.14 or later.

| Environment variable | Description |
| --- | --- |
| SCHEDULERX_JOB_NAME | Name of the job |
| SCHEDULERX_SCHEDULE_TIMESTAMP | Timestamp when the job was scheduled |
| SCHEDULERX_DATA_TIMESTAMP | Timestamp when the job data is processed |
| SCHEDULERX_WORKFLOW_INSTANCE_ID | ID of the workflow instance |
| SCHEDULERX_JOB_PARAMETERS | Job-level parameters |
| SCHEDULERX_INSTANCE_PARAMETERS | Instance-level parameters |
| SCHEDULERX_JOB_SHARDING_PARAMETER | Sharding parameters for the job |

The following screenshot shows sample environment variable values read by a job:

Environment variable sample
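A script job can read the injected variables directly. The sketch below shows the pattern; the fallback defaults exist only so the snippet runs outside a SchedulerX-created pod, where the variables would be populated by the agent:

```shell
#!/bin/sh
# Read SchedulerX-injected job metadata from the environment.
# The :-default fallbacks are illustrative, for running outside a pod.
JOB_NAME="${SCHEDULERX_JOB_NAME:-unknown-job}"
SCHEDULE_TS="${SCHEDULERX_SCHEDULE_TIMESTAMP:-0}"
JOB_PARAMS="${SCHEDULERX_JOB_PARAMETERS:-}"
echo "job=${JOB_NAME} scheduled_at=${SCHEDULE_TS} params=${JOB_PARAMS}"
```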

Advantages over native Kubernetes jobs

Online script editing

Native Kubernetes jobs require packaging scripts into images and configuring commands in YAML. Every script change means rebuilding and redeploying the image:

apiVersion: batch/v1
kind: Job
metadata:
  name: hello
spec:
  template:
    spec:
      containers:
      - name: hello
        image: busybox
        command: ["sh", "/root/hello.sh"]
      restartPolicy: Never
  backoffLimit: 4

With SchedulerX, edit Shell, Python, PHP, and Node.js scripts directly in the console. Changes take effect on the next scheduled run without an image rebuild. Developers unfamiliar with containers can skip container details entirely.

Online script editing in the SchedulerX console

Visual workflow orchestration

Instead of writing Argo-style YAML to define job dependencies:

# Diamond workflow: A -> B,C -> D
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: dag-diamond-
spec:
  entrypoint: diamond
  templates:
  - name: diamond
    dag:
      tasks:
      - name: A
        template: echo
        arguments:
          parameters: [{name: message, value: A}]
      - name: B
        depends: "A"
        template: echo
        arguments:
          parameters: [{name: message, value: B}]
      - name: C
        depends: "A"
        template: echo
        arguments:
          parameters: [{name: message, value: C}]
      - name: D
        depends: "B && C"
        template: echo
        arguments:
          parameters: [{name: message, value: D}]

  - name: echo
    inputs:
      parameters:
      - name: message
    container:
      image: alpine:3.7
      command: [echo, "{{inputs.parameters.message}}"]

SchedulerX provides a drag-and-drop visual interface to build job workflows. Track progress and identify bottlenecks at a glance.

Visual workflow orchestration

Workflow progress tracking

Built-in alerting and monitoring

SchedulerX includes alerting with no additional setup:

  • Notification channels: text message, phone call, email, and webhook (DingTalk, WeCom, Lark)

  • Alert policies: alert on failure, alert on execution timeout

Automatic log collection

SchedulerX collects pod logs automatically. View and analyze failure details in the console without activating Simple Log Service or other log services.

Log viewer in the SchedulerX console

Built-in monitoring dashboard

View job execution metrics in a built-in dashboard without activating Managed Service for Prometheus.

Monitoring dashboard

Mixed deployment of online and offline jobs

SchedulerX supports both Java and Kubernetes jobs, enabling mixed scheduling of online and offline workloads:

  • Online jobs that require low latency (for example, order processing): run as method calls within the same Java process.

  • Offline jobs that consume significant resources (for example, financial report generation): run as scripts in separate pods.