
Lindorm: JAR job development practice

Last Updated: Jul 29, 2025

This topic describes the steps for developing JAR jobs for the Lindorm compute engine.

Prerequisites

  • A Lindorm instance is created and LindormTable is enabled for the instance. For more information, see Create an instance.

  • The compute engine service is enabled for the Lindorm instance. For more information, see Enable, upgrade, or downgrade the service.

  • A Java environment is installed. JDK 1.8 or a later version is required.

Step 1: Configure dependencies

Lindorm compute engine JAR jobs depend on the community edition of Spark 3.3.1. You must set the scope of each Spark dependency to provided so that the Spark libraries supplied by the cluster are used at runtime instead of being bundled into your JAR. The following code provides an example:

<!-- Example -->
<dependency>
  <groupId>org.apache.spark</groupId>
  <artifactId>spark-core_2.12</artifactId>
  <version>3.3.1</version>
  <scope>provided</scope>
</dependency>
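If your job also calls Spark SQL APIs, as in the permission example in Step 2, the spark-sql module is typically declared the same way. The following is a sketch that mirrors the spark-core example above; verify the artifactId and version against your build:

```xml
<!-- Assumed: Spark SQL module, same version and scope as spark-core -->
<dependency>
  <groupId>org.apache.spark</groupId>
  <artifactId>spark-sql_2.12</artifactId>
  <version>3.3.1</version>
  <scope>provided</scope>
</dependency>
```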

Step 2: Configure permissions

If you want to use Spark SQL to access data in LindormTable, you must configure a username and password. For more information about how to access the data, see Access data in LindormTable. The following code provides an example:

SparkConf conf = new SparkConf();
conf.set("spark.sql.catalog.lindorm_table.username", "root");
conf.set("spark.sql.catalog.lindorm_table.password", "root");

The parameters are described as follows:

  • spark.sql.catalog.lindorm_table.username: the username that is used to access LindormTable. The default username is root.

  • spark.sql.catalog.lindorm_table.password: the password that is used to access LindormTable. The default password is root.
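Putting the settings above together, a minimal job entry point might look as follows. This is a sketch, not the definitive implementation: the catalog name lindorm_table follows the configuration keys shown above, while the credentials, database name (test_db), and table name (test_table) are placeholders that you must replace with your instance's values. Running it requires a Lindorm compute engine cluster.

```java
import org.apache.spark.SparkConf;
import org.apache.spark.sql.SparkSession;

public class LindormJobExample {
    public static void main(String[] args) {
        // Credentials for LindormTable; replace root/root with your own values.
        SparkConf conf = new SparkConf()
                .set("spark.sql.catalog.lindorm_table.username", "root")
                .set("spark.sql.catalog.lindorm_table.password", "root");

        SparkSession spark = SparkSession.builder()
                .config(conf)
                .getOrCreate();

        // Query a table in the lindorm_table catalog (placeholder names).
        spark.sql("SELECT * FROM lindorm_table.test_db.test_table").show();

        spark.stop();
    }
}
```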

Step 3: Configure parameters

For more information about the configuration items and methods that the Lindorm compute engine provides, see Job configuration.

Step 4: Code example

JAR job development is fully compatible with the community edition of Spark 3.3.1. For a code example, see Spark job example.

Step 5: Submit the job

The Lindorm compute engine allows you to submit and manage jobs in the following two ways: