Obtain an SDK

You can obtain the latest Data Lake Analytics (DLA) SDK package from the Maven repository. Add the following dependencies to your pom.xml file:
<dependency>
    <groupId>com.aliyun</groupId>
    <artifactId>aliyun-java-sdk-openanalytics-open</artifactId>
    <version>1.0.10</version>
</dependency>
<dependency>
    <groupId>com.aliyun</groupId>
    <artifactId>aliyun-java-sdk-core</artifactId> 
    <version>4.4.6</version>
</dependency>

Use the SDK to submit a Spark job

  1. Obtain the AccessKey pair of your Alibaba Cloud account. For more information, see Obtain an AccessKey pair.
  2. Obtain the ID of the region where DLA is deployed. For more information about region IDs, see Regions and zones.
  3. Determine the name of the virtual cluster (VC) and the JSON content that defines the Spark job. You can first submit the Spark job in the DLA console to verify the configuration, and then reuse that configuration with the SDK. (A sample jobConfig sketch appears after the code below.)
    Sample code for using the SDK to submit a Spark job:
    import com.aliyuncs.DefaultAcsClient;
    import com.aliyuncs.IAcsClient;
    import com.aliyuncs.exceptions.ClientException;
    import com.aliyuncs.profile.DefaultProfile;
    // Request and response classes generated in the openanalytics-open SDK package.
    import com.aliyuncs.openanalytics_open.model.v20180619.SubmitSparkJobRequest;
    import com.aliyuncs.openanalytics_open.model.v20180619.SubmitSparkJobResponse;

    /**
     * Submits a job to the serverless Spark engine of DLA.
     *
     * @param regionId           The ID of the region where DLA is deployed.
     * @param accessKeyId        The AccessKey ID of your Alibaba Cloud account.
     * @param accessKeySecret    The AccessKey secret of your Alibaba Cloud account.
     * @param virtualClusterName The name of the VC in DLA.
     * @param jobConfig          The JSON string that describes the Spark job that you want to submit.
     * @return The ID of the submitted Spark job. The ID is used to monitor the status of the job.
     * @throws ClientException The exception that is returned due to issues such as network errors.
     */
    public String submitSparkJob(String regionId,
                                 String accessKeyId, 
                                 String accessKeySecret, 
                                 String virtualClusterName, 
                                 String jobConfig) throws ClientException {
        // Initialize the Alibaba Cloud development client.
        DefaultProfile profile = DefaultProfile.getProfile(regionId, accessKeyId, accessKeySecret);
        IAcsClient client = new DefaultAcsClient(profile);
    
        // Initialize the request and specify the VC name and job content.
        SubmitSparkJobRequest request = new SubmitSparkJobRequest();
        request.setVcName(virtualClusterName);
        request.setConfigJson(jobConfig);
    
        // Submit the Spark job and return the ID of the job.
        SubmitSparkJobResponse response = client.getAcsResponse(request);
        return response.getJobId();
    }  
    Notice: jobConfig must be a valid JSON string. We recommend that you first run a small number of jobs in the DLA console, and then use the SDK to automate job submission for your core services.
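    For reference, the jobConfig string for a simple JAR job might look like the following sketch. All values are placeholders (the job name, OSS path, main class, and resource specifications depend on your job), and the configuration generated by the DLA console is the authoritative reference for the exact fields:
    {
        "name": "SparkPi",
        "file": "oss://your-bucket/spark-examples.jar",
        "className": "org.apache.spark.examples.SparkPi",
        "conf": {
            "spark.driver.resourceSpec": "medium",
            "spark.executor.instances": 2,
            "spark.executor.resourceSpec": "medium"
        }
    }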
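    The following sketch shows one way to invoke the method above. SparkJobDemo is a hypothetical class name, and the region ID, VC name, and environment variable names are placeholder assumptions; replace them with your own values:
    import java.nio.charset.StandardCharsets;
    import java.nio.file.Files;
    import java.nio.file.Paths;

    public class SparkJobDemo {
        // Assume that the submitSparkJob method shown above is defined in this class.
        public static void main(String[] args) throws Exception {
            // Read the job configuration JSON (for example, the sample above saved
            // to a local file) from the path that is passed as the first argument.
            String jobConfig = new String(
                    Files.readAllBytes(Paths.get(args[0])), StandardCharsets.UTF_8);

            // Placeholder values: replace the region ID and VC name with your own,
            // and supply the AccessKey pair through environment variables.
            String jobId = new SparkJobDemo().submitSparkJob(
                    "cn-hangzhou",
                    System.getenv("ALIBABA_CLOUD_ACCESS_KEY_ID"),
                    System.getenv("ALIBABA_CLOUD_ACCESS_KEY_SECRET"),
                    "your-vc-name",
                    jobConfig);
            System.out.println("Spark job submitted. Job ID: " + jobId);
        }
    }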