You can view and modify virtual cluster configurations in the Data Lake Analytics console.
1. Log on to the Data Lake Analytics console.
2. In the top navigation bar, select the region where Data Lake Analytics is deployed.
   Note: The Serverless Spark feature is available only in the China (Hong Kong), Singapore, and US (Silicon Valley) regions.
3. In the left-side navigation pane, choose Serverless Spark > Virtual Cluster management to view the status, creation time, and other information of virtual clusters.
4. Find the target cluster and click Modify in the Actions column. In the Modify a virtual cluster pane, modify the configuration of the cluster as required.
The following parameters are displayed:

Name: The cluster name that was specified when the cluster was created. The name cannot be modified.

Resource upper limit: The maximum number of CPU cores and the maximum amount of memory that the Spark jobs of a virtual cluster can use. You can select cluster specifications from the drop-down list, or click Custom to enter a custom upper limit. If the total resources used by a single Spark job exceed this upper limit, the system rejects the job.

Version: The version number of the Serverless Spark engine.

Version Description: The description of the Serverless Spark engine version.

You can click Show to set the default parameters for a Spark job:

Executor default resource: The default resource specification of the executors in a Spark job. This parameter corresponds to spark.executor.resourceSpec in the command line.

Executor default quantity: The default number of executors in a Spark job. This parameter corresponds to spark.executor.instances in the command line.

Driver default resource: The default resource specification of the driver in a Spark job. This parameter corresponds to spark.driver.resourceSpec in the command line.
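The cluster-level defaults above can also be overridden per job by setting the same properties in a job configuration. As a minimal sketch of how the three Spark properties named above might appear in a JSON job configuration (the job name, file path, class name, and specification values below are placeholders for illustration, not taken from this document):

```json
{
  "name": "example-job",
  "file": "oss://your-bucket/path/to/spark-app.jar",
  "className": "com.example.SparkApp",
  "conf": {
    "spark.driver.resourceSpec": "medium",
    "spark.executor.resourceSpec": "medium",
    "spark.executor.instances": 2
  }
}
```

Properties omitted from a job configuration fall back to the default parameters configured for the virtual cluster.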