Call ModifyFlowJob to modify a data development job.
Request parameters
Parameter | Type | Required | Example | Description |
---|---|---|---|---|
Action | String | Yes | ModifyFlowJob | The operation that you want to perform. Set the value to ModifyFlowJob. |
Id | String | Yes | FJ-BCCAE48B90CC**** | The ID of the job. You can call ListFlowJob to query the job ID. |
ProjectId | String | Yes | FP-257A173659F5**** | The ID of the project. You can call ListFlowProject to query the project ID. |
RegionId | String | Yes | cn-hangzhou | The ID of the region. You can call DescribeRegions to query the latest list of Alibaba Cloud regions. |
ResourceList.N.Path | String | Yes | oss://path/demo.jar | The OSS or HDFS path of the resource. |
Name | String | No | my_shell_job | The new name of the job. |
Description | String | No | This is the description of a job | The new description of the job. |
FailAct | String | No | CONTINUE | The action to take if the job fails. |
MaxRetry | Integer | No | 5 | The maximum number of retries. Valid values: 0 to 5. |
RetryPolicy | String | No | None | The retry policy. This parameter is reserved. |
MaxRunningTimeSec | Long | No | 0 | This parameter is reserved. |
RetryInterval | Long | No | 200 | The retry interval. Valid values: 0 to 300. Unit: seconds. |
Params | String | No | ls -l | The content of the job. |
ParamConf | String | No | {"date":"${yyyy-MM-dd}"} | The configuration parameters of the job. |
CustomVariables | String | No | {\"scope\":\"PROJECT\",\"entityId\":\"FP-80C2FDDBF35D9CC5\",\"variables\":[{\"name\":\"v1\",\"value\":\"1\",\"properties\":{\"password\":true}}]} | The custom variables configured for the job. |
EnvConf | String | No | {"key":"value"} | The environment variables configured for the job. Note: the entire JSON string can be at most 1,024 bytes in length. |
RunConf | String | No | {"priority":1,"userName":"hadoop","memory":2048,"cores":1} | The running configuration of the job, in JSON format. |
MonitorConf | String | No | {"inputs":[{"type":"KAFKA","clusterId":"C-1234567","topics":"kafka_topic","consumer.group":"kafka_consumer_group"}],"outputs":[{"type":"KAFKA","clusterId":"C-1234567","topics":"kafka_topic"}]} | The monitoring configuration. Only jobs of the SPARK_STREAMING type support monitoring configurations. |
Mode | String | No | YARN | The execution mode of the job. |
ResourceList.N.Alias | String | No | demo.jar | The alias of the resource. |
ClusterId | String | No | C-A23BD131A862**** | The ID of the cluster. |
AlertConf | String | No | None | This parameter is reserved. |
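Several of the string parameters above (ParamConf, EnvConf, RunConf, MonitorConf) carry JSON payloads, and EnvConf in particular is limited to 1,024 bytes. A minimal client-side check can be sketched as follows; `validate_env_conf` is a hypothetical helper, not part of the API:

```python
import json

# Hypothetical helper (not part of the API): serialize an EnvConf mapping
# and enforce the 1,024-byte limit on the resulting JSON string.
def validate_env_conf(env: dict) -> str:
    payload = json.dumps(env, separators=(",", ":"))
    if len(payload.encode("utf-8")) > 1024:
        raise ValueError("EnvConf JSON exceeds the 1,024-byte limit")
    return payload

print(validate_env_conf({"key": "value"}))  # prints {"key":"value"}
```

The same serialized string is what you would pass as the EnvConf request parameter.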
Response parameters
Parameter | Type | Example | Description |
---|---|---|---|
Data | Boolean | true | Indicates whether the job was modified. Valid values: true and false. |
RequestId | String | 549175a-6d14-4c8a-89f9-5e28300f6d7e | The ID of the request. |
Examples
Sample requests
http(s)://[Endpoint]/?Action=ModifyFlowJob
&Id=FJ-BBCAE48B90CC****
&ProjectId=FP-257A173659F5****
&RegionId=cn-hangzhou
&ClusterId=C-A23BD131A862****
&Description=This is the description of a data development job
&EnvConf={"key":"value"}
&FailAct=CONTINUE
&MaxRetry=5
&Mode=YARN
&MonitorConf={"inputs":[{"type":"KAFKA","clusterId":"C-1234567","topics":"kafka_topic","consumer.group":"kafka_consumer_group"}],"outputs":[{"type":"KAFKA","clusterId":"C-1234567","topics":"kafka_topic"}]}
&Name=my_shell_job
&ParamConf={"date":"${yyyy-MM-dd}"}
&Params=ls -l
&ResourceList.1.Alias=demo.jar
&ResourceList.1.Path=oss://path/demo.jar
&RetryInterval=200
&RunConf={"priority":1,"userName":"hadoop","memory":2048,"cores":1}
&<common request parameters>
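The sample request above is an HTTP GET with the parameters URL-encoded into the query string. A minimal sketch of assembling that query string in Python is shown below; the endpoint, signature, and other common request parameters are omitted here, since in practice an Alibaba Cloud SDK adds them for you:

```python
from urllib.parse import urlencode

# A subset of the ModifyFlowJob parameters from the sample request above.
# Repeated parameters use the ResourceList.N.* naming convention.
params = {
    "Action": "ModifyFlowJob",
    "Id": "FJ-BBCAE48B90CC****",
    "ProjectId": "FP-257A173659F5****",
    "RegionId": "cn-hangzhou",
    "ClusterId": "C-A23BD131A862****",
    "FailAct": "CONTINUE",
    "MaxRetry": 5,
    "RetryInterval": 200,
    "Name": "my_shell_job",
    "Params": "ls -l",
    "ResourceList.1.Alias": "demo.jar",
    "ResourceList.1.Path": "oss://path/demo.jar",
}
query = urlencode(params)  # percent-encodes values, joins with '&'
print(query)
```

Note that values such as `ls -l` and `oss://path/demo.jar` must be percent-encoded on the wire, even though the sample request shows them unescaped for readability.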
Sample success responses
XML format
<RequestId>ECC2D0D1-B6D5-468D-B698-30E8805EB574</RequestId>
<Data>true</Data>
JSON format
{
"RequestId":"ECC2D0D1-B6D5-468D-B698-30E8805EB574",
"Data":true
}
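A caller typically checks the `Data` flag in the JSON response to confirm the modification succeeded. A minimal sketch, using the sample response above:

```python
import json

# Parse the sample JSON response and check the result flag.
raw = '{"RequestId":"ECC2D0D1-B6D5-468D-B698-30E8805EB574","Data":true}'
resp = json.loads(raw)

if resp["Data"]:
    print("job modified; request", resp["RequestId"])
```

`RequestId` is useful to keep in logs when opening a support ticket about a failed call.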