Parameter | Description |
Deployment Type | Select PYTHON. |
Deployment Mode | Select Stream Mode or Batch Mode. |
Deployment Name | Enter the name of the deployment that you want to create. |
Engine Version | The engine version of the deployment. For more information about engine versions, see Engine version and Lifecycle policies. We recommend that you use a recommended version or a stable version. The version states are defined as follows:
Recommended: the latest minor version of the latest major version.
Stable: the latest minor version of a major version that is still in the service period of the product. Defects in earlier versions are fixed in these versions.
Normal: other minor versions that are still in the service period of the product.
Deprecated: versions that have exceeded the service period of the product.
Note In VVR 3.0.3 and later, VVP allows you to run Python deployments that use different engine versions at the same time. VVR 3.0.3 uses Flink 1.12. If the engine version of your deployment is Flink 1.12 or earlier, update the engine version based on the version that your deployment uses:
Flink 1.12: Stop and then restart your deployment. The system automatically updates the engine version of the deployment to vvr-3.0.3-flink-1.12.
Flink 1.11 or Flink 1.10: Manually update the engine version of your deployment to vvr-3.0.3-flink-1.12 or vvr-4.0.8-flink-1.13, and then restart the deployment. Otherwise, a timeout error occurs when you start the deployment.
|
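The update rule in the preceding note can be summarized in a short sketch. This is illustrative only, assuming the behavior described above; the function name is made up and is not part of VVP:

```python
# Hypothetical sketch of the engine-version update rule described above.
# Flink 1.12 deployments are updated automatically on restart; Flink 1.10
# and 1.11 deployments must be updated manually before they are restarted.
def update_action(flink_version: str) -> str:
    """Return the action required for a deployment on the given Flink version."""
    if flink_version == "1.12":
        # Stop and restart; the system updates the engine automatically.
        return "restart: auto-update to vvr-3.0.3-flink-1.12"
    if flink_version in ("1.10", "1.11"):
        # Manually select a supported engine version, then restart.
        return "manual: update to vvr-3.0.3-flink-1.12 or vvr-4.0.8-flink-1.13"
    return "no action required"

print(update_action("1.12"))
print(update_action("1.11"))
```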
Python Uri | The Uniform Resource Identifier (URI) that is used to access the Python deployment file that you want to upload. The file must be a .py file or a .zip file. |
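Only the two extensions mentioned above are accepted. A minimal sketch of that check, with made-up bucket paths for illustration:

```python
# Minimal sketch (not part of VVP): check that an uploaded deployment file
# has one of the two supported extensions, .py or .zip.
def is_supported_deployment_file(uri: str) -> bool:
    return uri.endswith((".py", ".zip"))

print(is_supported_deployment_file("oss://my-bucket/word_count.py"))   # True
print(is_supported_deployment_file("oss://my-bucket/word_count.jar"))  # False
```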
Entry Module | The entry point of the program. If the Python deployment file is a .py file, you do not need to configure this parameter. If the Python deployment file is a .zip file, you must configure this parameter. For example, you can enter example.word_count in the Entry Module field. |
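The entry module uses standard Python module notation, so the name corresponds to a file path inside the .zip file. A hypothetical helper shows the mapping; it is for illustration only:

```python
# Hypothetical helper: map an entry-module name (Python dotted notation) to
# the file path it refers to inside the uploaded .zip file.
def module_to_path(entry_module: str) -> str:
    return entry_module.replace(".", "/") + ".py"

print(module_to_path("example.word_count"))  # example/word_count.py
```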
Entry Point Main Arguments | The arguments that are passed to the entry point when the deployment starts. |
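Inside the entry module, these arguments can be read like ordinary command-line arguments. A sketch using the standard argparse module; the --input and --output names and the OSS paths are made up for this example:

```python
import argparse

# Hypothetical example: parse the Entry Point Main Arguments inside the
# deployment's entry module. The argument names are illustrative only.
parser = argparse.ArgumentParser()
parser.add_argument("--input", required=True)
parser.add_argument("--output", required=True)

# In a real deployment, parse_args() would read sys.argv; a literal list is
# used here so the example is self-contained.
args = parser.parse_args(["--input", "oss://bucket/in", "--output", "oss://bucket/out"])
print(args.input)
print(args.output)
```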
Python Libraries | A third-party Python package. The third-party Python package that you uploaded is added to PYTHONPATH of the Python worker process so that the package can be directly accessed in Python user-defined functions (UDFs). For more information about how to use third-party Python packages, see Use a third-party Python package. |
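"Added to PYTHONPATH" means the package directory is placed on the interpreter's import path, so the package can be imported directly inside a UDF. The following self-contained sketch simulates that with sys.path; the package name mylib is made up:

```python
import os
import sys
import tempfile

# Build a tiny throwaway package to stand in for an uploaded third-party
# package. The package name "mylib" is made up for this example.
pkg_root = tempfile.mkdtemp()
os.makedirs(os.path.join(pkg_root, "mylib"))
with open(os.path.join(pkg_root, "mylib", "__init__.py"), "w") as f:
    f.write("def greet():\n    return 'hello from mylib'\n")

# VVP does the equivalent of this step for packages uploaded via Python
# Libraries: the directory is prepended to the worker's PYTHONPATH.
sys.path.insert(0, pkg_root)

import mylib  # now importable, as it would be inside a Python UDF
print(mylib.greet())
```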
Python Archives | Archive files. Only ZIP-formatted files, such as .zip, .jar, .whl, and .egg files, are supported. Archive files are decompressed to the working directory of the Python worker process. For example, if the archive file is named mydata.zip, you can write the following code in a Python UDF to access it:
def map():
    with open("mydata.zip/mydata/data.txt") as f:
        ...
For more information about Python archives, see Use a custom Python virtual environment and Use data files. |
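The decompression behavior described above can be simulated end to end with the standard zipfile module. This sketch builds a mydata.zip archive in memory, extracts it into a directory named after the archive (as Flink does in the worker's working directory), and then opens the file exactly as the UDF above would:

```python
import io
import os
import tempfile
import zipfile

# Build the mydata.zip archive in memory with one data file inside it.
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as z:
    z.writestr("mydata/data.txt", "some data")

# Stand-in for the Python worker's working directory.
workdir = tempfile.mkdtemp()

# Flink extracts the archive into a directory named after the archive file,
# which is why the UDF path starts with "mydata.zip/".
buf.seek(0)
with zipfile.ZipFile(buf) as z:
    z.extractall(os.path.join(workdir, "mydata.zip"))

# This mirrors the open() call shown in the UDF example above.
with open(os.path.join(workdir, "mydata.zip", "mydata", "data.txt")) as f:
    content = f.read()
print(content)
```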
Additional Dependencies | The files that your deployment requires, such as a Python deployment file or a data file. For more information about Python dependencies, see Manage Python dependencies. By default, the dependency files that you upload are downloaded to the /flink/usrlib/ directory of the node on which the deployment runs.
Note Only per-job clusters support the Additional Dependencies parameter. Session clusters do not support this parameter. |
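Because uploaded dependency files land in /flink/usrlib/ by default, code in the deployment can locate them by joining that directory with the file name. A hypothetical helper, with a made-up file name:

```python
import os

# /flink/usrlib is the default download directory stated above; the helper
# and the file name "config.json" are made up for this example.
USRLIB = "/flink/usrlib"

def dependency_path(filename: str) -> str:
    """Return the on-node path of an uploaded dependency file."""
    return os.path.join(USRLIB, filename)

print(dependency_path("config.json"))  # /flink/usrlib/config.json
```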
Deployment Target | Select the desired queue or session cluster from the drop-down list. For more information, see Manage queues and Step 1: Create a session cluster.
Note Session clusters are suitable for development and test environments. We recommend that you do not use session clusters in the production environment. Metrics of deployments that are deployed in session clusters cannot be displayed, and session clusters do not support the monitoring and alerting feature or the Autopilot feature. For more information, see Debug a deployment. |
Description | Optional. Enter a description for the deployment. |
Label | After you specify labels for a deployment, you can search for the deployment by label key and label value on the Deployments page. You can specify a maximum of three labels for a deployment. |
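Label-based search matches deployments whose label key and value both equal the search terms. A small sketch of that filter over made-up deployment records:

```python
# Illustrative data only: deployment records with label key-value pairs.
deployments = [
    {"name": "job-a", "labels": {"env": "prod", "team": "data"}},
    {"name": "job-b", "labels": {"env": "test"}},
]

def find_by_label(items, key, value):
    """Return the names of deployments whose labels match key=value."""
    return [d["name"] for d in items if d["labels"].get(key) == value]

print(find_by_label(deployments, "env", "prod"))  # ['job-a']
```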
More Setting | If you turn on the switch, you must configure the following parameters:
Kerberos Name: Select a Hive cluster that supports Kerberos authentication from the drop-down list. For more information about how to create such a cluster, see Register a Hive cluster that supports Kerberos authentication.
principal: The Kerberos principal, which can be a user or a service. A principal uniquely identifies an identity in the Kerberos authentication system.
|