You can use the EASCMD client to manage services in Elastic Algorithm Service (EAS) of Machine Learning Platform for AI (PAI). This topic describes the commands that are supported by the EASCMD client. You can run the commands to upload files, create services, modify service configurations, switch between service versions, delete services, view services, view service details, view service processes, and enable a direct connection to a resource group.

Operation commands

The EASCMD command-line tool provides both service-related and resource group-related operations. The following sections describe these operations.

Upload a file

  • Feature

    EAS provides an Object Storage Service (OSS) bucket for each user. You can run the upload command on the EASCMD client to upload a model or processor package and obtain the OSS URL of the uploaded package.

  • Syntax
    eascmd upload <filename> [--inner]
  • Parameters
    • <filename>: the name of the file to be uploaded.
    • [--inner]: If you run the upload command in Data Science Workshop (DSW), you must add this parameter.
  • Example
    In DSW, upload the savedmodel_example/savedmodel_example.tar.gz package of the SavedModel model to OSS.
    eascmd upload savedmodel_example/savedmodel_example.tar.gz --inner
    The system displays information similar to the following output:
    [OK] oss endpoint:    [http://oss-cn-shanghai.aliyuncs.com]
    [OK] oss target path: [oss://eas-model-shanghai/182848887922****/savedmodel_example/savedmodel_example.tar.gz]
    Succeed: Total num: 1, size: 33,013. OK num: 1(upload 1 files).
    In the preceding example, oss://eas-model-shanghai/182848887922****/savedmodel_example/savedmodel_example.tar.gz is the OSS URL of the package. The OSS path can be used to deploy services.

Create a service

  • Feature

    You can run the create command to create a service. When you create a service, you must provide the HTTP or OSS URL of the required resource, which can be a model or processor package. You can upload the resource to OSS and obtain the OSS URL of the uploaded package.

  • Syntax
    eascmd create <service_desc_json>
  • Parameters
    service_desc_json: a JSON file that describes the service information, such as the path in which the input model package is stored and the resource specifications. The following information provides an example of the JSON file:
    {
      "name": "mnist_saved_model_example",
      "model_path": "http://eas-data.oss-cn-shanghai.aliyuncs.com/models%2Fmnist_saved_model.tar.gz",
      "processor": "tensorflow_cpu_1.12",
      "metadata": {
        "instance": 1,
        "cpu": 1
      }
    }
    The following table describes the parameters in a JSON file that describes the service information.
    Parameter Required Description
    name Yes The name of the service, which must be unique in a region.
    token No The token that is used for authentication. If this parameter is not specified, the system automatically generates a token.
    model_path Yes The model_path and processor_path parameters specify the paths in which the input model package and the input processor package are stored, respectively. You can set these parameters in the following formats:
    • HTTP URL: If an HTTP URL is used, the input package must be in the TAR.GZ, TAR, BZ2, or ZIP format.
    • OSS path: You can specify the path of a specific object or folder in OSS. You must also set the oss_endpoint parameter. Example:
      "model_path":"oss://wowei-beijing-tiyan/alink/",
      "oss_endpoint":"oss-cn-beijing.aliyuncs.com",
    • On-premises path: If you want to run the test command to perform on-premises debugging, you can use an on-premises path.
    oss_endpoint No The OSS endpoint, for example, oss-cn-beijing.aliyuncs.com. For more information about supported endpoints, see Regions and endpoints. If you set the model_path parameter to an OSS path, you must set this parameter.
    model_entry No The entry file of the model package, which can be an arbitrary file. If this parameter is not specified, the file name in the value of the model_path parameter is used. The path of the main file is passed to the Load() function in the processor.
    model_config No The model configuration. The value is of the TEXT type. The value of this parameter is passed to the second parameter of the LoadWithConfig() function in the processor.
    processor No

    If you use a built-in processor, you can directly specify the processor code here. For more information about the processor codes that are supported by the EASCMD client, see Built-in processors.

    If you use a custom processor, you do not need to set this parameter. Instead, you need only to set the processor_path, processor_entry, processor_mainclass, and processor_type parameters.

    processor_path No The path of the processor package. For more information, see the description of the model_path parameter.
    processor_entry No The main file of the processor package, for example, libprocessor.so or app.py. The main file contains the implementations of the initialize() and process() functions that are required for prediction.

    You must set this parameter if the processor_type parameter is set to cpp or python.

    processor_mainclass No The main class in the JAR package of the processor, for example, com.aliyun.TestProcessor.

    You must set this parameter if the processor_type parameter is set to java.

    processor_type No The language that is used to implement the processor. Valid values:
    • cpp
    • java
    • python
    metadata Yes The metadata of the service. For more information, see Table 1.
    Table 1. Parameters in the metadata
    Parameter Required Description
    Regular parameters:
    workers No The number of threads that are used to concurrently process requests on each instance. Default value: 5.
    instance Yes The number of instances that are required to run the service.
    cpu No The number of CPUs that are required by each instance.
    memory No The size of memory that is required by each instance. The value must be an integer. Unit: MB. For example, "memory":4096 indicates that each instance requires 4 GB of memory.

    You can set this parameter only if you specify the ID of a dedicated resource group as the value of the resource parameter. For a shared resource group, the number of CPUs and the memory size must be at a ratio of one CPU to 4 GB of memory. For example, if each instance requires one CPU, the memory parameter is set to 4000.

    gpu No The number of GPUs that are required by each instance.
    gpu_memory No The size of video memory that is required by each instance. The value must be an integer. Unit: GB.
    PAI can allocate GPU resources by video memory to instances. In this case, the video memory of a GPU can be allocated to multiple instances. If the video memory is allocated to multiple instances, set the gpu parameter to 0. If you set the gpu parameter to 1, each instance occupies a GPU and the gpu_memory parameter does not take effect.
    Notice PAI does not enable the strict isolation of video memory. You must control the video memory usage of each instance. To prevent the out of memory (OOM) error, make sure that the video memory used by each instance does not exceed the requested video memory.
    qos No The quality of service (QoS) level of each instance. You can leave this parameter empty or set it to BestEffort. If you set the qos parameter to BestEffort, CPU resources are shared among instances. The qos parameter must be set together with the gpu_memory parameter. In this case, resources are allocated by video memory rather than by the number of CPUs, and the cpu parameter specifies the maximum number of CPUs that can be allocated to each instance.
    Note If you set the gpu_memory parameter to a value other than 0, GPU resources are shared. In this case, you can set the qos parameter to BestEffort. If you set the gpu_memory parameter to 0 and the qos parameter to BestEffort, the system may allocate more resources than the requested resources to instances and specific instances may preempt the resources. In this case, the service performance and stability may deteriorate. For a configuration that uses the GPU memory sharing parameters, see the sample metadata configuration at the end of this section.
    resource No The ID of the resource group. Set this parameter based on the following rules:
    • If the service is deployed in a shared resource group, you can ignore this parameter. In this case, the service is billed on a pay-as-you-go basis.
    • If the service is deployed in a dedicated resource group, set this parameter to the ID of the resource group. Example: eas-r-6dbzve8ip0xnzte5rp.
    Advanced parameters (modify them with caution):
    rpc.batching No Specifies whether batch processing is enabled on the server to improve the speed of a GPU-based model. Valid values:
    • false: Batch processing is disabled on the server. This is the default value.
    • true: Batch processing is enabled on the server.
    rpc.keepalive No The maximum processing time for a single request. If the request processing time exceeds this value, the server returns the timeout error code 408 and closes the connection. Default value: 5000. Unit: milliseconds.
    rpc.io_threads No The number of threads that are used to concurrently process the input and output network data on each instance. Default value: 4.
    rpc.max_batch_size No The maximum size of each batch. Default value: 16. This parameter takes effect only if the rpc.batching parameter is set to true.
    rpc.max_batch_timeout No The maximum timeout period of each batch. Default value: 50. Unit: milliseconds. This parameter takes effect only if the rpc.batching parameter is set to true.
    rpc.max_queue_size No The size of the request queue. Default value: 64. When the queue is full, the server returns the error code 450 and closes the connection. To prevent the server from being overloaded, the request queue instructs the client to send requests to other instances when the queue is full. If the response time is too long, set this parameter to a smaller value to prevent a request from timing out.
    rpc.worker_threads No The number of threads that are used to concurrently process requests on each instance. This parameter works in the same way as the workers parameter. Default value: 5.
  • Example (In this example, the JSON file that describes the service information is named pmml.json.)
    eascmd create pmml.json
    The system displays information similar to the following output:
    [RequestId]: 1651567F-8F8D-4A2B-933D-F8D3E2DD****
    +-------------------+----------------------------------------------------------------------------+
    | Intranet Endpoint | http://pai-eas-vpc.cn-shanghai.aliyuncs.com/api/predict/savedmodel_example |
    |             Token | YjQxZDYzZTBiZTZjMzQ5ZmE0MzczZjIxMGZiNzZmMDBkY2VjMDg4****                   |
    +-------------------+----------------------------------------------------------------------------+
    [OK] Creating api gateway
    [OK] Building image [registry-vpc.cn-shanghai.aliyuncs.com/eas/savedmodel_example_cn-shanghai:v0.0.1-20190224001315]
    [OK] Pushing image [registry-vpc.cn-shanghai.aliyuncs.com/eas/savedmodel_example_cn-shanghai:v0.0.1-20190224001315]
    [OK] Waiting [Total: 1, Pending: 1, Running: 0]
    [OK] Waiting [Total: 1, Pending: 1, Running: 0]
    [OK] Service is running
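  • Sample metadata configuration
    The following JSON file is a minimal sketch that shows how the metadata parameters in Table 1 can be combined to deploy a service in a dedicated resource group with GPU memory sharing. The service name, OSS path, and processor code are hypothetical placeholders; replace them with your own values and see Built-in processors for the valid processor codes.
    {
      "name": "gpu_memory_sharing_example",
      "model_path": "oss://examplebucket/models/model.tar.gz",
      "oss_endpoint": "oss-cn-shanghai.aliyuncs.com",
      "processor": "tensorflow_gpu_1.12",
      "metadata": {
        "instance": 2,
        "cpu": 4,
        "memory": 16000,
        "gpu": 0,
        "gpu_memory": 4,
        "qos": "BestEffort",
        "resource": "eas-r-6dbzve8ip0xnzte5rp"
      }
    }
    In this sketch, each instance is allocated 4 GB of video memory instead of an exclusive GPU because the gpu parameter is set to 0 and the gpu_memory parameter is set to 4, as described in Table 1.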

Modify service configurations

  • Feature

    You can use the -D option in the modify command to modify the metadata, such as the number of instances and the CPU quota.

  • Syntax
    eascmd modify <service_name> -Dmetadata.<attr_name>=<attr_value>
    You can set multiple parameters at a time. For more information, see the following example.
  • Parameters
    • <service_name>: the name of the service.
    • <attr_name>: the name of the parameter.
    • <attr_value>: the value of the parameter.
  • Example
    Set the number of instances to 10 and the resource quota for each instance to 5 CPU cores and 20 GB of memory.
    eascmd modify service_test -Dmetadata.instance=10 -Dmetadata.cpu=5
    When you scale in or out resources, you can modify only the metadata.instance parameter of the service. If the specified number of instances is greater than the number of existing instances for the service, the system starts new instances to reach the specified number. The execution of existing instances is not affected. If the specified number of instances is smaller than the number of existing instances, the system stops specific instances to reach the specified number. The execution of the other instances is not affected.
    Note Different from a full update operation, an update operation that modifies only the number of instances does not trigger a rolling update for the service.

Modify a service

  • Feature

    You can run the modify command to modify a deployed service.

  • Syntax
    eascmd modify <service_name> -s <service_desc_json>
  • Parameters
    • <service_name>: the name of the service.
    • <service_desc_json>: the description file of the service.
      Note When you modify a service, you need to specify only the parameters that you want to modify in the description file of the service. The other parameters remain unchanged. If you modify the model file information and processor information, a new service version is added.
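  • Example
    The following example is a minimal sketch of modifying only the metadata of a deployed service. The service name service_test and the file name update.json are hypothetical placeholders. Because the model file information and processor information are not modified, no new service version is added.
    eascmd modify service_test -s update.json
    The update.json file contains only the parameters to be modified:
    {
      "metadata": {
        "instance": 3
      }
    }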

Add a version for a service

  • Feature

    You can run the modify command to add a version for a deployed service.

  • Syntax
    eascmd modify <service_name> -s <service_desc_json>
  • Parameters
    • <service_name>: the name of the service.
    • <service_desc_json>: the description file of the service.
      Note To add a version for a service, you must specify the model file information and processor information in the description file of the service.
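  • Example
    The following example is a minimal sketch of adding a version for the mnist_saved_model_example service by specifying the model file information and the processor information in a file named new_version.json. The file name and the model path are hypothetical placeholders.
    eascmd modify mnist_saved_model_example -s new_version.json
    The new_version.json file specifies the model file information and the processor information:
    {
      "model_path": "oss://examplebucket/models/mnist_saved_model_v2.tar.gz",
      "oss_endpoint": "oss-cn-shanghai.aliyuncs.com",
      "processor": "tensorflow_cpu_1.12"
    }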

Stop a service

  • Feature

    You can run the stop command to stop a service that is running.

  • Syntax
    eascmd stop <service_name>
  • Parameters

    <service_name>: the name of the service to be stopped.
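  • Example
    For example, to stop the mnist_saved_model_example service that is used in the preceding examples, run the following command:
    eascmd stop mnist_saved_model_example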

Start a service

  • Feature

    You can run the start command to restart a stopped service.

  • Syntax
    eascmd start <service_name>
  • Parameters

    <service_name>: the name of the service to be started.
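  • Example
    For example, to restart the stopped mnist_saved_model_example service, run the following command:
    eascmd start mnist_saved_model_example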

Delete a service

  • Feature

    You can run the delete command to delete a service. However, you can delete only services in the current region.

  • Syntax
    eascmd delete <service_name>
  • Parameters

    <service_name>: the name of the service to be deleted.

  • Example
    In this example, the service name is savedmodel_example. To delete the service, perform the following steps:
    1. Run the following command to delete the service:
      eascmd delete savedmodel_example
      The system displays information similar to the following output:
      Are you sure to delete the service [savedmodel_example] in [cn-shanghai]? [Y/n]
    2. Press the Y key. The system displays information similar to the following output:
      [RequestId]: 1651567F-8F8D-4A2B-933D-F8D3E2DD****
      [OK] Service [savedmodel_example] in region [cn-shanghai] is terminating
      [OK] Service is terminating
      [OK] Service is terminating
      [OK] Service was deleted successfully

Implement a blue-green release

  • Feature

    You can run the create -r command to create an associated service for an existing service. Then, you can run the release command to adjust the traffic proportion of the services as needed at any time. This way, you can implement a blue-green release. The value of the name parameter in the JSON file that describes the information about the new service must be the same as that of the existing service. You can set the other parameters based on your requirements. The system automatically adds a random suffix to the existing service name to generate a new service name.

    If you delete the new service, all traffic goes to the existing service. If you delete the existing service, all traffic goes to the new service. The endpoint of the existing service before a blue-green release serves as the traffic entrance for all subsequent blue-green release iterations. No matter how many subsequent blue-green release iterations are conducted, the endpoint, such as {domain}/api/predict/savedmodel_example in the following example, remains unchanged. You do not need to modify the client-side code.
    Note The blue-green release is not applicable to scenarios where a direct connection is used.
  • Syntax
    1. Create an associated service.
      eascmd create <service_desc_json> -r
    2. Adjust the traffic proportion of the services in a blue-green release.
      eascmd release <service_name> -w <weight>
  • Parameters
    • <service_desc_json>: the JSON file that describes the information about the new service.
    • <service_name>: the name of the new service.
    • <weight>: the proportion of traffic that goes to the new service.
  • Example (In this example, the JSON file that describes the service information is named pmml.json.)
    1. Create an associated service.
      eascmd create pmml.json -r
      The system displays information similar to the following output:
      [RequestId]: 1651567F-8F8D-4A2B-933D-F8D3E2DD****
      +-------------------+----------------------------------------------------------------------------+
      | Intranet Endpoint | http://xxx.cn-shanghai.pai-eas.aliyuncs.com/api/predict/savedmodel_example_9c16a222 |
      |             Token | YjQxZDYzZTBiZTZjMzQ5ZmE0MzczZjIxMGZiNzZmMDBkY2VjMDg4****                   |
      +-------------------+----------------------------------------------------------------------------+
      [OK] Building image [registry-vpc.cn-shanghai.aliyuncs.com/eas/savedmodel_example_9c16a222_cn-shanghai:v0.0.1-20190224001315]
      [OK] Pushing image [registry-vpc.cn-shanghai.aliyuncs.com/eas/savedmodel_example_9c16a222_cn-shanghai:v0.0.1-20190224001315]
      [OK] Waiting [Total: 1, Pending: 1, Running: 0]
      [OK] Waiting [Total: 1, Pending: 1, Running: 0]
      [OK] Service is running
      The preceding output indicates that a service named savedmodel_example_9c16a222 is created. The new service and the existing service have independent traffic entrances and can be separately called. You can independently test the new service without affecting the online execution of the existing service. After the test is complete, you can adjust the traffic proportion of the services.
    2. Adjust the traffic proportion of the services in a blue-green release.
      eascmd release savedmodel_example_9c16a222 -w 20

      The preceding command indicates that 20% of the traffic goes to the new service savedmodel_example_9c16a222. The remaining 80% of the traffic goes to the existing service savedmodel_example. In this case, the independent endpoint {domain}/api/predict/savedmodel_example_9c16a222 of the new service is disabled. 20% of the traffic that flows into the endpoint {domain}/api/predict/savedmodel_example of the existing service goes to the new service and 80% to the existing service.

      The system displays information similar to the following output:
      Confirmed to release this service at weight [20%]? [Y/n]
    3. Press the Y key and then the ENTER key. The system displays information similar to the following output:
      [RequestId]: 9258EEDE-6F99-4C3B-841B-B6E9774F****
      [OK] Service [savedmodel_example_9c16a222] is weighted to 20% now
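    4. Optional. After you verify that the new service runs stably, route all traffic to it. The following command is a sketch that sets the weight of the new service to 100%, based on the weight semantics described in the preceding step:
      eascmd release savedmodel_example_9c16a222 -w 100
      After all traffic goes to the new service, you can delete the existing service. As described in the Feature section, all traffic then goes to the new service and the traffic entrance remains unchanged.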

Switch between service versions

  • Feature

    You can run the desc command to view the current version and the latest version of a service. In addition, you can run the version command to switch the service to a specified earlier version.

  • Syntax
    eascmd version <service_name> <version_id>
  • Parameters
    • <service_name>: the name of the service.
    • <version_id>: the ID of the version to which you want to switch.
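  • Example
    The following example is a sketch that assumes the mnist_saved_model_example service has been updated and you want to roll back to version 1. You can run the desc command first to view the values of CurrentVersion and LatestVersion.
    eascmd version mnist_saved_model_example 1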

View the service list

  • Feature

    You can run the list or ls command to view the services that have been deployed within the current account.

  • Syntax
    eascmd ls
  • Parameters

    N/A

  • Example
    eascmd ls
    The system displays information similar to the following output:
    [RequestId]: 83945D4E-ED3E-4D35-A989-831E36BB****
    +---------------------------+-------------+----------+---------------------+---------------------+---------+--------+----------------------------------------+
    |        SERVICENAME        |   REGION    | INSTANCE |     CREATETIME      |     UPDATETIME      | STATUS  | WEIGHT |              SERVICEPATH               |
    +---------------------------+-------------+----------+---------------------+---------------------+---------+--------+----------------------------------------+
    | mnist_saved_model_example | cn-shanghai |        1 | 2019-02-21 16:35:41 | 2019-02-21 16:35:41 | Running |      0 | /api/predict/mnist_saved_model_example |
    +---------------------------+-------------+----------+---------------------+---------------------+---------+--------+----------------------------------------+

View the details of a service

  • Feature

    You can run the desc command to view the details of a deployed service.

  • Syntax
    eascmd desc <service_name>
  • Parameters

    <service_name>: the name of the service.

  • Example
    eascmd desc mnist_saved_model_example
    The system displays information similar to the following output:
    +---------------------+--------------------------------------------------------------------------------------------------------------+
    |              Status | Running                                                                                                      |
    |         ServiceName | mnist_saved_model_example                                                                                    |
    |              Region | cn-shanghai                                                                                                  |
    |          CreateTime | 2019-02-21 16:35:41                                                                                          |
    |          UpdateTime | 2019-02-21 16:35:41                                                                                          |
    |         AccessToken |                                                                                                              |
    |        PrivateToken | ZWNjMTNkNDExMmExNjZkYTM4YWQ5YTY0YmFjNjk3YWYzZTRjM2Y2****                                                     |
    |       TotalInstance | 1                                                                                                            |
    |     RunningInstance | 1                                                                                                            |
    |     PendingInstance | 0                                                                                                            |
    |                 CPU | 1                                                                                                            |
    |                 GPU | 0                                                                                                            |
    |              Memory | 1000M                                                                                                        |
    |               Image | registry-vpc.cn-shanghai.aliyuncs.com/eas/mnist_saved_model_example_cn-shanghai:v0.0.1-20190221163541        |
    |              Weight | 0                                                                                                            |
    |       LatestVersion | 1                                                                                                            |
    |      CurrentVersion | 1                                                                                                            |
    |             Message | Service start successfully                                                                                   |
    |       APIGatewayUrl | 1c3b37ea83c047efa0dc6df0cacb****-cn-shanghai.alicloudapi.com/EAPI_182848887922****_mnist_saved_model_example |
    |    APIGatewayAppKey | 2564****                                                                                                     |
    | APIGatewayAppSecret | 12562a7b8858bbba2c2e9c4517ff****                                                                             |
    |    IntranetEndpoint | http://pai-eas-vpc.cn-shanghai.aliyuncs.com/api/predict/mnist_saved_model_example                            |
    |       ServiceConfig | {                                                                                                            |
    |                     |   "generate_token": "false",                                                                                 |
    |                     |   "metadata": {                                                                                              |
    |                     |     "cpu": 1,                                                                                                |
    |                     |     "instance": 1,                                                                                           |
    |                     |     "region": "cn-shanghai"                                                                                  |
    |                     |   },                                                                                                         |
    |                     |   "model_path":                                                                                              |
    |                     | "http://eas-data.oss-cn-shanghai.aliyuncs.com/models%2Fmnist_saved_model.tar.gz",                            |
    |                     |   "name":                                                                                                    |
    |                     | "mnist_saved_model_example",                                                                                 |
    |                     |   "processor":                                                                                               |
    |                     | "tensorflow_cpu"                                                                                             |
    |                     | }                                                                                                            |
    +---------------------+--------------------------------------------------------------------------------------------------------------+

View service processes

  • Feature

    You can run the showworkers or w command to view the status of the processes that are running in a service.

  • Syntax
    eascmd w <service_name>
  • Parameters

    <service_name>: the name of the service.

  • Example
    eascmd w mnist_saved_model_example
    The system displays information similar to the following output:
    [RequestId]: B23BA8AC-CDEC-5704-935F-3CEC6606****
    +-------------------------------+-------------+--------------+---------------------+----------+---------+-------+--------+----------------------------------------------------------------------------------------------------------+
    |         INSTANCENAME          |   INNERIP   |    HOSTIP    |       STARTAT       | RESTARTS | STATUS  | READY | REASON |                                                LASTSTATE                                                 |
    +-------------------------------+-------------+--------------+---------------------+----------+---------+-------+--------+----------------------------------------------------------------------------------------------------------+
    | network-test-69cf5dd6c7-5**** | 10.240.XX.XX | 10.224.XX.XX | 2021-09-27 15:04:22 |        1 | Running | [1/1] |        | {"exitCode":247,"finishedAt":"2021-09-27T07:04:21Z","reason":"Error","startedAt":"2021-09-27T05:36:56Z"} |
    +-------------------------------+-------------+--------------+---------------------+----------+---------+-------+--------+----------------------------------------------------------------------------------------------------------+
    The following table describes the parameters in the output.
    Parameter Description
    INSTANCENAME The name of the instance that runs the service.
    INNERIP The internal IP address of the instance.
    HOSTIP The IP address of the node to which the instance belongs.
    STARTAT The start time of the instance.
    RESTARTS The number of times that the instance has restarted. The instance automatically restarts each time an OOM error or a crash caused by the code occurs, and the value of the RESTARTS parameter increases by one each time the instance restarts.
    STATUS The status of the instance. A value of Pending indicates that the instance is waiting for resources. If the instance remains in the Pending state for a long time, the instance cannot be used to run the service because of insufficient resources.
    READY The status of the containers in the instance. The value is in the format of [Number of containers that are ready/Total number of containers in the instance].

    If the value of the READY parameter is [0/1], the container is starting or the container failed the health check. In this case, traffic does not go to the instance.

    If the parameter value is [0/1] for all instances of a service, the service is unavailable and a 5xx error code is returned when the service is requested.

    REASON The description of the instance status.
    LASTSTATE The status of the instance before its last restart. If the value of reason in the LASTSTATE parameter is OOMKilled, an OOM error occurred on the instance.

Delete or restart one or more instances

  • Feature

    You can run the deleteworkers or dw command to delete one or more instances for a service. After one or more instances are deleted, the system automatically starts new instances to reach the specified number of allocated instances. Therefore, you can also run this command to restart instances.

  • Syntax
    eascmd dw <service_name> <instance_names>
  • Parameters
    • <service_name>: the name of the service.
    • <instance_names>: the names of the instances that you want to delete from the service. You can specify multiple instance names. Separate multiple instance names with commas (,).
  • Example
    eascmd dw mnist_saved_model_example mnist-saved-model-example-69cf5dd6c7-5****
    The system displays information similar to the following output:
    Are you sure to delete the instances [mnist-saved-model-example-69cf5dd6c7-5****] of service [mnist_saved_model_example] in [cn-shenzhen]? [Y/n]
    Press the Y key. The system displays information similar to the following output:
    [RequestId]: 564C8F56-D97A-555E-9E0B-22BE140A****
    [OK] Instance(s) [mnist-saved-model-example-69cf5dd6c7-5****] for service [mnist_saved_model_example] in region [cn-shenzhen] was deleted successfully

View resource groups

  • Feature

    You can run the resource list or resource ls command to view the resource groups within the current account.

  • Syntax
    eascmd resource ls
  • Parameters

    N/A

  • Example
    eascmd resource ls
    The system displays information similar to the following output:
    +--------------------------+-------------+---------------+----------+----------+------------------+---------------------+---------------+
    |       RESOURCENAME       |  CLUSTERID  | INSTANCECOUNT | GPUCOUNT | CPUCOUNT |     OWNERUID     |     CREATETIME      |    STATUS     |
    +--------------------------+-------------+---------------+----------+----------+------------------+---------------------+---------------+
    | eas-r-lzo32vrdbtukr7te3i | cn-shanghai |             1 |        0 |       16 | 182848887922**** | 2020-03-18 13:09:24 | ResourceReady |
    +--------------------------+-------------+---------------+----------+----------+------------------+---------------------+---------------+

View the details of a resource group

  • Feature

    You can run the resource desc command to view the details of a resource group.

  • Syntax
    eascmd resource desc <resource_id>
  • Parameters

    <resource_id>: the ID of the resource group that you want to view. You can obtain the ID from the RESOURCENAME parameter in the output of the resource list or resource ls command.

  • Example
    eascmd -c ~/.eas/shanghai2.conf resource desc eas-r-lzo32vrdbtukr7te3i
    The system displays information similar to the following output:
    +---------+-----------------+----------------------------------------------------------+
    |   Basic | ResourceName    | eas-r-lzo32vrdbtukr7te3i                                 |
    |         | Region          | cn-shanghai                                              |
    |         | CpuCount        | 16                                                       |
    |         | GpuCount        | 0                                                        |
    |         | instanceCount   | 1                                                        |
    |         | CreateTime      | 2020-03-18 13:09:24                                      |
    |         | LastStatus      | ResourceReady                                            |
    |         | Message         | Resource is ready                                        |
    |         | RoleArn         | acs:ram::xxx:role/AliyunPAIAccessingENIRole              |
    | Network | VpcId           | vpc-uf6s9pv47nu03srne****                                |
    |         | VSwitchId       | vsw-uf6voq53e893k56ws****                                |
    |         | SecurityGroupId | sg-uf6c5twkfar8l06c****                                  |
    |         | DestinationCIDR |                                                          |
    |         | AuxVSwitchList  | []                                                       |
    +---------+-----------------+----------------------------------------------------------+

View the instances in a resource group

  • Feature

    You can run the resource list_instance or resource li command to view the instances in a resource group and the resource usage of each instance.

  • Syntax
    eascmd resource list_instance <resource_id>
  • Parameters

    <resource_id>: the ID of the resource group that you want to view. You can obtain the ID from the RESOURCENAME parameter in the output of the resource list or resource ls command.

  • Example
    eascmd resource li eas-r-lzo32vrdbtukr7te3i
    The system displays information similar to the following output:
    +------------------------------------+--------------+--------+----------------+----------------+-------------------+---------------------+----------------+------------+
    |            INSTANCENAME            |  INSTANCEIP  | STATUS | TOTAL/USED CPU | TOTAL/USED GPU | TOTAL/USED MEMORY |     CREATETIME      |  INSTANCETYPE  | CHARGETYPE |
    +------------------------------------+--------------+--------+----------------+----------------+-------------------+---------------------+----------------+------------+
    | cn-shanghai.i-uf6dj71ir6mh3gjmaz3a | 10.224.XX.XX | Ready  | 16/6           | 0/0            | 62240M/4200M      | 2020-03-18 13:09:34 | ecs.g6.4xlarge | PostPaid   |
    +------------------------------------+--------------+--------+----------------+----------------+-------------------+---------------------+----------------+------------+

Enable a direct connection to a resource group

  • Feature

    You can run the resource network command to enable a direct connection to a resource group. This way, you can establish a connection between the EAS virtual private cloud (VPC) and your VPC. You can call services in EAS from your VPC by establishing a direct connection to a software-based load balancer. You can also call internal network resources, such as ApsaraDB RDS and ApsaraDB for Redis, in your VPC by using a processor in EAS.

  • Syntax
    eascmd resource network <resource_id> -s <network_cfg.json>
  • Parameters
    • <resource_id>: the ID of the resource group for which you want to enable a direct connection. You can obtain the ID from the RESOURCENAME parameter in the output of the resource list or resource ls command.
    • <network_cfg.json>: the network configuration file. The following information shows the content format of the file.
      {
        "Action":"create",
        "VSwitchId": "vsw-8vbsunr5bkcbyxh94****",
        "SecurityGroupId": "sg-8vbhwowdxzx5fjcx****",
        "VSwitchIdList": ["vsw-8xbsunr5abcbyqh93****", "vsw-8xbs1y7gu6cxbvqzw****"],
        "DestinationCIDR": "192.XX.XX.XX/16"
      }
      The following table describes the parameters in the file.
      Parameter Description Required Default value
      Action Specifies whether to enable a direct connection for a resource group. Valid values:
      • create: enables a direct connection.
      • delete: disables a direct connection. In this case, you do not need to set the other parameters.
      Yes N/A
      VSwitchId The ID of the primary vSwitch for which you want to enable a direct connection. EAS automatically creates an elastic network interface (ENI) in the vSwitch. Do not delete the ENI. Otherwise, network connections may fail. Yes N/A
      SecurityGroupId The ID of the security group to which the Elastic Compute Service (ECS) instances where the clients are deployed belong.
      Note The ECS instances must belong to this security group. Otherwise, network connections may fail.
      Yes N/A
      VSwitchIdList The IDs of the secondary vSwitches for which you want to enable a direct connection. The secondary vSwitches must belong to the same VPC as the primary vSwitch. The system automatically adds the classless inter-domain routing (CIDR) blocks of the secondary vSwitches to the routing rules in EAS. No Empty array ([])
      DestinationCIDR The CIDR block of the clients for which you want to enable a direct connection. The clients must belong to the same VPC as the primary vSwitch. The system automatically adds the CIDR block to the routing rules in EAS. No Empty string ("")
      Note The VSwitchIdList parameter works in the same way as the DestinationCIDR parameter. Both parameters are used to establish a connection between the EAS cluster and a specific CIDR block of your VPC. If you want to connect to multiple specified vSwitches, set the VSwitchIdList parameter. If you want to connect to a large CIDR block, such as the CIDR block of the entire VPC, set the DestinationCIDR parameter. We recommend that you do not use the 10.0.0.0/8, 10.224.0.0/16, or 10.240.0.0/16 CIDR block. Otherwise, network conflicts may occur. If you have other needs, you can submit a ticket.
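  • Example
    The following example is a minimal sketch that enables a direct connection for the eas-r-lzo32vrdbtukr7te3i resource group from the preceding examples. It assumes that the network configuration is stored in a file named network_cfg.json in the format described above.
    eascmd resource network eas-r-lzo32vrdbtukr7te3i -s network_cfg.json
    To disable the direct connection, you need only to set the Action parameter to delete in the configuration file:
    {
      "Action": "delete"
    }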

Prediction

You can use the HTTP URL that is generated when you create a service to call the service. The input and output formats of the prediction service are defined by the processor that you use. For more information, see the topics about constructing requests, such as Construct requests for services based on a universal processor, in the Model service call chapter.
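
The following curl command is a minimal sketch of how such a call might look, based on the intranet endpoint and token from the preceding desc example. It assumes that the service token is passed in the Authorization request header and that the request body follows the input format defined by the processor; adjust both to match your actual deployment.
  curl http://pai-eas-vpc.cn-shanghai.aliyuncs.com/api/predict/mnist_saved_model_example \
    -H 'Authorization: <service_token>' \
    --data '<request body in the format defined by the processor>'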