
Platform For AI: Secure encrypted inference service

Last Updated: Jan 26, 2026

EAS provides a secure encryption environment for deploying encrypted models and running inference on them, protecting both data and models throughout their lifecycle. This solution is suitable for high-security scenarios such as financial services and enterprise applications.

Workflow

This solution involves the following steps:

  1. Step 1: Prepare an encrypted model

    Encrypt the model in a local or trusted environment and upload the ciphertext to cloud storage. The decryption key is managed by KMS and controlled by the Trustee remote attestation service.

  2. Step 2: Deploy the encrypted model using PAI-EAS

    When you deploy an encrypted model, EAS connects to the Trustee service to verify the environment. After verification, it uses the KMS key to decrypt and mount the model as an EAS service.

  3. Step 3: Call the service for secure inference

    After deployment, connect to the service and send inference requests.

Step 1: Prepare an encrypted model

Before deploying a model to the cloud, encrypt it and upload it to cloud storage. The decryption key is managed by KMS and controlled by the remote attestation service. Perform model encryption in a local or trusted environment. This example uses the Qwen2.5-3B-Instruct LLM.

1. Prepare a model (optional)

Note

If you have your own model, you can skip this section and go to 2. Encrypt the model.

The Qwen2.5-3B-Instruct model requires Python 3.9 or later. To download the model using the ModelScope tool, run the following command in the terminal.

pip3 install modelscope importlib-metadata
modelscope download --model Qwen/Qwen2.5-3B-Instruct

After the download succeeds, the model is stored in ~/.cache/modelscope/hub/models/Qwen/Qwen2.5-3B-Instruct/.

2. Encrypt the model

PAI-EAS supports two encryption methods. This example uses Sam.

  • Gocryptfs: An encryption mode that is based on AES-256-GCM and complies with the open source Gocryptfs standard.

  • Sam: An Alibaba Cloud trusted AI model encryption format that protects model confidentiality and prevents license content from being tampered with or used illegally.

Option 1: Perform Sam encryption

  1. Download the Sam encryption module package RAI_SAM_SDK_2.1.0-20240731.tgz. Then, run the following command to decompress the package.

    # Decompress the Sam encryption module.
    tar xvf RAI_SAM_SDK_2.1.0-20240731.tgz
  2. Use the Sam encryption module to encrypt the model.

    # Go to the encryption directory of the Sam encryption module.
    cd RAI_SAM_SDK_2.1.0-20240731/tools
    
    # Encrypt the model.
    ./do_content_packager.sh <model_directory> <plaintext_key> <key_ID>

    Where:

    • <model_directory>: The directory of the model to be encrypted. You can specify a relative or absolute path, such as ~/.cache/modelscope/hub/models/Qwen/Qwen2.5-3B-Instruct/.

    • <plaintext_key>: A custom encryption key that is 4 to 128 bytes in length. Example: 0Bn4Q1wwY9fN3P. This plaintext key is the model decryption key that you must upload to the Trustee remote attestation service.

    • <key_ID>: A custom key identifier that is 8 to 48 bytes in length. Example: LD_Demo_0001.

    After the encryption is complete, the model is stored as ciphertext in the <key_ID> directory within the current path.
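The length constraints on <plaintext_key> and <key_ID> can be checked locally before running the packager. A minimal sketch using the example values from this topic (the variable names are illustrative; ${#var} counts characters, which equals bytes for ASCII keys):

```shell
# Example values from this topic.
PLAINTEXT_KEY="0Bn4Q1wwY9fN3P"
KEY_ID="LD_Demo_0001"

# The plaintext key must be 4 to 128 bytes in length.
KEY_LEN=${#PLAINTEXT_KEY}
[ "$KEY_LEN" -ge 4 ] && [ "$KEY_LEN" -le 128 ] && echo "plaintext key length OK: $KEY_LEN bytes"

# The key ID must be 8 to 48 bytes in length.
ID_LEN=${#KEY_ID}
[ "$ID_LEN" -ge 8 ] && [ "$ID_LEN" -le 48 ] && echo "key ID length OK: $ID_LEN bytes"
```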

Option 2: Perform Gocryptfs encryption

  1. Install the Gocryptfs tool to encrypt models. Only Gocryptfs v2.4.0 with the default encryption parameters is currently supported. You can choose one of the following installation methods:

    Method 1: (Recommended) Install from a yum source

    If you use the Alinux 3 or AnolisOS 23 operating system, you can use a yum source to install Gocryptfs.

    Alinux 3
    sudo yum install gocryptfs -y
    AnolisOS 23
    sudo yum install anolis-epao-release -y
    sudo yum install gocryptfs -y

    Method 2: Directly download the precompiled binary file

    # Download the precompiled Gocryptfs package from the official GitHub releases page.
    wget https://github.com/rfjakob/gocryptfs/releases/download/v2.4.0/gocryptfs_v2.4.0_linux-static_amd64.tar.gz
    
    # Decompress and install the package.
    tar xf gocryptfs_v2.4.0_linux-static_amd64.tar.gz
    sudo install -m 0755 ./gocryptfs /usr/local/bin
  2. Create a Gocryptfs key file to use as the model encryption key. You must upload this key to the Trustee remote attestation service for management in a subsequent step.

    In this topic, 0Bn4Q1wwY9fN3P is used as the key to encrypt the model. The key content is stored in the cachefs-password file. You can also customize the key. In practice, we recommend that you use a randomly generated strong key.

    cat << EOF > ~/cachefs-password
    0Bn4Q1wwY9fN3P
    EOF
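    In production, the fixed demo key above should be replaced with a randomly generated one. A minimal sketch, assuming openssl is installed (any cryptographically secure generator works):

```shell
# Restrict the key file to the current user before writing it.
umask 077

# 32 random bytes, base64-encoded (44 characters), written as the key file.
openssl rand -base64 32 > ~/cachefs-password
```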
  3. Use the created key to encrypt the model.

    1. Configure the path of the plaintext model.

      Note

      Specify the path where the plaintext model you just downloaded is located. If you have other models, replace the path with the actual path of your target model.

      PLAINTEXT_MODEL_PATH=~/.cache/modelscope/hub/models/Qwen/Qwen2.5-3B-Instruct/
    2. Use Gocryptfs to encrypt the model directory tree.

      After the encryption is complete, the model is stored as ciphertext in the ./cipher directory.

      mkdir -p ~/mount
      cd ~/mount
      mkdir -p cipher plain
      
      # Install Gocryptfs runtime dependencies.
      sudo yum install -y fuse
      
      # Initialize Gocryptfs.
      cat ~/cachefs-password | gocryptfs -init cipher
      
      # Mount the encrypted directory cipher to the plaintext view plain.
      cat ~/cachefs-password | gocryptfs cipher plain
      
      # Copy the AI model into ~/mount/plain. Gocryptfs transparently writes the ciphertext to ./cipher.
      cp -r ${PLAINTEXT_MODEL_PATH}/. ~/mount/plain
      
      # Unmount the plaintext view after the copy completes.
      fusermount -u ~/mount/plain

3. Upload the model

EAS supports various storage backends for encrypted models. During deployment, the encrypted model is decrypted and mounted into the service instances. For more information, see Mount storage.

This example uses OSS. Create a bucket and directory (e.g., oss://examplebucket/qwen-encrypted/). See Quick Start. For large model files, use ossbrowser to upload.

Note

If you use the ossutil command-line tool, we recommend that you use multipart upload.

The following figure shows the result after the Sam-encrypted model is uploaded. If you use Gocryptfs, the filenames appear as encrypted, garbled strings.

4. Set up a remote attestation service and upload the key

The decryption key is managed by the remote attestation service, which verifies the runtime environment of the model and inference service. The key is injected only when the EAS environment meets the expected trust conditions.

Deploy the Trustee remote attestation service in an ACK serverless cluster. Use Alibaba Cloud KMS for secure key storage.

Important
  • The region of the ACK cluster does not need to be the same as the destination region where the EAS service is deployed.

  • The Alibaba Cloud KMS instance must be in the same region as the ACK cluster where the Alibaba Cloud Trustee remote attestation service is deployed.

  • Before you create a KMS instance and an ACK cluster, you must create a VPC and two vSwitches. For more information, see Create and manage a VPC.

  1. Create an Alibaba Cloud KMS instance as the key storage backend.

    1. Go to the Key Management Service console. In the left pane, choose Resource > Instance Management. On the Software Key Management tab, create and start an instance. When you start the instance, select the same VPC as the ACK cluster. For more information, see Purchase and enable a KMS instance.

      Wait about 10 minutes for the instance to start.

    2. After the instance starts, in the left pane, choose Resource > Key Management. On the Key Management page, create a customer master key (CMK) for the instance. For more information, see Step 1: Create a software-protected key.

    3. In the left pane, choose Application Access > Access Point. On the Access Point page, create an application access point for the instance. Set Scope to the created KMS instance. For other parameters, see Method 1: Quick creation.

      After the application access point is created, the browser automatically downloads a ClientKey***.zip file. After you decompress the .zip file, it contains the following files:

      • Client Key content (ClientKeyContent): The default filename is clientKey_****.json.

      • Credential password (ClientKeyPassword): The default filename is clientKey_****_Password.txt.

    4. On the Resource > Instance Management page, click the name of the KMS instance. In the Basic Information section, click Download next to Instance CA Certificate to export the public key certificate file PrivateKmsCA_***.pem of the KMS instance.

  2. Create an ACK serverless cluster and install the csi-provisioner component.

    1. Go to the Create Cluster page to create an ACK serverless cluster. The following table describes the key parameters. For other parameters, see Create an ACK cluster.

      1. Cluster Configurations: Configure the following parameters. Then, click Next: Component Configurations.

        • VPC: Select Use Existing and select Configure SNAT For VPC. Otherwise, you cannot pull the Trustee image.

        • VSwitch: Make sure that at least two vSwitches are created in the existing VPC. Otherwise, you cannot expose the public ALB Ingress.

      2. Component Configurations: Configure the following parameters. Then, click Next: Confirm Configurations.

        • Service Discovery: Select CoreDNS.

        • Ingress: Select ALB Ingress. For the source of the ALB cloud-native gateway instance, select New and select two vSwitches.

      3. Confirm Configurations: Confirm the configuration information and terms of service. Then, click Create Cluster.

    2. After the cluster is created, install the csi-provisioner (Managed) component. For more information, see Manage components.

  3. Deploy the Trustee remote attestation service in the ACK cluster.

    1. Connect to the cluster over the Internet or an internal network. For more information, see Connect to a cluster.

    2. Upload the downloaded KMS instance application identity credential (clientKey_****.json), credential password (clientKey_****_Password.txt), and CA certificate (PrivateKmsCA_***.pem) to the environment that is connected to the ACK serverless cluster. Run the following command to deploy the Trustee remote attestation service and use Alibaba Cloud KMS as the key storage backend.

      # Install the plugin.
      helm plugin install https://github.com/AliyunContainerService/helm-acr
      
      helm repo add trustee acr://trustee-chart.cn-hangzhou.cr.aliyuncs.com/trustee/trustee
      helm repo update
      
      export DEPLOY_RELEASE_NAME=trustee
      export DEPLOY_NAMESPACE=default
      export TRUSTEE_CHART_VERSION=1.7.6
      
      # Set the region of the ACK cluster, for example, cn-hangzhou.
      export REGION_ID=cn-hangzhou
      
      # Information about the exported KMS instance. 
      # Replace with your KMS instance ID.
      export KMS_INSTANCE_ID=kst-hzz66a0*******e16pckc
      # Replace with the path to your KMS instance application identity credential. 
      export KMS_CLIENT_KEY_FILE=/path/to/clientKey_KAAP.***.json
      # Replace with the path to your KMS instance credential password. 
      export KMS_PASSWORD_FILE=/path/to/clientKey_KAAP.***_Password.txt
      # Replace with the path to your KMS instance CA certificate.
      export KMS_CERT_FILE=/path/to/PrivateKmsCA_kst-***.pem
      
      helm install ${DEPLOY_RELEASE_NAME} trustee/trustee \
        --version ${TRUSTEE_CHART_VERSION} \
        --set regionId=${REGION_ID} \
        --set kbs.aliyunKms.enabled=true \
        --set kbs.aliyunKms.kmsIntanceId=${KMS_INSTANCE_ID} \
        --set-file kbs.aliyunKms.clientKey=${KMS_CLIENT_KEY_FILE} \
        --set-file kbs.aliyunKms.password=${KMS_PASSWORD_FILE} \
        --set-file kbs.aliyunKms.certPem=${KMS_CERT_FILE} \
        --namespace ${DEPLOY_NAMESPACE}
      Note

      The first command to install the plugin (helm plugin install...) may take a long time to run. If the installation fails, you can run the helm plugin uninstall cm-push command to uninstall the plugin and then run the plugin installation command again.

      The following example shows a sample output:

      NAME: trustee
      LAST DEPLOYED: Tue Feb 25 18:55:33 2025
      NAMESPACE: default
      STATUS: deployed
      REVISION: 1
      TEST SUITE: None
    3. In the environment that is connected to the ACK serverless cluster, run the following command to obtain the endpoint of Trustee.

      export TRUSTEE_URL=http://$(kubectl get AlbConfig alb-$DEPLOY_RELEASE_NAME -o jsonpath='{.status.loadBalancer.dnsname}')/api
      echo ${TRUSTEE_URL}

      A sample output is http://alb-ppams74szbwg2f****.cn-shanghai.alb.aliyuncsslb.com/api.

    4. In the environment that is connected to the ACK serverless cluster, run the following command to test the connectivity of the Trustee service.

      cat << EOF | curl -k -X POST ${TRUSTEE_URL}/kbs/v0/auth -H 'Content-Type: application/json' -d @-
      {
          "version":"0.4.0",
          "tee": "tdx",
          "extra-params": "foo"
      }
      EOF

      If the Trustee service is running normally, the expected output is as follows:

      {"nonce":"PIDUjUxQdBMIXz***********IEysXFfUKgSwk=","extra-params":""}
  4. Configure a network whitelist for Trustee.

    Note

    This configuration allows the PAI-EAS model deployment environment to access the remote attestation service for environment security checks.

    1. Go to the Alibaba Cloud Application Load Balancer (ALB) console, create an access control policy group, and add the IP addresses or CIDR blocks that have permissions to access Trustee as IP entries. For more information, see Access control. The CIDR blocks that need to be added are as follows:

      • The public IP address of the VPC to which the EAS service is bound during deployment.

      • The egress IP address of the inference client.

    2. Run the following command to obtain the ID of the Application Load Balancer (ALB) instance that is used by the Trustee service in the cluster.

      kubectl get ing --namespace ${DEPLOY_NAMESPACE} frontend-ingress -o jsonpath='{.status.loadBalancer.ingress[0].hostname}' | cut -d'.' -f1 | sed 's/[^a-zA-Z0-9-]//g'

      The expected output is as follows:

      alb-llcdzbw0qivhk0****
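      The cut and sed stages of this pipeline only post-process the DNS name: cut keeps the first dot-separated label, and sed strips any character that is not a letter, digit, or hyphen. A quick local illustration on a made-up hostname:

```shell
# Illustrative ALB DNS name; real names have the same shape.
HOSTNAME="alb-llcdzbw0qivhk0abcd.cn-hangzhou.alb.aliyuncs.com"

# Keep the first label, then drop characters outside [a-zA-Z0-9-].
ALB_ID=$(echo "$HOSTNAME" | cut -d'.' -f1 | sed 's/[^a-zA-Z0-9-]//g')
echo "$ALB_ID"
```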
    3. In the navigation pane on the left of the Alibaba Cloud ALB console, choose Application Load Balancer > Instances. In the region where the cluster is located, search for the ALB instance that you obtained in the preceding step and click the instance ID to go to the instance details page. At the bottom of the page, in the Instance Attributes section, click Disable Configuration Read-only Mode.

    4. Switch to the Listeners tab. In the access control column of the target listener, click Enable and configure the whitelist as the access control policy group that you created in the preceding step.

  5. Create a secret to store the model decryption key.

    The model decryption key managed by Trustee is stored in KMS. The key can be accessed only after the remote attestation service verifies the target environment.

    Go to the Key Management Service console. In the left pane, choose Resource > Secrets Manager. On the Generic Secret tab, click Create Secret. The following table describes the key parameters.

    • Secret Name: A custom secret name that is used to index the key. Example: model-decryption-key.

    • Set Secret Value: The key that is used to encrypt the model. Example: 0Bn4Q1wwY9fN3P. Use your actual key.

    • Encryption Master Key: Select the master key that you created in the preceding step.

  6. Log on to the Trustee management interface to view historical key access records.

    1. Obtain the address of the Trustee frontend management interface.

      kubectl get AlbConfig alb-$DEPLOY_RELEASE_NAME -o jsonpath='{.status.loadBalancer.dnsname}'

      Access the address in a browser.

    2. Obtain the private key for logging on to the Trustee frontend management interface.

      kubectl get secret kbs-auth-keypair -o jsonpath="{.data.private\.key}" | base64 -d

      Paste the key into the frontend interface to complete the logon.

    3. On the Trustee Gateway Management Platform, click Audit Log in the navigation pane on the left. Click Resource Audit Interface in the middle of the page to view historical records of model decryption key access.

Step 2: Deploy the encrypted model using PAI-EAS

  1. Log on to the PAI console. Select a region on the top of the page. Then, select the desired workspace and click Elastic Algorithm Service (EAS).

  2. On the Elastic Algorithm Service (EAS) page, click Deploy Service. In the Custom Model Deployment section, click Custom Deployment.

  3. On the Custom Deployment page, configure the following key parameters. For other parameters, see Custom deployment.

    Environment Context

    • Deployment Method: Select Image-based Deployment.

    • Image Configuration: Select an image. This example uses Official Image > chat-llm-webui:3.0-vllm.

    • Storage Mount: Click the +OSS button and configure the following parameters:

      • Uri: Select the directory where the model ciphertext is located. Example: oss://examplebucket/qwen-encrypted/. Use your actual path.

      • Mount Path: The directory where the plaintext model is mounted. Example: /mnt/model.

    • Run Command: An example configuration is python webui/webui_server.py --port=8000 --model-path=/mnt/model --backend=vllm. The value of --model-path must be the same as the mount path so that the service can read the decrypted model.

    • Port Number: For this solution, set the port number to 8000.

    • Environment Variable: You can add environment variables for verification by the Trustee remote attestation service. For this solution, add an environment variable with the key eas-test and the value 123.

    Resource Deployment

    • Deployment Resources: For this solution, select the resource specification ml.gu7i.c32m188.1-gu30.

    VPC

    • VPC: Configure a VPC and set the SNAT public network egress IP for the VPC to enable public network access for EAS. This allows EAS to access the remote attestation service and perform environment security checks.

    • vSwitch and Security Group Name: Select a vSwitch and a security group in the VPC.

    Service Features

    • Configure Secure Encryption Environment: Turn on the Configure Secure Encryption Environment switch and configure the following parameters:

      • File Encryption Method: The method used to encrypt the model. Sam and Gocryptfs are supported. For this solution, select Sam.

      • System Trust Management Service Address: The address of the deployed Trustee service. Example: http://alb-ppams74szbwg2f****.cn-shanghai.alb.aliyuncsslb.com/api.

      • KBS URI of Decryption Key: The KBS URI of the model decryption key. The format is kbs:///default/aliyun/<secret_name_of_the_key>. Replace <secret_name_of_the_key> with the name of the secret that you created in the preceding step.

    The following example shows the final JSON configurations:

    Click here to view the JSON configurations

    {
        "cloud": {
            "computing": {
                "instances": [
                    {
                        "type": "ecs.gn7i-c8g1.2xlarge"
                    }
                ]
            },
            "networking": {
                "security_group_id": "sg-2vcbmhs3puagy23k****",
                "vpc_id": "vpc-2vctxcm4qncgriz0j****",
                "vswitch_id": "vsw-2vc0tdylfux849zb7****"
            }
        },
        "confidential": {
            "decryption_key": "kbs:///default/aliyun/model-decryption-key",
            "trustee_endpoint": "http://alb-ppams74szbwg2f****.cn-shanghai.alb.aliyuncsslb.com/api"
        },
        "containers": [
            {
                "env": [
                    {
                        "name": "eas-test",
                        "value": "123"
                    }
                ],
                "image": "eas-registry-vpc.cn-hangzhou.cr.aliyuncs.com/pai-eas/chat-llm-webui:3.0.5-vllm",
                "port": 8000,
                "script": "python webui/webui_server.py --port=8000 --model-path=/mnt/data/model --backend=vllm"
            }
        ],
        "metadata": {
            "cpu": 8,
            "gpu": 1,
            "instance": 1,
            "memory": 30000,
            "name": "xynnn_eas_test_gpu"
        },
        "storage": [
            {
                "encryption": {
                    "method": "sam"
                },
                "mount_path": "/mnt/data/model",
                "oss": {
                    "path": "oss://examplebucket/qwen-encrypted/"
                },
                "properties": {
                    "resource_type": "model"
                }
            }
        ]
    }
  4. After you configure the parameters, click Deploy.

    When the Service Status is Running, the service is deployed. You can then log on to the Trustee management interface to view historical key access records. This lets you check whether the Trustee remote attestation service has verified the model deployment environment and view detailed information about the execution environment.

Step 3: Call the service for secure inference

1. View the service endpoint

On the Inference Service tab, click the name of the target service to open the Overview page. In the Basic Information area, click View Endpoint Information.

2. Call the EAS service

Run the following cURL command to send an inference request:

curl <Service_URL> \
  -H "Content-type: application/json" \
  --data-binary @openai_chat_body.json \
  -v \
  -H "Connection: close" \
  -H "Authorization: <Token>"

The parameters are described as follows:

  • <Service_URL>: The endpoint of the EAS service.

  • <Token>: The token of the EAS service.

  • openai_chat_body.json: The file that contains the inference request body. The following code shows the sample request content:

    {
        "max_new_tokens": 4096,
        "use_stream_chat": false,
        "prompt": "What is the capital of Canada?",
        "system_prompt": "Act like you are a knowledgeable assistant who can provide information on geography and related topics.",
        "history": [
            [
                "Can you tell me what's the capital of France?",
                "The capital of France is Paris."
            ]
        ],
        "temperature": 0.8,
        "top_k": 10,
        "top_p": 0.8,
        "do_sample": true,
        "use_cache": true
    }
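The request body must be saved to openai_chat_body.json before curl can send it with --data-binary. A minimal sketch that writes a shortened version of the body above and verifies that it parses as JSON (python3 is assumed to be available):

```shell
# Write a shortened request body to openai_chat_body.json.
cat << 'EOF' > openai_chat_body.json
{
    "max_new_tokens": 4096,
    "use_stream_chat": false,
    "prompt": "What is the capital of Canada?",
    "temperature": 0.8,
    "top_k": 10,
    "top_p": 0.8,
    "do_sample": true,
    "use_cache": true
}
EOF

# Fail early if the file is not well-formed JSON.
python3 -m json.tool openai_chat_body.json > /dev/null && echo "openai_chat_body.json is valid JSON"
```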

The following example shows a sample response:

{
    "response": "The capital of Canada is Ottawa.",
    "history": [
        [
            "Can you tell me what's the capital of France?",
            "The capital of France is Paris."
        ],
        [
            "What is the capital of Canada?",
            "The capital of Canada is Ottawa."
        ]
    ]
}
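In scripts, the answer can be pulled out of the JSON reply without extra tooling. A sketch that parses a saved sample reply with python3 (the file name response.json is illustrative):

```shell
# Save a sample reply; in practice this comes from the curl call above.
cat << 'EOF' > response.json
{"response": "The capital of Canada is Ottawa.", "history": []}
EOF

# Print only the model's answer.
python3 -c 'import json; print(json.load(open("response.json"))["response"])'
```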