After the TensorFlow Serving inference service is running in the virtual Software
Guard Extensions (vSGX) encrypted computing environment, you can configure your client
to send data to the inference service. After inference is complete, the service
returns the inference results.
Procedure
- Log on to the Elastic Compute Service (ECS) instance on which a client is deployed.
For more information, see
Connection methods.
Note In this example, the client is used as a remote end to initiate an access request.
- Install the required packages, including mesa-libGL, and set up an isolated Python environment.
yum install -y python3-pip mesa-libGL
python3 -m pip install --user -U pip
python3 -m pip install --user virtualenv
# Create an isolated Python environment to prevent existing Python dependencies from being contaminated.
python3 -m virtualenv venv
source venv/bin/activate
python3 -m pip install multidict
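As a quick sanity check that the isolation step worked, you can confirm that python3 now resolves from inside the virtual environment. The sketch below uses a throwaway environment created with the standard-library venv module, which is equivalent to virtualenv for this purpose; the environment name venv_demo is illustrative.

```shell
# Create a throwaway environment with the standard-library venv module,
# activate it, and confirm that python3 now resolves from inside it.
python3 -m venv venv_demo
. venv_demo/bin/activate
case "$(command -v python3)" in
  */venv_demo/*) echo "venv active" ;;
  *)             echo "venv NOT active" ;;
esac
```

The same check against your real environment should show a python3 path under venv/ after you run source venv/bin/activate.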
- Configure a TensorFlow Serving domain name.
# Set vSGX_ip_addr to the actual IP address of the vSGX instance. If the client is deployed on the same ECS instance as the vSGX service, set vSGX_ip_addr to the internal IP address of the instance.
sudo sh -c 'echo "${vSGX_ip_addr} grpc.tf-serving.service.com" >> /etc/hosts'
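Before appending to /etc/hosts, you can assemble and inspect the entry first. The IP address below is a placeholder for illustration; substitute the real address of your vSGX instance.

```shell
# Placeholder address for illustration only; replace it with the real IP
# address of your vSGX instance before writing to /etc/hosts.
vSGX_ip_addr="192.168.0.10"
hosts_entry="${vSGX_ip_addr} grpc.tf-serving.service.com"
echo "${hosts_entry}"
```

After the entry is appended, running getent hosts grpc.tf-serving.service.com should print the mapping, which confirms that the domain name resolves as expected.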
- Configure the client to send a remote access request.
The request carries the data to the inference service that runs in the vSGX
encrypted computing environment. After inference is complete, the service
returns the inference results.
export CC_DIR=$(realpath ./confidential-computing)
cd ${CC_DIR}/Tensorflow_Serving/client
python3 ./resnet_client_grpc.py \
    -batch 1 \
    -cnum 1 \
    -loop 50 \
    -url grpc.tf-serving.service.com:8500 \
    -ca "$(pwd -P)/ssl_configure/ca_cert.pem" \
    -crt "$(pwd -P)/ssl_configure/client/cert.pem" \
    -key "$(pwd -P)/ssl_configure/client/key.pem"
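A missing certificate file typically surfaces as an opaque TLS handshake error rather than a clear message. The helper below is a hypothetical pre-flight check, not part of the repository: it verifies that each TLS file the client needs exists before you launch the client.

```shell
# Hypothetical pre-flight helper (not part of the repository): report any
# TLS file that is missing and return a nonzero status if one is absent.
check_certs() {
  missing=0
  for f in "$@"; do
    [ -f "$f" ] || { echo "missing: $f"; missing=1; }
  done
  return "$missing"
}

check_certs "$(pwd -P)/ssl_configure/ca_cert.pem" \
            "$(pwd -P)/ssl_configure/client/cert.pem" \
            "$(pwd -P)/ssl_configure/client/key.pem" \
  || echo "fix the certificate paths before running the client"
```

If the check passes silently, the -ca, -crt, and -key arguments passed to resnet_client_grpc.py point at files that actually exist.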