Platform for AI: Install PAI-Blade

Last Updated: Mar 06, 2026

PAI-Blade accelerates deep learning inference on Platform for AI (PAI). The installation package includes a Python wheel package for model optimization and a C++ SDK for model inference.

Prerequisites

Ensure the following requirements are met:

  • Linux operating system

  • Python 3.6, 3.7, or 3.8

  • Supported deep learning framework installed (TensorFlow or PyTorch)

  • GPU environments only: CUDA 10.0 to 11.3
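The prerequisites above can be checked from a shell; the CUDA check applies only to GPU environments:

```shell
# Verify the prerequisites: Python 3.6-3.8, Linux, and (for GPU
# environments) a CUDA toolkit between 10.0 and 11.3.
python3 --version        # expect Python 3.6.x, 3.7.x, or 3.8.x
uname -s                 # expect Linux
nvcc --version 2>/dev/null | grep release \
  || echo "CUDA toolkit not found (fine for CPU-only environments)"
```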

Compatibility

Category           Supported versions
Operating system   Linux
Python             3.6, 3.7, 3.8
CUDA               10.0 to 11.3 (GPU environments)
TensorFlow         1.15, 2.4, 2.7
PyTorch            1.6.0 and later
TensorRT           8.0 and later
C++ SDK ABI        cxx11, pre-cxx11
C++ SDK formats    RPM, DEB, TGZ
Device types       GPU, CPU, terminal devices (Mobile Neural Network (MNN))

Installation paths

Scenario             Components                                                      When to use
GPU or CPU server    Framework + TensorRT + wheel package + C++ SDK + access token   Model optimization and inference on servers
On-premises device   TensorFlow + MNN + wheel package                                Model optimization for mobile or embedded devices

Install on a GPU or CPU server

Step 1: Install the framework

Install TensorFlow or PyTorch before installing PAI-Blade. PAI-Blade does not install frameworks automatically.

TensorFlow

PAI-Blade supports TensorFlow 1.15, 2.4, and 2.7. Install from the TensorFlow community:

# GPU (install only one version)
pip3 install tensorflow-gpu==1.15.0
pip3 install tensorflow-gpu==2.4.0

# CPU (install only one version)
pip3 install tensorflow==1.15.0
pip3 install tensorflow==2.4.0

For TensorFlow with integrated TensorRT, use the precompiled package from PAI instead of the community package.

PyTorch

PAI-Blade supports PyTorch 1.6.0 and later. Install from the PyTorch official website based on your device type and CUDA version.

Example: Install PyTorch 1.7.1 with CUDA 11.0:

pip3 install torch==1.7.1+cu110 torchvision==0.8.2+cu110 \
    -f https://download.pytorch.org/whl/torch/

Official PyTorch 1.6.0 does not support CUDA 10.0. Use the precompiled PyTorch package from PAI for this combination.

Step 2: Install the wheel package

Select the pip3 install command that matches your framework, framework version, device type, and CUDA version.

For earlier versions, see Installation commands and SDK download URLs for PAI-Blade of earlier versions.
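To confirm which wheel command applies, you can print the framework and CUDA versions that are actually installed; frameworks that are absent are reported as such:

```shell
# Report installed framework versions and the CUDA build PyTorch was
# compiled against; match the output to a wheel command below.
python3 -c "import tensorflow as tf; print('TensorFlow', tf.__version__)" 2>/dev/null \
  || echo "TensorFlow: not installed"
python3 -c "import torch; print('PyTorch', torch.__version__, 'CUDA', torch.version.cuda)" 2>/dev/null \
  || echo "PyTorch: not installed"
```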

CPU

TensorFlow 1.15.0 and PyTorch 1.6.0

pip3 install pai_blade_cpu==3.27.0+1.15.0.1.6.0 -f https://pai-blade.oss-cn-zhangjiakou.aliyuncs.com/release/repo.html
pip3 install tensorflow_blade_cpu==3.27.0+1.15.0 -f https://pai-blade.oss-cn-zhangjiakou.aliyuncs.com/release/repo.html
pip3 install torch_blade_cpu==3.27.0+1.6.0 -f https://pai-blade.oss-cn-zhangjiakou.aliyuncs.com/release/repo.html

TensorFlow 2.4.0 and PyTorch 1.7.1

pip3 install pai_blade_cpu==3.27.0+2.4.0.1.7.1 -f https://pai-blade.oss-cn-zhangjiakou.aliyuncs.com/release/repo.html
pip3 install tensorflow_blade_cpu==3.27.0+2.4.0 -f https://pai-blade.oss-cn-zhangjiakou.aliyuncs.com/release/repo.html
pip3 install torch_blade_cpu==3.27.0+1.7.1 -f https://pai-blade.oss-cn-zhangjiakou.aliyuncs.com/release/repo.html

PyTorch 1.8.1

pip3 install pai_blade_cpu==3.27.0+1.8.1 -f https://pai-blade.oss-cn-zhangjiakou.aliyuncs.com/release/repo.html
pip3 install torch_blade_cpu==3.27.0+1.8.1 -f https://pai-blade.oss-cn-zhangjiakou.aliyuncs.com/release/repo.html

PyTorch 1.9.0

pip3 install pai_blade_cpu==3.27.0+1.9.0 -f https://pai-blade.oss-cn-zhangjiakou.aliyuncs.com/release/repo.html
pip3 install torch_blade_cpu==3.27.0+1.9.0 -f https://pai-blade.oss-cn-zhangjiakou.aliyuncs.com/release/repo.html

TensorFlow 2.7.0 and PyTorch 1.10.0

pip3 install pai_blade_cpu==3.27.0+2.7.0.1.10.0 -f https://pai-blade.oss-cn-zhangjiakou.aliyuncs.com/release/repo.html
pip3 install tensorflow_blade_cpu==3.27.0+2.7.0 -f https://pai-blade.oss-cn-zhangjiakou.aliyuncs.com/release/repo.html
pip3 install torch_blade_cpu==3.27.0+1.10.0 -f https://pai-blade.oss-cn-zhangjiakou.aliyuncs.com/release/repo.html

CUDA 11.0

TensorFlow 2.4.0 and PyTorch 1.7.1

pip3 install pai_blade_gpu==3.27.0+cu110.2.4.0.1.7.1 -f https://pai-blade.oss-cn-zhangjiakou.aliyuncs.com/release/repo_ext.html
pip3 install tensorflow_blade_gpu==3.27.0+cu110.2.4.0 -f https://pai-blade.oss-cn-zhangjiakou.aliyuncs.com/release/repo_ext.html
pip3 install torch_blade==3.27.0+1.7.1.cu110 -f https://pai-blade.oss-cn-zhangjiakou.aliyuncs.com/release/repo_ext.html

CUDA 11.1

PyTorch 1.8.1

pip3 install pai_blade_gpu==3.27.0+cu111.1.8.1 -f https://pai-blade.oss-cn-zhangjiakou.aliyuncs.com/release/repo_ext.html
pip3 install torch_blade==3.27.0+1.8.1.cu111 -f https://pai-blade.oss-cn-zhangjiakou.aliyuncs.com/release/repo_ext.html

PyTorch 1.9.0

pip3 install pai_blade_gpu==3.27.0+cu111.1.9.0 -f https://pai-blade.oss-cn-zhangjiakou.aliyuncs.com/release/repo_ext.html
pip3 install torch_blade==3.27.0+1.9.0.cu111 -f https://pai-blade.oss-cn-zhangjiakou.aliyuncs.com/release/repo_ext.html

PyTorch 1.10.0

pip3 install pai_blade_gpu==3.27.0+cu111.1.10.0 -f https://pai-blade.oss-cn-zhangjiakou.aliyuncs.com/release/repo_ext.html
pip3 install torch_blade==3.27.0+1.10.0.cu111 -f https://pai-blade.oss-cn-zhangjiakou.aliyuncs.com/release/repo_ext.html

CUDA 11.2

TensorFlow 2.7.0

pip3 install pai_blade_gpu==3.27.0+cu112.2.7.0 -f https://pai-blade.oss-cn-zhangjiakou.aliyuncs.com/release/repo_ext.html
pip3 install tensorflow_blade_gpu==3.27.0+cu112.2.7.0 -f https://pai-blade.oss-cn-zhangjiakou.aliyuncs.com/release/repo_ext.html

CUDA 11.3

PyTorch 1.11.0

pip3 install pai_blade_gpu==3.27.0+cu113.1.11.0 -f https://pai-blade.oss-cn-zhangjiakou.aliyuncs.com/release/repo_ext.html
pip3 install torch_blade==3.27.0+1.11.0.cu113 -f https://pai-blade.oss-cn-zhangjiakou.aliyuncs.com/release/repo_ext.html

PyTorch 1.12.1

pip3 install pai_blade_gpu==3.27.0+cu113.1.12.1 -f https://pai-blade.oss-cn-zhangjiakou.aliyuncs.com/release/repo_ext.html
pip3 install torch_blade==3.27.0+1.12.1.cu113 -f https://pai-blade.oss-cn-zhangjiakou.aliyuncs.com/release/repo_ext.html

Step 3: Install the C++ SDK

The C++ SDK supports only the GNU Compiler Collection (GCC) on Linux. Blade provides packages with two Application Binary Interface (ABI) variants. For details, see the GCC ABI documentation.

Select your ABI variant

ABI variant   When to use
pre-cxx11     GCC earlier than 5.1, or _GLIBCXX_USE_CXX11_ABI=0 is configured
cxx11         GCC 5.1 or later without _GLIBCXX_USE_CXX11_ABI=0
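Both signals can be checked from a shell; the PyTorch line is an example assuming PyTorch is your framework (torch.compiled_with_cxx11_abi() reports which ABI the installed wheel was built with):

```shell
# Pick the ABI variant: GCC 5.1 and later default to the cxx11 ABI
# unless _GLIBCXX_USE_CXX11_ABI=0 was set when the toolchain or
# framework was built.
gcc -dumpversion
python3 -c "import torch; print('cxx11 ABI:', torch.compiled_with_cxx11_abi())" 2>/dev/null \
  || echo "PyTorch not installed"
```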

Select your package format

Format   Distribution      Install command   Root required
RPM      CentOS, Red Hat   rpm -ivh          Yes
DEB      Ubuntu, Debian    dpkg -i           Yes
TGZ      Any Linux         Extract and use   No
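A TGZ install might look like the following sketch; the package file name is a placeholder, so substitute the TGZ link for your configuration from the download table below:

```shell
# No root required: extract anywhere and point the dynamic linker at
# the SDK's lib directory. <your-sdk-package>.tgz is a placeholder.
wget https://pai-blade.oss-cn-zhangjiakou.aliyuncs.com/release/<your-sdk-package>.tgz
mkdir -p "$HOME/blade_sdk"
tar -xzf <your-sdk-package>.tgz -C "$HOME/blade_sdk"
export LD_LIBRARY_PATH="$HOME/blade_sdk/lib:$LD_LIBRARY_PATH"
```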

Example: Install the pre-cxx11 ABI SDK for CUDA 11.0 (v3.23.0)

RPM

rpm -ivh https://pai-blade.oss-cn-zhangjiakou.aliyuncs.com/release/3.23.0/py3.6.8_cu110_tf2.4.0_torch1.7.1_abiprecxx11/blade_cpp_sdk_gpu-3.23.0-Linux.rpm

DEB

wget https://pai-blade.oss-cn-zhangjiakou.aliyuncs.com/release/3.23.0/py3.6.8_cu110_tf2.4.0_torch1.7.1_abiprecxx11/blade_cpp_sdk_gpu-3.23.0-Linux.deb
dpkg -i blade_cpp_sdk_gpu-3.23.0-Linux.deb

SDK directory structure

RPM and DEB packages install to /usr/local/ by default. Directory structure after installation:

/usr/local/
├── bin
│   ├── disc_compiler_main
│   └── tao_compiler_main
└── lib
    ├── libral_base_context.so
    ├── libtao_ops.so
    ├── libtf_blade.so
    ├── libtorch_blade.so
    └── mlir_disc_builder.so

Use the dynamic-link libraries in /usr/local/lib for model deployment.
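As an illustration, linking an inference program against these libraries might look like the sketch below; the source file name and the exact set of -l flags are assumptions that depend on your framework and model:

```shell
# Hypothetical link line for a PyTorch-based inference program;
# adjust the -l flags to the libraries your deployment actually uses.
g++ -std=c++11 infer_main.cc \
    -L/usr/local/lib -ltorch_blade -ltao_ops \
    -Wl,-rpath,/usr/local/lib \
    -o infer_main
```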

SDK download URLs (v3.27.0)

For earlier versions, see Installation commands and SDK download URLs for PAI-Blade of earlier versions.

cxx11 ABI

Device Framework versions DEB RPM TGZ
CPU TensorFlow 1.15.0 + PyTorch 1.6.0 DEB RPM TGZ

pre-cxx11 ABI

Device Framework versions DEB RPM TGZ
CPU TensorFlow 1.15.0 + PyTorch 1.6.0 DEB RPM TGZ
CPU TensorFlow 2.4.0 + PyTorch 1.7.1 DEB RPM TGZ
CPU PyTorch 1.8.1 DEB RPM TGZ
CPU PyTorch 1.9.0 DEB RPM TGZ
CPU TensorFlow 2.7.0 + PyTorch 1.10.0 DEB RPM TGZ
CUDA 11.0 TensorFlow 2.4.0 + PyTorch 1.7.1 DEB RPM TGZ
CUDA 11.1 PyTorch 1.8.1 DEB RPM TGZ
CUDA 11.1 PyTorch 1.9.0 DEB RPM TGZ
CUDA 11.1 PyTorch 1.10.0 DEB RPM TGZ
CUDA 11.2 TensorFlow 2.7.0 DEB RPM TGZ
CUDA 11.3 PyTorch 1.11.0 DEB RPM TGZ
CUDA 11.3 PyTorch 1.12.1 DEB RPM TGZ

Step 4: Get an access token

The C++ SDK requires an access token. Contact your sales manager (SA or PDSA) or join the DingTalk group for Blade users (Group ID: 21946131).

Install for on-premises devices

Blade converts TensorFlow models to Mobile Neural Network (MNN) format for on-device inference.

  1. Install the required frameworks:

       pip3 install tensorflow==1.15 MNN==1.1.0
  2. Install the Blade wheel package for your environment:

       # GPU
       pip3 install pai-blade-gpu \
         -f https://pai-blade.oss-cn-zhangjiakou.aliyuncs.com/release/repo.html

       # CPU
       pip3 install pai-blade-cpu \
         -f https://pai-blade.oss-cn-zhangjiakou.aliyuncs.com/release/repo.html

Important notes

  • Blade does not automatically install TensorFlow or PyTorch. Install a supported framework before installing Blade.

  • Ensure the wheel package matches your device type and CUDA version.

  • Official PyTorch 1.6.0 does not support CUDA 10.0. Use the PAI-provided wheel package for this combination.
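After installation, a quick import check confirms the wheels are visible to Python; the module names probed here (blade, torch_blade, tf_blade) are assumptions based on the package names above:

```shell
# Probe for the Blade Python modules without aborting on a miss.
for mod in blade torch_blade tf_blade; do
  if python3 -c "import ${mod}" 2>/dev/null; then
    echo "${mod}: OK"
  else
    echo "${mod}: not installed"
  fi
done
```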