Alibaba Cloud Linux: Release notes for PyTorch-Intel images

Last Updated: Jun 23, 2025

PyTorch is a highly flexible, extensible, open source machine learning framework. You can use PyTorch for a wide range of deep learning tasks, such as image classification, object detection, natural language processing, and generative adversarial network (GAN) training. Intel® Extension for PyTorch (IPEX) extends PyTorch with optimizations for Intel hardware. IPEX optimizes both imperative (eager) mode and graph mode. Compared with imperative mode, graph mode usually delivers better performance through optimization techniques such as operator fusion, and IPEX amplifies these gains with more comprehensive graph optimizations. IPEX also takes advantage of the AVX-512 instructions (512-bit extensions to the 256-bit Advanced Vector Extensions SIMD instructions) and Advanced Matrix Extensions (AMX) on Intel CPUs, as well as the Intel Xe Matrix Extensions (XMX) AI engines on Intel discrete GPUs. Deep learning training and inference performance improves significantly on fourth-generation Intel Xeon Scalable (Sapphire Rapids) processors, which power instances of the g8i instance family. PyTorch-Intel images are optimized images that bundle PyTorch with the CPU build of IPEX. These out-of-the-box, high-performance images can be used directly in deep learning research and practice.
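
As an illustration of how these optimizations are typically applied, the following is a minimal sketch of IPEX usage in imperative (eager) mode and graph mode. The model choice, random weights, and input shape are illustrative assumptions; the packages used are the ones bundled in the image listed below.

    # Minimal sketch (not a reference workflow): applying IPEX to a model in
    # eager mode and graph mode on an Intel CPU.
    import torch
    import torchvision.models as models
    import intel_extension_for_pytorch as ipex

    model = models.resnet50(weights=None).eval()  # illustrative model with random weights
    example_input = torch.randn(1, 3, 224, 224)   # illustrative input shape

    # Eager (imperative) mode: ipex.optimize applies operator- and memory-layout
    # optimizations that target AVX-512 and AMX on supported Intel CPUs.
    model = ipex.optimize(model)

    # Graph mode: tracing and freezing the model lets broader graph optimizations,
    # such as operator fusion, be applied.
    with torch.no_grad():
        traced = torch.jit.trace(model, example_input)
        traced = torch.jit.freeze(traced)
        output = traced(example_input)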

Images

Image: pytorch-intel

Image address: ac2-registry.cn-hangzhou.cr.aliyuncs.com/ac2/pytorch-intel:2.2.0.1-alinux3.2304

Image content

pytorch-intel:2.2.0.1-alinux3.2304

  • BaseOS: Alinux 3.2304

  • Python: 3.10.13

  • aiodns: 3.0.0

  • aiohttp: 3.9.3

  • aiosignal: 1.3.1

  • appdirs: 1.4.4

  • async-timeout: 4.0.2

  • attrs: 22.2.0

  • bcrypt: 3.2.2

  • Brotli: 1.0.9

  • certifi: 2023.7.22

  • cffi: 1.15.1

  • charset-normalizer: 3.1.0

  • contourpy: 1.0.7

  • coverage: 7.2.1

  • cryptography: 41.0.7

  • cycler: 0.11.0

  • exceptiongroup: 1.1.1

  • filelock: 3.9.0

  • fonttools: 4.47.0

  • frozenlist: 1.3.3

  • fs: 2.4.16

  • fsspec: 2023.6.0

  • idna: 3.4

  • iniconfig: 1.1.1

  • intel-extension-for-pytorch: 2.2.0+cpu

  • Jinja2: 3.1.2

  • kiwisolver: 1.4.4

  • libcomps: 0.1.19

  • lxml: 4.9.2

  • MarkupSafe: 2.1.2

  • matplotlib: 3.7.1

  • mpmath: 1.3.0

  • multidict: 6.0.4

  • networkx: 2.8.8

  • numpy: 1.24.2

  • olefile: 0.46

  • packaging: 23.0

  • paramiko: 2.12.0

  • Pillow: 10.1.0

  • pip: 23.3.1

  • pluggy: 1.0.0

  • ply: 3.11

  • pyarrow: 14.0.2

  • pyasn1: 0.4.8

  • pycairo: 1.23.0

  • pycares: 4.3.0

  • pycparser: 2.21

  • PyNaCl: 1.4.0

  • pyparsing: 3.0.7

  • PySocks: 1.7.1

  • pytest: 7.3.1

  • pytest-cov: 4.0.0

  • python-dateutil: 2.8.2

  • requests: 2.31.0

  • SciPy: 1.10.1

  • setuptools: 65.5.1

  • six: 1.16.0

  • sympy: 1.11.1

  • tomli: 2.0.1

  • torch: 2.2.0.1

  • torchaudio: 2.2.0.1

  • torchvision: 0.17.0.1

  • tqdm: 4.65.0

  • typing_extensions: 4.9.0

  • urllib3: 1.26.18

  • yarl: 1.8.2
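
After a container is started from the image, the versions of the core components listed above can be confirmed with a short check such as the following sketch. The expected values in the comments are taken from the list above.

    # Sketch: confirm the core components of the pytorch-intel image.
    import torch
    import torchaudio
    import torchvision
    import intel_extension_for_pytorch as ipex

    print("torch:", torch.__version__)              # expected: 2.2.0.1
    print("torchaudio:", torchaudio.__version__)    # expected: 2.2.0.1
    print("torchvision:", torchvision.__version__)  # expected: 0.17.0.1
    print("ipex:", ipex.__version__)                # expected: 2.2.0+cpu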

Operational requirements

PyTorch-Intel images use the Intel AVX-512 and AMX CPU instruction sets and must run on platforms that support these instructions.
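
One way to check whether a Linux instance exposes these instruction sets is to read the CPU flags, as in the sketch below. This is a convenience check that assumes a Linux host and the flag names used in /proc/cpuinfo, not an official verification tool.

    # Sketch: check for the AVX-512 and AMX CPU flags on a Linux host.
    def cpu_flags():
        with open("/proc/cpuinfo") as f:
            for line in f:
                if line.startswith("flags"):
                    # The "flags" line lists the supported instruction-set features.
                    return set(line.split(":", 1)[1].split())
        return set()

    flags = cpu_flags()
    print("AVX-512F:", "avx512f" in flags)
    print("AMX tile:", "amx_tile" in flags)  # reported by Sapphire Rapids CPUs
    print("AMX BF16:", "amx_bf16" in flags)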

Important features

  • PyTorch-Intel images deliver optimized performance.

In this example, PyTorch 2.2.0.1 runs on a g8i.2xlarge instance. The following table compares the inference performance of the PyTorch-Intel image and the standard PyTorch image when running the Residual Network 50 (ResNet50) model at different precisions in an image processing scenario. A smaller latency indicates higher inference performance. The statistics show that the PyTorch-Intel image delivers significantly better performance; a benchmark sketch follows the table.

Image                 fp32        bf16
PyTorch image         45.48 ms    16.15 ms
PyTorch-Intel image   27.99 ms    10.14 ms
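
The measurement can be sketched as follows. This is only an illustration of the kind of comparison reported above: the batch size, warmup count, and iteration count are assumptions, and a random-weight ResNet50 stands in for the actual benchmark workload.

    # Sketch: compare fp32 and bf16 ResNet50 inference latency with IPEX.
    import time
    import torch
    import torchvision.models as models
    import intel_extension_for_pytorch as ipex

    def latency_ms(model, x, autocast_dtype=None, warmup=10, iters=100):
        """Average per-iteration latency in milliseconds after a warmup phase."""
        with torch.no_grad():
            for i in range(warmup + iters):
                if i == warmup:
                    start = time.perf_counter()
                if autocast_dtype is None:
                    model(x)
                else:
                    with torch.cpu.amp.autocast(dtype=autocast_dtype):
                        model(x)
        return (time.perf_counter() - start) / iters * 1000

    x = torch.randn(1, 3, 224, 224)  # assumed batch size of 1

    fp32 = ipex.optimize(models.resnet50(weights=None).eval())
    print(f"fp32: {latency_ms(fp32, x):.2f} ms")

    bf16 = ipex.optimize(models.resnet50(weights=None).eval(), dtype=torch.bfloat16)
    print(f"bf16: {latency_ms(bf16, x, autocast_dtype=torch.bfloat16):.2f} ms")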

Updates

2024.05: Released the PyTorch-Intel 2.2.0.1 image.