AIACC 2.0-AIACC Communication Speeding (AIACC-ACSpeed) is a general-purpose training accelerator that speeds up training jobs without requiring changes to your model code. It applies to all AI training scenarios based on the PyTorch framework and provides customized optimizations for PyTorch. By leveraging the nccl-plugin component, AIACC-ACSpeed can also accelerate AI training jobs based on frameworks other than PyTorch, such as TensorFlow and MXNet.
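Because ACSpeed accelerates communication transparently, the workloads it targets are ordinary distributed PyTorch training scripts. The following minimal sketch shows the kind of DistributedDataParallel (DDP) training step that runs unchanged with or without the accelerator; it uses only standard PyTorch APIs and, for illustration, a single-process `gloo` process group in place of a real multi-GPU `nccl` setup.

```python
import os
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

# Single-process process group for illustration; a real job would use
# backend="nccl" across multiple GPUs, launched via torchrun.
os.environ.setdefault("MASTER_ADDR", "127.0.0.1")
os.environ.setdefault("MASTER_PORT", "29500")
dist.init_process_group(backend="gloo", rank=0, world_size=1)

# Wrap any model in DDP; gradient all-reduce happens during backward().
model = torch.nn.Linear(8, 2)
ddp_model = DDP(model)
optimizer = torch.optim.SGD(ddp_model.parameters(), lr=0.01)

# One training step on dummy data.
x = torch.randn(4, 8)
loss = ddp_model(x).sum()
loss.backward()   # communication-heavy phase that ACSpeed optimizes
optimizer.step()

dist.destroy_process_group()
```

No ACSpeed-specific calls appear above; the accelerator replaces the communication path underneath `backward()` without modifying the script.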

The following table describes the typical scenarios of AIACC-ACSpeed for AI training.
| Scenario | Applicable model | Common storage |
| --- | --- | --- |
| Image classification and image recognition | ResNet and VGG-16 models | Cloud Parallel File System (CPFS) |
| CTR prediction | Wide&Deep model | Hadoop Distributed File System (HDFS) |
| Natural Language Processing (NLP) | Transformer and BERT models | CPFS |