
Accelerate Deep Neural Network with AI Chips

In this article, we introduce some AI chips that can accelerate the running of deep neural network models.

The dominant approach in modern AI development is deep learning, whose learning process is divided into two phases: training and inference.

Training usually requires a significant amount of input data, or relies on methods such as reinforcement learning, to build a complex deep neural network model. Because of the massive training data and the complicated network structures involved, the training process demands a vast amount of computation and often takes GPU clusters several days or even weeks. For now, the GPU plays an irreplaceable role in the training phase.
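
To make the training phase concrete, here is a minimal training-loop sketch in PyTorch (one framework among several). The toy network, random tensors, and hyperparameters are placeholders; real workloads run loops like this over far larger datasets, often on GPU clusters for days or weeks.

```python
import torch
import torch.nn as nn

# Toy CNN standing in for a "complex deep neural network model".
model = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(16, 10),
)

# Use a GPU if one is available; this is where GPUs dominate today.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model.to(device)

optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()

# Random tensors stand in for a real labeled dataset.
inputs = torch.randn(64, 3, 32, 32, device=device)
labels = torch.randint(0, 10, (64,), device=device)

for epoch in range(5):
    optimizer.zero_grad()
    loss = loss_fn(model(inputs), labels)
    loss.backward()   # backpropagation: the compute-heavy step GPUs excel at
    optimizer.step()
    print(f"epoch {epoch}: loss {loss.item():.4f}")
```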

On the other hand, "inference" means taking an already well-trained model and using new data to "infer" conclusions from it. A prime example is video surveillance equipment that runs a deep neural network model in the backend to determine whether a detected face is on a blacklist. Although inference requires less computational power than training, it still involves a large number of matrix operations.
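
The surveillance example can be sketched in a few lines of PyTorch. The model, embedding size, and matching threshold below are illustrative placeholders rather than a real face-recognition system; the point is that inference boils down to a forward pass plus matrix operations.

```python
import torch
import torch.nn as nn

# Stands in for a trained face-embedding network.
embedder = nn.Sequential(
    nn.Flatten(),
    nn.Linear(3 * 112 * 112, 128),
)
embedder.eval()

# 1,000 stored embeddings stand in for the blacklist database.
blacklist = nn.functional.normalize(torch.randn(1000, 128), dim=1)

face = torch.randn(1, 3, 112, 112)  # one new face crop from the camera

with torch.no_grad():                    # inference: no gradients needed
    emb = nn.functional.normalize(embedder(face), dim=1)
    scores = emb @ blacklist.T           # the "large number of matrix operations"
    best = scores.max().item()

print("match" if best > 0.8 else "no match")  # threshold is an arbitrary placeholder
```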

Currently, mainstream artificial intelligence chips include GPUs, FPGAs, ASICs, and "human brain" chips that mimic the brain's basic learning mechanisms.

For more information about FPGA, ASIC, GPU, and "human brain" AI chips, see Understanding Mainstream Chips Used in Artificial Intelligence.

Related Blog Posts

Announcing Hanguang 800: Alibaba's First AI-Inference Chip

Hanguang 800 is Alibaba's first AI inference chip and the culmination of co-developed hardware and software. On the hardware side, the in-house chip design incorporates technologies such as inference acceleration to resolve traditional performance bottlenecks. On the software side, the chip integrates several algorithms developed at DAMO Academy and optimized specifically for convolutional neural network (CNN) and computer vision workloads, giving the compact neural processing unit (NPU) the capacity to complete the computing operations of a large neural network.
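
This article does not describe Hanguang 800's deployment toolchain, but as a general pattern, inference accelerators typically consume a trained CNN exported to an exchange format such as ONNX. Here is a hedged PyTorch sketch, with a stock ResNet-18 standing in for a trained model:

```python
import torch
import torchvision.models as models

# A stock CNN standing in for a trained production model
# (older torchvision versions use pretrained=False instead of weights=None).
model = models.resnet18(weights=None)
model.eval()

# The exported graph is traced with a fixed input shape.
dummy_input = torch.randn(1, 3, 224, 224)
torch.onnx.export(model, dummy_input, "resnet18.onnx", opset_version=13)
```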

Heterogeneous Computing: Dominated by GPU, FPGA, and ASIC Chips

Speaking in Hangzhou's Yunqi Township in October 2017, Alibaba Cloud's virtualization platform director Zhang Xiantao said that heterogeneous computing is dominated by a trio of chips: GPU, FPGA, and ASIC.

GPU processors are currently the mainstream in the heterogeneous computing field. Going forward, as the FPGA ecosystem continues to take shape and grow and ASIC chip technology gradually matures, heterogeneous computing will present a tripartite division among GPU, FPGA, and ASIC technologies. Each has its own unique advantages, applications, and customer base. In the future, Alibaba Cloud will release more products to expand its heterogeneous computing product family, including 8-card and 16-card GPU products, next-generation Volta-architecture GPU products, and a new generation of FPGA products. In addition, R&D is underway for cloud-based ASIC chip products.

Related Products

Machine Learning Platform for AI

Machine Learning Platform for AI provides end-to-end machine learning services, including data processing, feature engineering, model training, model prediction, and model evaluation. By combining all of these services, the platform makes AI more accessible than ever.
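
The platform's own SDK and console are not shown here; purely as an illustration of the stages listed above (data processing, feature engineering, training, prediction, evaluation), here is a generic scikit-learn pipeline:

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

X, y = load_iris(return_X_y=True)                      # data processing
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

pipe = make_pipeline(StandardScaler(),                 # feature engineering
                     LogisticRegression(max_iter=200))
pipe.fit(X_train, y_train)                             # model training

pred = pipe.predict(X_test)                            # model prediction
print("accuracy:", accuracy_score(y_test, pred))       # model evaluation
```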

MaxCompute

MaxCompute is a general-purpose, fully managed, multi-tenant data processing platform for large-scale data warehousing. MaxCompute supports various data import solutions and distributed computing models, enabling users to efficiently query massive datasets, reduce production costs, and ensure data security.
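
As a sketch of what querying such a platform from Python can look like, the following uses the PyODPS SDK; the credentials, project, and table names are placeholders, and the exact API details should be checked against the official documentation.

```python
from odps import ODPS

# Placeholder credentials, project, and endpoint.
o = ODPS("<access_id>", "<access_key>",
         project="my_project",
         endpoint="https://service.odps.aliyun.com/api")

# Run a SQL query against a hypothetical large table and stream the results.
instance = o.execute_sql("SELECT category, COUNT(*) AS cnt "
                         "FROM sales_log GROUP BY category")
with instance.open_reader() as reader:
    for record in reader:
        print(record["category"], record["cnt"])
```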

