Exploring the Potential of Low-bit Training of Convolutional Neural Networks

Kai Zhong, Xuefei Ning, Zhenhua Zhu, Tianchen Zhao, Shulin Zeng, Kaiyuan Guo, Yu Wang, Huazhong Yang

In this work, we propose a low-bit training framework for convolutional neural networks, built around a novel multi-level scaling (MLS) tensor format. Our framework focuses on reducing the energy consumption of convolution operations by quantizing all the convolution operands to a low bit-width format. Specifically, we propose the MLS tensor format, in which the element-wise bit-width can be largely reduced. We then describe the dynamic quantization and the low-bit tensor convolution arithmetic that leverage the MLS tensor format efficiently. Experiments show that our framework achieves a better trade-off between accuracy and bit-width than previous low-bit training frameworks. For training a variety of models on CIFAR-10, a 1-bit mantissa and a 2-bit exponent are adequate to keep the accuracy loss within 1%. On larger datasets such as ImageNet, a 4-bit mantissa and a 2-bit exponent are adequate to keep the accuracy loss within 1%. Through an energy consumption simulation of the computing units, we estimate that training a variety of models with our framework could achieve 8.3x-10.2x and 1.9x-2.3x higher energy efficiency than training with full-precision and 8-bit floating-point arithmetic, respectively.
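
The abstract does not specify how the MLS format decomposes a tensor into its scaling levels. As a rough illustration only, the sketch below assumes a three-level scheme (a per-tensor floating-point scale, a per-group shared low-bit exponent, and low-bit per-element mantissas); the function name `mls_quantize`, the `group_size` parameter, and the level structure are assumptions for illustration, not the paper's actual method.

```python
import numpy as np

def mls_quantize(x, mantissa_bits=4, exponent_bits=2, group_size=16):
    """Hypothetical multi-level scaling quantization sketch (not the paper's spec).

    Assumed levels:
      1. a per-tensor floating-point scale,
      2. a per-group shared exponent (exponent_bits wide),
      3. per-element signed mantissas (mantissa_bits wide).
    Requires x.size to be divisible by group_size.
    """
    flat = x.reshape(-1, group_size)

    # Level 1: per-tensor scale normalizes the largest magnitude to 1.
    tensor_scale = np.abs(flat).max() + 1e-12
    normed = flat / tensor_scale

    # Level 2: per-group shared power-of-two exponent, clipped to the exponent range.
    group_max = np.abs(normed).max(axis=1, keepdims=True) + 1e-12
    exp = np.clip(np.ceil(np.log2(group_max)), -(2 ** exponent_bits - 1), 0)
    group_scale = 2.0 ** exp

    # Level 3: per-element signed mantissa quantization.
    qmax = 2 ** mantissa_bits - 1
    mant = np.clip(np.round(normed / group_scale * qmax), -qmax, qmax)

    # Dequantized reconstruction, useful for checking the quantization error.
    dequant = (mant / qmax) * group_scale * tensor_scale
    return mant.astype(np.int8), exp.astype(np.int8), tensor_scale, dequant.reshape(x.shape)

# Usage: quantize a random weight tensor and inspect the reconstruction error.
w = np.random.randn(64, 16).astype(np.float32)
mant, exp, scale, w_hat = mls_quantize(w, mantissa_bits=4, exponent_bits=2)
print("max abs error:", np.abs(w - w_hat).max())
```

Under such a scheme, the per-element storage and multiply cost would be dominated by the few mantissa bits, while the shared exponents amortize the dynamic range over each group, which is consistent with the abstract's claim that the element-wise bit-width can be largely reduced.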
