Frequency Regularization: Restricting Information Redundancy of Convolutional Neural Networks

Chenqiu Zhao, Guanfang Dong, Shupei Zhang, Zijie Tan, Anup Basu

Convolutional neural networks have demonstrated impressive results in many computer vision tasks. However, the increasing size of these networks raises concerns about information redundancy in the network parameters. In this paper, we propose Frequency Regularization to restrict the non-zero elements of the network parameters in the frequency domain. The proposed approach operates at the tensor level and can be applied to almost any network architecture. Specifically, the tensors of parameters are maintained in the frequency domain, where high-frequency components can be eliminated by setting tensor elements to zero in zigzag order. The inverse discrete cosine transform (IDCT) is then used to reconstruct the spatial tensors for matrix operations during network training. Since the high-frequency components of images are known to be non-critical, a large proportion of the parameters can be set to zero when networks are trained with the proposed frequency regularization. Comprehensive evaluations on various state-of-the-art network architectures, including LeNet, AlexNet, VGG, ResNet, UNet, GAN, and VAE, demonstrate the effectiveness of the proposed frequency regularization. With a negligible accuracy decrease (less than 2\%), a LeNet5 with 0.4M parameters can be represented by 776 float16 numbers (over 1100$\times$ compression), and a UNet with 34M parameters can be represented by 2936 float16 numbers (over 20000$\times$ compression).
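The core mechanism described above can be illustrated with a minimal numpy sketch. This is not the authors' implementation; it assumes a single 2-D parameter tensor, keeps only the first `keep` coefficients in JPEG-style zigzag order, and reconstructs the spatial tensor with an orthonormal 2-D IDCT. The function names (`dct_matrix`, `zigzag_mask`, `reconstruct`) are hypothetical.

```python
import numpy as np

def dct_matrix(n):
    # Orthonormal DCT-II basis matrix C, so that freq = C @ x for a 1-D signal x.
    k = np.arange(n)[:, None]   # frequency index
    i = np.arange(n)[None, :]   # spatial index
    C = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * i + 1) * k / (2 * n))
    C[0, :] = np.sqrt(1.0 / n)  # DC row scaling for orthonormality
    return C

def zigzag_mask(n, keep):
    # Boolean mask that keeps the first `keep` coefficients in zigzag order,
    # i.e. the lowest-frequency components of an n x n coefficient tensor.
    order = sorted(((r, c) for r in range(n) for c in range(n)),
                   key=lambda rc: (rc[0] + rc[1],
                                   rc[0] if (rc[0] + rc[1]) % 2 else rc[1]))
    mask = np.zeros((n, n), dtype=bool)
    for r, c in order[:keep]:
        mask[r, c] = True
    return mask

def reconstruct(freq_params, keep):
    # Zero out high-frequency coefficients, then apply a 2-D IDCT to obtain
    # the spatial-domain parameter tensor used in the forward pass.
    n = freq_params.shape[0]
    C = dct_matrix(n)
    masked = freq_params * zigzag_mask(n, keep)
    return C.T @ masked @ C   # inverse of the orthonormal 2-D DCT
```

During training, only the masked frequency-domain coefficients would be stored and updated, while `reconstruct` produces the spatial weights on the fly; this is why the number of stored values can be far smaller than the nominal parameter count.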
