Beyond BatchNorm: Towards a General Understanding of Normalization in Deep Learning

Ekdeep Singh Lubana, Robert P. Dick, Hidenori Tanaka

Inspired by BatchNorm, there has been an explosion of normalization layers in deep learning. Recent works have identified a multitude of beneficial properties of BatchNorm to explain its success. However, given the pursuit of alternative normalization techniques, these properties need to be generalized so that any given layer's success or failure can be accurately predicted. In this work, we take a first step towards this goal by extending known properties of BatchNorm in randomly initialized deep neural networks (DNNs) to nine recently proposed normalization layers. Our primary findings are as follows: (i) similar to BatchNorm, activations-based normalization layers can avoid exploding activations in ResNets; (ii) the use of GroupNorm ensures that the rank of activations is at least $\Omega\left(\sqrt{\frac{\text{width}}{\text{group size}}}\right)$, which explains why LayerNorm suffers from slow optimization; (iii) small group sizes result in large gradient norms in earlier layers, which explains the training instability of Instance Normalization and illustrates a speed-stability tradeoff in GroupNorm. Overall, our analysis reveals several general mechanisms that explain the success of normalization techniques in deep learning, providing us with a compass to systematically explore the vast design space of DNN normalization layers.
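To make finding (ii) concrete, here is a minimal, hedged sketch (not the paper's exact experiment; depth, width, batch size, and the fully-connected architecture are illustrative assumptions) that measures the numerical rank of the final activations at random initialization for GroupNorm layers with different numbers of groups. Since group size = width / num_groups, the stated bound suggests that more groups should keep the representations higher rank, with `num_groups=1` behaving like LayerNorm; exact numbers will vary with the chosen hyperparameters.

```python
import torch
import torch.nn as nn

def activation_rank_at_init(num_groups, width=256, depth=20, batch=512, seed=0):
    """Numerical rank of the last layer's activations for a randomly initialized net."""
    torch.manual_seed(seed)
    blocks = []
    for _ in range(depth):
        blocks += [
            nn.Linear(width, width, bias=False),
            nn.GroupNorm(num_groups, width),  # num_groups=1 acts like LayerNorm over features
            nn.ReLU(),
        ]
    net = nn.Sequential(*blocks)
    with torch.no_grad():
        h = net(torch.randn(batch, width))  # (batch, width) activation matrix
    return torch.linalg.matrix_rank(h).item()

# group size = width // num_groups; smaller groups -> higher predicted rank lower bound
for g in [1, 4, 16, 64]:
    print(f"num_groups={g:3d}  rank={activation_rank_at_init(g)}")
```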

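Finding (iii) can be probed in a similar spirit. The sketch below (again an illustrative assumption, not the paper's setup) builds a plain convolutional stack with GroupNorm and compares the gradient norm of the first layer under a random regression loss as the number of groups varies; `num_groups=channels` corresponds to Instance Normalization (group size 1), where the earlier layers are expected to receive the largest gradients.

```python
import torch
import torch.nn as nn

def first_layer_grad_norm(num_groups, channels=64, depth=10, batch=32, seed=0):
    """Gradient norm of the first conv layer at init, under a random regression target."""
    torch.manual_seed(seed)
    blocks = []
    for _ in range(depth):
        blocks += [
            nn.Conv2d(channels, channels, 3, padding=1, bias=False),
            nn.GroupNorm(num_groups, channels),
            nn.ReLU(),
        ]
    net = nn.Sequential(*blocks)
    x = torch.randn(batch, channels, 16, 16)
    out = net(x)
    loss = nn.functional.mse_loss(out, torch.randn_like(out))
    loss.backward()
    return net[0].weight.grad.norm().item()

# num_groups=1 ~ LayerNorm, num_groups=channels ~ InstanceNorm (group size 1)
for g in [1, 8, 64]:
    print(f"num_groups={g:2d}  ||grad(first conv)|| = {first_layer_grad_norm(g):.3e}")
```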