RobustNet: Improving Domain Generalization in Urban-Scene Segmentation via Instance Selective Whitening

Sungha Choi, Sanghun Jung, Huiwon Yun, Joanne Kim, Seungryong Kim, Jaegul Choo

Enhancing the generalization performance of deep neural networks in the real world (i.e., unseen domains) is crucial for safety-critical applications such as autonomous driving. To address this issue, this paper proposes a novel instance selective whitening loss to improve the robustness of segmentation networks on unseen domains. Our approach disentangles the domain-specific style and domain-invariant content encoded in higher-order statistics (i.e., feature covariance) of the feature representations and selectively removes only the style information causing domain shift. As shown in Fig. 1, our method provides reasonable predictions for (a) low-illumination, (b) rainy, and (c) previously unseen scene images. These types of images are not included in the training dataset, and on them the baseline shows a significant performance drop, in contrast to ours. Simple yet effective, our approach improves the robustness of various backbone networks without additional computational cost. We conduct extensive experiments on urban-scene segmentation and show the superiority of our approach over existing work. Our code is available at https://github.com/shachoi/RobustNet.
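To make the idea concrete, below is a minimal PyTorch sketch of a selective whitening loss built from the description above: compute the instance-wise feature covariance and penalize only a subset of its entries, leaving the remaining (content-related) correlations intact. This is an illustration, not the paper's implementation; the function name and the assumption that a precomputed binary mask already marks the style-sensitive covariance entries are hypothetical (the paper's own selection procedure is in the linked repository).

```python
import torch

def instance_selective_whitening_loss(feat: torch.Tensor, mask: torch.Tensor) -> torch.Tensor:
    """Hypothetical sketch of a selective whitening loss.

    feat: (B, C, H, W) feature map from an intermediate layer.
    mask: (C, C) binary mask marking covariance entries assumed to encode
          domain-specific style (its construction is not shown here).
    """
    B, C, H, W = feat.shape
    x = feat.view(B, C, -1)                           # (B, C, H*W)
    x = x - x.mean(dim=-1, keepdim=True)              # center each channel per instance
    cov = torch.bmm(x, x.transpose(1, 2)) / (H * W)   # (B, C, C) instance feature covariance
    # Suppress only the selected covariance entries, so the loss removes
    # style information while leaving content correlations untouched.
    return (cov * mask).abs().sum(dim=(1, 2)).mean() / mask.sum().clamp(min=1)
```

Penalizing a masked subset of covariance entries, rather than whitening the full covariance matrix, is what makes the whitening "selective": correlations that carry domain-invariant content are left untouched.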
