Content-Augmented Feature Pyramid Network with Light Linear Spatial Transformers for Object Detection

Yongxiang Gu, Xiaolin Qin, Yuncong Peng, Lu Li

As one of the prevalent components, Feature Pyramid Network (FPN) is widely used in current object detection models to improve multi-scale detection performance. However, its feature interaction is still local and lossy, which limits its representation power. In this paper, to simulate the global view of human vision in object detection and address the inherent defects of the interaction mode in FPN, we construct a novel architecture termed Content-Augmented Feature Pyramid Network (CA-FPN). Unlike the vanilla FPN, which fuses features within a local receptive field, CA-FPN can adaptively aggregate similar features from a global view. It is equipped with a global content extraction module and light linear spatial transformers: the former extracts multi-scale context information, and the latter deeply combines the global content extraction module with the vanilla FPN through a linearized attention function designed to reduce model complexity. Furthermore, CA-FPN can be readily plugged into existing FPN-based models. Extensive experiments on the challenging COCO and PASCAL VOC object detection datasets demonstrate that CA-FPN significantly outperforms competitive FPN-based detectors without bells and whistles. When plugged into a Cascade R-CNN framework built upon a standard ResNet-50 backbone, our method achieves 44.8 AP on COCO mini-val, surpassing the previous state-of-the-art by 1.5 AP and demonstrating its potential for practical application.
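The abstract does not spell out the exact linearization, but the key idea behind a linearized attention function is to replace the quadratic softmax attention over all spatial positions with a kernel feature map so that keys and values can be aggregated before being queried, reducing cost from O(N^2) to O(N) in the number of positions. The sketch below is a minimal illustration of this general technique, assuming the feature map phi(x) = elu(x) + 1 from Katharopoulos et al.; the function name `linear_attention` and the tensor shapes are illustrative and may not match the paper's actual formulation.

```python
import torch
import torch.nn.functional as F


def linear_attention(q, k, v, eps=1e-6):
    """Kernelized linear attention over flattened spatial positions.

    q, k: (batch, n_positions, dim); v: (batch, n_positions, dim_v).
    Assumes phi(x) = elu(x) + 1 as the kernel feature map (an assumption,
    not necessarily the linearization used in CA-FPN).
    """
    q = F.elu(q) + 1  # phi(Q): strictly positive features
    k = F.elu(k) + 1  # phi(K)

    # Aggregate keys and values first: a (dim x dim_v) summary per batch,
    # avoiding the (n_positions x n_positions) attention matrix.
    kv = torch.einsum('bnd,bne->bde', k, v)

    # Row-wise normalizer: phi(Q) dotted with the sum of phi(K).
    z = 1.0 / (torch.einsum('bnd,bd->bn', q, k.sum(dim=1)) + eps)

    # Query the aggregated summary and normalize.
    return torch.einsum('bnd,bde,bn->bne', q, kv, z)


# Usage example: 1024 spatial positions (e.g. a flattened 32x32 feature map).
q = k = v = torch.randn(2, 1024, 64)
out = linear_attention(q, k, v)  # shape (2, 1024, 64)
```

Because the key-value summary has a fixed size independent of the number of positions, this form of attention scales to the large flattened feature maps produced by an FPN, which is the motivation the abstract gives for using a linearized attention function.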
