RaP-Net: A Region-wise and Point-wise Weighting Network to Extract Robust Features for Indoor Localization

Dongjiang Li, Jinyu Miao, Xuesong Shi, Yuxin Tian, Qiwei Long, Tianyu Cai, Ping Guo, Hongfei Yu, Wei Yang, Haosong Yue, Qi Wei, Fei Qiao

Feature extraction plays an important role in visual localization. Unreliable features on dynamic objects or in repetitive regions disturb robust feature matching and thus pose a great challenge to indoor localization. To overcome this issue, we propose a novel network, RaP-Net, which simultaneously predicts region-wise invariability and point-wise reliability, and then extracts features by considering both. We also introduce a new dataset, named OpenLORIS-Location, to train the proposed network. The dataset contains 1553 indoor images from 93 indoor locations. It covers various appearance changes between images of the same location, which helps the network learn invariability in typical indoor scenes. Experimental results show that RaP-Net trained with the OpenLORIS-Location dataset achieves excellent performance in the feature matching task and significantly outperforms state-of-the-art feature extraction algorithms in indoor localization. The RaP-Net code and dataset are available at https://github.com/ivipsourcecode/RaP-Net.
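The abstract describes weighting point-wise feature reliability by region-wise invariability before selecting keypoints. The sketch below illustrates one plausible way such a combination could work; the elementwise-product rule, the function name `select_keypoints`, and the synthetic score maps are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def select_keypoints(point_reliability, region_invariability, k=5):
    """Weight point-wise scores by region-wise scores and return the
    top-k keypoint coordinates as (row, col) tuples.

    Both inputs are HxW score maps; the combination rule (elementwise
    product) is an assumed stand-in for the network's actual weighting.
    """
    assert point_reliability.shape == region_invariability.shape
    # Down-weight responses in dynamic or repetitive regions.
    combined = point_reliability * region_invariability
    # Indices of the k highest combined scores.
    flat_idx = np.argsort(combined, axis=None)[::-1][:k]
    return [tuple(np.unravel_index(i, combined.shape)) for i in flat_idx]

rng = np.random.default_rng(0)
h, w = 8, 8
point_scores = rng.random((h, w))   # e.g. a detector response per pixel
region_weights = np.ones((h, w))
region_weights[:, :4] = 0.0         # suppose the left half is a dynamic region

kps = select_keypoints(point_scores, region_weights, k=5)
print(kps)
```

With the left half fully suppressed, all selected keypoints fall in the right (reliable) half of the image, mimicking how region-wise weights would steer matching away from unstable areas.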
