SaNet: Scale-Aware Neural Network for Parsing Multiple Spatial Resolution Aerial Images

Libo Wang

Assigning categorical information to the geospatial objects of aerial images at the pixel level is a basic task in urban scene understanding. However, the huge difference among remote sensing sensors means the acquired aerial images come at multiple spatial resolutions (MSR), which brings two issues: increased scale variation of geospatial objects and informative feature loss as spatial resolution drops. To address these two issues, we propose a novel scale-aware neural network (SaNet) for parsing MSR aerial images. To cope with the imbalanced segmentation quality between larger and smaller objects caused by scale variation, SaNet deploys a densely connected feature pyramid network (DCFPN) module to capture high-quality multi-scale context with large receptive fields. To alleviate the informative feature loss, an SFR module is incorporated into the network to learn scale-invariant features with spatial relation enhancement. Extensive experimental results on the ISPRS Vaihingen 2D dataset and the ISPRS Potsdam 2D dataset demonstrate the outstanding cross-resolution segmentation ability of the proposed SaNet compared with other state-of-the-art networks.
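The abstract does not specify DCFPN's exact configuration, but its claim of "large receptive fields" is typically achieved by stacking dilated convolutions. The sketch below is a minimal illustration of that general principle, not the paper's implementation: the kernel sizes and dilation rates are hypothetical, chosen only to show how quickly stacked dilated layers enlarge the receptive field compared with undilated ones.

```python
def receptive_field(kernel_sizes, dilations):
    """Receptive field of a stack of stride-1 dilated convolutions.

    Each layer with kernel size k and dilation d enlarges the
    receptive field by (k - 1) * d pixels.
    """
    rf = 1
    for k, d in zip(kernel_sizes, dilations):
        rf += (k - 1) * d
    return rf

# Hypothetical stack of four 3x3 convolutions with growing dilation rates:
print(receptive_field([3, 3, 3, 3], [1, 2, 4, 8]))  # -> 31
# The same stack without dilation covers far less context:
print(receptive_field([3, 3, 3, 3], [1, 1, 1, 1]))  # -> 9
```

With the same parameter count, the dilated stack sees a 31-pixel-wide context window instead of 9, which is why dilation-based pyramid modules are a common way to capture multi-scale context for objects of very different sizes.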
