Multistage Attention ResU-Net for Semantic Segmentation of Fine-Resolution Remote Sensing Images

Rui Li, Jianlin Su, Chenxi Duan, Shunyi Zheng

The memory and computation costs of the dot-product attention mechanism widely used in vision and language tasks increase quadratically with the spatio-temporal size of the input. Such growth hinders the use of attention mechanisms in application scenarios with large inputs. In this Letter, to remedy this deficiency, we propose a Linear Attention Mechanism (LAM) that approximates dot-product attention with dramatically lower memory and computation costs. This efficient design makes the incorporation of attention mechanisms into neural networks more flexible and versatile. Based on the proposed LAM, we refactor the skip connections in the raw U-Net and design a Multistage Attention ResU-Net (MAResU-Net) for semantic segmentation of fine-resolution remote sensing images. Experiments conducted on the Vaihingen dataset demonstrate the effectiveness of our MAResU-Net. Code is available at https://github.com/lironui/Multistage-Attention-ResU-Net.
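To illustrate the cost argument in the abstract, the sketch below contrasts quadratic dot-product attention with a generic linearised attention that reorders the computation so the cost grows linearly in the sequence length. It is a minimal sketch assuming a simple positive kernel feature map (elu + 1); the paper's exact LAM formulation may differ, and the function name `linear_attention` is illustrative rather than taken from the released code.

```python
import torch
import torch.nn.functional as F

def linear_attention(q, k, v, eps=1e-6):
    """Generic linearised attention sketch (not necessarily the paper's LAM).

    q, k, v: tensors of shape (batch, n, d). Standard dot-product attention
    forms an (n x n) similarity matrix, costing O(n^2 d) time and O(n^2)
    memory. By applying a kernel feature map phi and regrouping the products,
    the same kind of aggregation costs O(n d^2).
    """
    # phi(x) = elu(x) + 1 keeps features positive, a common choice for
    # kernel-based linear attention (an assumption here, not the paper's choice).
    q = F.elu(q) + 1.0
    k = F.elu(k) + 1.0

    # Aggregate keys and values once: phi(K)^T V is (batch, d, d),
    # phi(K)^T 1 is (batch, d). Both are independent of n after the sum.
    kv = torch.einsum('bnd,bne->bde', k, v)
    k_sum = k.sum(dim=1)

    # Each query then attends via the d x d summary instead of all n keys.
    num = torch.einsum('bnd,bde->bne', q, kv)
    den = torch.einsum('bnd,bd->bn', q, k_sum).unsqueeze(-1) + eps
    return num / den

# Usage: n could be the number of pixels in a flattened feature map,
# which is where the linear scaling matters for fine-resolution imagery.
q = k = v = torch.randn(2, 1024, 64)
out = linear_attention(q, k, v)  # shape (2, 1024, 64)
```

The key design point is associativity: computing phi(K)^T V first replaces the n x n attention map with a d x d summary, which is what makes attention-augmented skip connections affordable at every stage of a U-Net-style decoder.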
