Deep Single Image Deraining Using an Asymmetric Cycle Generative and Adversarial Framework

Wei Liu, Rui Jiang, Cheng Chen, Tao Lu, Zixiang Xiong

In reality, rain and fog often occur together, greatly reducing the clarity and quality of scene images. However, most unsupervised single-image deraining methods focus on rain-streak removal and disregard fog, which degrades deraining performance. In addition, the samples generated by these methods are rather homogeneous and lack diversity, leading to poor results on complex rain scenes. To address these issues, we propose a novel Asymmetric Cycle Generative and Adversarial Framework (ACGF) for single-image deraining that trains on both synthetic and real rainy images while simultaneously capturing both rain-streak and fog features. ACGF consists of a Rain-fog2Clean (R2C) transformation block and a Clean2Rain-fog (C2R) transformation block. The former comprises a parallel rain-removal path and a rain-fog feature extraction path, realized by the rain and derain-fog networks and the attention rain-fog feature extraction network (ARFE), while the latter contains only a synthetic rain transformation path. In the rain-fog feature extraction path, to better characterize the fused rain-fog feature, we employ ARFE to exploit the self-similarity of global and local rain-fog information by learning spatial feature correlations. Moreover, to improve the translation capacity of C2R and the diversity of generated samples, we design a rain-fog feature decoupling and reorganization network (RFDR) that embeds a rainy-image degradation model and a mixed discriminator to preserve richer texture details in the synthetic rain conversion path. Extensive experiments on benchmark rain-fog and rain datasets show that ACGF outperforms state-of-the-art deraining methods. We also conduct defogging evaluation experiments to further demonstrate the effectiveness of ACGF.
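The asymmetric R2C/C2R cycle described above can be sketched structurally in a few lines of plain Python. Everything here is an illustrative assumption, not the paper's implementation: the function names, the toy feature extractor standing in for ARFE, and the simple additive degradation model standing in for RFDR are all placeholders chosen only to show how the cycle-consistency signal is formed.

```python
# Hypothetical sketch of ACGF's asymmetric cycle (all names and operations
# are placeholder assumptions, not the paper's networks).

def r2c(rainy):
    """Rain-fog2Clean: parallel paths yield a clean estimate and a
    rain-fog feature (toy stand-ins for the derain network and ARFE)."""
    rain_fog = [0.3 * v for v in rainy]               # placeholder feature path
    clean = [v - f for v, f in zip(rainy, rain_fog)]  # placeholder derain path
    return clean, rain_fog

def c2r(clean, rain_fog):
    """Clean2Rain-fog: re-synthesize a rainy image from the clean estimate
    and the extracted feature (additive degradation model as a placeholder)."""
    return [c + f for c, f in zip(clean, rain_fog)]

def cycle_loss(rainy):
    """L1 cycle-consistency between the input and its reconstruction."""
    clean, feat = r2c(rainy)
    recon = c2r(clean, feat)
    return sum(abs(a - b) for a, b in zip(rainy, recon)) / len(rainy)

loss = cycle_loss([0.8, 0.5, 0.9])
```

Because the placeholder degradation model is the exact inverse of the placeholder derain path, the loss here is (numerically) zero; in the actual framework the two blocks are learned and asymmetric, and this loss drives them toward consistency.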
