SLIC: Self-Conditioned Adaptive Transform with Large-Scale Receptive Fields for Learned Image Compression

Wei Jiang, Peirong Ning, Ronggang Wang

Learned image compression has achieved remarkable performance. The transform plays an important role in boosting rate-distortion (RD) performance: the analysis transform converts the input image to a compact latent representation, and the more compact the latent representation is, the fewer bits are needed to compress it. When designing better transforms, some previous works adopt the Swin Transformer, whose success in image compression can be attributed to its dynamic weights and large receptive field. However, the LayerNorm adopted in Transformers is not suitable for image compression. We find that CNN-based modules can also be dynamic and have large receptive fields, and that they work well with GDN/IGDN. To make a CNN-based module dynamic, we generate the kernel weights conditioned on the input feature; to enlarge the receptive field, we scale up the kernel size; and to reduce complexity, we make the module channel-wise connected. We call this module dynamic depth-wise convolution. We replace the self-attention module with the proposed dynamic depth-wise convolution, replace the embedding layer with a depth-wise residual bottleneck for non-linearity, and replace the FFN layer with an inverted residual bottleneck for more interactions in the spatial domain. Since the interactions among channels of dynamic depth-wise convolution are limited, we design another block that replaces the dynamic depth-wise convolution with channel attention. We equip the analysis and synthesis transforms with the proposed modules to obtain a more compact latent representation, and propose the learned image compression model SLIC (Self-Conditioned Adaptive Transform with Large-Scale Receptive Fields for Learned Image Compression). Thanks to the proposed transform modules, SLIC achieves a 6.35% BD-rate reduction over VVC when measured in PSNR on the Kodak dataset.
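
To make the core idea concrete, below is a minimal PyTorch sketch of a self-conditioned (dynamic) depth-wise convolution: kernels are predicted from a global summary of the input feature and applied channel-wise with a large kernel. The kernel-generation path, the default kernel size, and all names here are illustrative assumptions, not the paper's exact design.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class DynamicDepthwiseConv(nn.Module):
    """Hypothetical sketch of a dynamic depth-wise convolution.

    The kernel weights are generated conditioned on the input feature,
    so the operation is content-adaptive; the depth-wise (channel-wise
    connected) structure keeps complexity low.
    """

    def __init__(self, channels: int, kernel_size: int = 7):
        super().__init__()
        self.kernel_size = kernel_size
        # Predict one kernel_size x kernel_size kernel per channel
        # from a pooled summary of the input (assumed design).
        self.kernel_gen = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),                                   # (B, C, 1, 1)
            nn.Conv2d(channels, channels, 1),
            nn.GELU(),
            nn.Conv2d(channels, channels * kernel_size * kernel_size, 1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.shape
        k = self.kernel_size
        # One depth-wise kernel per (sample, channel) pair.
        kernels = self.kernel_gen(x).reshape(b * c, 1, k, k)
        # Fold the batch into the channel axis so a single grouped conv
        # applies a distinct kernel to each channel of each sample.
        out = F.conv2d(x.reshape(1, b * c, h, w), kernels,
                       padding=k // 2, groups=b * c)
        return out.reshape(b, c, h, w)


x = torch.randn(2, 192, 16, 16)
y = DynamicDepthwiseConv(192)(x)  # output has the same shape as x
```

Because the kernels depend on the input, the module is content-adaptive in the spirit of self-attention, while the grouped (depth-wise) structure keeps the cost roughly linear in the number of channels and a large kernel size widens the receptive field.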
