Unsupervised Many-to-Many Image-to-Image Translation Across Multiple Domains

Ye Lin, Keren Fu, Shenggui Ling, Cheng Peng

Unsupervised multi-domain image-to-image translation aims to synthesize images across multiple domains without labeled data, a task more general and challenging than one-to-one image mapping. However, existing methods mainly focus on reducing the large cost of modeling and pay insufficient attention to the quality of the generated images: in some target domains, their translation results may be unsatisfactory or even suffer from mode collapse. To improve image quality, we propose an effective many-to-many mapping framework for unsupervised multi-domain image-to-image translation. Our method has two key aspects. The first is a many-to-many architecture with a single domain-shared encoder and several domain-specialized decoders, which translates images across multiple domains effectively and simultaneously. The second is a pair of constraints, extended from one-to-one mappings, that further improve generation quality. All evaluations demonstrate that our framework is superior to existing methods and provides an effective solution for multi-domain image-to-image translation.
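The core architectural idea in the abstract, one domain-shared encoder feeding several domain-specialized decoders, can be sketched in a few lines. The following is a minimal illustrative sketch, not the authors' implementation: all class names, layer sizes, and the use of simple linear maps are assumptions made here for clarity.

```python
import numpy as np

class SharedEncoder:
    """Maps a flattened image into a domain-shared latent code (toy linear layer)."""
    def __init__(self, in_dim, latent_dim, seed=0):
        rng = np.random.default_rng(seed)
        self.W = rng.standard_normal((in_dim, latent_dim)) * 0.01

    def __call__(self, x):
        return np.tanh(x @ self.W)

class DomainDecoder:
    """Maps a shared latent code to an image in one specific target domain."""
    def __init__(self, latent_dim, out_dim, seed=0):
        rng = np.random.default_rng(seed)
        self.W = rng.standard_normal((latent_dim, out_dim)) * 0.01

    def __call__(self, z):
        return np.tanh(z @ self.W)

class ManyToManyTranslator:
    """One shared encoder plus one decoder per domain: any source -> any target."""
    def __init__(self, domains, img_dim=64, latent_dim=16):
        self.encoder = SharedEncoder(img_dim, latent_dim)
        # One decoder per target domain; the encoder is shared by all of them.
        self.decoders = {d: DomainDecoder(latent_dim, img_dim, seed=i)
                         for i, d in enumerate(domains)}

    def translate(self, x, target_domain):
        z = self.encoder(x)                     # domain-shared representation
        return self.decoders[target_domain](z)  # domain-specialized decoding

model = ManyToManyTranslator(["summer", "winter", "night"])
x = np.zeros(64)                 # stand-in for a flattened input image
y = model.translate(x, "winter")
print(y.shape)
```

The design choice this illustrates is that adding a new domain only requires adding one decoder, rather than a full encoder-decoder pair for every domain combination, which is where the reduction in modeling cost comes from.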
