Generating images from caption and vice versa via CLIP-Guided Generative Latent Space Search

Federico A. Galatolo, Mario G. C. A. Cimino, Gigliola Vaglini

In this research work we present GLaSS, a novel zero-shot framework to generate an image (or a caption) corresponding to a given caption (or image). GLaSS is based on the CLIP neural network, which, given an image and a descriptive caption, provides similar embeddings for both. Conversely, GLaSS takes a caption (or an image) as input and generates the image (or the caption) whose CLIP embedding is most similar to that of the input. This optimal image (or caption) is produced by a generative network after an exploration of its latent space by a genetic algorithm. Promising results are shown, based on experiments with the image generators BigGAN and StyleGAN2 and the text generator GPT-2.
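To make the caption-to-image direction concrete, the following is a minimal sketch of CLIP-guided latent space search, not the authors' implementation. It assumes OpenAI's `clip` package, and that `generator` is any pretrained image generator (e.g. a BigGAN or StyleGAN2 wrapper) mapping a batch of latent vectors to image tensors in [0, 1]; `latent_dim`, the population size, and the mutation scale `sigma` are illustrative hyperparameters.

```python
# Hypothetical sketch of CLIP-guided genetic search over a generator's
# latent space (assumed API: `generator(latents) -> (N, 3, H, W) in [0, 1]`).
import torch
import torch.nn.functional as F
import clip

device = "cuda" if torch.cuda.is_available() else "cpu"
model, _ = clip.load("ViT-B/32", device=device)

# CLIP's published input normalization constants.
MEAN = torch.tensor([0.48145466, 0.4578275, 0.40821073], device=device).view(3, 1, 1)
STD = torch.tensor([0.26862954, 0.26130258, 0.27577711], device=device).view(3, 1, 1)

def clip_score(images, text_features):
    """Cosine similarity between generated images and the caption embedding."""
    images = F.interpolate(images, size=224, mode="bilinear", align_corners=False)
    images = ((images - MEAN) / STD).to(model.dtype)
    image_features = model.encode_image(images)
    image_features = image_features / image_features.norm(dim=-1, keepdim=True)
    return (image_features @ text_features.T).squeeze(-1)

def search(generator, caption, latent_dim=128, pop_size=64, elite=8,
           generations=200, sigma=0.2):
    """Evolve latent vectors toward the caption's CLIP embedding."""
    tokens = clip.tokenize([caption]).to(device)
    with torch.no_grad():
        text_features = model.encode_text(tokens)
        text_features = text_features / text_features.norm(dim=-1, keepdim=True)

        population = torch.randn(pop_size, latent_dim, device=device)
        for _ in range(generations):
            scores = clip_score(generator(population), text_features)
            # Keep the highest-scoring latents, then mutate them with
            # Gaussian noise to produce the next generation.
            elites = population[scores.argsort(descending=True)[:elite]]
            idx = torch.randint(0, elite, (pop_size - elite,), device=device)
            offspring = elites[idx] + sigma * torch.randn(
                pop_size - elite, latent_dim, device=device)
            population = torch.cat([elites, offspring])
        best = population[clip_score(generator(population), text_features).argmax()]
    return generator(best.unsqueeze(0))
```

The caption-to-image direction shown here swaps roles symmetrically for image-to-caption: the search then runs over a text generator's input space, scoring candidate captions by their CLIP text embedding's similarity to the input image's embedding.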
