Sparsifying Transformer Models with Trainable Representation Pooling

Michał Pietruszka, Łukasz Borchmann, Łukasz Garncarek

We propose a novel method to sparsify attention in the Transformer model by learning to select the most informative token representations during the training process, thus focusing on task-specific parts of the input. We reduce the quadratic time and memory complexity to sublinear by means of a robust trainable top-k operator. For example, our experiments on a challenging long-document summarization task show that our method is over 3 times faster and up to 16 times more memory-efficient, while significantly outperforming both dense and state-of-the-art sparse Transformer models. The method can be effortlessly applied to many models used in NLP and CV, simultaneously with other improvements.
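To make the idea of trainable representation pooling concrete, the sketch below shows one simple way a learned top-k selection over token representations can look in PyTorch. This is not the operator from the paper, only an illustration of the general pattern: a learned scorer ranks tokens, the k highest-scoring representations are kept, and gradients reach the scorer by re-weighting the kept representations with their scores. All names (`TopKPooling`, `scorer`, `k`) are illustrative assumptions.

```python
import torch
import torch.nn as nn


class TopKPooling(nn.Module):
    """Illustrative sketch of trainable top-k pooling (not the paper's operator).

    A linear scorer assigns each token representation a relevance score; the
    k highest-scoring tokens are kept and re-weighted by their (sigmoid)
    scores so that the scorer receives gradients through the selection.
    """

    def __init__(self, hidden_dim: int, k: int):
        super().__init__()
        self.k = k
        self.scorer = nn.Linear(hidden_dim, 1)

    def forward(self, hidden_states: torch.Tensor) -> torch.Tensor:
        # hidden_states: (batch, seq_len, hidden_dim)
        scores = self.scorer(hidden_states).squeeze(-1)            # (batch, seq_len)
        weights = torch.sigmoid(scores)
        topk = torch.topk(weights, k=self.k, dim=-1)               # keep k best tokens
        idx = topk.indices.unsqueeze(-1).expand(-1, -1, hidden_states.size(-1))
        selected = torch.gather(hidden_states, dim=1, index=idx)   # (batch, k, hidden_dim)
        # Multiplying by the differentiable scores lets the scorer be trained.
        return selected * topk.values.unsqueeze(-1)


# Usage: shrink a 4096-token sequence to 256 representations before attention,
# so subsequent attention layers operate on a much shorter sequence.
pool = TopKPooling(hidden_dim=768, k=256)
x = torch.randn(2, 4096, 768)
print(pool(x).shape)  # torch.Size([2, 256, 768])
```

Because downstream attention layers see only the k pooled representations instead of the full sequence, their cost no longer scales quadratically with the original input length, which is the source of the speed and memory savings reported above.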
