Beyond Nyströmformer – Approximation of self-attention by Spectral Shifting

Madhusudan Verma

The Transformer is a powerful tool for many natural language tasks. It is based on self-attention, a mechanism that encodes the dependence of each token on all the other tokens, but computing self-attention is a bottleneck due to its quadratic time complexity. Various approaches exist to reduce this complexity, and matrix approximation is one of them. In Nyströmformer, the authors used a Nyström-based method to approximate the softmax attention matrix. The Nyström method generates a fast approximation to any large-scale symmetric positive semidefinite (SPSD) matrix using only a few of its columns. However, since the Nyström approximation is low-rank, its accuracy is poor when the spectrum of the SPSD matrix decays slowly. Here an alternative approximation method based on spectral shifting is proposed, which has a much stronger error bound than the Nyström method. Its time complexity is the same as that of Nyströmformer, namely $O(n)$.
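For intuition, the standard Nyström reconstruction of an SPSD matrix $S$ from $m$ sampled landmark columns is $\tilde{S} = C W^{\dagger} C^{\top}$, where $C$ holds the sampled columns and $W$ is their intersection with the corresponding rows. The sketch below illustrates how this idea is applied to the softmax attention matrix in the Nyströmformer style; it is a minimal illustration, not the paper's exact construction. In particular, the uniform landmark sampling and all function names here are assumptions (Nyströmformer itself uses segment means as landmarks), and the spectral-shifting correction proposed in this paper is not included.

```python
import numpy as np

def softmax(x):
    # Row-wise softmax, numerically stabilized.
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def nystrom_attention(Q, K, V, m=32, seed=0):
    """Rank-m Nystrom-style approximation of softmax(Q K^T / sqrt(d)) V.

    Illustrative sketch: landmarks are picked by uniform sampling here,
    whereas Nystromformer uses segment means. Only n-by-m and m-by-n
    score blocks are ever formed, so the cost is O(n m d), i.e. linear
    in the sequence length n for a fixed number of landmarks m.
    """
    n, d = Q.shape
    rng = np.random.default_rng(seed)
    idx = rng.choice(n, size=m, replace=False)   # landmark indices
    Qm, Km = Q[idx], K[idx]                      # m x d landmark queries/keys

    scale = np.sqrt(d)
    C = softmax(Q  @ Km.T / scale)               # n x m  ("columns" block)
    W = softmax(Qm @ Km.T / scale)               # m x m  (landmark block)
    R = softmax(Qm @ K.T  / scale)               # m x n  ("rows" block)

    # C @ pinv(W) @ R approximates the full n x n softmax attention matrix;
    # multiplying by V right-to-left avoids ever materializing an n x n matrix.
    return C @ (np.linalg.pinv(W) @ (R @ V))

# Example: 1024-token sequence, 64-dimensional heads.
if __name__ == "__main__":
    rng = np.random.default_rng(1)
    Q, K, V = (rng.standard_normal((1024, 64)) for _ in range(3))
    out = nystrom_attention(Q, K, V)
    print(out.shape)  # (1024, 64)
```

Roughly speaking, a spectral-shifting variant of this scheme subtracts a constant shift $\delta$ from the spectrum of the SPSD matrix before the Nyström reconstruction and adds $\delta I$ back afterwards, which tightens the error bound when the spectrum decays slowly; the precise shift used here is the paper's contribution and is not reproduced in this sketch.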
