Low Rank Approximation Directed by Leverage Scores and Computed at Sublinear Cost

Qi Luan, Victor Y. Pan, John Svadlenka

Low rank approximation (hereafter LRA) of a matrix is a major subject of matrix and tensor computations and of data mining and analysis. It is highly desirable, particularly in applications to Big Data, to compute LRA at sublinear cost, that is, by using far fewer memory cells and arithmetic operations than the input matrix has entries. Unfortunately, every sublinear cost algorithm fails to compute accurate LRA for worst case inputs, and even for the small family of matrices in our Appendix. Some known randomized algorithms compute, with high probability (whp), accurate and even nearly optimal LRA of any matrix by random sampling of sufficiently many rows and columns of the input matrix. These algorithms run at superlinear cost at the stage of computing the sampling probabilities, called leverage scores, which direct proper random sampling of rows and columns. We trivialize that stage and prove that whp the resulting sublinear cost algorithm still outputs accurate LRA of a random input matrix admitting LRA.
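To make the approach concrete, here is a minimal sketch, not the authors' exact algorithm: a CUR-type LRA built from sampled rows and columns, in NumPy. The function names, sample sizes, and the rcond cutoff are illustrative choices of this sketch. Classical leverage scores (shown for contrast) require an SVD of the full matrix and hence superlinear cost; replacing them with uniform sampling probabilities, i.e., trivializing that stage, keeps the whole procedure sublinear, since only the sampled rows, columns, and their intersection are ever read.

```python
import numpy as np

rng = np.random.default_rng(0)

def leverage_scores(A, k):
    # Classical column leverage scores: squared column norms of the top-k
    # right singular vectors, normalized by k. Computing the SVD reads
    # every entry of A, so this stage alone has superlinear cost.
    _, _, Vt = np.linalg.svd(A, full_matrices=False)
    return np.sum(Vt[:k, :] ** 2, axis=0) / k

def cur_approximation(A, num_rows, num_cols, row_p=None, col_p=None):
    # CUR-type LRA from sampled rows and columns. With row_p = col_p = None
    # the sampling probabilities are uniform (the "trivialized" scores), and
    # only the sampled rows, columns, and their intersection are read, so
    # the cost is sublinear in the number of entries of A.
    m, n = A.shape
    I = rng.choice(m, size=num_rows, replace=False, p=row_p)
    J = rng.choice(n, size=num_cols, replace=False, p=col_p)
    C = A[:, J]                          # m * num_cols entries read
    R = A[I, :]                          # num_rows * n entries read
    W = A[np.ix_(I, J)]                  # their intersection
    # For an exactly rank-k input, C @ pinv(W) @ R = A whenever rank(W) = k.
    return C @ np.linalg.pinv(W, rcond=1e-8) @ R

# Example: a random 500 x 600 matrix of rank 5 is recovered accurately
# from 10 uniformly sampled rows and columns whp.
A = rng.standard_normal((500, 5)) @ rng.standard_normal((5, 600))
A_hat = cur_approximation(A, num_rows=10, num_cols=10)
print(np.linalg.norm(A - A_hat) / np.linalg.norm(A))  # tiny whp
```

On such a random low rank input the uniform variant succeeds whp because whp the sampled intersection W inherits the full rank of the input, which matches the abstract's claim that the sublinear cost algorithm outputs accurate LRA whp for random inputs admitting LRA.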
