Why Do Networks Need Negative Weights?

Qingyang Wang, Michael A. Powell, Ali Geisa, Eric Bridgeford, Joshua T. Vogelstein

Why do networks have negative weights at all? The answer: to learn more functions. We mathematically prove that deep neural networks with all non-negative weights are not universal approximators. This fundamental result has been implicitly assumed throughout much of the deep learning literature, but it had not previously been proven, nor had its necessity been demonstrated.
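One intuition for the result (a sketch of the standard monotonicity argument, not necessarily the paper's full proof): if every weight is non-negative and the activation is monotone non-decreasing (e.g., ReLU), then each layer is coordinate-wise non-decreasing in its inputs, so the whole network computes a monotone function. A monotone function can never approximate a decreasing target such as f(x) = -x on any interval. The minimal NumPy sketch below (all names are illustrative, not from the paper) verifies this empirically for a random non-negative-weight network.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(x, 0.0)

def nonneg_mlp(x, weights, biases):
    """Forward pass of an MLP whose weight matrices are all non-negative.

    Biases may have any sign; the monotonicity argument only needs the
    weights to be non-negative and the activation to be non-decreasing.
    """
    h = x
    for W, b in zip(weights[:-1], biases[:-1]):
        h = relu(h @ W + b)
    return h @ weights[-1] + biases[-1]

# Random non-negative weights for a 1 -> 16 -> 16 -> 1 network.
sizes = [1, 16, 16, 1]
weights = [np.abs(rng.normal(size=(m, n))) for m, n in zip(sizes[:-1], sizes[1:])]
biases = [rng.normal(size=n) for n in sizes[1:]]

# Whatever the (non-negative) weights, the output is non-decreasing in x...
xs = np.linspace(-3.0, 3.0, 1000).reshape(-1, 1)
ys = nonneg_mlp(xs, weights, biases).ravel()
assert np.all(np.diff(ys) >= -1e-9), "output should be monotone non-decreasing"

# ...so no setting of non-negative weights can fit a decreasing target
# like f(x) = -x, i.e., the non-negative family is not a universal approximator.
print("monotone non-decreasing:", np.all(np.diff(ys) >= -1e-9))
```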
