FoCM 2014 conference

Workshop C3 - Learning Theory

December 19, 17:30 ~ 18:00 - Room B23

Neurally Plausible Algorithms Find Globally Optimal Sparse Codes

Ankur Moitra

Massachusetts Institute of Technology, USA

We prove that neurally plausible algorithms --- including the classic one identified by Olshausen and Field --- can efficiently find a basis that enables sparse representation of a dataset, a foundational problem in neural computation and machine learning. Sparse coding is a non-convex optimization problem. However, under plausible conditions in which the global optimum is unique, we show that these algorithms converge rapidly, and with near-optimal sample complexity, to the global optimum. This suggests that non-convexity need not be a hurdle to a rigorous mathematical and algorithmic theory of neural computation.

Joint work with Sanjeev Arora (Princeton University), Rong Ge (Microsoft Research) and Tengyu Ma (Princeton University).
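The style of algorithm the abstract refers to can be illustrated with a simple alternating scheme: estimate each sample's sparse code by thresholding inner products with the current dictionary, then nudge the dictionary with a Hebbian-style (residual-driven) gradient step. This is a minimal illustrative sketch under assumed synthetic data and a warm-start initialization, not the exact algorithm or guarantees from the paper; all parameter values below are arbitrary choices for the demo.

```python
import numpy as np

# Illustrative sketch of alternating sparse coding on synthetic data:
# generate Y = A_true @ X_true with sparse X_true, then alternate
# (1) a thresholding "decoding" step and (2) a Hebbian-style update of A.
# All constants here are assumptions made for the demo, not from the paper.

rng = np.random.default_rng(0)
n, m, k, T = 100, 50, 3, 2000  # signal dim, dictionary size, sparsity, samples

# Ground-truth dictionary with unit-norm columns.
A_true = rng.standard_normal((n, m))
A_true /= np.linalg.norm(A_true, axis=0)

# Sparse coefficients: k nonzeros per sample, magnitudes bounded away from 0.
X_true = np.zeros((m, T))
for t in range(T):
    support = rng.choice(m, size=k, replace=False)
    X_true[support, t] = (1.0 + rng.random(k)) * rng.choice([-1.0, 1.0], size=k)
Y = A_true @ X_true

# Warm start: initialize near the truth (convergence analyses of this kind
# of scheme typically assume a good initialization).
A = A_true + 0.03 * rng.standard_normal((n, m))
A /= np.linalg.norm(A, axis=0)

eta, thresh = 0.2, 0.5
for _ in range(50):
    # Decoding step: threshold inner products to estimate the sparse codes.
    X = A.T @ Y
    X[np.abs(X) < thresh] = 0.0
    # Hebbian-style step: move A to shrink the residual Y - A @ X.
    residual = Y - A @ X
    A += eta * residual @ X.T / T
    A /= np.linalg.norm(A, axis=0)  # keep columns unit-norm

# Worst column-wise deviation from the true dictionary (up to sign).
err = np.max(1.0 - np.abs(np.sum(A * A_true, axis=0)))
print(f"max column error: {err:.3f}")
```

From a warm start, the thresholding step recovers the supports of most samples, so the averaged residual update pulls each column of `A` toward the corresponding column of `A_true`; the per-step column normalization keeps the iteration well-conditioned.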
