704 Thackeray Hall
Abstract
Despite their recent successes across a wide range of tasks, most modern machine learning algorithms lack theoretical guarantees, which are crucial for further development toward safety-critical applications such as self-driving cars. One mysterious phenomenon is that, among the infinitely many possible ways to fit the data, these algorithms consistently find "good" solutions, even when the designers never specify what "good" means. In this talk I will cover empirical and theoretical studies of the connection between good solutions in neural networks and sparse solutions in compressed sensing, guided by four questions: What happens? When does it happen? Why does it happen? How can we improve it? The key concepts are implicit bias/regularization, Bregman divergence, neural collapse, and the neural tangent kernel.
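A minimal sketch of the implicit-bias phenomenon the abstract describes: gradient descent on an underdetermined least-squares problem, started from zero, converges to the minimum-norm interpolant among infinitely many solutions, even though no regularizer is specified. The problem data (one equation, two unknowns) and the step size are illustrative choices, not from the talk.

```python
# Fit one equation a . x = b with two unknowns: infinitely many exact fits exist.
a = [1.0, 2.0]
b = 3.0

# Plain gradient descent on the squared residual, initialized at zero.
x = [0.0, 0.0]
step = 0.1  # illustrative step size; must satisfy step < 2 / ||a||^2
for _ in range(1000):
    residual = a[0] * x[0] + a[1] * x[1] - b
    x[0] -= step * residual * a[0]
    x[1] -= step * residual * a[1]

# Among all interpolants, gradient descent picks the minimum-l2-norm one,
# x = a * b / ||a||^2 = [0.6, 1.2], because the iterates never leave the
# row space of the data -- an implicit regularization no one asked for.
print(x)
```

The same mechanism, with gradient descent replaced by mirror descent under a Bregman divergence, is one route by which neural-network training connects to the sparse solutions of compressed sensing.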