427 Thackeray Hall
Abstract
Despite their recent successes across a range of tasks, most modern machine learning algorithms lack theoretical guarantees, which are crucial for further development toward more delicate tasks. One mysterious phenomenon is that, among the infinitely many possible ways to fit the data, these algorithms consistently find "good" solutions, even when the designers never specify what "good" means. In this talk I will cover empirical and theoretical studies of the connection between the good solutions in neural networks and the sparse solutions in compressed sensing, guided by four questions: What happens? When does it happen? Why does it happen? How can we improve it? The key concepts are implicit regularization, Bregman divergence, the neural tangent kernel, and neural collapse.
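As a toy illustration of implicit regularization (a minimal sketch of my own, not an example taken from the abstract): plain gradient descent on an underdetermined least-squares problem, started from the origin, converges to the minimum-l2-norm solution among the infinitely many that fit the data. The sizes, seed, and iteration count below are arbitrary illustrative choices.

import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((20, 100))    # 20 equations, 100 unknowns: underdetermined
b = rng.standard_normal(20)

x = np.zeros(100)                     # initialize at the origin
lr = 1.0 / np.linalg.norm(A, 2) ** 2  # safe step size, below 2 / sigma_max(A)^2
for _ in range(20000):
    x -= lr * A.T @ (A @ x - b)       # gradient step on 0.5 * ||Ax - b||^2

x_star = np.linalg.pinv(A) @ b        # minimum-norm interpolant via pseudoinverse
print(np.linalg.norm(A @ x - b))      # ~0: gradient descent fits the data exactly
print(np.linalg.norm(x - x_star))     # ~0: and it selected the minimum-norm solution

Which solution the dynamics select depends on the parameterization; the abstract's Bregman-divergence viewpoint concerns settings where the selected solution is sparse, as in compressed sensing, rather than minimum-norm.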
Zoom link: https://pitt.zoom.us/j/98829193622 (Meeting ID: 988 2919 3622).