Mark Schmidt, Ph.D.
Simon Fraser University
Tuesday, February 18, 2014
Abstract:
The way we solve optimization problems in machine learning is changing. Data sets are growing at a tremendous rate, models are becoming increasingly complex, and non-smooth objective functions are now common. This is particularly true for structured prediction problems, where we analyze objects like sequences, images, and graphs (unlike traditional machine learning methods such as support vector machines, which focus on single class labels). Unfortunately, traditional black-box optimization techniques are increasingly unable to cope with these challenges. My research focuses on 'opening up the black box' by developing methods that take advantage of the structures present in machine learning optimization problems. This simple idea can lead to enormous reductions in computation; in one extreme case, a polynomial-time algorithm was developed in a setting where all black-box algorithms provably require exponential time. In this talk I will present (i) my work on inexact proximal-gradient methods for optimization with complex non-smooth sparsity-inducing regularizers, (ii) my work on projected quasi-Newton methods for optimizing costly objective functions with simple constraints, and (iii) my work on the first polynomial-time stochastic gradient method for optimization when the number of data points is enormous. The large gains achieved by these methods make it practical to analyze significantly larger data sets, and also allow us to fit more complicated models. These types of advances have proven useful for a wide variety of applications in machine learning, but the work has broader implications, since these same problem structures tend to arise in most data-driven science and engineering applications.
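To give a flavor of the proximal-gradient methods mentioned in (i), the sketch below shows the classical proximal-gradient (ISTA) update for an L1-regularized least-squares problem, where the proximal operator of the L1 norm is elementwise soft-thresholding. This is a generic textbook illustration, not code from the speaker's work; the problem sizes, step size, and variable names are illustrative assumptions.

```python
import numpy as np

def soft_threshold(x, t):
    """Proximal operator of t*||.||_1: elementwise soft-thresholding."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def proximal_gradient(A, b, lam, step, iters=500):
    """ISTA for min_x 0.5*||Ax - b||^2 + lam*||x||_1."""
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        grad = A.T @ (A @ x - b)                          # gradient of the smooth part
        x = soft_threshold(x - step * grad, step * lam)   # proximal (shrinkage) step
    return x

# Tiny example: recover a sparse vector from noiseless linear measurements.
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 10))
x_true = np.zeros(10)
x_true[:3] = [1.0, -2.0, 0.5]
b = A @ x_true
step = 1.0 / np.linalg.norm(A, 2) ** 2   # 1/L, L = Lipschitz constant of the gradient
x_hat = proximal_gradient(A, b, lam=0.1, step=step)
```

The inexact variants discussed in the talk relax this scheme by allowing the gradient or the proximal operator to be computed only approximately, which matters when the regularizer is more complex than the plain L1 norm.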
Bio:
Mark Schmidt is a post-doc in the Natural Language Laboratory at Simon Fraser University. From 2011 through 2013 he worked at the École normale supérieure in Paris on inexact and stochastic convex optimization methods. He finished his M.Sc. in 2005 at the University of Alberta working as part of the Brain Tumor Analysis Project, and his Ph.D. in 2010 at the University of British Columbia working on graphical model structure learning with L1-regularization. He has also worked at Siemens Medical Solutions on heart motion abnormality detection, and with Michael Friedlander in the Scientific Computing Laboratory at the University of British Columbia on semi-stochastic optimization methods.
Hosted by Elizabeth Jessup.