1. Approximate low-rank factorizations pervade matrix data analysis and are often interpreted in terms of latent factor models. After discussing the ubiquitous singular value decomposition (aka PCA), we turn to factorizations such as the interpolative decomposition and the CUR factorization, which offer advantages in interpretability and ease of computation. We then discuss constrained approximate factorizations, particularly non-negative matrix factorizations and topic models, which are especially useful for decomposing data into sparse parts. Unfortunately, these decompositions may be very expensive to compute, at least in principle. But in many practical applications one can make a separability assumption that allows for relatively inexpensive algorithms. In particular, we show how the separability assumption enables efficient linear-algebra-based algorithms for topic modeling, and how linear-algebraic preprocessing can be used to “clean up” the data and improve the quality of the resulting topics. (Two brief code sketches of these ideas follow the video list below.)

    vimeo.com/342818588
  2. vimeo.com/342836111
  3. vimeo.com/342836488
  4. Recently, methods based on empirical risk minimization (ERM) over deep neural network hypothesis classes have been applied to the numerical solution of PDEs with great success. We consider under which conditions ERM over a neural network hypothesis class approximates, with high probability, the solution of a d-dimensional Kolmogorov PDE with affine drift and diffusion coefficients up to error ε. We establish that such an approximation can be achieved with both the size of the hypothesis class and the number of training samples scaling only polynomially in d and 1/ε. (A toy code sketch of this setup follows the video list below.)

    vimeo.com/342849514
  5. vimeo.com/342850412
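
For the factorization lectures (items 1 and 2), here is a minimal sketch contrasting the truncated SVD with a CUR-style factorization. It assumes NumPy/SciPy only; the synthetic matrix, the rank k, and the pivoted-QR column/row selection are illustrative choices, not the lecture's specific algorithms.

```python
import numpy as np
from scipy.linalg import qr

rng = np.random.default_rng(0)
k = 3
A = rng.random((100, k)) @ rng.random((k, 50))   # low-rank signal
A += 0.01 * rng.standard_normal(A.shape)         # plus a little noise

# Truncated SVD: the best rank-k approximation in Frobenius norm
# (Eckart-Young); this is the decomposition behind PCA.
U, s, Vt = np.linalg.svd(A, full_matrices=False)
A_svd = (U[:, :k] * s[:k]) @ Vt[:k, :]
print(f"SVD rank-{k} error: {np.linalg.norm(A - A_svd):.3e}")

# CUR-style factorization: pick k actual columns and rows of A
# (greedily, via column-pivoted QR on A and A.T) and solve for a
# small core matrix. The factors are pieces of the data itself,
# which is the source of CUR's interpretability.
_, _, col_piv = qr(A, pivoting=True)
_, _, row_piv = qr(A.T, pivoting=True)
C, R = A[:, col_piv[:k]], A[row_piv[:k], :]
U_core = np.linalg.pinv(C) @ A @ np.linalg.pinv(R)
print(f"CUR rank-{k} error: {np.linalg.norm(A - C @ U_core @ R):.3e}")
```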
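
The separability assumption can likewise be made concrete. The sketch below uses the Successive Projection Algorithm (SPA) as one representative separable-NMF method; the lecture's algorithm and preprocessing may differ. Under separability, each topic has an "anchor" column that appears unmixed in the data matrix, and SPA recovers those anchors greedily.

```python
import numpy as np

def spa_anchors(A, k):
    """Return indices of k anchor columns of A via successive projection."""
    R = A.astype(float).copy()
    anchors = []
    for _ in range(k):
        j = int(np.argmax(np.linalg.norm(R, axis=0)))  # farthest-out column
        anchors.append(j)
        u = R[:, j] / np.linalg.norm(R[:, j])
        R -= np.outer(u, u @ R)   # project out the chosen direction
    return anchors

rng = np.random.default_rng(1)
k = 4
W = rng.random((60, k))                                        # topic columns
H = np.hstack([np.eye(k), rng.dirichlet(np.ones(k), 200).T])   # anchors first
A = W @ H                        # separable: W's columns appear in A itself
print("recovered anchors:", sorted(spa_anchors(A, k)))  # expect [0, 1, 2, 3]
```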
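
For the ERM lecture (item 4), a toy version of the setup fits in a few lines. This sketch assumes PyTorch; the d-dimensional heat equation u_t = ½Δu (with known closed-form solution, so the fit can be checked) stands in for a general Kolmogorov PDE with affine coefficients, and the network width, sample count, and optimizer settings are illustrative, not the rates established in the talk.

```python
import torch

d, T, n_samples = 5, 1.0, 50_000
phi = lambda x: (x ** 2).sum(dim=1, keepdim=True)   # initial condition ||x||^2

# Monte Carlo labels via Feynman-Kac: u(T, x) = E[phi(x + W_T)].
# With constant (affine) coefficients, X_T can be sampled exactly.
x = 4 * torch.rand(n_samples, d) - 2                # x ~ Unif([-2, 2]^d)
y = phi(x + T ** 0.5 * torch.randn(n_samples, d))   # one-sample labels

net = torch.nn.Sequential(
    torch.nn.Linear(d, 64), torch.nn.ReLU(),
    torch.nn.Linear(64, 64), torch.nn.ReLU(),
    torch.nn.Linear(64, 1),
)
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

# Empirical risk minimization: regress the labels on x, so the network
# learns the conditional mean E[phi(X_T^x) | x] = u(T, x).
for step in range(2000):
    idx = torch.randint(n_samples, (512,))
    loss = ((net(x[idx]) - y[idx]) ** 2).mean()
    opt.zero_grad(); loss.backward(); opt.step()

# Check against the exact solution u(T, x) = ||x||^2 + d*T.
x_test = 4 * torch.rand(1000, d) - 2
err = (net(x_test) - (phi(x_test) + d * T)).abs().mean()
print(f"mean abs error vs exact solution: {err.item():.3f}")
```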

RTG Summer 2019 Lecture Series

Michael Jehlik

All videos copyright 2019 University of Chicago. All rights reserved.
