Three minicourses on Signal Analysis and Big Data

The workshop consists of three courses on applied harmonic analysis and machine learning, given by leading experts. Graduate students in Mathematics, Physics, Computer Science and Engineering, as well as postdoctoral fellows and young researchers, are welcome. Financial support is available to cover accommodation expenses, on a first-come, first-served basis. There is no registration fee, but participants are required to register before July 31. Those applying for financial support or to present their work should register by July 1.

Rima Alaifari
Ill-posed problems: from linear to non-linear and beyond

Reconstruction problems in which the solution does not depend stably on the input arise in many different applications. When the forward problem is modelled by a linear operator, classical regularization theory can provide effective approaches to extract stable information from such unstable systems. The situation is already different if the problem is inherently non-linear. While regularization theory has been developed in the non-linear setting, the stability statements are far weaker. To motivate our study of regularization theory in the linear and non-linear case, we will consider concrete problems in both settings. Finally, we conclude this course by studying ill-posedness in a more recent application that cannot be covered by existing regularization theory: while deep neural networks are very effective for classification tasks, they suffer from the existence of so-called adversarial examples.
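As a concrete warm-up for the linear setting, the classical regularization mentioned above can be illustrated with Tikhonov regularization. The sketch below is ours, not taken from the lecture notes: it builds an ill-conditioned forward operator, shows that naive inversion amplifies the noise, and that the regularized solution remains stable.

```python
import numpy as np

# Minimal illustration (ours, not from the course): Tikhonov regularization
# of an ill-conditioned linear system A x = y.

rng = np.random.default_rng(0)

# Forward operator with rapidly decaying singular values -> ill-posed inversion.
n = 50
U, _ = np.linalg.qr(rng.standard_normal((n, n)))
V, _ = np.linalg.qr(rng.standard_normal((n, n)))
s = 10.0 ** -np.linspace(0, 8, n)          # singular values from 1 down to 1e-8
A = U @ np.diag(s) @ V.T

x_true = rng.standard_normal(n)
y = A @ x_true + 1e-6 * rng.standard_normal(n)   # noisy measurements

# Naive inversion amplifies the noise on the small singular values ...
x_naive = np.linalg.solve(A, y)

# ... while Tikhonov, x = argmin ||Ax - y||^2 + alpha ||x||^2, trades a
# small bias for stability:
alpha = 1e-6
x_tik = np.linalg.solve(A.T @ A + alpha * np.eye(n), A.T @ y)

print(np.linalg.norm(x_naive - x_true))    # large: reconstruction is useless
print(np.linalg.norm(x_tik - x_true))      # much smaller: stable estimate
```

The choice of `alpha` balances stability against bias; choosing it as a function of the noise level is exactly what classical regularization theory makes precise.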

Lecture notes

Gabriel Peyré
Computational Optimal Transport

Optimal transport (OT) is a fundamental mathematical theory at the interface between optimization, partial differential equations and probability. It has recently emerged as an important tool to tackle a surprisingly large range of problems in data sciences, such as shape registration in medical imaging, structured prediction problems in supervised learning and the training of deep generative networks. This course will interleave the description of the mathematical theory with recent developments in scalable numerical solvers, highlighting the importance of recent advances in regularized approaches to OT, which allow one to tackle high-dimensional learning problems. Material for the course (including a small book, slides and computational resources) can be found online at optimaltransport.github.io.
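As a taste of the regularized approaches mentioned above, here is a bare-bones Sinkhorn iteration for entropically regularized OT between two histograms. The code and variable names are our own sketch, not taken from the course material.

```python
import numpy as np

# Sinkhorn iteration for entropically regularized OT (our sketch).

def sinkhorn(a, b, C, eps=0.05, iters=500):
    """Entropic OT between histograms a, b with cost matrix C."""
    K = np.exp(-C / eps)                 # Gibbs kernel
    v = np.ones_like(b)
    for _ in range(iters):
        u = a / (K @ v)                  # match row marginals
        v = b / (K.T @ u)                # match column marginals
    return u[:, None] * K * v[None, :]   # transport plan P = diag(u) K diag(v)

# Transport between two small 1-D histograms (Gaussian bumps at 0.2 and 0.7).
x = np.linspace(0, 1, 60)
a = np.exp(-((x - 0.2) ** 2) / 0.005); a /= a.sum()
b = np.exp(-((x - 0.7) ** 2) / 0.005); b /= b.sum()
C = (x[:, None] - x[None, :]) ** 2       # squared-distance cost

P = sinkhorn(a, b, C)
print(P.sum(), np.sum(P * C))            # total mass ~ 1, regularized OT cost
```

Each iteration only needs matrix-vector products with the kernel `K`, which is what makes the regularized formulation scale to large problems; smaller `eps` approaches unregularized OT at the price of slower convergence.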

José Luis Romero
Time-frequency analysis

The goal of time-frequency analysis is to study a signal simultaneously in the time and frequency domains. The task is challenging because time and frequency are not truly independent variables, but only approximately so. The finest scale at which simultaneous time-frequency analysis is possible is given by the so-called uncertainty principle, and the various tools used in signal analysis aim at achieving this theoretical limit. I will present some basic tools from time-frequency analysis, comparing their merits, performance, and shortcomings. Specific topics include: the Wigner distribution, spectrograms and their geometry, sharpening of spectrograms (reassignment and synchrosqueezing), multi-taper estimation, and the time-frequency analysis of randomness.
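To make the spectrogram concrete, here is a small self-contained sketch (ours, not course code): the squared modulus of a short-time Fourier transform of a chirp, computed with a Gaussian window, the window that saturates the uncertainty principle.

```python
import numpy as np

# Spectrogram |STFT|^2 of a chirp with a Gaussian window (our toy example).

fs = 1000                                         # sampling rate in Hz
t = np.arange(0, 1, 1 / fs)
x = np.cos(2 * np.pi * (50 * t + 100 * t ** 2))   # chirp: 50 Hz -> 250 Hz

win_len = 128
sigma = win_len / 6
n = np.arange(win_len) - win_len / 2
g = np.exp(-0.5 * (n / sigma) ** 2)               # Gaussian window

hop = 16
frames = [x[i:i + win_len] * g for i in range(0, len(x) - win_len, hop)]
S = np.abs(np.fft.rfft(frames, axis=1)) ** 2      # spectrogram, one row per frame

# The peak frequency of each frame should sweep upward with the chirp.
peak_hz = np.argmax(S, axis=1) * fs / win_len
print(peak_hz[0], peak_hz[-1])
```

The trade-off the course addresses is visible already here: a longer window sharpens the frequency estimate of each frame but blurs the time localization, and the uncertainty principle says no window escapes this trade-off.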

Workshop

On Wednesday 11th there will be a short workshop with invited speakers, contributed talks by the participants, and a poster session. The abstracts of the selected contributed talks can be found here.

Massimo Fornasier
Robust and efficient identification of neural networks

Identifying a neural network is, in general, an NP-hard problem. In this talk we address conditions for the exact identification of one- and two-hidden-layer fully connected feedforward neural networks from a number of samples that scales polynomially with the input dimension and the network size. Exact identification is obtained by computing second-order approximate strong or weak differentials of the network and their unique and stable decomposition into nonorthogonal rank-1 terms. The procedure combines several novel matrix optimization algorithms over the space of second-order differentials. As a byproduct we introduce a new whitening procedure for matrices, which allows the stable decomposition of symmetric matrices into nonorthogonal rank-1 terms by reducing the problem to the standard orthonormal decomposition case. We show that this algorithm practically achieves information-theoretic recovery bounds, and we illustrate the results with several numerical experiments.

This is a joint work with Ingrid Daubechies, Timo Klock, Michael Rauchensteiner, and Jan Vybíral.
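The "standard orthonormal decomposition case" that the whitening procedure reduces to can be illustrated in a few lines. This is our own toy example, not the authors' algorithm: when the rank-1 components of a symmetric matrix happen to be orthonormal, an eigendecomposition recovers them exactly.

```python
import numpy as np

# Base case (our illustration, not the talk's algorithm): a symmetric matrix
# built from orthonormal components decomposes into rank-1 terms via eigh.

rng = np.random.default_rng(1)

# Three orthonormal "directions" and a symmetric matrix built from them.
Q, _ = np.linalg.qr(rng.standard_normal((6, 3)))   # orthonormal columns
coeffs = np.array([2.0, -1.5, 0.5])
M = sum(c * np.outer(w, w) for c, w in zip(coeffs, Q.T))

# The eigendecomposition recovers the rank-1 terms (up to sign and ordering).
vals, vecs = np.linalg.eigh(M)
M_rec = sum(l * np.outer(v, v) for l, v in zip(vals, vecs.T))
print(np.allclose(M, M_rec))                       # exact reconstruction
```

When the components are not orthogonal this recovery fails, which is precisely the obstruction the whitening procedure in the talk is designed to remove.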

Anders Hansen
How intelligent is artificial intelligence (AI)? - On the instability phenomenon in deep learning

AI and deep learning (DL) are changing the world in front of our eyes, with applications ranging from self-driving cars, via automated diagnosis in medicine, to new methods in the imaging sciences and inverse problems. A timely question is therefore: how intelligent is modern AI, and can it be trusted? Indeed, despite their success, DL methods have a serious Achilles heel: they are universally unstable. This causes non-human-like behaviour when they are used in human problem-solving tasks, as well as unstable algorithms in the sciences, for example in imaging. This has serious ramifications, and Science recently published a paper warning about the potentially fatal consequences in medicine. The question is: why do AI algorithms based on deep learning become unstable and perform so differently from humans? The answer is highly complex, and the reasons vary with the application. For example, instabilities in DL methods for classification are caused by completely different mechanisms than those behind the instability of DL methods in image reconstruction. We will discuss these issues from a mathematical point of view and demonstrate why it is highly unlikely that the instability phenomenon will be cured in modern AI unless new and different methods are invented.
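The core mechanism behind classification instabilities can be seen in a toy setting. The sketch below is ours, not from the talk: for a linear classifier sign(w·x) in high dimension, a perturbation that is tiny in every coordinate can still flip the decision, which is the mechanism exploited by gradient-sign (FGSM-style) adversarial attacks.

```python
import numpy as np

# Toy instability example (ours): flipping a high-dimensional linear
# classifier sign(w.x) with a per-coordinate-tiny perturbation.

rng = np.random.default_rng(2)
d = 10_000
w = rng.choice([-1.0, 1.0], size=d) / np.sqrt(d)    # unit-norm weight vector

x = rng.standard_normal(d)
margin = w @ x                                      # signed decision value

# A gradient-sign step: each coordinate moves by only eps, yet the
# contributions add up across d coordinates and overturn the margin.
eps = 2 * abs(margin) / np.sqrt(d)
x_adv = x - np.sign(margin) * eps * np.sign(w)

print(np.sign(w @ x), np.sign(w @ x_adv))           # the labels differ
print(np.max(np.abs(x_adv - x)))                    # perturbation is only eps
```

In deep networks the geometry is more intricate, but the same accumulation of many individually negligible input changes is what makes adversarial examples possible.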