Bavarian AI Chair for Mathematical Foundations of Artificial Intelligence

AI and Inverse Problems

Welcome to the course webpage!

Course Description

In many applications, we are interested in studying an object of interest x. However, its properties are often not directly measurable. Instead, we only observe y = F(x), where the function F is known. To infer x from the observation y, we seek an (approximate) inverse mapping x ≈ F⁻¹(y). This process is known as solving the inverse problem.
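As a toy illustration of why this can be delicate, here is a minimal numerical sketch (the blur operator, sizes, and noise level are illustrative assumptions, not course material): even a tiny amount of noise in y can render the naive inverse useless.

```python
import numpy as np

# Toy 1D deconvolution: F is a Gaussian blur operator (illustrative choice).
rng = np.random.default_rng(0)
n = 100
t = np.arange(n)
F = np.exp(-0.5 * ((t[:, None] - t[None, :]) / 2.0) ** 2)
F /= F.sum(axis=1, keepdims=True)           # row-normalized blur matrix

x = np.zeros(n)
x[40:60] = 1.0                              # unknown object of interest
y = F @ x + 1e-3 * rng.standard_normal(n)   # observation with tiny noise

x_naive = np.linalg.solve(F, y)             # naive inverse: x = F^(-1) y
print(np.linalg.cond(F))                    # enormous condition number
print(np.linalg.norm(x_naive - x))          # naive inversion amplifies the noise
```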

In this course, we will cover three topics. The first is classical regularization theory, which highlights the difficulties that noisy data creates for solving inverse problems. In the second part, we move to the variational formulation of inverse problems and discuss optimization strategies. Finally, the last part focuses on explainable data-driven and network-based approaches to inverse problems.

Target Audience:

MSc Mathematics students

Credited Modules (9 ECTS):

Master in Mathematics: WP35 Fortgeschrittene Themen aus der künstlichen Intelligenz und Data Science (Advanced Topics in Artificial Intelligence and Data Science)
Master in Financial and Insurance Mathematics: WP12 Advanced Topics in Mathematics A or WP22 Advanced Topics in Computer and Data Science A
Master Statistics and Data Science: WP 31 Advanced Research Methods in Applied Statistics or WP 52 Advanced Research Methods in Machine Learning
Master Data Science: WP 3 Theory of Selected Methods in Data Science

Moodle Enrollment

Key: N*OLC,1s+EGI96rqDea-

Content

Below you will find an outline of the topics we plan to cover in this course.

Part 1: Classical inverse problems

1.1. Forward mapping, inverse problem, uniqueness, stability, preliminaries on Hilbert spaces and compact operators.

1.2. Ill-posed integral operators and singular value decomposition.

1.3. General regularization theory and classical approaches (spectral methods, Tikhonov regularization); see the sketch below.
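To make item 1.3 concrete, here is a minimal sketch of Tikhonov regularization implemented as spectral filtering via the SVD. The regularization parameter alpha is an arbitrary illustrative choice, and F, y, x refer to the toy deconvolution example above.

```python
import numpy as np

def tikhonov(F, y, alpha):
    """Tikhonov-regularized inverse via spectral filtering:
    x_alpha = sum_i  s_i / (s_i^2 + alpha) * <u_i, y> v_i,
    which damps the small singular values that amplify noise."""
    U, s, Vt = np.linalg.svd(F, full_matrices=False)
    filt = s / (s**2 + alpha)        # replaces the unstable filter 1/s_i
    return Vt.T @ (filt * (U.T @ y))

# With F, y, x from the sketch above and an illustrative alpha:
#   x_alpha = tikhonov(F, y, alpha=1e-4)
#   np.linalg.norm(x_alpha - x)      # far smaller than the naive error
```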

Part 2: Variational approach and Bayesian viewpoint

2.1. Variational formulation and standard penalties

2.2. Preliminaries on optimization theory

2.3. Algorithms for optimization: gradient descent, forward-backward splitting, ADMM (see the sketch below).
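As a preview of item 2.3, here is a minimal sketch of forward-backward splitting (ISTA) for the l1-penalized least-squares problem min_x 0.5*||Fx - y||^2 + lam*||x||_1. The step-size rule and iteration count are standard but illustrative choices.

```python
import numpy as np

def soft_threshold(z, t):
    """Proximal operator of t * ||.||_1."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def forward_backward(F, y, lam, n_iter=500):
    """ISTA for min_x 0.5 * ||F x - y||^2 + lam * ||x||_1."""
    step = 1.0 / np.linalg.norm(F, 2) ** 2     # step size 1/L with L = ||F||_2^2
    x = np.zeros(F.shape[1])
    for _ in range(n_iter):
        grad = F.T @ (F @ x - y)               # forward (gradient) step
        x = soft_threshold(x - step * grad, step * lam)  # backward (prox) step
    return x
```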

Part 3: Explainable AI methods for inverse problems

3.1. Network models

3.2. Fully learned models and post-processing

3.3. Denoisers

3.4. Plug-and-Play algorithms (a sketch follows this outline)

3.5. Fixed point iteration theory

3.6. Learned (convex) regularizers: Convex CNNs and simple convex regularizers.

3.7. Unrolling: Convergence of LISTA, Deep Equilibrium Models

If time allows:

3.8. Bayesian inverse problems and their stability

3.9. Learned Bayesian approaches: normalizing flows and score matching, conditional normalizing flows, PnP via Tweedie's formula
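To illustrate item 3.4, here is a minimal sketch of a Plug-and-Play iteration: the proximal step of forward-backward splitting is replaced by a denoiser. A simple Gaussian-smoothing denoiser stands in for the learned denoisers discussed in class; this substitution is purely illustrative.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

def pnp_forward_backward(F, y, n_iter=100, sigma=1.0):
    """Plug-and-Play forward-backward iteration: the proximal step
    is replaced by an off-the-shelf denoiser D."""
    step = 1.0 / np.linalg.norm(F, 2) ** 2
    x = np.zeros(F.shape[1])
    for _ in range(n_iter):
        z = x - step * F.T @ (F @ x - y)   # data-consistency (gradient) step
        x = gaussian_filter1d(z, sigma)    # denoiser D in place of a prox
    return x
```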

Lectures

Lectures will be given by Prof. Melnyk on Tuesdays 16:00-18:00 in room B006 and on Fridays 10:00-12:00 in room A027 at Theresienstr. 39.

Prof. Melnyk offers an office hour on Tuesdays 10:00-12:00 in room 509 at Akademiestr. 7. Please send an email (melnyk@math.lmu.de) in advance if you plan to attend.

Exercises

The teaching assistant for this course is Stefan Kolek, who is in charge of the exercises. Every week we offer two slots for the exercise sessions:

Mondays 12:00-14:00 in room 504 in Akademiestr. 7 (5th floor).
Mondays 16:00-18:00 in room 504 in Akademiestr. 7 (5th floor).

Every week we will release an exercise sheet with theory and programming assignments. Since it is difficult to present solutions to programming problems during the exercise session, the solutions to the programming problems will be released together with the exercise sheet. You should still try to solve the programming problems on your own first, but please review their solutions before the exercise class. In the exercise session we will discuss both the theory problems and the results of the programming tasks; it is also an opportunity to ask questions about proofs, programming implementation, and general course content.

Exercises begin in the second week of the semester.