Convex Optimization for Data Science
Optimization is a fundamental tool in data science and machine learning. From a practical point of view, it is crucial to understand the convergence behavior of the various optimization methods. In light of modern large-scale applications, resource-efficient first-order methods are of particular interest. Relying on the mathematical theory of convex analysis, one can derive rigorous convergence rates for such methods, and the resulting theory can often be transferred to general non-convex problems.
This lecture provides an introduction to the basic concepts of convex analysis and shows how the theory can be used to analyze the convergence behavior of first-order methods such as gradient descent. In particular, the presented methods will be linked to concrete data science problems. The topics we discuss will encompass:
- Convex Analysis
- First-order methods like gradient descent, proximal gradient descent, mirror descent, etc.
- Second-order methods like Newton’s method and quasi-Newton methods
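To illustrate the kind of method the convergence theory applies to, here is a minimal Python sketch of gradient descent with constant step size 1/L on a strongly convex quadratic. The problem data, function names, and iteration count are illustrative assumptions, not taken from the course materials:

```python
import numpy as np

# Illustrative example: gradient descent on the strongly convex quadratic
#   f(x) = 0.5 * x^T A x - b^T x,
# whose unique minimizer x* solves the linear system A x = b.

def gradient_descent(grad, x0, step, n_iter=500):
    """Plain gradient descent with a constant step size."""
    x = x0
    for _ in range(n_iter):
        x = x - step * grad(x)
    return x

A = np.array([[3.0, 1.0], [1.0, 2.0]])  # symmetric positive definite
b = np.array([1.0, 1.0])

L = np.linalg.eigvalsh(A).max()   # Lipschitz constant of the gradient
x_star = np.linalg.solve(A, b)    # exact minimizer, for comparison

x_hat = gradient_descent(lambda x: A @ x - b, np.zeros(2), step=1.0 / L)
print(np.allclose(x_hat, x_star, atol=1e-6))
```

For this strongly convex problem the iterates contract linearly toward x*, so the printed comparison succeeds after a few hundred iterations; the lecture makes such rates precise under convexity and smoothness assumptions.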
The course is targeted at Master's students in mathematics. Basic knowledge of functional analysis is highly recommended.
Schedule and Venue
Lecture: Tue 10:15–12:00, Thu 10:15–12:00 (by Prof. Dr. Johannes Maly)
Exercise class: Wed 10:15–12:00, Thu 12:15–14:00 (by Mariia Seleznova)
Office hour (lecture): Tue 16:00–17:00, Akademiestr. 7, Room 515. (Please announce your visit via email to email@example.com, since the outer door on the fifth floor is normally locked.)
Modules
- WP35 Fortgeschrittene Themen aus der künstlichen Intelligenz, MSc Data Science (9 ECTS)
- WP 26 Fortgeschrittene Themen aus der numerischen Mathematik
Other modules in MSc Mathematik and MSc Finanz- und Versicherungsmathematik are possible as well; interested students should contact the Prüfungsamt directly and inform us.
Please register for the course on the uni2work page: https://uni2work.ifi.lmu.de/course/S23/MI/OptDS