Optimization for Distributed Estimation and Learning

The participants learn to independently explore and understand a given topic and present it to the other participants in a concise and coherent way.

Intended Participants  Master Students
Instructors Christopher Funk, Benjamin Noack
SWS 2
Credits 3
Languages English / German
Prerequisites Linear algebra and calculus
Kick-Off

Monday, 11.04.2022, TBA


Course Description

Many problems of recent interest in statistics and machine learning can be posed in the framework of convex optimization. Due to the explosion in size and complexity of modern datasets, it is increasingly important to be able to solve problems with a very large number of features, training examples, or both. As a result, both the decentralized collection or storage of these datasets and accompanying distributed solution methods are either necessary or at least highly desirable. The alternating direction method of multipliers (ADMM) is an algorithm that solves convex optimization problems in a distributed fashion by breaking them into smaller pieces, each of which is then easier to handle.
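To make the splitting idea concrete, the following is a minimal sketch (not part of the seminar materials) of ADMM applied to the lasso problem, minimize (1/2)||Ax − b||² + λ||x||₁, split as x = z. The x-update is a ridge-like linear solve, the z-update is elementwise soft-thresholding, and the scaled dual variable u couples the two pieces:

```python
import numpy as np

def soft_threshold(v, kappa):
    # Elementwise soft-thresholding: proximal operator of the l1 norm.
    return np.sign(v) * np.maximum(np.abs(v) - kappa, 0.0)

def admm_lasso(A, b, lam, rho=1.0, n_iter=200):
    """Illustrative ADMM for the lasso:
    minimize 0.5*||Ax - b||^2 + lam*||z||_1  subject to  x = z."""
    n = A.shape[1]
    x = np.zeros(n)
    z = np.zeros(n)
    u = np.zeros(n)  # scaled dual variable
    AtA = A.T @ A + rho * np.eye(n)
    Atb = A.T @ b
    for _ in range(n_iter):
        # x-update: smooth least-squares piece (a linear system)
        x = np.linalg.solve(AtA, Atb + rho * (z - u))
        # z-update: nonsmooth l1 piece (closed-form prox)
        z = soft_threshold(x + u, lam / rho)
        # dual update: drives x and z toward consensus
        u = u + x - z
    return z

# Example: recover a sparse vector from noiseless measurements.
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 10))
x_true = np.zeros(10)
x_true[:3] = [1.0, -2.0, 1.5]
b = A @ x_true
x_hat = admm_lasso(A, b, lam=0.1)
```

The example illustrates the general pattern: the objective is split into a smooth term and a nonsmooth term, each handled by a subproblem that is easy on its own, with the dual update enforcing agreement between them.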

In this seminar the participants will learn about the basics of convex optimization and the alternating direction method of multipliers (ADMM). Individual topics related to real-world applications, as well as important implementation details, will be assigned to the participants. Their findings will eventually be presented to the other participants. The seminar should enable the participants to apply the ADMM to a diverse range of convex optimization problems.


Registration

Please email


Last Modification: 13.06.2023 - Contact Person: Webmaster