Bavarian AI Chair for Mathematical Foundations of Artificial Intelligence

Robustness of Deep Neural Networks

Type

Master's thesis / Bachelor's thesis

Prerequisites

  • Strong machine learning knowledge
  • Proficiency in Python
  • (Preferred) Proficiency with deep learning frameworks (TensorFlow or PyTorch)

Description


Modern AI systems, in particular deep learning methods, have demonstrated unparalleled accomplishments in many different fields. At the same time, there is overwhelming empirical evidence that these methods are unstable. These instabilities often occur in the form of so-called adversarial examples: misclassified data points that are very close (e.g., visually indistinguishable in the case of images) to correctly classified data points. Why does deep learning consistently produce unstable learners even when one can prove that stable and accurate neural networks exist? The field of robustness aims to understand this phenomenon and to develop robust AI systems, ideally with provable guarantees. Today, so-called adversarial attacks can produce adversarial examples very reliably, posing a sizeable problem for safety-critical applications of AI. The study of robustness in deep learning therefore remains vital for the safe deployment of AI in today's society.
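
To illustrate how such attacks work, below is a minimal sketch of the fast gradient sign method (FGSM; Goodfellow et al., 2015), one of the simplest adversarial attacks. It is an illustrative example, not part of this topic description; the PyTorch classifier model, the input batch x (with pixel values in [0, 1]), and the labels y are assumed for the sake of the sketch.

import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, epsilon=0.03):
    # Fast gradient sign method: perturb the input in the direction
    # that maximally increases the classification loss, with the
    # perturbation bounded by epsilon in the l-infinity norm.
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    # One step of size epsilon along the sign of the input gradient.
    x_adv = x + epsilon * x.grad.sign()
    # Keep the perturbed inputs in the valid pixel range [0, 1].
    return x_adv.clamp(0.0, 1.0).detach()

For small epsilon, the perturbed images are visually indistinguishable from the originals, yet they frequently change the model's prediction, which is exactly the instability described above.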
