Robustness of Deep Neural Networks
Type
Master's thesis / Bachelor's thesis
Prerequisites
- Strong machine learning knowledge
- Proficiency with Python
- (Preferred) Proficiency with deep learning frameworks (TensorFlow or PyTorch)
Description
Modern AI systems, in particular deep learning methods, have demonstrated unparalleled accomplishments in many different fields. At the same time, there is overwhelming empirical evidence that these methods are unstable. These instabilities often take the form of so-called adversarial examples: misclassified data points that lie very close (e.g., visually indistinguishable in the case of images) to correctly classified data points. Why does deep learning consistently produce unstable learners even when one can prove that stable and accurate neural networks exist? The field of robustness aims to understand this phenomenon and to develop robust, ideally provably robust, AI systems. Today, so-called adversarial attacks can produce adversarial examples very reliably, posing a sizeable problem for safety-critical applications of AI. The study of robustness in deep learning therefore remains vital for the safe deployment of AI in today's society.
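To make the idea of an adversarial attack concrete, below is a minimal sketch of the fast gradient sign method (FGSM), a standard gradient-based attack that the projected-gradient-descent attack in the Madry et al. reference builds upon. To stay self-contained, it is demonstrated on a toy logistic-regression classifier in NumPy rather than a deep network; the model, weights, and epsilon value are illustrative assumptions, not part of any specific thesis project.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_attack(x, y, w, eps):
    """FGSM on a linear (logistic-regression) classifier.

    Model: P(label = +1 | x) = sigmoid(w . x), label y in {-1, +1}.
    Loss:  L(x) = -log sigmoid(y * w . x)
    Gradient w.r.t. the input: grad_x L = -y * sigmoid(-y * w . x) * w.
    FGSM perturbs each input coordinate by eps in the direction that
    increases the loss: x_adv = x + eps * sign(grad_x L).
    """
    margin = y * np.dot(w, x)
    grad = -y * sigmoid(-margin) * w
    return x + eps * np.sign(grad)

# Toy example (illustrative values): x is correctly classified as +1
# since w . x = 0.75 > 0, yet a perturbation of at most 0.3 per
# coordinate flips the prediction.
w = np.array([1.0, -2.0, 0.5])
x = np.array([0.3, -0.2, 0.1])
y = 1.0
x_adv = fgsm_attack(x, y, w, eps=0.3)
```

In this sketch the perturbed point `x_adv` satisfies `w . x_adv < 0`, so the classifier's prediction flips even though `x_adv` stays within an L-infinity ball of radius 0.3 around `x`. The same principle, with gradients computed by backpropagation, is what makes deep networks vulnerable in practice.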
References
- Intriguing properties of neural networks (https://arxiv.org/pdf/1312.6199.pdf)
- Adversarial vulnerability for any classifier (https://arxiv.org/pdf/1802.08686.pdf)
- Towards deep learning models resistant to adversarial attacks (https://arxiv.org/pdf/1706.06083.pdf)
- On instabilities of deep learning in image reconstruction - Does AI come at a cost? (https://www.pnas.org/content/pnas/117/48/30088.full.pdf)
- The troublesome kernel: why deep learning for inverse problems is typically unstable (https://arxiv.org/pdf/2001.01258.pdf)