Abstract:

Cybersecurity ensures the trustworthy and reliable functioning of digital systems; companies currently spend about 10% of their IT budgets on it. Security is thus increasingly relevant for emerging technologies such as machine learning (ML). Despite a large body of work in this area, recent research has been criticized as impractical and as poorly reflecting how ML is actually used by companies, public institutions, and non-profits. In this talk, I will give a rough overview of the vulnerabilities of ML and outline how ML security research often diverges from practice: models are studied instead of pipelines, the applied perturbations are not practical, or the assumptions are unrealistic. To conclude, I will present a first step toward making ML security research more practical: surveys that measure how ML is used in practice.

Short bio:

Kathrin Grosse is a Post-Doctoral Researcher at EPFL, Switzerland. Her research focuses on AI security in autonomous vehicles and in industry more broadly, bridging AI security research and industry needs. She received her master's degree from Saarland University and her Ph.D. from the CISPA Helmholtz Center, Saarland University, in 2021 under the supervision of Michael Backes. She interned with IBM in 2019 and Disney Research in 2020/21. She serves as a reviewer for IEEE S&P, USENIX Security, and ICML, and organizes workshops at ICML. In 2019, she was nominated as an AI Newcomer for the German Federal Ministry of Education and Research's Science Year.

