Hacking AI: Towards Algorithms that Humans Can Trust
Wednesday, 5 February 2020, 6:30 p.m., at Tiscali OpenCampus
Speaker: Battista Biggio
Data-driven AI and machine-learning technologies have become pervasive and can even outperform humans on specific tasks. However, they have been shown to suffer from blind spots known as adversarial examples: imperceptible perturbations of images, text and audio that fool these systems into perceiving things that are not there. This has called into question their suitability for mission-critical applications, such as self-driving cars and other autonomous vehicles. The phenomenon is even more evident in cybersecurity domains with an intrinsically adversarial nature, such as malware and spam detection, in which data is purposely manipulated by cybercriminals to undermine the outcome of automatic analyses. As current data-driven AI and machine-learning methods were not designed to deal with the adversarial nature of these problems, they exhibit specific vulnerabilities that attackers can exploit either to mislead learning or to evade detection. Identifying these vulnerabilities and analyzing the impact of the corresponding attacks on learning algorithms has thus become one of the main open issues in adversarial machine learning, along with the design of more secure and explainable learning algorithms. In this talk, I discuss and exemplify attacks against AI and machine-learning algorithms in real-world applications, including computer vision, biometric identity recognition and computer security, along with promising defense mechanisms that are paving the way towards AI algorithms that humans can trust.
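To make the idea of an evasion attack concrete, here is a minimal sketch, not taken from the talk, of a gradient-sign (FGSM-style) perturbation against a toy linear detector. All names, weights and the budget `eps` are illustrative assumptions; real attacks target deep models via their gradients in the same spirit.

```python
import numpy as np

def predict(w, b, x):
    """Linear score: positive means the detector flags x as 'malicious'."""
    return float(w @ x + b)

def fgsm_evasion(w, b, x, eps):
    """Perturb x within an L-infinity budget eps to lower the score.
    For a linear model the gradient of the score w.r.t. x is just w,
    so the worst-case perturbation is -eps * sign(w)."""
    return x - eps * np.sign(w)

rng = np.random.default_rng(0)
w = rng.normal(size=8)        # illustrative detector weights
b = 0.0
x = np.sign(w) * 0.2          # a sample the detector confidently flags

x_adv = fgsm_evasion(w, b, x, eps=0.3)
# Each feature of x_adv differs from x by at most 0.3, yet the score
# flips sign: the manipulated sample now evades detection.
```

The same principle, following the gradient of the model's output with respect to its input, underlies the imperceptible image, text and audio perturbations described above.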
Bio. Battista Biggio received his M.Sc. degree in Electronic Engineering, with honors, and his Ph.D. in Electronic and Computer Engineering from the University of Cagliari in 2006 and 2010, respectively. Since 2007 he has been with the Department of Electrical and Electronic Engineering of the same university, where he currently works as a fixed-term researcher (RTDa). From 12 May to 12 November 2011, he visited the University of Tübingen, Germany, where he worked on the security of machine-learning algorithms against training-data poisoning.
The event is free, but registration is required to attend: