Hildebrandt keynoting at Privacy Preserving Machine Learning (PPML) Workshop at the ACM CCS 2021 Conference (19 November 2021)

This one-day workshop focuses on privacy-preserving techniques for training, inference, and disclosure in large-scale data analysis, in both distributed and centralized settings. Mireille Hildebrandt will deliver a keynote on PPML and the AI Act’s Fundamental Rights Impact Assessment (FRIA) for ML systems.

Abstract

Bringing the ML community together with the multi-party computation (MPC), homomorphic encryption (HE), and differential privacy (DP) communities should foster awareness within the ML community of the myriad security and privacy issues that may affect both the reliability of ML systems and the confidentiality of the information these systems process. In this talk I will discuss how reducing access to identifiable information may nevertheless increase the risk to other fundamental rights, notably non-discrimination, the presumption of innocence, and freedom of information. This will be followed by an analysis of the legal requirement of a fundamental rights impact assessment (FRIA), with reference to the EU’s GDPR and the EU’s proposed AI Act.

For more information, click here.

For slides, click here.
