The main points of her feedback are: (1) the beneficial character of AI cannot be assumed but must be mathematically as well as empirically verifiable; (2) Europe should therefore refrain from advocating the uptake of AI per se, restricting itself to reliable AI that ‘does’ what it is claimed to do, while taking a precautionary approach that takes uncertainty seriously and involves those who may suffer adverse effects; (3) legislation for AI is best inspired by the GDPR safeguards for automated decision-making, meaning that its scope should not be restricted to vague definitions of AI but should apply whenever the behaviour of automated computing systems has a major effect on natural persons. She also argues for AI that functions as ‘a machine in the loop’, rather than AI systems that merely keep ‘a human in the loop’.
The word ‘compliance’ has entered our daily vocabulary, pervading regulatory strategy and discourse in European Union law. But what…
Duarte spoke on the Google and Apple Exposure Notifications System at Kozminski University (24 June 2022)
In April 2020, Google and Apple announced a joint project within whose constraints countries could develop proximity-tracing apps, called…
Mireille Hildebrandt presented on The New Methodenstreit in Machine Learning at the International Conference on Explaining Machines, at the University…
Van den Hoven presents on normativity and international computational law at Aberdeen University (23 June 2022)
International legal scholarship has in recent years begun to pay much closer attention to the impact of artificial intelligence (AI)…