The main points of her feedback are:
(1) the beneficial character of AI cannot be assumed, but must be mathematically as well as empirically verifiable;
(2) Europe should therefore refrain from advocating the uptake of AI per se, restricting itself to reliable AI that ‘does’ what it is claimed to do, while taking a precautionary approach that takes uncertainty seriously and involves those who may suffer adverse effects;
(3) legislation for AI is best inspired by the GDPR safeguards for automated decision-making, meaning that its scope should not be restricted to vague definitions of AI but should apply whenever the behaviour of automated computing systems has a major effect on natural persons.
She also argues for AI that functions as ‘a machine in the loop’, instead of AI systems that keep ‘a human in the loop’.
On 16 December Desara Dushi gave a lecture at the Winter School on Computational Law and Cybernetics organized by the...
In recital 6 of the proposed AI Act, the Council’s draft aims to generally exclude ADM systems under the heading...
Hildebrandt talks about the real-life implications of reinforcement learning at the PERLS workshop (14 December 2021).
Reinforcement Learning (RL) is a rapidly growing branch of AI research, with the capacity to learn to exploit our dynamic...
Laurence’s article ‘Interpreting the Rule(s) of Code: Performance, Performativity, and Production’ has been published open access in Release 3.2 of...