Hildebrandt discusses the EU AI Act at a Privacy Hub webinar (16 December 2021)

In recital 6 of the proposed AI Act, the Council’s draft aims to broadly exclude ADM (automated decision-making) systems from the Act, under the heading of excluding ‘classic software systems and programming’:

The definition should be based on the key functional characteristics of artificial intelligence distinguishing it from more classic software systems and programming. In particular, for the purposes of this Regulation AI systems should be intended as having the ability, on the basis of machine and/or human-based data and inputs, to infer the way to achieve a given set of human-defined objectives through learning, reasoning or modelling and to generate specific outputs in the form of content for generative AI systems (such as text, video or images), as well as predictions, recommendations, or decisions, which influence the environment with which the system interacts, be it in a physical or digital dimension.

On 16 December, Mireille Hildebrandt will take part in the webinar ‘The EU AI Act: Where do we stand after the EU Council Position?’, organised by the Brussels Privacy Hub, where she will argue that this would be a grave mistake with many implications, notably because these systems squarely fall within the scope of logic- and knowledge-based systems (Annex I(b)). They should be considered high-risk AI systems whenever their output interacts with an environment in ways that may disadvantage, discriminate against or manipulate individuals or categories of individuals. Protecting people against crapware by means of the quality control and risk assessments stipulated in the Act is crucial, especially since the protection offered by the GDPR is both different and inadequate.

For more information, click here.