A proposal for EU legislation intended to regulate the use of artificial intelligence (AI) was expected to be made public later this week. But the Commission’s draft proposal has been leaked in advance.
The Commission’s draft regulation proposes a general prohibition on certain AI systems used for mass surveillance and general social ranking, rules on AI-generated “deepfakes”, and turnover-based corporate fines for non-compliant practices.
The draft proposal reflects the political agenda of Commission President Ursula von der Leyen, published during her electoral campaign, promising “a coordinated European approach on the human and ethical implications of Artificial Intelligence”. The proposal is also preceded by a Commission white paper published in February 2020, supporting “a regulatory and investment oriented approach with the twin objective of promoting the uptake of AI and of addressing the risks associated with certain uses of this new technology”.
The draft proposal, recently published online, targets actors who provide or use AI systems – with some exceptions for AI systems covered by sector-specific legislation and AI used for military purposes. An AI system is defined in the draft proposal as software developed using certain methods or techniques – e.g. machine learning or inductive programming – that, for a given set of human-defined objectives, can generate content, predictions, recommendations, or decisions influencing real or virtual environments.
The draft proposal sets out a general prohibition on certain types of applied AI “contravening the Union values or violating fundamental rights protected under Union law”. This includes, for instance, AI systems developed or used for:
- manipulating human behaviour, opinions, or decisions to the detriment of a person,
- exploiting information or predictions about a person or group of persons in order to target their vulnerabilities or special circumstances, causing them to behave, form an opinion or take a decision to their detriment,
- generalised and indiscriminate mass surveillance of natural persons,
- general social scoring of natural persons.
Social scoring, according to the draft proposal, consists of the large-scale evaluation or classification of the trustworthiness of natural persons based on their behaviour or characteristics, which may lead to detrimental treatment of groups or individuals.
The draft proposal also sets forth additional requirements for “high risk” AI systems. For such systems, the draft proposes, among other things, that the system be developed and tested on the basis of high-quality input, that output emanating from the system be verifiable and traceable throughout the system’s lifecycle, that the functioning of the system be transparent, and that the system be subject to the supervision and control of human beings.
Certain transparency obligations are also proposed for AI systems that are not deemed to constitute “high risk” AI. According to the draft proposal, AI systems intended to interact with natural persons shall be designed and developed in such a manner that natural persons are notified that they are interacting with an AI system. For AI systems used to generate or manipulate image, audio or video content that appreciably resembles existing persons, places, objects, or events (so-called “deepfakes”), it shall be duly disclosed that the content has been artificially created or manipulated.
According to the draft proposal, member states shall be responsible for implementing effective and proportionate sanctions applicable in cases of non-compliance with the regulation. For some infringements, such as infringements of the general prohibitions set out in the regulation (see above), the draft proposes mandatory administrative fines of up to 4 percent of the worldwide annual turnover of the infringing company.
The Commission’s official proposal is expected to be made public shortly.
If you have any questions regarding the draft proposal, or want to know more about the Commission’s approach towards artificial intelligence, please feel free to contact attorneys Esa Kymäläinen and Jonas Forzelius of TIME DANOWSKY Advokatbyrå.