Last week, it was reported that a draft of the EU Commission’s proposal for a new regulation on the use of artificial intelligence – the so-called AI Act – had been leaked. The Commission has now presented its official proposal.
The official proposal is very similar to the draft recently published online. However, the regulation’s maximum penalty fees have been raised, and the prohibition on generalised and indiscriminate mass surveillance has been narrowed.
On Wednesday, after a few days of speculation regarding the draft proposal leaked on the internet last week, the Commission published its official proposal for a new AI Act.
The official proposal, much like the draft, targets actors who provide or use AI systems – with some exceptions for AI systems covered by sector-specific legislation and AI used for military purposes. An AI system is defined in the proposal as software developed using certain methods or techniques – e.g. machine learning or inductive programming – that, for a given set of human-defined objectives, can generate content, predictions, recommendations, or decisions influencing the environments with which it interacts.
Article 5 of the proposed regulation sets forth a general prohibition against certain kinds of AI. The prohibition targets AI systems that:
- deploy “subliminal techniques” in order to distort a person’s behaviour so as to cause that person, or another person, physical or psychological harm,
- exploit the vulnerabilities of a group of persons due to their age or physical or mental disability, in order to distort the behaviour of a person belonging to that group in a manner that causes that person, or another person, physical or psychological harm, or
- are provided or used by public authorities to evaluate or classify the trustworthiness of natural persons based on their behaviour or personality characteristics in a way that is detrimental or unfavourable (i.e. social scoring).
Unlike the draft proposal, the official proposal does not contain a prohibition per se against “generalised and indiscriminate mass surveillance”. Instead, it prohibits the use of “real-time remote biometric identification systems in publicly accessible spaces for the purpose of law enforcement”. The prohibition covers, e.g., live camera surveillance with face recognition software deployed by the police in public transportation facilities or shopping malls. According to the preamble, the prohibition is limited to physical spaces and does not apply to online environments. The prohibition also provides exemptions for, e.g., targeted searches for missing children, victims of crime and certain criminal suspects.
The proposal also contains special requirements for “high risk” AI systems. “High risk” AI systems, according to the proposal, include AI used for recruitment, admission to educational institutions, assessment of students, credit assessments, assessment of evidence in criminal investigations, asylum assessments, and judicial assessments in courts of law. It is proposed, for instance, that such systems shall be trained and tested on data that is “relevant, representative, free of errors and complete”, that the systems shall enable automatic logging of system events and use, that the systems’ functioning shall be transparent, and that the systems shall be subject to human oversight and control.
Certain transparency obligations also apply to AI systems that are not considered “high risk”. According to article 52 of the proposal, AI systems intended to interact with natural persons shall be developed in such a way that natural persons are informed that they are interacting with an AI system, unless this is obvious from the circumstances. Users of AI systems that generate or manipulate image, audio or video content that appreciably resembles existing persons, places, objects, or events (so-called “deep fakes”) shall disclose that the content has been artificially created or manipulated. Users of an “emotion recognition system” or a “biometric categorisation system” shall inform the natural persons exposed to such a system of its operation.
The transparency obligations in article 52 do not apply to the lawful use of AI systems intended to detect, prevent, and investigate criminal offences. The special transparency obligation targeting the use of “deep fakes” further exempts cases where the use is necessary for the exercise of the freedom of expression and the freedom of the arts and sciences.
Like the leaked draft, the official proposal contains mandatory rules on turnover-based penalty fees in cases of non-compliance with the regulation – e.g. with regard to the general prohibitions set forth in article 5. The maximum penalties prescribed in the official proposal have, however, been raised – from 4 to 6 percent of the infringing company’s global annual turnover.
The Commission’s proposal will now be considered by the European Parliament and the Council of the European Union under the ordinary legislative procedure. If the proposal is adopted, the regulation will be directly applicable in all EU member states.
The Commission’s official proposal and further information about the initiative are available here.
If you have any questions regarding the draft proposal, or want to know more about the Commission’s approach towards artificial intelligence, please feel free to contact attorneys Esa Kymäläinen and Jonas Forzelius of TIME DANOWSKY Advokatbyrå.