As of Sunday in the European Union, the bloc’s regulators can ban the use of AI systems they deem to pose “unacceptable risk” or harm.
February 2 is the first compliance deadline for the EU’s AI Act, the comprehensive AI regulatory framework that the European Parliament finally approved last March after years of development. The act officially went into force on August 1; what’s now following is the first of the compliance deadlines.
The specifics are set out in Article 5, but broadly, the act is designed to cover a myriad of use cases where AI might appear and interact with individuals, from consumer applications to physical environments.
Under the bloc’s approach, there are four broad risk levels: (1) minimal risk (e.g., email spam filters) will face no regulatory oversight; (2) limited risk, which includes customer service chatbots, will have light-touch regulatory oversight; (3) high risk – AI for healthcare recommendations is one example – will face heavy regulatory oversight; and (4) unacceptable risk applications – the focus of this month’s compliance requirements – will be prohibited entirely.
Some of the unacceptable activities include:
- AI used for social scoring (e.g., building risk profiles based on a person’s behavior).
- AI that manipulates a person’s decisions subliminally or deceptively.
- AI that exploits vulnerabilities like age, disability, or socioeconomic status.
- AI that attempts to predict people committing crimes based on their appearance.
- AI that uses biometrics to infer a person’s characteristics, like their sexual orientation.
- AI that collects “real-time” biometric data in public places for the purposes of law enforcement.
- AI that tries to infer people’s emotions at work or school.
- AI that creates – or expands – facial recognition databases by scraping images online or from security cameras.
Companies found to be using any of the above AI applications in the EU will be subject to fines, regardless of where they are headquartered. They could be on the hook for up to €35 million (~$36 million), or 7% of their annual revenue from the prior fiscal year, whichever is greater.
The fines won’t kick in for some time, noted Rob Sumroy, head of technology at the British law firm Slaughter and May, in an interview with TechCrunch.
“Organizations are expected to be fully compliant by February 2, but … the next big deadline that companies need to be aware of is in August,” Sumroy said. “By then, we’ll know who the competent authorities are, and the fines and enforcement provisions will take effect.”
Preliminary pledges
In some ways, the February 2 deadline is a formality.
Last September, over 100 companies signed the EU AI Pact, a voluntary pledge to start applying the principles of the AI Act ahead of its entry into application. As part of the Pact, signatories – which included Amazon, Google, and OpenAI – committed to identifying AI systems likely to be classified as high risk under the AI Act.
Some tech giants, notably Meta and Apple, skipped the Pact. French AI startup Mistral, one of the AI Act’s harshest critics, also opted not to sign.
That isn’t to suggest that Apple, Meta, Mistral, or others that didn’t agree to the Pact won’t meet their obligations – including the ban on unacceptably risky systems. Sumroy points out that, given the nature of the prohibited use cases laid out, most companies won’t be engaging in those practices anyway.
“For organizations, a key concern around the EU AI Act is whether clear guidelines, standards, and codes of conduct will arrive in time – and crucially, whether they will provide organizations with clarity on compliance,” Sumroy said. “However, the working groups are, so far, meeting their deadlines on the code of conduct for … developers.”
Possible exemptions
There are exceptions to several of the AI Act’s prohibitions.
For example, the act allows law enforcement to use certain systems that collect biometrics in public places if those systems help perform a “targeted search” for, say, a kidnapping victim, or to help prevent a “specific, substantial, and imminent” threat to life. This exemption requires authorization from the appropriate governing body, and the act stresses that law enforcement can’t make a decision that “produces an adverse legal effect” on a person based solely on these systems’ outputs.
The act also carves out exceptions for systems that infer emotions in workplaces and schools where there’s a “medical or safety” justification, like systems designed for therapeutic use.
The European Commission, the EU’s executive branch, said it would release additional guidelines in “early 2025,” following a consultation with stakeholders in November. However, those guidelines have yet to be published.
It’s also unclear, Sumroy said, how other laws on the books might interact with the AI Act’s prohibitions and related provisions. Clarity may not arrive until later in the year, as the enforcement window approaches.
“It’s important for organizations to remember that AI regulation doesn’t exist in isolation,” Sumroy said. “Other legal frameworks, such as GDPR, NIS2, and DORA, will interact with the AI Act, creating potential challenges – particularly around overlapping incident notification requirements. Understanding how these laws fit together will be just as crucial as understanding the AI Act itself.”