Tony Petrov, chief legal officer at Sumsub, summarises the rules set out in the EU Artificial Intelligence Act (EU AI Act) and how they might affect the industry.
Regulators worldwide are taking aim at the AI industry. The UK government presented a white paper on responsible innovation in artificial intelligence (‘A Pro-Innovation Approach to AI Regulation’) in March 2023, following similar moves by the European Commission and the US.
Meanwhile, US president Joe Biden signed an executive order in February 2023 directing federal agencies to root out bias in the design and use of AI technologies and to guard against algorithmic discrimination, particularly on the basis of race.
The most important of these developments, however, seems to be the EU AI Act (the Act). The Act, which is not yet in force, constitutes a unique set of rules for risk-based regulation of AI technologies. We’ll now summarise what the Act proposes.
Definitions
First of all, the Act defines AI technology as ‘software that is developed with one or more of the techniques listed in a special annex to the regulation, and that can, for a given set of human-defined objectives, generate outputs such as content, predictions, recommendations or decisions influencing the environments they interact with.’
So far, the annex has listed only three categories of techniques used in the development of AI systems:
- Machine learning, including supervised, unsupervised, reinforcement learning and deep learning;
- Logic- and knowledge-based approaches, including knowledge representation, inductive (logic) programming, knowledge bases, inferences and deductive engines, (symbolic) reasoning and expert systems; and
- Statistical approaches, Bayesian estimation, search and optimisation methods.
The Act also differentiates between the various entities that use AI systems, namely:
- Providers—anyone (entity or individual) who develops an AI system;
- Users—those who use AI;
- Representatives—someone who officially represents the AI provider in the EU;
- Importers and distributors—those who distribute the AI system in the EU; and
- Operators—any of the above.
These definitions cast a wide net over the entities potentially subject to the Act’s legal regime.
Risk-based approach
Just like the EU General Data Protection Regulation (GDPR), the Act establishes different rules for AI systems based on their risk level. This means that the Act’s measures range from the implementation of certain safeguards to outright prohibition, depending on the nature of, and risks presented by, the AI system in question.
Below are the three risk levels defined by the Act:
- Unacceptable risk;
- High risk; and
- Limited risk.
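For readers who think in code, the risk-based approach can be pictured as a simple mapping from tier to the kind of obligation the Act attaches to it. The sketch below is purely illustrative: the tier names follow the Act as summarised here, but the mapping structure and the obligation wording are shorthand assumptions, not the legal text.

```python
from enum import Enum

class RiskLevel(Enum):
    """Risk tiers named in the Act, as summarised in this article."""
    UNACCEPTABLE = "unacceptable"  # banned outright
    HIGH = "high"                  # registration, documentation and oversight duties
    LIMITED = "limited"            # transparency duties

# Hypothetical mapping, for illustration only; the actual obligations
# are defined by the Act itself, not by this dictionary.
OBLIGATIONS = {
    RiskLevel.UNACCEPTABLE: ["prohibited from the EU market"],
    RiskLevel.HIGH: [
        "registration in the EU database",
        "risk management system",
        "technical documentation and automatic logging",
        "human oversight",
    ],
    RiskLevel.LIMITED: ["disclosure to users that they are interacting with AI"],
}

for level, duties in OBLIGATIONS.items():
    print(f"{level.value}: {'; '.join(duties)}")
```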
AI systems bearing unacceptable risk are considered a threat and therefore must be banned. These include:
- Systems relying on subliminal techniques;
- Systems that exploit specific vulnerabilities of people (such as children);
- Systems enabling social classification of people (or ‘social scoring’); and
- Systems for real-time, remote biometric identification.
High-risk AI technologies, meanwhile, relate to the following:
1) AI systems that are used in products falling under the EU’s product safety legislation, including toys, aviation, cars, medical devices, and lifts.
2) AI systems falling into eight specific areas that will have to be registered in an EU database:
- Biometric identification and categorisation of natural persons;
- Management and operation of critical infrastructure;
- Education and vocational training;
- Employment, worker management and access to self-employment;
- Access to and enjoyment of essential private services and public services and benefits;
- Law enforcement;
- Migration, asylum and border control management; and
- Assistance in legal interpretation and application of the law.
Limited-risk AI systems include systems that generate or manipulate images, audio or video. These will have to comply with minimal transparency requirements, ensuring users are aware they are interacting with AI when using such applications.
For instance, providers of generative AI systems, like ChatGPT or Midjourney, will have to comply with transparency rules that require them to:
- Disclose that the content was AI-generated;
- Design the model to prevent it from generating illegal content; and
- Publish summaries of copyrighted data used for training.
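One way a provider might meet the first of these rules in practice is to attach provenance metadata and a user-facing notice to every generated output. The following Python sketch is a hypothetical illustration of that idea; the GeneratedContent structure, its field names and the generate_caption stand-in are assumptions made for the example, not anything prescribed by the Act.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class GeneratedContent:
    """A generated output bundled with a machine-readable AI disclosure."""
    text: str
    # Disclosure metadata: hypothetical field names, not mandated by the Act.
    ai_generated: bool = True
    model_name: str = "example-generative-model"
    generated_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

    def user_facing_label(self) -> str:
        """Human-readable notice shown alongside the content."""
        return f"This content was generated by AI ({self.model_name})."

def generate_caption(prompt: str) -> GeneratedContent:
    """Stand-in for a real model call; returns the output plus its disclosure."""
    output_text = f"[model output for: {prompt}]"
    return GeneratedContent(text=output_text)

result = generate_caption("a caption for a product photo")
print(result.text)
print(result.user_facing_label())
```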
What exactly is prohibited and what is not?
While these categories are still being clarified, prohibited practices so far include:
- Subliminal techniques beyond a person’s consciousness that aim to materially distort a person’s behaviour in a manner that causes or is likely to cause that person or another person physical or psychological harm.
Example: AI-driven search engine manipulation for marketing purposes that makes a person radically change his or her consumption habits.
- Techniques exploiting any of the vulnerabilities of a specific group of persons due to their age (or physical or mental disability) in order to materially distort behaviour in a manner that causes or is likely to cause physical or psychological harm.
Example: A toy with an integrated voice assistant which encourages a minor to engage in progressively dangerous behaviour.
- Techniques that evaluate or classify the trustworthiness of natural persons over a certain period based on their social behaviour, producing a social score.
Example: Social scoring systems in China.
- The use of real-time remote biometric identification systems in publicly accessible spaces for the purpose of law enforcement, unless it is strictly necessary for one of the following objectives:
(i) The targeted search for specific potential victims of crime, including missing children;
(ii) The prevention of a specific, substantial and imminent threat to the life or physical safety of natural persons or of a terrorist attack;
(iii) The detection, localisation, identification or prosecution of a perpetrator or suspect of a criminal offence.
Example: CCTV with automated and indiscriminate facial recognition in China.
Measures for high-risk AI systems
The Act establishes a number of measures that high-risk AI systems must implement. These measures should be the result of the provider’s ex-ante risk self-assessment.
These measures include:
- Introduction of risk management systems (identification and evaluation of risks; regular testing and risk mitigation).
- Data governance (training, validation, and testing data sets will be subject to appropriate data governance and management practices).
- Maintenance of technical documentation (technical documentation will be drawn up in such a way as to demonstrate that the high-risk AI system complies with the requirements of the Act).
- Record keeping (capabilities enabling the automatic recording of events or ‘logs’; illustrated in the sketch below).
- Transparency and the provision of information to users (including the identity and contact details of the provider and, where applicable, its authorised representative; and characteristics, capabilities, human oversight, intended purpose and limitations of performance of the high-risk AI system).
- Human oversight (AI systems need to be designed and developed so that they can be effectively overseen by natural persons).
- Accuracy, robustness and cybersecurity (AI systems need to be designed and developed to perform consistently throughout their life cycle).
All categories of AI operators, including providers, importers, representatives, distributors and users, will have to implement the above measures.
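To illustrate the record-keeping measure in the list above, a high-risk system would typically capture every automated decision as a structured, timestamped event that can later be audited. The sketch below shows one hypothetical way to do this with Python’s standard logging module; the event fields and the score_applicant stand-in are invented for the example and are not taken from the Act.

```python
import json
import logging
from datetime import datetime, timezone

# A structured "event log" for automated decisions: an illustration of the
# record-keeping idea, not a compliance recipe.
logging.basicConfig(level=logging.INFO, format="%(message)s")
logger = logging.getLogger("ai_decision_log")

def log_decision_event(model_version: str, input_summary: str, decision: str) -> None:
    """Record one automated decision as a JSON line for later audit."""
    event = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,
        "input_summary": input_summary,
        "decision": decision,
    }
    logger.info(json.dumps(event))

def score_applicant(application: dict) -> str:
    """Stand-in for a real high-risk model (e.g. employment screening)."""
    decision = "review" if application.get("flags") else "proceed"
    log_decision_event("v1.2.0", f"applicant_id={application['id']}", decision)
    return decision

print(score_applicant({"id": "A-001", "flags": []}))
print(score_applicant({"id": "A-002", "flags": ["missing document"]}))
```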
The EU AI Act is expected to create a new compliance vertical in each EU company dealing with AI, just like the EU General Data Protection Regulation did in 2018. This means that AI is expected to become a popular area of consultancy not only in the EU, but also in other regions.
In the near future, we’ll see more attempts to regulate AI worldwide based on the EU AI Act. This means that regulators will make further attempts at defining what is limited risk, high risk, or outright prohibited.
Tony Petrov is the chief legal officer at Sumsub.