The unique risk-based EU Artificial Intelligence Act

by Tony Petrov, Chief Legal Officer at Sumsub

Tony Petrov, chief legal officer at Sumsub, summarises the rules set out in the EU Artificial Intelligence Act (EU AI Act) and how they might affect the industry.

Regulators worldwide are taking aim at the AI industry. The UK government presented a white paper on responsible innovation in artificial intelligence (‘A Pro-Innovation Approach to AI Regulation’) in March 2023, following similar moves by the European Commission and the US.

Meanwhile, US president Joe Biden signed an executive order in February 2023 directing federal agencies to root out bias in the design and use of AI technologies and to protect the public from algorithmic discrimination, particularly on the basis of race.

Of all these developments, however, the most significant appears to be the EU AI Act (the Act). The Act, which is not yet in force, constitutes a unique set of rules for the risk-based regulation of AI technologies. Below is a summary of what it proposes.

Definitions

First of all, the Act provides a definition of AI technology, which is ‘software that is developed with one or more of the techniques listed in a special annex to the regulation, and that can, for a given set of human-defined objectives, generate outputs such as content, predictions, recommendations or decisions influencing the environments they interact with.’

So far, the annex lists only three categories of techniques used in the development of AI systems:

  1. Machine learning, including supervised, unsupervised, reinforcement learning and deep learning;
  2. Logic- and knowledge-based approaches, including knowledge representation, inductive (logic) programming, knowledge bases, inferences and deductive engines, (symbolic) reasoning and expert systems; and
  3. Statistical approaches, Bayesian estimation, search and optimisation methods.

The Act also differentiates between the various entities that use AI systems, namely:

  • Providers—anyone (entity or individual) who develops an AI system;
  • Users—those who use AI;
  • Representatives—someone who officially represents the AI provider in the EU;
  • Importers and distributors—those who distribute the AI system in the EU; and
  • Operators—any of the above.

These definitions cast a wide net over the entities potentially subject to the Act’s legal regime.

Risk-based approach

Just like the EU General Data Protection Regulation (GDPR), the Act establishes different rules on AI based on risk level. This means that the Act’s measures range from the implementation of certain safeguards to outright prohibition, depending on the nature of and risks presented by the AI system in question.

Below are the three risk levels defined by the Act:

  • Unacceptable risk;
  • High risk; and
  • Limited risk.

AI systems bearing unacceptable risk are considered a threat and will therefore be banned. These include the following (modelled in the code sketch after this list):

  1. Systems relying on subliminal techniques;
  2. Systems that exploit specific vulnerabilities of people (such as children);
  3. Systems enabling social classification of people (or ‘social scoring’); and
  4. Systems for real-time, remote biometric identification.
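
To make the tiering concrete, here is a minimal Python sketch of how a compliance team might encode the Act’s risk levels and screen for the banned categories above. All of the identifiers here are hypothetical illustrations; the Act itself prescribes no code, schema, or naming.

```python
from enum import Enum


class RiskTier(Enum):
    """The three risk levels defined by the Act."""
    UNACCEPTABLE = "unacceptable"  # banned outright
    HIGH = "high"                  # permitted, subject to strict measures
    LIMITED = "limited"            # permitted, subject to transparency rules


# Hypothetical tags for the four banned practice categories listed above.
BANNED_PRACTICES = {
    "subliminal_techniques",
    "exploiting_vulnerabilities",
    "social_scoring",
    "realtime_remote_biometric_id",
}


def screen_system(practices: set[str]) -> RiskTier | None:
    """Return UNACCEPTABLE if the system uses any banned practice.

    Returns None when no banned practice is found, meaning the system
    still needs to be assessed against the high- and limited-risk criteria.
    """
    if practices & BANNED_PRACTICES:
        return RiskTier.UNACCEPTABLE
    return None
```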

High-risk AI technologies, meanwhile, relate to the following:

1) AI systems that are used in products falling under the EU’s product safety legislation, including toys, aviation, cars, medical devices, and lifts.

2) AI systems falling into eight specific areas that will have to be registered in an EU database (see the registration sketch after this list):

  • Biometric identification and categorisation of natural persons;
  • Management and operation of critical infrastructure;
  • Education and vocational training;
  • Employment, worker management and access to self-employment;
  • Access to and enjoyment of essential private services and public services and benefits;
  • Law enforcement;
  • Migration, asylum and border control management; and
  • Assistance in legal interpretation and application of the law.
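
As an illustration of that registration duty, the sketch below models the eight areas as constants and a minimal registration record. The field names and the requires_eu_registration helper are assumptions made for illustration; the Act mandates the registration, not any particular data structure.

```python
from dataclasses import dataclass

# The eight high-risk areas listed above, as hypothetical identifiers.
HIGH_RISK_AREAS = frozenset({
    "biometric_identification",
    "critical_infrastructure",
    "education_and_training",
    "employment",
    "essential_services",
    "law_enforcement",
    "migration_and_border_control",
    "legal_interpretation",
})


@dataclass
class HighRiskRegistration:
    """Minimal registration record a provider might file in the EU database."""
    provider: str
    system_name: str
    area: str
    intended_purpose: str


def requires_eu_registration(area: str) -> bool:
    """True when the system's area is one of the eight listed areas."""
    return area in HIGH_RISK_AREAS
```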

Limited-risk AI systems include systems that generate or manipulate images, audio or video. These will have to comply with minimal transparency requirements, ensuring users are aware they are interacting with AI when using such applications.

For instance, generative AI tools such as ChatGPT or Midjourney will have to comply with transparency rules requiring them to (see the sketch after this list):

  • Disclose that content was AI-generated;
  • Design the model to prevent it from generating illegal content; and
  • Publish summaries of copyrighted data used for training.
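
In practice, the first of these duties could be as simple as attaching a disclosure label to every generated output. The sketch below shows one assumed approach; the Act mandates the disclosure, not this (or any) particular mechanism, and the model name is invented for the example.

```python
from dataclasses import dataclass
from datetime import datetime, timezone


@dataclass
class GeneratedContent:
    """Model output bundled with an AI-generated disclosure (illustrative)."""
    text: str
    model_name: str
    generated_at: str
    ai_generated: bool = True  # always disclosed, per the transparency rules

    def with_disclosure(self) -> str:
        """Render the output with a human-readable disclosure line."""
        return f"{self.text}\n\n[AI-generated by {self.model_name} at {self.generated_at}]"


# Usage: label output from a hypothetical generative model.
output = GeneratedContent(
    text="Draft marketing copy...",
    model_name="example-llm",
    generated_at=datetime.now(timezone.utc).isoformat(),
)
print(output.with_disclosure())
```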

What exactly is prohibited and what is not?

While these categories are still being clarified, prohibited practices so far include:

  • Subliminal techniques beyond a person’s consciousness that aim to materially distort a person’s behaviour in a manner that causes or is likely to cause that person or another person physical or psychological harm.

Example: AI-driven search engine manipulation for marketing purposes that makes a person radically change his or her consumption habits.

  • Techniques exploiting any of the vulnerabilities of a specific group of persons due to their age (or physical or mental disability) in order to materially distort behaviour in a manner that causes or is likely to cause physical or psychological harm.

Example: A toy with an integrated voice assistant which encourages a minor to engage in progressively dangerous behaviour.

  • Techniques evaluating or classifying the trustworthiness of natural persons over a certain period based on their social behaviour, resulting in a social score.

Example: Social scoring systems in China.

  • The use of real-time remote biometric identification systems in publicly accessible spaces for the purpose of law enforcement, unless it is strictly necessary for one of the following objectives:

(i) The targeted search for specific potential victims of crime, including missing children;

(ii) The prevention of a specific, substantial and imminent threat to the life or physical safety of natural persons or of a terrorist attack;

(iii) The detection, localisation, identification or prosecution of a perpetrator or suspect of a criminal offence.

Example: CCTV with automated and indiscriminate facial recognition in China.

Measures for high-risk AI systems

The Act establishes a number of measures that high-risk AI systems must implement. These measures should be the result of the provider’s ex-ante risk self-evaluation.

These measures include the following (a record-keeping sketch follows the list):

  • Introduction of risk management systems (identification and evaluation of risks; regular testing and risk mitigation).
  • Data governance (training, validation, and testing data sets will be subject to appropriate data governance and management practices).
  • Maintenance of technical documentation (technical documentation will be drawn up in such a way as to demonstrate that the high-risk AI system complies with the requirements of the Act).
  • Record keeping (capabilities enabling the automatic recording of events or ‘logs’).
  • Transparency and the provision of information to users (including the identity and contact details of the provider and, where applicable, its authorised representative; and characteristics, capabilities, human oversight, intended purpose and limitations of performance of the high-risk AI system).
  • Human oversight (AI systems need to be designed and developed so that they can be effectively overseen by natural persons).
  • Accuracy, robustness and cybersecurity (AI systems need to be designed and developed to perform consistently throughout their life cycle).

All categories of AI operators, including providers, importers, representatives, distributors and users, will have to implement the above measures.

The EU AI Act is expected to create a new compliance vertical in each EU company dealing with AI, just like the EU General Data Protection Regulation did in 2018. This means that AI is expected to become a popular area of consultancy not only in the EU, but also in other regions.

In the near future, we’ll see more attempts to regulate AI worldwide based on the EU AI Act. This means regulators will make further attempts at defining what constitutes limited risk, high risk, or outright prohibited practices.

Tony Petrov is the chief legal officer at Sumsub.
