EU AI Act: risk classification and impact on digital accessibility

The use of Artificial Intelligence in the European Union is regulated by the AI Act, the world’s first comprehensive AI law. Proposed by the Commission in April 2021 and agreed by the European Parliament and the Council in December 2023, the AI Act addresses potential risks to the health, safety, and fundamental rights of citizens while supporting the development of innovative and responsible AI in the EU.

What was the goal of the legislators? Parliament's priority was to make sure that AI systems used in the EU are safe, transparent, traceable, non-discriminatory and environmentally friendly. AI systems should be overseen by people, rather than by automation, to prevent harmful outcomes. Parliament also wanted to establish a technology-neutral, uniform definition for AI that could be applied to future AI systems.

European Union lawmakers signed the Artificial Intelligence (AI) Act in June 2024. The AI Act, the first binding horizontal regulation on AI worldwide, sets a common framework for the use and supply of AI systems in the EU.

The AI Act was published in the EU's Official Journal on 12 July 2024 and entered into force on 1 August 2024. The new law becomes applicable on 2 August 2026, twenty-four months after entry into force. There are, however, special transition periods for certain categories of provisions; the key milestones are listed in the section "When will the AI Act be fully applicable?" below.

Different rules for different risk levels

The new act classifies AI systems under a 'risk-based approach', with different requirements and obligations tailored to the level of risk. Some AI systems presenting 'unacceptable' risks are prohibited. A wide range of 'high-risk' AI systems that can have a detrimental impact on people's health, safety or fundamental rights are authorised, but subject to a set of requirements and obligations to gain access to the EU market.

AI systems posing limited risks, mainly stemming from a lack of transparency, will be subject to information and transparency obligations, while AI systems presenting only minimal risk for people will not be subject to further obligations.

1. Unacceptable risk

Anything considered a clear threat to EU citizens will be banned: from social scoring by governments to toys using voice assistance that encourage dangerous behaviour in children. Banned AI applications in the EU include:

  • Cognitive behavioural manipulation of people or specific vulnerable groups: for example, voice-activated toys that encourage dangerous behaviour in children.
  • Social scoring AI: classifying people based on behaviour, socio-economic status or personal characteristics.
  • Biometric identification and categorisation of people.
  • Real-time and remote biometric identification systems, such as facial recognition in public spaces.

Some exceptions may be allowed for law enforcement purposes. “Real-time” remote biometric identification systems will be allowed in a limited number of serious cases, while “post” remote biometric identification systems, where identification occurs after a significant delay, will be allowed to prosecute serious crimes and only after court approval.

2. High risk

AI systems that negatively affect safety or fundamental rights will be considered high risk and will be divided into two categories:

1) AI systems that are used in products falling under the EU’s product safety legislation. This includes toys, aviation, cars, medical devices and lifts.

2) AI systems falling into specific areas that will have to be registered in an EU database:

  • Management and operation of critical infrastructure
  • Education and vocational training
  • Employment, worker management and access to self-employment
  • Access to and enjoyment of essential private services and public services and benefits
  • Law enforcement
  • Migration, asylum and border control management
  • Assistance in legal interpretation and application of the law.

All high-risk AI systems will be assessed before being put on the market and also throughout their lifecycle. People will have the right to file complaints about AI systems to designated national authorities.

3. Limited risk

AI systems such as chatbots are subject to minimal transparency obligations, intended to allow those interacting with the content to make informed decisions. The user can then decide to continue or step back from using the application.

4. Minimal risk

Free use of applications such as AI-enabled video games or spam filters is allowed. The vast majority of AI systems currently used in the EU fall into this category, where the new rules do not intervene, as these systems represent only minimal or no risk to citizens' rights or safety.
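
As a rough mental model (not a legal analysis), the four tiers and the obligations attached to them can be sketched in a few lines of Python; the example use cases mapped below are illustrative assumptions, not classifications taken verbatim from the Act.

```python
from enum import Enum

class RiskTier(Enum):
    """The four risk tiers of the EU AI Act and their broad consequences."""
    UNACCEPTABLE = "prohibited outright"
    HIGH = "conformity assessment, registration and lifecycle monitoring"
    LIMITED = "information and transparency obligations"
    MINIMAL = "no additional obligations"

# Illustrative mapping only; real classification requires legal review
# of Annex I (regulated products) and Annex III (high-risk use cases).
EXAMPLES = {
    "social scoring by public authorities": RiskTier.UNACCEPTABLE,
    "CV-screening software for recruitment": RiskTier.HIGH,
    "customer-service chatbot": RiskTier.LIMITED,
    "spam filter": RiskTier.MINIMAL,
}

for use_case, tier in EXAMPLES.items():
    print(f"{use_case}: {tier.name} -> {tier.value}")
```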

Transparency requirements

Generative AI, like ChatGPT, will not be classified as high-risk, but will have to comply with transparency requirements and EU copyright law:

  • Disclosing that the content was generated by AI
  • Designing the model to prevent it from generating illegal content
  • Publishing summaries of copyrighted data used for training

High-impact general-purpose AI models that might pose systemic risk, such as the more advanced AI model GPT-4, would have to undergo thorough evaluations and any serious incidents would have to be reported to the European Commission.

Content that is either generated or modified with the help of AI - images, audio or video files (for example deepfakes) - needs to be clearly labelled as AI generated so that users are aware when they come across such content.
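
How a provider implements this labelling is not prescribed in detail; provenance standards such as C2PA exist for media files. Purely as a sketch, a generation pipeline could attach a machine-readable disclosure to every artefact it produces; the field names below (ai_generated, generator) are invented for illustration.

```python
from dataclasses import dataclass, field

@dataclass
class GeneratedContent:
    """Illustrative wrapper pairing generated media with provenance metadata."""
    body: bytes
    media_type: str                       # e.g. "image/png" or "audio/wav"
    metadata: dict = field(default_factory=dict)

def label_as_ai_generated(content: GeneratedContent, model_name: str) -> GeneratedContent:
    # Record a machine-readable disclosure so any downstream consumer
    # can surface the "AI generated" notice to users.
    content.metadata["ai_generated"] = True
    content.metadata["generator"] = model_name
    return content

image = label_as_ai_generated(
    GeneratedContent(body=b"...", media_type="image/png"),
    model_name="example-image-model",
)
print(image.metadata)  # {'ai_generated': True, 'generator': 'example-image-model'}
```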

Who does the EU AI Act apply to?

The EU AI Act applies to multiple operators in the AI value chain, such as providers, deployers, importers, distributors, product manufacturers and authorised representatives.

The majority of obligations fall on providers (developers) of high-risk AI systems:

  • those that intend to place high-risk AI systems on the market or put them into service in the EU, regardless of whether they are based in the EU or in a third country;
  • and also third-country providers where the high-risk AI system’s output is used in the EU.

The EU AI Act also applies to providers and deployers outside of the EU if their AI, or the outputs of the AI, are used in the EU. For example, suppose a company in the EU sends data to an AI provider outside the EU, who uses AI to process the data, and then sends the output back to the company in the EU for use. Because the output of the provider’s AI system is used in the EU, the provider is bound by the EU AI Act.

Providers outside the EU that offer AI services in the EU must designate authorised representatives in the EU to coordinate compliance efforts on their behalf.

While the act has a broad reach, some uses of AI are exempt. Purely personal uses of AI, and AI models and systems used only for scientific research and development, are examples of exempt uses of AI.

EU AI Act and Digital Accessibility 

Digital accessibility is a civil right of people with disabilities. All people should have equal access to products and services, regardless of existing or potential limitations. This right is now protected by law in the EU.

Mandatory accessibility for high-risk AI is now written into Article 16. This ensures that no high-risk AI system can be deployed in the EU unless it meets the necessary accessibility standards, so that people with disabilities are not excluded or discriminated against. The AI Act refers specifically to the European Accessibility Act and the Web Accessibility Directive.

The European Disability Forum summarises it as follows: the AI Act requires that high-risk AI systems be accessible to everyone, including persons with disabilities, as described in Article 16(l). AI developers are required to apply the principles of universal design from the outset to ensure that these systems are accessible and usable by people with different abilities and needs.

Recital 80 emphasises the EU's legal obligation as a signatory to the United Nations Convention on the Rights of Persons with Disabilities (UNCRPD) to protect persons with disabilities from discrimination and to ensure equal access to information and communication technologies. It emphasises the need to apply the principles of universal design to ensure full and equal access for all, taking particular account of the dignity and diversity of persons with disabilities.
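
In practice, accessibility conformance is assessed against standards such as EN 301 549, which builds on the WCAG success criteria. As a deliberately tiny illustration of what automating one such check might look like (WCAG 1.1.1, text alternatives for non-text content), the sketch below flags img tags without an alt attribute using only Python's standard library; real audits combine full toolchains and manual testing.

```python
from html.parser import HTMLParser

class MissingAltChecker(HTMLParser):
    """Flags <img> tags that lack an alt attribute, one small WCAG check."""
    def __init__(self):
        super().__init__()
        self.violations = []

    def handle_starttag(self, tag, attrs):
        if tag == "img" and "alt" not in dict(attrs):
            self.violations.append(self.getpos())  # (line, column) of the tag

checker = MissingAltChecker()
checker.feed('<p>Risk tiers:</p><img src="risk-tiers.png">')
print(checker.violations)  # [(1, 18)] -> one image is missing alt text
```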

Article 5(1)(b) contains a provision that prohibits the use of artificial intelligence to exploit people who are vulnerable to exploitation, e.g. because they have a disability or live in extreme poverty. This does not refer to a specific type of AI, but rather to undesirable behaviour by different types of AI. It covers systems deliberately designed to manipulate, but the rule also applies when a system acts manipulatively without its creator intending it to.

An example to illustrate this is a personal health assistant designed to support individuals managing a chronic condition. If the assistant exaggerates the risks associated with the condition and pushes the user into buying unnecessary medical treatments or expensive monitoring devices, it exploits the user’s situation.

Another example: suppose your mobility is limited and you use a virtual personal assistant like Alexa to control your automated home, for example to close the door, switch on the lights, or order products to be delivered. If you use this device daily, you could be taken advantage of if it recommends premium services or home-automation products that you don’t actually need.

There is a risk that you will be manipulated to buy things you don’t need, or that you will buy a version that is more expensive than you need. If you interact a lot with an AI, it gets a lot of data about you. It can make assumptions about your health, your goals and your fears.

The purpose of banning this type of commercial practice is to prevent the AI from using your fears to persuade you to do something that you don’t want or that is bad for you. This rule helps people with disabilities maintain autonomy and control over their lives, even as they increasingly rely on AI and smart technology.

How to assess AI risks in your product?

Evaluate your business needs and strategy regarding the implementation of the AI Act. Identify the current or planned use of AI systems in your organisation and compare it to the risk classification levels. Proactively plan the implementation of an AI governance framework to stay ahead of the regulations. Be mindful of the staggered enforcement of the Act and prioritise which requirements and risks to address first.

The AI Act sets out a solid methodology for the classification of AI systems as high-risk. This aims to provide legal certainty for businesses and other operators.

The risk classification is based on the intended purpose of the AI system, in line with existing EU product safety legislation. This means that the classification depends on the function performed by the AI system and on the specific purpose and modalities for which the system is used.

AI systems can be classified as high-risk in two cases:

  • If the AI system is embedded as a safety component in a product covered by existing product legislation (Annex I), or constitutes such a product itself. This could be, for example, AI-based medical software.
  • If the AI system is intended to be used for a high-risk use case listed in Annex III to the AI Act. The list includes use cases in areas such as education, employment, law enforcement or migration.
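
Reduced to a decision rule, the two routes can be sketched as below. This is a drastic simplification for illustration: among other things, it ignores the Article 6(3) derogation for Annex III systems that do not pose a significant risk, and the area names are abbreviated assumptions rather than the Annex III wording.

```python
# Abbreviated stand-ins for the Annex III areas; see the Act for the full list.
ANNEX_III_AREAS = {
    "critical infrastructure", "education", "employment",
    "essential services", "law enforcement", "migration", "justice",
}

def is_high_risk(is_annex1_safety_component: bool, use_case_area: str | None) -> bool:
    """Simplified reading of the two high-risk routes in the AI Act."""
    return is_annex1_safety_component or use_case_area in ANNEX_III_AREAS

print(is_high_risk(True, None))            # True: safety component in an Annex I product
print(is_high_risk(False, "employment"))   # True: Annex III use case
print(is_high_risk(False, "video games"))  # False: not a listed high-risk area
```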

The team at the Future of Life Institute developed the EU AI Act Compliance Checker, an interactive tool for determining whether or not your AI system will be subject to the Act's requirements.

When will the AI Act be fully applicable?

The AI Act applies from 2 August 2026, two years after entry into force, except for the following specific provisions:

  • The ban on AI systems posing unacceptable risks started to apply on 2 February 2025, six months after entry into force;
  • Codes of practice apply nine months after entry into force, from 2 May 2025;
  • The rules on governance and the obligations for general-purpose AI became applicable twelve months after entry into force, on 2 August 2025;
  • The obligations for AI systems that classify as high-risk because they are embedded in regulated products listed in Annex I (the list of Union harmonisation legislation) apply 36 months after entry into force, on 2 August 2027.
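
Because all of these deadlines are defined relative to entry into force, they are easy to reproduce with simple date arithmetic. The sketch below uses the third-party python-dateutil package and takes 2 August 2024, the day after entry into force, as its base, which is why the Act's stated application dates all fall on the 2nd of a month.

```python
from datetime import date
from dateutil.relativedelta import relativedelta  # third-party: python-dateutil

# The Act entered into force on 1 August 2024; its application dates
# fall on the day after each monthly anniversary.
BASE = date(2024, 8, 2)

MILESTONES = {
    "Prohibitions on unacceptable-risk AI": 6,
    "Codes of practice": 9,
    "GPAI rules and governance": 12,
    "General applicability": 24,
    "High-risk AI embedded in regulated products": 36,
}

for name, months in MILESTONES.items():
    print(f"{name}: {BASE + relativedelta(months=months)}")
# Prohibitions on unacceptable-risk AI: 2025-02-02
# Codes of practice: 2025-05-02
# GPAI rules and governance: 2025-08-02
# General applicability: 2026-08-02
# High-risk AI embedded in regulated products: 2027-08-02
```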

Conclusions

Considered the world's first comprehensive regulatory framework for AI, the EU AI Act prohibits some AI uses outright and implements strict governance, risk management and transparency requirements for others.

The legal framework will apply to both public and private actors inside and outside the EU as long as the AI system is placed on the Union market, or its use has an impact on people located in the EU.
