What should you know about the AI regulation?
The AI Act is a European Union regulation that aims to ensure that AI is developed and used safely, transparently, and with respect for human rights. The regulation came into force in August 2024 and is the world's first comprehensive legal framework for AI.
EU AI regulation and risk classifications
The regulation takes a risk-based approach: obligations are scaled to the potential harm an AI application can cause. In the regulation, AI applications are classified into four risk levels, illustrated with a short sketch after the descriptions below:
Unacceptable risk (prohibited)
Applications that endanger fundamental rights or conflict with EU values. Examples include social scoring and AI-based emotion recognition in educational institutions and workplaces.
High risk
Use cases that may have a significant impact on people's safety, health, or fundamental rights. High-risk systems include, for example, certain AI use cases in law enforcement and judicial proceedings, such as preparing court decisions.
Limited risk (transparency risk)
This category covers AI applications that users interact with directly but that do not make decisions independently. A typical example is a website's customer service chatbot. Using these systems is permitted when users are clearly informed that they are interacting with an AI-powered system and not a human. The goal is to ensure that users do not form a false impression of the system's nature.
Minimal risk
Everyday AI applications that do not involve any particular risk, such as text autocorrection or music recommendations in a streaming service.
In addition, the AI Act addresses so-called systemic risks that may arise from large general-purpose AI models. Because such models are highly capable and multi-purpose, they often form the basis for many AI applications in the EU.
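To make the four-level classification easier to picture, here is a minimal Python sketch of how an organization might record its own AI systems against these categories. It is an illustration only, not wording from the regulation: the EXAMPLE_OBLIGATIONS mapping and the describe helper are hypothetical shorthand for the obligations summarized above.

from enum import Enum

class RiskCategory(Enum):
    """Simplified illustration of the EU AI Act's risk levels."""
    UNACCEPTABLE = "unacceptable"  # prohibited practices, e.g. social scoring
    HIGH = "high"                  # e.g. AI used in law enforcement or judicial proceedings
    LIMITED = "limited"            # transparency obligations, e.g. customer service chatbots
    MINIMAL = "minimal"            # e.g. text autocorrection, music recommendations

# Hypothetical, simplified mapping from category to the kind of obligation
# described in this article; not a legal checklist.
EXAMPLE_OBLIGATIONS: dict[RiskCategory, str] = {
    RiskCategory.UNACCEPTABLE: "Use is prohibited in the EU.",
    RiskCategory.HIGH: "Strict requirements apply before and during use (e.g. risk management, human oversight).",
    RiskCategory.LIMITED: "Inform users clearly that they are interacting with an AI system.",
    RiskCategory.MINIMAL: "No specific obligations under the AI Act.",
}

def describe(category: RiskCategory) -> str:
    """Return a one-line summary of the example obligation for a category."""
    return f"{category.value} risk: {EXAMPLE_OBLIGATIONS[category]}"

if __name__ == "__main__":
    # A website customer service chatbot would typically fall under the limited-risk category.
    print(describe(RiskCategory.LIMITED))

A simple inventory like this can be a first step toward the awareness described in the next section: knowing which risk category each system in use belongs to.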
Obligations for AI system users
The AI regulation introduces new obligations that must be considered alongside other regulations such as the General Data Protection Regulation. Although the greatest responsibilities fall on the provider of the AI-powered service, organizations using such systems should understand which risk category each system belongs to. Organizations are also responsible for ensuring that personnel who use their AI systems understand the basics of AI and the systems' limitations.
Do you want to utilize AI responsibly with a reliable partner?
Avatars Intelligence helps organizations adopt AI in a way that both supports the business and complies with the EU AI regulation. We help identify risks, build transparent solutions, and ensure that AI use is safe and human-centered. Whether the goal is developing customer service, supporting production, or improving onboarding, we provide concrete help from implementation to continuous development.
The AI regulation is not an obstacle to innovation – it is a framework for safe and responsible AI utilization. When properly implemented, AI can bring significant benefits to the organization while complying with EU requirements.