Artificial intelligence regulations applied in practice
In recent years, the European Union has created the world’s first comprehensive, unified legal framework for AI-based systems (Regulation (EU) 2024/1689). This is not merely a recommendation but a binding legal act that affects both Member States and industry stakeholders. The regulation will become applicable gradually, with full application expected from August 2026.
What does the EU AI Act regulate?
It classifies AI systems into the following risk categories, and assigns different legal requirements to each:
- Unacceptable risk: prohibited AI practices (from February 2025)
- High-risk AI systems (from August 2026)
- General-purpose systems and transparency risk (from August 2026)
- Minimal risk
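The tiers and their application dates listed above can be expressed as a simple lookup table. This is a minimal sketch for illustration; the names (`RiskTier`, `APPLICABLE_FROM`) are hypothetical and not part of the regulation or the product:

```python
from enum import Enum

class RiskTier(Enum):
    """Risk tiers defined by Regulation (EU) 2024/1689 (the EU AI Act)."""
    UNACCEPTABLE = "prohibited AI practices"
    HIGH = "high-risk AI systems"
    TRANSPARENCY = "general-purpose systems / transparency risk"
    MINIMAL = "minimal risk"

# Month from which the obligations of each tier apply, per the list above
# (minimal-risk systems carry no dated obligations).
APPLICABLE_FROM = {
    RiskTier.UNACCEPTABLE: "2025-02",
    RiskTier.HIGH: "2026-08",
    RiskTier.TRANSPARENCY: "2026-08",
    RiskTier.MINIMAL: None,
}
```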
What obligations do industry stakeholders face?
- Compliance: ensuring that AI-based systems meet the requirements applicable to their risk category.
- Transparency: providing the authorities with detailed documentation on how the AI-based product works, and ensuring the traceability of risks.
- Risk management: preparing legal analyses and audit reports both before deployment and during operation.
What do we offer?
The AI Act Evaluator developed by AITIA is an auditing tool that, based on the uploaded technical and other documentation, automatically performs the risk assessment and compliance review required under the EU AI Act. Following the logical structure of the regulation, the system determines which AI Act risk category the given AI solution falls into, whether it is affected by prohibited practices, whether it qualifies as a high-risk AI system, and which role and obligations apply to the organization concerned. The result is a transparent report that clearly presents the risk classification and the relevant conclusions.
With the help of the tool, developers and compliance and legal teams can assess the compliance of their AI systems quickly and in a standardized way, significantly reducing the time and resources required for manual analysis. The solution is especially useful for organizations that manage multiple AI systems or want to align complex documentation with the AI Act requirements automatically and reliably.
Assessment under the AI Act
The system applies a predefined decision tree based on the AI Act, which:
- Provides consistent findings that are relevant from a compliance perspective.
- Attaches a justification to every checkpoint.
- Indicates the level of certainty.
- Identifies gaps in the documentation.
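The checkpoint behavior described above — a finding with a justification, a certainty level, and any documentation gaps per node — could be modeled along these lines. This is a hypothetical sketch; the actual internals of the AI Act Evaluator are not public, and all names here are illustrative:

```python
from dataclasses import dataclass, field

@dataclass
class Checkpoint:
    """One node of an AI Act-style decision tree."""
    question: str
    finding: bool          # the consistent, compliance-relevant finding
    justification: str     # every checkpoint carries a justification
    certainty: float       # level of certainty in the finding, 0.0-1.0
    missing_docs: list[str] = field(default_factory=list)  # documentation gaps

def summarize(checkpoints: list[Checkpoint]) -> dict:
    """Aggregate checkpoint results into a report-style summary."""
    return {
        "findings": [(c.question, c.finding, c.justification) for c in checkpoints],
        "lowest_certainty": min(c.certainty for c in checkpoints),
        "documentation_gaps": sorted({d for c in checkpoints for d in c.missing_docs}),
    }

# Example: two checkpoints from a hypothetical assessment
report = summarize([
    Checkpoint("Prohibited practice?", False,
               "No Article 5 practice identified", 0.95),
    Checkpoint("High-risk (Annex III)?", True,
               "Falls under an employment-related use case", 0.80,
               missing_docs=["risk management plan"]),
])
```

Flagging the lowest certainty and collecting documentation gaps in one place is what lets a reviewer see at a glance which conclusions need human follow-up.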
Use cases
- Preparing the market launch of AI products and services.
- Supporting internal or external compliance audits.
- Speeding up the work of lawyers, compliance professionals, and development teams.
- Reviewing the documentation of prohibited and high-risk AI systems.
- Preliminary risk analysis of suppliers’ and partners’ AI solutions.
Audit reports
During the analysis, the system generates documents that transparently record the entire assessment process.
Below you can see an excerpt from the report: