
EU AI Act

Satisfying legal requirements for Artificial Intelligence

Take a proactive approach to AI compliance and innovation. With DEKRA, you can ensure your company stays ahead of the curve in a dynamic regulatory landscape. On this page we explain everything you need to know to prepare your organization for the EU AI Act.

In an era where AI technology is set to become a pivotal asset for businesses, the potential for growth is undeniable. Studies forecast that companies leveraging AI could increase cash flow by over 120% within the next five to seven years, pointing to a major shift in profitability and operational efficiency by 2030.
However, alongside these transformative opportunities, the landscape of AI regulation is rapidly evolving on a global scale. Various countries are implementing or exploring regulatory frameworks to address the risks associated with AI. Europe leads the charge with its groundbreaking Artificial Intelligence Act, formally adopted in 2024, which aims to ensure the trustworthy and accountable use of AI technologies.

EU AI Act: FAQs

What is the EU AI Act?
The AI Act is the European Union's first comprehensive legislation aimed at regulating artificial intelligence technologies. It establishes a risk-based framework to ensure trustworthy, transparent, and accountable AI use, setting a global precedent for AI governance.
Why does AI need to be regulated?
AI applications can pose risks to human rights and safety, such as biases in decision-making or privacy violations. The rapid emergence of generative AI tools like ChatGPT has heightened public concern and political pressure, emphasizing the need for timely regulation.
Does the EU AI Act treat all AI applications in the same way?
The EU AI Act categorizes AI systems into four risk levels: unacceptable, high, limited, and minimal risk. Unacceptable-risk systems, such as social scoring or manipulative AI, are banned. High-risk systems face stringent requirements, while limited-risk systems have lighter obligations.
What are the penalties for non-compliance?
Non-compliance can result in severe penalties, including fines of up to 7% of global annual revenue. This underscores the importance of adhering to the Act to avoid financial and reputational damage.
Which AI applications are considered high-risk?
High-risk applications include systems influencing access to essential services (e.g., healthcare, housing, credit), employment decisions, law enforcement, and biometric identification. These systems are regulated due to their potential to impact safety and fundamental rights.
What requirements must high-risk AI systems meet?
High-risk AI systems must meet strict requirements, including risk assessments, conformity evaluations, quality management systems, data protection, and transparency measures. These ensure the systems are safe, fair, and reliable.
When does the AI Act come into force?
The AI Act was adopted in 2024, with phased implementation. Key dates include:
  • February 2025: Ban on unacceptable-risk systems.
  • August 2025: Transparency rules for general-purpose AI.
  • August 2026: Compliance for high-risk systems.
  • August 2027: Full enforcement for all operators.
Is the EU the only region regulating AI?
No. Other countries, such as China, the US, Singapore, and Australia, are addressing AI regulation with different approaches, ranging from stricter regulations to softer good practices or guidelines.
What does this mean for companies operating in multiple jurisdictions?
Companies will need to ensure compliance with the regulations of each jurisdiction in which they operate. This could lead to a complex regulatory landscape that may require integrated solutions in their AI technology, and over-regulation could potentially stifle AI adoption.
How should companies prepare?
Companies, especially those implementing AI models, should start by establishing a responsible AI program, including an AI Quality Management System. This involves defining AI policies and objectives, documenting all AI resources in the company along with the procedures across the complete AI lifecycle, and understanding the broad compliance requirements. Starting now is essential, as adapting to the regulations may take time.
Who is affected by the EU AI Act?
The EU AI Act affects companies and organizations that develop, deploy, or use AI systems in the European Union. It applies across various sectors, including healthcare, finance, social services, housing, credit, and more. Those affected include businesses, developers, policymakers, and consumers who interact with AI-powered systems.
To ensure your organization is prepared for the upcoming AI Act, consider the following:
  • Understand the requirements: Familiarize yourself with the specific obligations outlined in the AI Act, such as those related to disclosure, certification, transparency, and post-deployment documentation.
  • Establish AI Quality Management: Implement a robust AI Quality Management System within your organization to ensure compliance and responsible AI practices.
  • Develop internal processes: Create clear procedures for documentation, risk identification, mitigation, and validation of your AI systems.
  • Partner with an expert: Collaborate with a trusted certification partner like DEKRA. We provide comprehensive audits, testing, and certification for AI systems to help you navigate the regulatory landscape with confidence and ensure your technology meets all necessary standards.
  • Stay informed: Keep up-to-date with the latest developments and changes in AI regulations.

DEKRA: Your partner in AI

As a leader in AI testing, inspection, and certification (TIC), DEKRA guides companies through dynamic global AI regulations, ensuring safety, reliability, and compliance. Our extensive expertise spans industries including automotive, medical, cybersecurity, and more. By collaborating with leading technology companies, universities, and R&D centers, we stay at the forefront of AI development and can quickly adapt to technological changes across all sectors.