The first set of EU AI Act rules took effect on February 2, 2025, introducing new legal requirements for companies developing, deploying, or using AI in the European Union. Businesses must now ensure compliance with AI literacy obligations and avoid prohibited AI practices to mitigate regulatory risks.
What Changed with the First EU AI Act Rules?
The EU AI Act rules mark a major milestone in AI regulation, requiring organizations to adopt measures that promote AI literacy and to eliminate AI practices deemed to pose an unacceptable risk. These initial provisions set the foundation for broader compliance requirements that will roll out in the coming months and years.
Organizations operating within the EU or offering AI systems in the region must now align with these regulations. To support implementation, the European Commission is preparing non-binding guidelines on prohibited AI practices and AI system definitions. These guidelines will clarify how businesses should assess their AI technologies under the Act. Companies that fail to comply may face significant penalties, making it essential to take immediate steps to meet the new legal standards.
AI Literacy
AI literacy is now a legal requirement under the AI Act. Providers and deployers of AI systems must ensure that employees and affected users are sufficiently trained to understand how AI works, its risks, and compliance obligations. This requirement aims to improve the responsible use of AI technologies by ensuring that those involved in AI operations have the necessary skills and knowledge.
The AI Act defines AI literacy as more than just familiarity with AI technology. It includes an understanding of fundamental rights, ethical considerations, and risk mitigation. Companies must train their personnel to correctly interpret AI outputs, apply appropriate safeguards, and recognize situations where AI use could lead to harmful consequences. This obligation extends beyond developers and engineers to all staff members interacting with AI systems, including compliance officers, decision-makers, and frontline employees.
A common misconception is that AI literacy under the Act promotes “AI-first” strategies or increased AI-driven productivity. However, the actual focus is on ensuring compliance, protecting fundamental rights, and preventing harm. Misinterpreting AI literacy as merely an innovation strategy could lead to gaps in compliance, increasing regulatory and reputational risks.
Prohibited AI Practices: What Is Now Banned?
The EU AI Act rules prohibit certain AI practices that pose an unacceptable risk to individuals and society. These banned AI applications, outlined in Article 5, are considered too dangerous due to their potential for manipulation, discrimination, or harm. Companies must immediately cease using these AI systems in the EU, and failure to comply may result in severe penalties, including fines of up to €35 million or 7% of global annual turnover, whichever is higher.
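Under the Act's penalty provisions, the cap for prohibited-practice violations is the higher of the fixed amount and the turnover-based share. A minimal sketch of that arithmetic (the function name is illustrative, not from the Act):

```python
# Penalty cap for prohibited AI practices under the EU AI Act:
# the HIGHER of a fixed €35 million and 7% of global annual turnover.

FIXED_CAP_EUR = 35_000_000   # €35 million fixed ceiling
TURNOVER_SHARE = 0.07        # 7% of global annual turnover

def max_fine_cap(annual_turnover_eur: float) -> float:
    """Return the maximum possible fine for a prohibited-practice breach."""
    return max(FIXED_CAP_EUR, TURNOVER_SHARE * annual_turnover_eur)

# For a company with €1 billion in turnover, the turnover-based cap applies:
print(max_fine_cap(1_000_000_000))  # 70000000.0
# For a €100 million company, the fixed €35 million cap dominates:
print(max_fine_cap(100_000_000))    # 35000000.0
```

Note these figures are ceilings, not automatic fines; actual penalties are set by national regulators based on the circumstances of each violation.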
Among the prohibited AI practices are AI systems that manipulate human behavior beyond a person’s awareness, such as subliminal techniques that influence decision-making. AI systems that exploit vulnerable individuals, including minors or persons with disabilities, are also banned. Additionally, emotion recognition AI in workplaces and schools, social scoring by governments, and predictive policing systems that assess a person’s likelihood of committing a crime based solely on profiling or personality traits are no longer allowed under EU law.
Another major restriction targets AI systems that create or expand facial recognition databases through untargeted scraping of images from the internet or CCTV footage. Narrowly defined exceptions do exist, most notably for law enforcement use of real-time remote biometric identification under strict conditions. The European Commission has announced it will publish non-binding guidelines explaining these prohibitions in greater detail, helping organizations understand specific use cases that fall under the banned practices.
What’s Next? Key Compliance Deadlines
The EU AI Act rules will continue to take effect in phases, with additional requirements applying in the coming months and years. The next major compliance milestone is on August 2, 2025, when obligations for General-Purpose AI Models come into force. These requirements will ensure greater transparency, requiring AI providers to maintain technical documentation on their models and datasets. At the same time, national regulatory authorities will be appointed to oversee compliance, investigate violations, and issue fines.
Ahead of this, by May 2, 2025, the European Artificial Intelligence Office is expected to release Codes of Practice for AI providers. These guidelines will offer practical compliance strategies for companies developing or deploying AI technologies in the EU. Further guidance will follow on incident reporting for high-risk AI systems, ensuring that providers document and disclose serious AI-related failures or harms.
With the staggered rollout of the AI Act, organizations should take a proactive approach to compliance. This includes reviewing their AI systems, ensuring that banned practices are eliminated, and preparing for transparency obligations. Companies that delay may struggle to meet the stricter requirements that will come into effect later, increasing their risk of enforcement actions.
The Road Ahead for AI Regulation
With the first EU AI Act rules now in effect, companies must shift from preparation to implementation. The new obligations on AI literacy and prohibited AI practices mark the beginning of a broader regulatory framework that will gradually introduce more requirements in the coming years. While many organizations have been preparing for the AI Act, the speed at which the first compliance deadlines arrived has caught some by surprise. Even the European Commission has yet to release all promised guidance, with additional materials on AI literacy and prohibited practices expected soon.
One of the key challenges is ensuring correct implementation of AI literacy requirements. Some businesses and consultants are misrepresenting this obligation, promoting AI literacy as a tool for increasing AI adoption and productivity. However, the AI Act defines AI literacy as an understanding of AI risks, safeguards, and ethical considerations, not a strategy to become “AI-first.” Misinterpretations of this requirement could lead organizations to focus on the wrong priorities, neglecting their legal obligations. The European Commission’s upcoming repository of AI literacy practices should help clarify what AI literacy means in the context of compliance.
At the same time, the enforcement landscape is taking shape. Although fines for prohibited AI practices are already in place, full enforcement mechanisms will only be operational from August 2, 2025, when national regulators take on their supervisory roles. The list of banned AI practices is also subject to annual review by the Commission, meaning that additional AI applications could be prohibited in the future. Organizations should remain vigilant and ensure they are continuously aligning their AI strategies with regulatory developments.
Preparing for Compliance
The AI Act is designed to create a safer and more transparent AI ecosystem in the EU. While only the first EU AI Act rules are currently in effect, businesses should not wait to take action. The following steps will help organizations stay ahead of compliance challenges:
- Ensure AI literacy training is legally compliant. AI literacy should focus on fundamental rights, ethical AI use, and regulatory obligations, not just AI adoption. Misinterpreting this requirement could lead to compliance failures.
- Eliminate prohibited AI practices. AI applications that manipulate behavior, exploit vulnerabilities, or rely on banned biometric techniques, such as untargeted facial image scraping or emotion recognition in workplaces, must be phased out immediately to avoid significant penalties.
- Stay informed about upcoming deadlines. The next major compliance milestones in May and August 2025 will introduce transparency obligations for AI providers and require companies to document and report AI-related risks.
- Monitor regulatory updates. The AI Act is evolving, with additional guidance and enforcement mechanisms still being developed. Companies should track updates from the European Commission and AI governance bodies.
- Adopt a proactive compliance strategy. Organizations that integrate AI compliance into their business practices early will have an advantage over those that wait until enforcement becomes stricter.
The AI Act’s impact will extend far beyond these first rules. As AI regulation continues to develop, businesses must remain adaptable and committed to ethical AI practices. The first compliance deadline is just the beginning of a long-term shift toward responsible AI governance in the EU and beyond.