Focus points
- The Phased Timeline and Harsh Penalties for Businesses
- A Look at Key Countries in Europe’s Enforcement
- The B2B Compliance Imperative
The Phased Timeline and Harsh Penalties for Businesses
The EU opted for a phased approach to give businesses time to adjust. These are the milestones that mark the phased introduction of the AI Act.
- August 2025: Providers of general-purpose AI (GPAI) models, those who build large models used across industries, must now comply with transparency and safety obligations. That means documenting training data, ensuring proper labeling of AI outputs, and implementing safeguards against misuse.
- August 2026: By this time, attention turns to high-risk AI systems, including those used in law enforcement, critical infrastructure, healthcare, and education. In these areas, companies will need to show they are doing their homework with proper risk checks, reliable data, and human oversight at every step.
This timeline gives companies breathing room, but it is not a grace period for inaction. Businesses that fail to prepare now will find themselves scrambling next year.
The Cost of Non-Compliance to the AI Act
The AI Act’s penalty structure is deliberately severe to force compliance:
- Up to €35m or 7% of global annual turnover, whichever is higher: for using prohibited AI practices such as social scoring or manipulative biometric surveillance.
- Up to €15m or 3% of turnover, whichever is higher: for breaches of core obligations, such as failing to meet transparency or risk management requirements.
- Up to €7.5m or 1% of turnover, whichever is higher: for supplying incorrect information to authorities.
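To see what "fixed amount or percentage of turnover, whichever is higher" means in practice, here is a minimal sketch. The tier values come from the list above; the `max_fine` helper and the example turnover figure are illustrative assumptions, not part of the Act.

```python
# Illustrative only: maps each penalty tier from the list above to its
# fixed cap (EUR) and its percentage-of-turnover cap. The applicable
# maximum is the HIGHER of the two for most undertakings.
PENALTY_TIERS = {
    "prohibited_practice": (35_000_000, 0.07),     # €35m or 7%
    "core_obligation_breach": (15_000_000, 0.03),  # €15m or 3%
    "incorrect_information": (7_500_000, 0.01),    # €7.5m or 1%
}

def max_fine(violation: str, annual_turnover_eur: float) -> float:
    """Return the maximum possible fine in euros for a given tier (sketch)."""
    fixed_cap, pct_cap = PENALTY_TIERS[violation]
    return max(fixed_cap, pct_cap * annual_turnover_eur)

# For a hypothetical company with €1bn global turnover, a prohibited
# practice is capped at 7% of turnover (€70m), which exceeds the €35m
# fixed amount.
print(f"{max_fine('prohibited_practice', 1_000_000_000):,.0f}")  # → 70,000,000
```

The point of the sketch: for large companies the percentage cap usually dominates, which is why the same infraction can cost a multinational far more than the headline fixed figure suggests.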
These numbers are not symbolic. They are designed to be a deterrent, ensuring businesses treat AI governance as a board-level issue, not an afterthought.
A Look at Key Countries in Europe’s Enforcement
The enforcement of AI regulations follows a dual governance model. The EU AI Office in Brussels supervises the largest systemic GPAI models, while national authorities handle most other cases. This means your company will likely interact with regulators at the national level.
Here’s how some EU countries are gearing up:
- Germany: As Europe’s industrial powerhouse, Germany is expected to take a strict approach. Its regulators are likely to focus heavily on AI in manufacturing, automotive, and logistics.
- France: Digital sovereignty is a top priority. France’s data protection authority, CNIL, already enforces GDPR aggressively and will likely bring the same energy to AI oversight.
- Ireland: With many global tech giants headquartered in Dublin, Ireland’s Data Protection Commission has extensive regulatory experience. Expect Ireland to be a central hub for AI compliance investigations.
- Spain: Spain has gone one step further, creating a dedicated Spanish Artificial Intelligence Supervisory Agency (AESIA) to enforce the AI Act.
- Italy: Known for its proactive stance on consumer protection, Italy is expected to focus on ethical deployment of AI in finance and healthcare.
- United Kingdom: Although no longer in the EU, the UK is moving in a parallel direction with its own AI regulatory framework. Businesses operating across both markets will need to ensure dual compliance.
The challenge for businesses is that, even with one EU Act, companies still face different national regulators, interpretations, and enforcement styles, making it essential to work with reliable partners.

The B2B Compliance Imperative
For B2B businesses, the message is simple: don’t wait. Compliance must start now. Here are the key steps:
Step 1: Audit Your AI Landscape
Map out all the AI tools and systems you use, whether developed in-house or purchased from vendors. Classify them as low-risk, GPAI-related, or high-risk. This inventory will form the basis of your compliance strategy.
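A first-pass inventory like the one Step 1 describes can be sketched as a simple classification pass. Everything here is a hypothetical illustration: the system names, the `classify` rule, and the domain list (drawn from the high-risk areas mentioned earlier) are assumptions, not official classification criteria under the Act.

```python
from dataclasses import dataclass

# Example high-risk areas named earlier in the article; a real audit would
# use the Act's full Annex III categories.
HIGH_RISK_DOMAINS = {"law_enforcement", "critical_infrastructure",
                     "healthcare", "education"}

@dataclass
class AISystem:
    name: str
    vendor: str          # "in-house" or the supplier's name
    domain: str          # business area where the system is deployed
    is_gpai_based: bool  # built on a general-purpose model?

def classify(system: AISystem) -> str:
    """Rough first-pass triage for the compliance inventory (sketch)."""
    if system.domain in HIGH_RISK_DOMAINS:
        return "high-risk"
    if system.is_gpai_based:
        return "GPAI-related"
    return "low-risk"

inventory = [
    AISystem("Admissions screening", "VendorX", "education", False),
    AISystem("Chat assistant", "in-house", "customer_support", True),
]
for s in inventory:
    print(f"{s.name}: {classify(s)}")
```

Even a rough triage like this makes the rest of the compliance work tractable: high-risk entries get full risk assessments first, GPAI-related ones get transparency checks, and low-risk ones are simply recorded.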
Step 2: Engage Your Supply Chain
Your obligations don’t stop at your own systems. Demand compliance from your suppliers and service providers. Update contracts to include AI Act clauses and request documentation proving they meet transparency requirements.
Step 3: Implement an AI Governance Framework
Establish an internal governance framework that covers:
- Risk management procedures
- Data quality and traceability
- Human oversight and accountability
This framework should be embedded into your overall corporate compliance structure, much like GDPR and ESG standards.
Step 4: Educate Your Teams
Regulation is only effective if people understand it. Train managers, IT teams, and frontline employees on the basics of the AI Act. Awareness reduces risk and builds a culture of responsibility.
Conclusion
The AI Act is no longer a proposal; it is already shaping the future of AI in Europe. Since August 2025, GPAI providers have been bound by strict obligations, and by August 2026, high-risk AI systems will come under even tighter scrutiny. With penalties reaching €35 million or 7% of turnover, non-compliance is not an option.
But this is not just about avoiding fines. Businesses that embrace compliance early will stand out as trustworthy, ethical partners. They will protect their reputation, secure customer loyalty, and stay ahead in a digital economy that increasingly values accountability as much as innovation.
For more insights and related industry coverage, visit Inside Business, the europages blog, to explore more articles about artificial intelligence in business.