EU AI Act: what B2B companies need to do now
The EU AI Act is the first comprehensive AI legislation in the world. AI literacy has been mandatory since February 2025. In August 2026, the rules for high-risk systems take effect. Research shows that half of Dutch companies still don't have an AI policy. In this article I explain what the law entails, what it means for your business, and what you need to do now.
AI & automation consultant. Helps B2B companies with lead generation, workflow automation, and AI training.
Why this is urgent
The EU AI Act is not a future plan. It is law. Parts of it are already in effect. AI literacy has been mandatory since February 2, 2025 for everyone who uses AI in a professional context. Prohibited AI practices should have been stopped on that same date.
In August 2026, less than four months from now, the rules for high-risk AI systems take effect. And in August 2027, the full law applies to all existing systems.
Meanwhile, research from CBS and industry organizations shows that roughly half of Dutch companies have not yet drafted an AI policy. Many SMEs think the law doesn't apply to them. That is a misconception.
What is the EU AI Act in plain English?
The EU AI Act is a European law that regulates how AI systems may be developed and used. The goal is to ensure that AI is deployed safely, transparently and fairly, without unnecessarily hindering innovation.
The core of the law is a risk-based approach. AI systems are categorized into four levels based on how great the risk is to society. The higher the risk, the stricter the rules.
For most B2B companies that use AI for sales, marketing and operations, the lowest risk level applies. But there are general obligations that apply to everyone, regardless of risk level.
The 4 risk levels explained
The EU AI Act categorizes AI systems by risk. This determines which rules apply to you.
Prohibited
AI that violates fundamental rights. Completely banned since February 2025.
Examples:
- Social scoring by governments
- Real-time facial recognition in public spaces (with exceptions)
- AI that recognizes emotions in the workplace or education
- AI that manipulates vulnerable groups
What does this mean for you? Not relevant for most B2B companies. But if you use AI to monitor or score employees, be careful.
High risk
AI that influences important decisions about people. Strict rules from August 2026.
Examples:
- AI in hiring processes (CV screening, ranking of candidates)
- Credit scoring and financial assessments
- AI in education (student assessment)
- Medical AI systems
What does this mean for you? Relevant if you use AI for HR processes, employee evaluation or financial decisions. Not for standard sales and marketing.
Limited risk
AI systems that require transparency. You must inform users that they're dealing with AI.
Examples:
- Chatbots on your website
- AI-generated content (text, images, video)
- Deepfakes and synthetic media
- Emotion recognition (where permitted)
What does this mean for you? Relevant if you have an AI chatbot on your website or publish AI-generated content. Make sure users know they're communicating with AI.
Minimal / Low risk
Most AI applications in B2B sales and marketing. No specific obligations, but best practices are recommended.
Examples:
- AI for lead generation and lead scoring
- Personalized email outreach
- Content creation with Claude or GPT
- Workflow automation with n8n
- CRM automation and data enrichment
What does this mean for you? This is where most B2B companies sit. No strict rules, but AI literacy is mandatory and documentation is recommended.
What applies to B2B sales and marketing tools?
If you use AI agents for sales, ChatGPT for emails, Claude for content, or n8n for workflow automation, you fall under the minimal or low risk category.
That means: no conformity assessment, no certification, no mandatory audit. But it doesn't mean you don't need to do anything.
What is mandatory:
- AI literacy: everyone who works with AI must be demonstrably trained (mandatory since February 2, 2025)
- Transparency: inform users when they are talking to a chatbot or viewing AI-generated content

What is strongly recommended:
- An AI register of all systems in use
- An internal AI policy with a named responsible person
- Documentation of what each tool does and how its output is monitored
- A human in the loop for decisions that affect people
Checklist: 10 things to arrange now
Start at the top and work your way down. Items marked Urgent are the most time-critical.
1. Create an AI register
Urgent: Map all AI systems your organization uses. For each system, record the name, vendor, purpose, risk level, responsible person and start date.
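There is no legally prescribed format for such a register; a simple structured list is enough. The sketch below shows one possible shape in Python, with the fields named in this checklist. The field names and the `gaps` helper are illustrative assumptions, not part of the law.

```python
from dataclasses import dataclass
from datetime import date

# Illustrative register entry; field names follow the checklist above,
# not any legally prescribed template.
@dataclass
class RegisterEntry:
    name: str
    vendor: str
    purpose: str
    risk_level: str   # "minimal", "limited", "high", "prohibited"
    owner: str        # responsible person, empty if not yet assigned
    start_date: date

register = [
    RegisterEntry("ChatGPT", "OpenAI", "drafting sales emails",
                  "minimal", "J. Smith", date(2024, 3, 1)),
    RegisterEntry("Website chatbot", "in-house", "customer support",
                  "limited", "", date(2025, 1, 15)),
]

def gaps(entries):
    """Flag entries that still need attention: no responsible person
    assigned, or classified as high risk (extra obligations apply
    from August 2026)."""
    return [e.name for e in entries
            if not e.owner or e.risk_level == "high"]
```

A spreadsheet works just as well; the point is that every system has an owner and a risk level, and that you can spot the gaps at a glance.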
2. Train your team in AI literacy
Urgent: Everyone who works with AI must understand what it does and what its limitations are. This has been mandatory since February 2025. Document the training.
3. Draft an AI policy
Urgent: An internal document that describes how your organization uses AI, which rules apply, who is responsible and how you handle complaints.
4. Appoint an AI responsible person
Someone who owns the AI policy, maintains the register and is the point of contact for questions. Doesn't need to be a full-time role.
5. Check your AI vendors
Urgent: Ask your vendors (OpenAI, Anthropic, etc.) about their compliance. How do they process data? Where are the servers? Do they have a data processing agreement?
6. Check your data processing agreements
Urgent: Ensure you have a data processing agreement with every AI vendor that complies with GDPR and aligns with the AI Act. Pay attention to data storage and model training.
7. Be transparent to users
If you use AI chatbots or publish AI-generated content, inform your users. Add a disclaimer to chatbots and label AI content where relevant.
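For a chatbot, the transparency duty can be as simple as a disclosure line at the start of the conversation. The sketch below is a minimal example of that pattern; the disclosure wording and function names are assumptions, not prescribed text.

```python
# Example disclosure text; the exact wording is up to you, as long as
# users clearly understand they are communicating with AI.
AI_DISCLOSURE = "You are chatting with an AI assistant."

def chatbot_reply(answer: str, first_message: bool) -> str:
    """Prepend the AI disclosure to the first reply in a conversation,
    so the user knows they are dealing with AI (limited-risk
    transparency obligation)."""
    if first_message:
        return f"{AI_DISCLOSURE}\n\n{answer}"
    return answer
```

The same idea applies to published content: a short "generated with AI" label where relevant is usually enough.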
8. Keep a human in the loop
For decisions that affect people (such as lead scoring that determines who gets contacted and who doesn't), ensure a human makes or can override the final decision.
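One way to implement this in a lead-scoring workflow is to let the model only propose an action, and require a person to confirm or override it before anyone is contacted or excluded. The sketch below shows that pattern; the threshold, field names and helper functions are hypothetical.

```python
def route_lead(lead_name: str, score: float, auto_threshold: float = 0.8) -> dict:
    """The model's score only *proposes* contact or skip; nothing is
    final until a human has confirmed it."""
    proposal = "contact" if score >= auto_threshold else "skip"
    return {"lead": lead_name, "proposed": proposal,
            "score": score, "confirmed_by_human": False}

def human_decision(decision: dict, approve: bool) -> dict:
    """A person makes the final call: approve the proposal, or
    override it with the opposite action."""
    if approve:
        decision["final"] = decision["proposed"]
    else:
        decision["final"] = ("skip" if decision["proposed"] == "contact"
                             else "contact")
    decision["confirmed_by_human"] = True
    return decision
```

The key design choice is that the automated step never produces a final decision about a person, only a reviewable proposal.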
9. Document your AI usage
Urgent: Record, for each AI application: what it does, what data it uses, how the output is monitored and what you do when it goes wrong. This is your evidence during an audit.
10. Schedule an annual review
AI legislation evolves. Schedule at least one review per year of your AI register, policy and vendors. Adjust where needed.
Common mistakes
Thinking the AI Act doesn't apply to you
The most common mistake. 'We only use ChatGPT for emails, that doesn't count.' Yes, it does count. Any use of AI in your business operations falls under the law. The risk level is low, but AI literacy and documentation are mandatory.
Not having an AI policy
Without a policy, there is no framework. Employees use AI tools at their own discretion, there's no overview of what's happening, and during an audit you can't show anything. A policy doesn't need to be long, but it needs to exist.
Ignoring AI literacy
This has been mandatory since February 2025 and is the most frequently ignored requirement. It doesn't need to be an extensive course. An internal workshop of 2 hours with documentation is sufficient. But you need to have demonstrably done it.
Forgetting data processing agreements
If you use ChatGPT or Claude for business data, you're processing data via a third party. Without a data processing agreement, you're not compliant with GDPR, and neither with the AI Act.
Waiting until the deadline
August 2026 sounds far away, but building compliance takes time. Those who start in July 2026 are too late. Companies that start now do it calmly and thoroughly. The rest will have to do it in panic.
What does it cost to become compliant?
AI Act Quick Scan
€750: Inventory of your current AI usage, risk analysis per application, and a priority list with concrete actions. You'll know exactly where you stand.
Policy + Implementation
€3,000–8,000: Drafting an AI policy, creating an AI register, AI literacy workshop for your team, checking data processing agreements and setting up documentation structure.
Perspective: the maximum fine for non-compliance is 35 million euros or 7% of your annual turnover. A Quick Scan of 750 euros is a small investment to know where you stand.
How WaiBase helps with this
At WaiBase, we combine AI expertise with practical compliance knowledge. We don't just build AI automations, we also make sure they're compliant.
Also check out our AI consulting service and AI training for more information.
Timeline: what becomes mandatory when
July 2024: EU AI Act published in the Official Journal of the European Union.
February 2, 2025: Prohibited AI practices must be stopped. AI literacy mandatory for everyone who uses AI.
August 2, 2025: Rules for general-purpose AI (GPT, Claude, Gemini) take effect.
August 2, 2026: Rules for high-risk AI systems take effect. Conformity assessments required.
August 2, 2027: Full EU AI Act in force. All rules apply, including for existing systems.
Frequently asked questions about the EU AI Act
Does the EU AI Act apply to my SME?
Yes. The EU AI Act applies to every company that develops, deploys or imports AI systems within the EU. It doesn't matter how big you are. If you use ChatGPT, Claude or other AI tools in your business operations, you must comply with the rules that correspond to your application's risk level.
What is AI literacy and is it really mandatory?
Yes, since February 2, 2025. AI literacy means that everyone in your organization who works with AI must understand what AI does, what its limitations are and how to use it responsibly. This doesn't need to be an extensive course, but you must be able to demonstrate that your team has been trained.
Are AI sales tools high risk?
Generally not. AI tools for lead generation, email outreach, content creation and CRM automation fall under low risk or minimal risk. They only become high risk when used for evaluating people in selection processes, credit scoring or employee evaluations.
What is an AI register and do I need one?
An AI register is an overview of all AI systems used in your company: which tools, what for, who is responsible, and which risk level applies. It's not mandatory for everyone, but it's a best practice recommended by the Dutch Data Protection Authority.
What are the fines for non-compliance?
The maximum fines are substantial: up to 35 million euros or 7% of global annual turnover for deploying prohibited AI. Lower maximums apply for other violations. In practice, enforcement for SMEs will start with warnings and improvement plans, but the risk is real.
When does the EU AI Act fully take effect?
The law takes effect in phases. AI literacy has been mandatory since February 2025, and prohibited AI practices had to be stopped by that same date. Rules for high-risk AI apply from August 2026. The complete law takes effect from August 2027.
How long does it take to become compliant?
The basics (AI register, policy, training) can be arranged in 2 to 4 weeks. A full compliance process including vendor checks and documentation takes 1 to 3 months. The sooner you start, the less pressure as the deadline approaches.
What does it cost to become compliant?
An AI Act Quick Scan costs 750 euros. A complete policy and implementation process costs between 3,000 and 8,000 euros, depending on the number of AI systems and the complexity of your organization. Compare that to the potential fines and reputational damage.
Do I need a Data Protection Officer (DPO)?
No. The EU AI Act doesn't require you to appoint a dedicated AI officer, and the GDPR rules on when a DPO is required are unchanged. It is wise, though, to appoint someone responsible for AI governance. If you already have a DPO for GDPR, that role can be expanded. For larger organizations, a separate AI responsible person is recommended.
Where can I find the official text of the EU AI Act?
The full text is available on EUR-Lex, the official legal database of the EU. The Dutch Data Protection Authority (AP) also publishes guidelines and practical guidance. For a readable summary, you can use this article as a starting point.
Do you know where you stand?
Book an AI Act Quick Scan and know within a week exactly what you need to do. No legal jargon, just a concrete action plan.
Book a strategy call