
Employers, take note: the EU AI Act makes you responsible for AI literacy in your organization!


You've probably already heard: on May 21, 2024, the Council of the EU gave its final approval to a new law on AI, the AI Act. This law was created to steer the development and use of AI in a responsible and ethical direction. But did you know that it also requires employers to make their employees AI-literate? In this article, we look at exactly what that means for employers and how to go about it.

What does the EU AI Act say?

At the heart of this law is a system that classifies AI applications by risk, from "too dangerous" to "harmless." Most of the rules apply to providers of high-risk AI systems, but even if you only use such systems, you must meet certain requirements.

The 4 risk categories are:

  1. Unacceptable risk: These AI systems are simply prohibited. Consider social scoring systems or AI that uses manipulative techniques.
  2. High risk: These systems are heavily regulated. This includes AI applications in critical infrastructure, education, and HR tools for recruitment and selection.
  3. Limited risk: Transparency obligations apply to this category, which includes chatbots and deepfakes.
  4. Minimal risk: These AI systems are unregulated. Think spam filters or AI in video games.

Marketers are most likely to encounter AI systems in the 'limited risk' and 'minimal risk' categories. But beware: if you use AI for employee recruitment or customer assessment, you may well end up in the 'high risk' category!

Who is responsible?

The AI Act distinguishes between two roles:

  1. AI "deployer": you use AI systems developed by others in your company (e.g. ChatGPT).
  2. AI "provider": you develop AI systems yourself, for example for internal use.

Both roles fall under Article 4 of the AI Act, which obliges them to promote AI literacy.

What is AI literacy?

The AI Act defines AI literacy as:

"Skills, knowledge and understanding that enable informed deployment of AI systems, and awareness of AI opportunities, risks and potential harms."

This means your team must not only know how to use AI tools, but also understand their implications.

As an employer, what should you do?

Now comes the interesting part: the AI Act imposes obligations not only on developers of AI, but also on users. And yes, that's everyone in your company who uses AI. Here are your most important new duties:

  1. Promote AI literacy: You need to make sure your team has adequate knowledge of the AI systems they are using. That includes the technical side of AI, how to work with it, what to watch out for, and how it is applied in your specific situation. This is not only a legal requirement, but also an opportunity for your team to grow and innovate.
  2. Ensure transparency: If you use AI systems that fall under "limited risk," you need to communicate this clearly to your customers. Don't worry, this can actually build trust if you get it right.
  3. Maintain documentation: You need to be able to demonstrate how you use AI and what measures you take to comply with the law. This immediately presents a great opportunity to streamline your processes.
  4. Assess AI literacy: You need to gauge your team's current level of AI literacy.
  5. Consider knowledge and experience: Take into account the technical knowledge, experience, education and training of your team.
  6. Take action: Make sure your team achieves a sufficient level of AI literacy.

What about generative AI and AI Agents?

Let's zoom in for a moment on generative AI, currently the most widely used AI application, and on AI Agents, which are expected to become increasingly common in the near future.

Generative AI: Your creative assistant

Commonly used tools such as ChatGPT and Claude probably fall into the "limited risk" category. This means:

  • You have to be transparent about its use in customer interactions.
  • You remain accountable for how your team uses these tools.
  • You have to weigh potential risks and ethical considerations.

Generative AI can be a great addition to your creative process, as long as you use it responsibly.

AI Agents: Your autonomous assistants

AI Agents are not yet widely used, but they are expected to become more common in the coming year. They are a bit more complex: depending on how you deploy them, they can fall into different risk categories. Here are a few things to keep in mind:

  • Ensure human oversight, especially in higher-risk applications.
  • Train your staff well in the use and limitations of AI Agents.
  • Be alert to ethical issues such as potential discrimination or privacy violations.

AI Agents can take your processes and services to the next level, but require careful implementation and monitoring.

6 steps to successful AI implementation & compliance with the EU AI Act

The EU AI Act does not provide specific guidelines on how to go about this. Therefore, we have put together 6 practical steps to help you implement AI successfully:

1. Establish an AI working group

Gather a team of enthusiasts from different departments. They will become your AI pioneers, taking the lead in developing and implementing AI strategies. Tip: look for people who are already experimenting with AI or are enthusiastic about it.

2. Develop an AI policy 

Establish clear guidelines for AI use within your company. This gives guidance to your team and shows that you are serious about AI ethics. 

Create both internal and external guidelines for AI use. Consider: when to use AI, which tools are approved, how to handle data, etc. This provides clarity and helps with compliance with the EU AI Act.


3. Invest in AI training and education

Make sure your team has the necessary knowledge and skills to work with AI. Organize workshops, webinars or bring in outside experts. Take advantage of online learning platforms specializing in AI for marketing.

4. Conduct an AI Impact Audit 

Map out where AI could have the greatest impact in your organization, and where the potential risks lie. This will help you target investments and set priorities.

For your marketing team, for example, you can use the template available for download here to identify opportunities and risks.


5. Select the right AI tools 

Experiment with different AI tools and see what works best for your team. Pay attention to usability, integration capabilities and scalability. Don't forget to check if the tools meet EU AI Act requirements.

6. Create an AI-first mindset and culture 

Encourage a culture of continuous learning around AI, in which AI is seen as an opportunity for innovation and growth rather than a threat. Consider an "AI Friday" as an R&D day to explore new AI applications and tools. Celebrate successes and learn from failures to accelerate AI adoption.

Don't wait, start now

Take the lead in this change, because:

  1. AI is already transforming work
  2. Most people lack extensive AI knowledge
  3. Early preparation ensures a smooth transition
  4. You can take advantage of the benefits of AI sooner

By following these steps, you are doing more than just complying with the EU AI Act. You are preparing your company for the future. In a world where AI is rapidly becoming more important, that gives you an edge.

Start AI-proofing your business today!

Want to learn more about how to implement AI in your marketing team? Sign up for the free Marketing AI Friday newsletter, emailed to you every Friday, join the monthly online AI Inspiration Session, or check out more articles like this one.

Disclaimer: This information has been carefully compiled but does not replace legal advice. When in doubt, consult a legal expert.

Take a leap forward in your marketing AI transformation every week

Every Friday, we bring you the latest insights, news and real-world examples on the impact of AI in the marketing world. Whether you want to improve your marketing efficiency, increase customer engagement, sharpen your marketing strategy or digitally transform your business, 'Marketing AI Friday' is your weekly guide.

Sign up for Marketing AI Friday for free.