What is the AI Act? A quick guide to the EU’s new rules for AI in the workplace


AI that scans CVs, analyzes employee performance and suggests who should be promoted. Convenient? Yes. Risk-free? Not quite. The AI Act brings clear rules for how the technology may be used. Here's a look at what the Act means for HR professionals - plus five concrete tips for responsible AI use.

What is the AI Act and who is affected?

The AI Act is the EU's first comprehensive regulatory framework for artificial intelligence. It aims to create a common framework and guidelines for the development and use of AI, with a particular focus on systems that may affect people's rights and opportunities.

The AI Act covers all actors developing, selling, providing or using AI systems in the EU. In other words, not only technology providers are affected, but also employers who use AI in their business, for example in HR, recruitment or personnel management.

When does the AI Act apply?

The AI Act entered into force on August 1, 2024 and is being applied in stages. Some parts, such as the ban on certain AI practices, have applied since February 2025. The majority of the rules take effect in August 2026, while certain requirements for specific high-risk systems do not apply until August 2027.

Here in Sweden, work is also underway to develop supplementary national rules on how the AI Act will be implemented in practice. The Swedish draft law addresses issues such as which authorities will be responsible for supervision and compliance, as well as how sanctions and other supplementary provisions will be handled.

Read more: 6 Compelling Reasons to Use AI in Payroll

Why is the AI Act particularly relevant for HR?

The AI Act may seem like a large and rather technical piece of legislation. But at its core, it is based on a simple principle: the greater the impact an AI system can have on humans, the stricter the requirements.

That's why the AI Act divides AI systems into different risk levels. In short, the regulations distinguish between prohibited AI use, high-risk AI, and limited or minimal risk AI.

So how do you know if this applies to HR? A good rule of thumb is: if AI is used to influence decisions about people, it is often high-risk. Examples include AI in recruitment and selection, employee performance assessment or promotion decisions. In these cases, the law requires, among other things, transparency, documentation and that people always have the last word.

In short, if you are using AI to influence who gets the job, gets promoted or gets new opportunities, then the AI Act is highly relevant.

What do the requirements of the AI Act mean in more concrete terms?

For employers and HR, then, the AI Act sets clear rules for how AI may be used. But what does that mean in practice? Here are some key principles:

  • AI systems must not be based on biased, irrelevant or discriminatory factors.

  • It must be clear when an AI system is being used and what role it plays in a particular process.

  • Decisions affecting individuals must not be taken automatically - there must always be the possibility of review and appeal.

  • People working with AI-supported processes must have sufficient knowledge of how the systems work and what their obligations are.

Read more about how we use AI in Flex HRM

Checklist: five steps to responsible AI use at work

1. Find out where AI is used today
Can you and your colleagues say offhand how many AI applications you use today? If not, you are far from alone.

The first step is to get an overall picture of which AI-based tools are used in your organization. These could be recruitment systems, analytical tools, decision support, or internal platforms that affect how work is distributed, monitored, or prioritized. With this overview, it will be easier to determine if the systems fall into the high-risk category.
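To make the overview concrete, such an inventory can be kept as simple structured data and triaged with the article's rule of thumb (AI that influences decisions about people is often high-risk). This is an illustrative sketch only - the tool names and fields are hypothetical examples, not a legal assessment.

```python
# Illustrative sketch: a minimal inventory of AI-based tools and a
# rule-of-thumb triage inspired by the AI Act's risk levels.
# Names and fields are hypothetical; the output flags systems for
# closer review, it does not classify them legally.

from dataclasses import dataclass

@dataclass
class AITool:
    name: str
    purpose: str
    affects_decisions_about_people: bool  # e.g. recruitment, promotion, pay

def likely_risk_level(tool: AITool) -> str:
    """Rule of thumb: AI that influences decisions about people
    often falls into the high-risk category and needs closer review."""
    if tool.affects_decisions_about_people:
        return "review as potential high-risk"
    return "limited/minimal risk"

inventory = [
    AITool("CV screening assistant", "recruitment and selection", True),
    AITool("Meeting transcription", "note-taking support", False),
]

for tool in inventory:
    print(f"{tool.name}: {likely_risk_level(tool)}")
```

A spreadsheet works just as well; the point is that each tool gets an owner, a stated purpose and an initial risk flag that can be revisited.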

2. Human in the loop - make sure AI never makes decisions about people on its own
AI should act as a support, not as a sole decision-maker. This applies in all contexts where AI affects individuals' opportunities, working conditions or development.

Therefore, put a clear checkpoint in the process. Decisions based on AI should always be subject to review and approval by a responsible person before they are implemented.

3. Be aware of bias and wrong conclusions
AI is trained on data - and data can carry historical patterns and biases. AI support in recruitment, for example, therefore risks reproducing inequalities: it might screen out candidates who happen to have a gap in their CV, or rank men highest for leadership potential simply because there have historically been more men in management positions, to name a few examples.

Therefore, the outcome of AI decisions needs to be reviewed regularly. Combining technical support with human judgment is key to using AI responsibly.

4. Ensure managers and key roles have the right knowledge
How well do your managers really know how the AI support in your systems works? And do they know when it's time to question an automated suggestion?

AI competence is no longer just an IT department issue. As AI starts to influence decisions about people (such as pay, selection or feedback), managers and other key people need to understand both opportunities and limitations. What does the system base its recommendations on? When can it go wrong? And who is responsible when it does?

HR has an important role to play here. By supporting managers with the right knowledge, discussion material and practical examples, you create the best conditions for using the technology in a thoughtful way.

5. Develop an AI policy
An AI policy is not always a formal requirement, but it is a wise investment for organizations that want to use AI safely, create a common framework and reduce the risk of misuse. For example, the policy can cover questions such as:

  • Which uses are allowed and which are not?

  • What principles should always apply (for example, transparency and human control)?

  • What data can be shared in AI tools, and what is prohibited?

  • Who is responsible for what?

How we work with responsible AI at Flex

How do we at Flex work with the AI Act in practice and how do we ensure that you as a user can feel safe using our AI solutions in Flex HRM? We had a chat with Emanuel Niska, Development Team Leader here at Flex.

- "What it all boils down to is creating transparency and clarity for our users in how AI is used in our solutions - principles we have basically had with us from the start. But with the AI Act, you could say we now have an even clearer framework for how to structure and document this," he says.

An important starting point is the Act's division into different risk levels:

- "We work continuously to make internal assessments and risk classifications linked to the AI Act. Among other things, we look at the impact a function can have, how it is used in practice in our solutions and what safeguards are in place. This allows us to adapt our working methods and controls based on the level of risk.

As AI evolves and new types of more autonomous functions, such as AI agents, emerge, it also becomes more important to clarify where responsibility lies, he emphasizes.

- "In our interfaces, it is always clear when a function is based on AI, what role it plays in the process, and that it is a suggestion or response that requires human judgment. This principle - human in the loop - is the same whether it's a simple assistant or a more advanced agent: it should always be a human who has the last word before anything happens in the system.

Finally, Emanuel highlights the protection of personal data as an important part of responsible AI.
- "Safe handling of data is a natural part of how we work with AI. That's why we take the GDPR into account when designing our AI solutions and ensure that all data processing takes place within the EU/EEA.

For those who want to know more

Would you like to know more about how we work with AI and what principles apply to our AI functions? You can read our policy here.

Questions or concerns? You are always welcome to contact us!
