#HRIS

AI ACT: threat or ally for HR leaders?

17/11/2025

Artificial intelligence (AI) is gradually creeping into the heart of HR practices: automated recruitment, career-path analysis, predictive skills management, internal mobility...
But the year 2025 marks a real turning point: the European Union has just rolled out the AI Act, the world's first regulatory framework dedicated to AI systems.

This unprecedented text imposes new requirements: classification of high-risk uses, reinforced documentation, human supervision and algorithmic transparency. All of these elements are revolutionizing the way HR departments design, use and govern their tools.

For HR leaders, this opens up an area of uncertainty: is the AI Act an additional regulatory constraint or a strategic opportunity?

The answer will depend above all on the ability of organizations to anticipate, understand and integrate these requirements into their day-to-day practices.

THE AUDIT: IDENTIFYING RISK AREAS

First and foremost, the AI Act confronts HRDs with an exercise that is all too often neglected: taking a lucid look at their technological uses.

Today, most recruitment, assessment and skills management software already incorporates AI, often through standard modules offered by software publishers.

The audit is both the starting point for compliance and a means of understanding the true extent of these uses in HR solutions. All too often, their internal logic remains opaque to the teams that depend on them.

The AI Act now requires organizations to understand how these tools work and to assess their level of risk. The text distinguishes four categories (minimal, limited, high and prohibited), each requiring a specific level of vigilance.

A few examples, not exhaustive, illustrate the diversity of situations:

  • Prohibited: the analysis of emotions (voice, face) in interviews, deemed intrusive, will have to be abandoned.
  • High-risk: automatic scoring of candidates without explainability. These models must be documented, auditable and regularly evaluated.
  • Under scrutiny: AIs likely to reproduce biases. HR departments will have to prove that decisions do not discriminate on the basis of gender, age or origin.

An audit provides an accurate picture of the current situation:

  • Which tools are based on algorithms?
  • Which decisions are partially or fully automated?
  • What models influence recruitment, mobility and appraisal choices without direct human supervision?

This mapping of uses must be based on a simple principle: the more directly a system influences a decision about a person, the more explainable and controllable it must be.
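By way of illustration, here is a minimal sketch in Python of what such a mapping might look like: a simple inventory of HR tools tagged with the four risk categories and the degree of automation. The tool names, fields and selection rule are illustrative assumptions, not a format prescribed by the AI Act.

```python
from dataclasses import dataclass
from enum import Enum

class RiskLevel(Enum):
    PROHIBITED = "prohibited"   # e.g. emotion analysis in interviews
    HIGH = "high"               # e.g. automatic candidate scoring
    LIMITED = "limited"         # e.g. chatbots answering HR policy questions
    MINIMAL = "minimal"         # e.g. spell-checking in job ads

@dataclass
class HRTool:
    name: str                     # hypothetical tool name
    purpose: str                  # what the tool does
    influences_individual: bool   # does it affect a decision about a person?
    fully_automated: bool         # is there a human in the loop?
    risk_level: RiskLevel         # outcome of the audit

# Hypothetical inventory produced by the audit
inventory = [
    HRTool("CV screening module", "ranks incoming applications",
           influences_individual=True, fully_automated=True,
           risk_level=RiskLevel.HIGH),
    HRTool("HR chatbot", "answers questions on leave policy",
           influences_individual=False, fully_automated=True,
           risk_level=RiskLevel.LIMITED),
]

# The guiding principle: the more directly a tool influences a decision
# about a person, the more explainability and oversight it requires.
needs_attention = [t for t in inventory
                   if t.risk_level in (RiskLevel.HIGH, RiskLevel.PROHIBITED)
                   or (t.influences_individual and t.fully_automated)]

for tool in needs_attention:
    print(f"{tool.name}: document, audit and add human oversight")
```

Even such a simple register makes it immediately visible which tools require documentation, audits and human oversight first.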

But the audit also highlights the text's blind spots: certain areas of interpretation remain unclear, administrative procedures are cumbersome, and dependence on non-European vendors can complicate compliance.

Far from being a hindrance, this awareness is an opportunity: better knowledge of your tools means better control over your decisions, greater credibility and more secure practices.

The audit then becomes an act of HR sovereignty, essential for building ethical, useful and compliant AI.

GOVERNANCE: CO-CONSTRUCTING A RESPONSIBLE HR AI

To prevent complexity from taking over, it's not enough to identify risks: responsibilities must be clearly defined. The AI Act calls on organizations to rethink their governance and share responsibilities between HR, IT, legal and data teams. It's a real cultural change.


Managing algorithms is no longer just a technical matter: it is becoming a strategic, cross-functional and profoundly human issue.


To make it operational, organizations need to establish a structured dialogue between the various players. Setting up AI-HR committees - bringing together all stakeholders - helps steer automated decisions and clarify responsibilities.

But these bodies can only have a real impact if they are supported by a robust framework based on clear benchmarks, common methods and shared rules. It is this governance architecture that guarantees the consistency of practices and the long-term anchoring of principles.

Effective and sustainable AI governance rests on several essential foundations:

  • Regular model validation and audit processes
  • Complete, traceable documentation of each tool and its uses
  • Responsibilities shared between operational professions and support functions
  • Formalized procedures for detecting, managing and correcting incidents or biases in algorithms (see the sketch after this list).
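To make these foundations concrete, here is a minimal sketch of what a traceable documentation record with incident logging could look like. The field names, roles and example entries are illustrative assumptions, not a prescribed format.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class ModelRecord:
    """Hypothetical documentation entry kept for each HR AI tool."""
    tool_name: str
    vendor: str
    intended_use: str
    risk_level: str                 # minimal / limited / high / prohibited
    owner_hr: str                   # accountable HR contact
    owner_it: str                   # accountable IT / data contact
    last_validation: date           # date of the latest model review
    known_limitations: list = field(default_factory=list)
    incidents: list = field(default_factory=list)   # detected biases, errors

    def log_incident(self, description: str, corrective_action: str) -> None:
        # Formalized detection and correction of incidents or biases
        self.incidents.append({
            "date": date.today().isoformat(),
            "description": description,
            "corrective_action": corrective_action,
        })

record = ModelRecord(
    tool_name="Candidate scoring module",
    vendor="Example vendor",
    intended_use="Pre-ranking of applications, reviewed by a recruiter",
    risk_level="high",
    owner_hr="HR operations lead",
    owner_it="Data team lead",
    last_validation=date(2025, 10, 1),
)
record.log_incident("Lower scores observed for career-break profiles",
                    "Feature review and re-weighting, recruiter double-check")
```

In practice, such a record would live in whatever documentation system the AI-HR committee adopts; what matters is that each tool has accountable owners and a dated trail of validations and corrections.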

This "ethical by design" approach integrates responsibility and transparency from the outset, rather than correcting after the fact. Employees gain confidence, decisions can be explained, and the corporate culture is strengthened.

Setting up this governance implies new resources and coordination, often a challenge for SMEs and mid-sized companies, but it is also an opportunity to progress collectively and strengthen AI mastery.

SUPPORTING TEAMS: FROM TEXT TO PRACTICE

Compliance cannot be decreed. It has to be lived and reflected, every day, in HR practices and decisions.

Support is therefore the keystone of the approach. It must link rules to action and adapt to the maturity of each organization, combining training, awareness-raising and operational implementation:

  • training in reading and evaluating HR algorithms,
  • workshops on the concepts of bias, transparency and fairness,
  • operational roadmaps with monitoring indicators and 6-12 month action plans.

The challenge is not just to "check a box", but to create a common AI culture, where everyone understands their role: from the recruiter who relies on a scoring model to the lawyer who checks its traceability.

Little by little, the teams are developing a common language, the same demand for proof and a real capacity for arbitration.

Yes, the AI Act is demanding. But it also creates a tremendous learning opportunity. It's no longer just a matter of protecting ourselves from AI, but of taming it.

By capitalizing on skills, companies gain in responsiveness, credibility and attractiveness. People remain at the heart of decision-making: technology becomes a partner, not a substitute.

ACTING NOW SAVES TIME AND CREDIBILITY

The benefits of being proactive are immediate:

  • Reduced legal and financial risks
  • Tools secured before inspections
  • More robust and explainable decisions
  • Stronger relationships of trust with employees and social partners
  • A competitive edge in tenders thanks to controlled compliance

On the other hand, waiting exposes you to sanctions, tool withdrawal, high correction costs, loss of credibility...

At ACT-ON GROUP, we see the AI Act as much more than a constraint: it's an opportunity to give HR departments back their full role in transforming organizations.

By combining our HR, data and legal expertise, we support organizations at every stage, from auditing tools and governance to operational implementation and team training.

Our integrated approach transforms compliance into control, and control into performance.

Anticipate. Understand. Act.
The AI Act offers HRDs the opportunity to build a more responsible, confident and successful HR function.

Have a question? A project? Contact the ACT-ON GROUP experts today!
