The “Peter Principle” is nothing new… Popularized in 1969 by Laurence J. Peter, it describes a structural flaw in organizations—one most of them still struggle to get rid of. According to this principle, within companies, individuals get promoted until they reach their level of incompetence. [1]
Today, the rise of artificial intelligence (AI) adds another layer to the issue. On one hand, AI offers unlimited access to knowledge, automates tasks, and increases performance… In short, it seems to be a solution to (almost) everything... including incompetence, by both challenging and reinvigorating hierarchies.
But on the other hand, it clearly represents a kind of leveling down of human skills, since it can be seen (see flow theory) as a transfer of knowledge and abilities into datacenters and algorithms. This comes with a very real risk: a worsening—and even an acceleration—of the Peter Principle, as organizations become more dependent on technology, with all the underlying vulnerabilities that entails.
We’re caught in the duality of AI, as applied to the sociology of organizations… a paradox in its own right! Between the threat of declining intelligence and the promise of redemption, how can companies come out ahead?
THE "PETER PRINCIPLE"
OR THE PERSISTENT STRUCTURAL FLAW IN ORGANIZATIONS
The principle is simple, yet relentless… Picture a coworker (yes, any coworker!). So far, nothing complicated. They’re engaged, committed to their management chain, and deliver good or even excellent results (or, on the contrary, not at all). In any case, recognized by their hierarchy, they eventually get promoted!
And this cycle of performance measurement and/or recognition can be repeated, over and over again… Until, inevitably, that same colleague lands a role they simply can’t handle. Sometimes really can’t handle... And eventually, it shows.
Sure, this coworker might train and “grow” to make up for their shortcomings… but reality often (and even frequently) ignores the lovely world of personal and professional development.
In other words, training won’t always cut it. Success depends entirely on the position, the expectations, and the person’s learning curve—which, let’s face it, doesn’t always improve over time. In some roles, it might even decline rapidly… and tech acceleration certainly doesn’t help.
A brilliant engineer can turn into a mediocre manager, unable to guide their team. A top graduate might not have an ounce of entrepreneurial instinct, just as a top-tier technocrat isn’t automatically a visionary… No need to bring up the infamous line about selling sand in the desert.
To give credit where it’s due, let’s go back to Laurence J. Peter’s own words: “In a hierarchy, every employee tends to rise to their level of incompetence.” The corollary? “Over time, every position will be occupied by someone who is incapable of fulfilling its responsibilities.”
And the numbers, like facts, tend to be stubborn. A Harvard Business Review study found that 60% of new managers fail within the first 18 months, often due to a lack of appropriate skills[2].
Needless to say, this phenomenon leads to frustration, inefficiency, and a “leveling down” of the highest layers of organizations. Worse yet, these underperforming managers tend to entrench themselves—either in denial or simply enjoying the comfort of their new role—blocking the rise of true talent. And since there’s “no room at the top,” that ceiling gets a whole lot lower for everyone else.
What happens then? The high performers are usually the first to leave. As for the others… well, you know how the saying goes... So why not let them go? The reasons are endless: labor laws, office politics, favoritism, weak leadership, fear of change, complacency… Take your pick!
In a world where performance rules and tech keeps accelerating the pace, this internal deadlock becomes all the more disastrous for an organization’s competitiveness.
AI AS A NEW THREAT:
A PROVEN RISK OF DUMBING DOWN AND GROWING DEPENDENCE
There was a time—not so long ago, though it may feel ancient to anyone under 20—when studying meant going to the library to borrow books, or visiting a bookstore to buy them. The most recent generations to live this were probably Gen X and Gen Y. And you had to read! Take notes, learn, reproduce, analyze, critique… and yes, pay attention to handwriting and spelling, too.
That time appears to be over, at least according to data from INSEE[3] and the French Ministry of Education[4].
Other skills have now taken precedence over calculation, reading, writing, synthesis, or analytical thinking—many of them tied to digital fluency: social media, graphic design, video editing, or crafting prompts for generative AI...
And with AI, a real concern has emerged: the risk of a general dumbing down... Far from solving the Peter Principle, this dynamic may be accelerating it. Virtual assistants, workflow platforms, and automation software are now taking over many competencies once considered distinctly “human”: data analysis, report writing, project management.
Students are embracing this shift… and so are employees. Acknowledging it means admitting we’ve lost our traditional ways of transmitting knowledge—whether oral, written, or through critical reading. We’ve handed that knowledge over to machines—algorithms that store and process data for us.
In more “Bourdieusian” terms, what’s at stake is the transfer of cultural capital from humans to machines. Nothing less. And with it comes the risk of a sharp decline in critical thinking, as algorithmic bias goes unchecked and the era of “post-truth” (already well underway) takes hold. That means a prepackaged reality, delivered in seconds or less, to passive individuals willing to give up control over knowledge—and, ultimately, knowledge itself.
A Gartner survey shows that 47% of professionals already delegate tasks to AI that they used to manage on their own five years ago[5]. Take the example of a financial analyst: by offloading calculations to an algorithm, they inevitably let their own expertise atrophy. And repetition over time only worsens that effect.
This transfer of knowledge to machines flattens individual skills, making employees less autonomous and more dependent. Combined with the Peter Principle, it’s an explosive mix. The downward leveling of management and executive roles identified by Laurence J. Peter could accelerate and spread to every layer of the organization. Worse, performance metrics themselves could quickly become skewed, promoting “imposter employees” into leadership roles.
A manager promoted based on past performance, but whose competence has withered due to AI reliance, may use smart tools to mask their shortcomings. But for how long? And at what cost to the organization?
This dual phenomenon—managerial incompetence and knowledge outsourcing—creates two major risks for organizations:
First, systemic dependency. When AI becomes the main knowledge repository, any failure—cyberattack, faulty update—can paralyze operations. Imagine a logistics team that can’t function without its planning software.
Second, a loss of collective intelligence. Talents stagnate under weak managers, creativity dries up, and innovation suffers. A study by MIT has reported that overly AI-centric companies lose up to 20% of their adaptive capacity[6]. The dumbing-down effect becomes organization-wide.
A Deloitte study warns that 35% of companies fear losing critical expertise due to this overreliance[7]. And the risk is real—even inevitable—if this logic is pushed too far. The result? A weakened organization, where human intelligence fades and teams are left vulnerable to outdated algorithms or tech failures.
This is the heart of the challenge: managing AI dependence before it manages us.
AI AS A SOLUTION:
PREVENTING INCOMPETENCE & RETHINKING ORGANIZATIONS
Obviously, we shouldn’t fall into a one-sided reading of AI. It’s not about rejecting everything—or embracing it blindly—but rather finding a third path: one grounded in reason and measurable performance.
In this light, and faced with the dual phenomenon (Peter Principle × Knowledge Transfer), AI—when used wisely—can actually be a force for good, or at least offer concrete solutions...
First, data analysis allows us to diagnose and anticipate manifestations of the Peter Principle. For instance, machine learning algorithms can assess employees’ real competencies in their current roles (technical performance, interpersonal skills, resilience). This can help refine job and competency frameworks and even predict how likely someone is to succeed in a new role, should they be promoted or moved internally.
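To make this concrete, here is a minimal sketch (in Python) of the kind of scoring such an algorithm could perform. The competency features, the synthetic data, and the choice of a logistic regression are illustrative assumptions, not a description of any particular vendor’s tool:

```python
# Minimal, illustrative sketch: estimating how likely a candidate is to succeed
# in a new role from competency scores. All feature names, data, and thresholds
# below are assumptions made up for the example.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)

# Each row: [technical_performance, interpersonal_skills, resilience], scored 0-10.
X = rng.uniform(0, 10, size=(200, 3))
# Toy outcome of past promotions (1 = worked out, 0 = did not), deliberately tied
# more to interpersonal skills and resilience than to technical performance alone.
y = ((0.2 * X[:, 0] + 0.5 * X[:, 1] + 0.3 * X[:, 2] + rng.normal(0, 1, 200)) > 5).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)
model = LogisticRegression().fit(X_train, y_train)
print(f"Hold-out accuracy: {model.score(X_test, y_test):.2f}")

# Score a specific candidate: strong technically, weaker on interpersonal skills.
candidate = np.array([[9.0, 4.5, 6.0]])
print(f"Estimated probability of success in the new role: {model.predict_proba(candidate)[0, 1]:.2f}")
```

The point is not the model itself, but the shift it enables: promotion decisions informed by the skills the target role actually requires, rather than by past performance in a different job.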
In fact, a McKinsey study found that companies using AI for talent management saw a 15% productivity boost by more accurately identifying promotion candidates[8]. So no more blind decisions based solely on past performance, internal networks, or favoritism: AI brings valuable objectivity and improves outcomes!
Moreover, AI brings a much-needed layer of personalization to an area tied directly to the company’s actual needs: training. A potential manager identified as weak in leadership can follow a tailored development plan before promotion (via AI-powered platforms like Coursera, LinkedIn Learning, Skillsday, or our solution Skillsfabric). In this way, AI transforms a mechanical process into a thoughtful upskilling journey tailored to both company and employee needs. Another way to break the incompetence cycle.
AI also opens the door to reinventing hierarchy. The Peter Principle thrives in rigid structures where promotion is the only reward. But it’s no insult to say that not every talented person is cut out to be a manager. Companies like Google have been testing alternate tracks: technical experts can grow without managing teams, with AI measuring their impact through specific indicators (project delivery, innovation, etc.)[9].
Internal transitions and expert career paths (full or partial) are also on the rise—through mentoring, coaching, internal training, and more. These options are not only plentiful, but also widely appreciated by employees.
Ultimately, this model values human intelligence without sacrificing it to technological dependence. It also proves to employees that their development and professional growth aren’t just corporate buzzwords.
Finally—and perhaps this is the most obvious and widely recognized benefit of AI—it can relieve managers of low-value, routine tasks (tracking, reporting), freeing them up to focus on what truly matters: inspiring and uniting their teams! Automating the mundane reduces the risk of failure caused by poorly managed overloads of responsibility, failures almost always rooted in the human side of leadership.
THE PETER PRINCIPLE & AI: RISING TO THE CHALLENGE OF MEDIOCRACY!
The combination of the Peter Principle and the unintended consequences of AI is certainly cause for reflection—but more importantly, for action!
Of course, this needs to be put into perspective—after all, everyone eventually comes up against their own threshold of incompetence. But that threshold isn’t set in stone. If the person concerned chooses to tackle it head-on, the Pareto Principle (this time) might quickly prove just how powerful focused effort can be.
Whether it’s training, coaching, personal inquiry, or simply listening and showing empathy toward one’s team… everyone must find what works for them! But one thing is certain: facing reality, however uncomfortable, is the first step. From there, it’s about choosing to act. Much like therapy, really.
The solution may well lie in the world of professional sports, where some great coaches have understood one essential truth: helping an athlete simply make up for their weaknesses pales in comparison to training them again and again to hone their strengths—that’s often what makes the difference… and leads to excellence!
That’s also the role of a manager… to adopt this mindset and attitude toward their team. To rebuild confidence and help people shed the limits they impose on themselves—wherever possible.
To do so, managers must be proactive—not just by refining their technical skills, but by strengthening their emotional intelligence and relational abilities.
As for organizations, executives, and HR leaders, it’s time to move beyond lip service. They must embody and institutionalize core practices to guide internal mobility and promotion processes:
- Map current and expected competencies for each role
- Evaluate actual vs. required skills for the desired position (see the sketch after this list)
- Confirm authentic motivation and give an honest picture of the job and its deeper meaning
- Discuss the candidate’s strengths and weaknesses openly
- Provide personalized training plans
- Offer internal (HR, mentors, sponsors) or external (coaches, etc.) support if needed
- Build a culture of change, reflection, feedback, and open discussion—through regular rituals
- Stop sacrificing middle management (often neglected, as shown by younger generations’ dwindling interest in pursuing it)
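As a purely illustrative companion to the first two points above, a skills-gap check can start out very simply: compare a candidate’s assessed levels against the target role’s requirements and flag what needs development. The competency names, the 0–5 scale, and the scores here are assumptions for the example:

```python
# Illustrative sketch: comparing assessed competencies against a target role's
# requirements. Competency names, the 0-5 scale, and scores are assumed for the example.
required = {"people_leadership": 4, "budget_ownership": 3, "stakeholder_comms": 4, "domain_expertise": 3}
assessed = {"people_leadership": 2, "budget_ownership": 3, "stakeholder_comms": 3, "domain_expertise": 5}

gaps = {skill: required[skill] - assessed.get(skill, 0) for skill in required}
to_develop = {skill: gap for skill, gap in gaps.items() if gap > 0}

print("Gaps to close before (or alongside) the move:")
for skill, gap in sorted(to_develop.items(), key=lambda item: -item[1]):
    print(f"  {skill}: {gap} level(s) short -> candidate for a personalized training plan")
```

The output of such a check feeds directly into the next steps of the list: an honest conversation about strengths and weaknesses, then a personalized training plan.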
On all these fronts, well-calibrated and well-used AI can be a powerful ally in breaking the culture of incompetence—or at least clearing obstacles. Poorly implemented and misused, however, it can reinforce the Peter Principle’s negative effects by excluding promising talent for unfair reasons (incomplete data, algorithmic bias).
PwC even found that 62% of professionals fear that AI could deepen inequality if not properly regulated[10].
AI, when paired with the Peter Principle, is clearly a double-edged sword. On one side, it risks flattening skills and fueling dependence, heightening the loss of competence and cognitive ability. On the other, it offers answers: accurate diagnostics, targeted training, and reimagined hierarchies.
In some cases, organizations must overcome deeply rooted cultural resistance: some managers still see AI as a threat to their role or authority. The key is to reframe AI as a tool that empowers people—not replaces them. And above all, to make that message loud and clear.
The ultimate answer likely lies in balance: using AI to guard against the Peter Principle, without giving it full control. Organizations that combine technology with human-centered values will turn this challenge into opportunity—rising above mediocrity to aim for excellence.
In a capitalist world, where technological transformation goes hand-in-hand with rising performance expectations… that’s a luxury organizations can’t afford to ignore!
To go further, download our white paper: "AI vs HR: je t'aime, moi non plus!"
An organizational or AI project? Need to rethink your HR processes? Want to review your mobility or promotion policy? Contact us!
Discover ACT-ON STRATEGY.
[1] Laurence J. Peter and Raymond Hull, The Peter Principle, William Morrow & Co, 1969.
[2] Harvard Business Review, Why New Managers Fail, 2022.
[3] INSEE PREMIERE, En 2022, un adulte sur dix rencontre des difficultés à l'écrit, No. 1993, April 2024.
[4] French Ministry of Education, Information Note No. 22.37, December 2022.
[5] Gartner, AI in the Workplace Survey, 2024.
[6] MIT Sloan Management Review, The Risks of Over-Reliance on AI, 2024.
[7] Deloitte, The Human Cost of Automation, 2023.
[8] McKinsey & Company, The Future of Work: How AI Can Boost Productivity, 2023.
[9] Laszlo Bock, Work Rules!, Grand Central Publishing, 2015.
[10] PwC, AI Ethics Survey, 2024.