# When AI Measures Skills, Who Measures the Human? The Case for Behavioural Assessment in an AI-Saturated Talent Market
The talent assessment industry has a $400 billion problem. Not a lack of measurement tools, not a shortage of AI-powered platforms, and certainly not a deficit of skills taxonomies. The problem is simpler and more uncomfortable than any of those: the industry measures the wrong things, fixes the wrong gaps, and then wonders why fewer than 30% of companies are satisfied with the results.
This is not a provocative claim designed to capture attention. It is an evidence-based observation rooted in the most significant workforce research published in 2025 and early 2026, from sources including the Josh Bersin Company, SHL, and the World Economic Forum. Each of these institutions, from different vantage points, has arrived at the same troubling conclusion: existing approaches to skills measurement and development are not working at scale. Where they diverge, and where the real argument begins, is in their proposed solutions.
The thesis of this article is straightforward. AI-powered skills assessment is proliferating at extraordinary speed. The methodology behind it, however, remains anchored in a measure-gaps-and-fix-them model that has not fundamentally changed in decades. The missing variable, the one that determines whether any reskilling effort, AI-readiness programme, or talent development initiative actually produces lasting change, is human behaviour: how people show up, adapt, communicate, and lead in their specific context. Until behavioural assessment becomes the foundation rather than the afterthought, the assessment industry will continue to produce impressive data and disappointing outcomes.
## The $400 Billion Question: Why More Measurement Hasn't Produced Better Development
In February 2026, the Josh Bersin Company published what may be the most consequential piece of corporate learning research in over a decade. The findings are stark. Despite approximately $400 billion spent annually on corporate learning and development globally, 74% of companies report that they cannot keep up with the pace of skills demand. Fewer than 30% are satisfied with the skill development outcomes they are achieving (Bersin, 2026).
Those numbers deserve a moment of serious reflection. They do not describe a system that needs optimisation. They describe a system with a structural flaw.
Bersin frames the problem primarily as a learning delivery challenge, calling for what he terms "dynamic enablement" through AI-native platforms that can deliver learning in the flow of work. The research is rigorous and the analysis is compelling. Yet the prescription focuses almost entirely on technical and role-based skills, treating behavioural capability as a secondary or derivative concern, something that will presumably improve once people have access to the right content at the right time.
Research consistently shows that this assumption does not hold. Knowledge acquisition and behavioural change operate through different mechanisms. A person can complete every AI literacy module available and still lack the communication patterns, collaborative instincts, or adaptive responses that determine whether they apply that knowledge effectively in a team setting. The 74% failure rate is not, at its core, a content problem or a delivery problem. It is an assessment baseline problem. If the initial measurement does not capture how a person actually behaves, the development pathway built on that measurement will be misaligned from the start.
The distinction matters enormously. Content delivery, no matter how personalised or AI-enhanced, operates on the assumption that the gap between current state and desired state has been accurately identified. When the gap analysis focuses exclusively on technical knowledge or cognitive ability, it misses the behavioural layer entirely. And that behavioural layer, the observable patterns of how someone responds to ambiguity, engages with unfamiliar problems, or supports others through change, is precisely what determines whether new skills are actually applied or merely acquired.
## SHL, Adaptive Foresight, and the Cognitive Assessment Ceiling
SHL has been among the most active voices in shaping the 2026 assessment conversation. In January 2026, the company named "adaptive foresight" its Skill of the Year, defining it as the ability to anticipate change and act before disruption arrives (SHL, 2026a). It is a thoughtful concept, and the instinct behind it is sound: organisations need people who can see around corners.
But when we examine how SHL operationalises this concept, the limitations become visible. SHL's assessment model remains grounded in trait measurement and cognitive ability frameworks. Adaptive foresight, in their implementation, becomes something you test for, a cognitive capacity that an individual either demonstrates in an assessment environment or does not. The solution they sell is measurement technology, not a rethinking of what gets measured.
This matters because adaptive foresight, when you strip away the branding, is not a static trait. It is a behavioural pattern. It is what happens when a person encounters incomplete information, uncertain timelines, and competing priorities, and chooses to act rather than wait. It manifests differently in a product manager than in a frontline nurse. It looks different in a startup founder than in a public sector leader. Testing for it with a standardised cognitive assessment captures, at best, a narrow slice of the actual capability.
In a separate February 2026 publication, SHL analysed data from nearly one million individuals and concluded that only one in three workers currently has the skills to thrive in AI-enabled roles (SHL, 2026b). Their defined AI readiness factors include AI literacy, analytical ability, continuous learning orientation, and willingness to champion AI adoption. Read that list carefully. Every single one of those factors is behavioural in nature. AI literacy requires the behavioural willingness to engage with unfamiliar technology. Continuous learning orientation is, by definition, a sustained behavioural pattern. Willingness to champion AI is an observable behaviour expressed in meetings, in project choices, and in how someone influences colleagues.
Yet SHL frames and sells all of these through a cognitive and competency lens, not a behavioural development one. The distinction is not semantic. A cognitive lens asks: does this person have the capacity? A behavioural lens asks: does this person demonstrate the pattern, and in what contexts does it strengthen or diminish? The first question produces a score. The second produces a development pathway.
## The Deficit Trap: Why AI-Driven Assessment Keeps Looking for What's Missing
The proliferation of AI-powered assessment platforms in 2025 and 2026 has been remarkable. New tools enter the market weekly, each promising more precise skills mapping, faster gap analysis, and better alignment between workforce capability and organisational need. The technology is genuinely impressive. The underlying methodology is not.
Nearly every AI-driven assessment platform on the market today is built on the same foundational logic: identify what skills employees lack, recommend training to fill those gaps, reassess, and repeat. This is deficit-fixing dressed in machine learning language. The algorithm may be sophisticated, but the question it asks is the same question assessment has asked for fifty years: what is wrong with this person's skill set, and how do we fix it?
The evidence supports a different starting point. Strengths-based development approaches, where the initial assessment identifies what a person already does well and builds outward from those patterns, consistently outperform deficit-based models. Gallup's extensive research across multiple decades has demonstrated that employees who use their strengths daily are six times more likely to be engaged and three times more likely to report excellent quality of life. The World Economic Forum's 2025 Future of Jobs Report reinforces this direction, identifying resilience, flexibility, agility, and leadership as the top employer priorities for 2030, and noting that 39% of core skills will change within five years (WEF, 2025). These are not gaps to be filled. They are human capabilities to be recognised, measured accurately, and developed intentionally.
The AI assessment market, by and large, has not absorbed this insight. The tools keep getting faster at identifying deficits without ever questioning whether deficit-identification is the right starting point. When the primary output of an assessment is a list of what someone cannot do, the development experience begins with inadequacy. When the primary output is a clear picture of how someone already shows up, adapts, and contributes, the development experience begins with agency.
This is not a philosophical preference. It is a methodological choice with measurable consequences for engagement, retention, and actual capability growth.
## The Individual Signal Lost in Organisational Noise
There is a second structural problem with the current generation of AI-powered assessment, and it is less discussed but equally important. These platforms are designed primarily for enterprise-scale aggregation. Their core value proposition is organisational: a heat map of capability across the workforce, a dashboard showing where skill gaps cluster, a recommendation engine that suggests which departments need which training programmes.
This is useful data. It is also incomplete data. When assessment is designed primarily to serve organisational reporting, the individual signal gets compressed into aggregate patterns. The nuance of how one specific person communicates under pressure, or how another person's collaborative instincts shift depending on team composition, or how a third person's problem-solving approach differs when they are leading versus contributing, all of that richness gets flattened into a competency score that feeds a dashboard.
The consequence is that the person being assessed rarely receives insight that feels true to their actual experience. They receive a profile that describes them in generic terms, mapped against a competency framework they had no hand in defining, and accompanied by development recommendations that could apply to anyone with a similar score. The assessment becomes something done to them rather than something done for them.
This is where an individual-first orientation becomes not just a philosophical distinction but a practical advantage. When assessment begins with the person, when the primary output is a behavioural profile that the individual recognises as accurate and immediately useful, two things happen. First, the individual engages with the results rather than filing them away. Second, the development pathway that follows has a foundation of self-awareness rather than a foundation of organisational mandate.
The enterprise still gets its data. Aggregated behavioural patterns across a workforce are enormously valuable for strategic planning. But the aggregation is built from high-fidelity individual profiles, not from standardised scores that sacrifice precision for scalability.
## Behavioural Readiness as the Precondition for AI Readiness
The central argument of this article is not that AI has no role in assessment. AI has a significant and growing role in how organisations understand capability, predict performance, and personalise development. The argument is more specific: behavioural readiness is the foundation without which AI-readiness programmes will continue to underperform.
Consider a practical scenario. An organisation invests in an AI-readiness programme designed to upskill its workforce for an increasingly automated operating environment. The programme includes AI literacy training, workshops on working alongside generative AI tools, and role-redesign initiatives. All worthwhile. But the organisation assesses readiness purely through a cognitive and technical lens: does this person understand AI concepts? Can they use the tools? Do they have the analytical skills to interpret AI outputs?
Six months in, adoption is uneven. Some teams have integrated AI tools fluently. Others are resistant. Some individuals completed every module but have not changed a single workflow. Others who scored lower on the initial assessment have become informal champions, helping colleagues experiment and adapt.
When we examine the data more closely, the differentiating factor is rarely technical skill. It is behavioural. The people who adopted AI tools effectively were those who already demonstrated patterns of openness to unfamiliar processes, comfort with iterative experimentation, and a tendency to share knowledge with peers. The people who resisted were not less capable cognitively. They simply had different behavioural orientations toward change, risk, and autonomy.
No amount of AI literacy training changes those behavioural patterns. They require a different kind of intervention, one that begins with an accurate behavioural assessment, makes those patterns visible to the individual, and creates structured opportunities for development.
This is precisely the space that the Tomorrows Compass 12 Skills framework is designed to occupy. The framework measures observable, developable behavioural patterns: how people communicate, how they adapt, how they lead, how they collaborate, how they solve problems. These are not abstract competencies mapped from a job description. They are durable human patterns that remain relevant regardless of which tools, platforms, or role definitions are in play.
The framework is AI-agnostic by design. It does not measure readiness for a specific technology or role. It measures the behavioural foundation that determines whether any new capability, technical or otherwise, is likely to be adopted, applied, and sustained. In a market where skills taxonomies are changing faster than assessment platforms can update their frameworks, durability is not a weakness. It is the most valuable feature an assessment can offer.
## What Gets Measured Shapes What Gets Developed
There is a principle in measurement science that is often cited but rarely followed to its logical conclusion: the act of measurement shapes the thing being measured. In assessment, this plays out with significant consequences. If your assessment measures cognitive deficits, your development programme will focus on cognitive remediation. If your assessment measures technical skill gaps, your development programme will focus on training courses. If your assessment measures observable behavioural patterns, your development programme will focus on practical, context-specific behavioural growth.
The assessment baseline determines the development trajectory. This is not a minor methodological detail. It is the single most consequential design decision in any talent development strategy.
The WEF's 2025 report identifies the skills employers consider most important through 2030: resilience, flexibility, agility, motivation, self-awareness, curiosity, lifelong learning, empathy, active listening, and leadership (WEF, 2025). Every one of these is a behavioural pattern. Not a cognitive trait to be tested. Not a technical skill to be trained. A pattern of behaviour that can be observed, measured, developed, and strengthened over time.
When the world's most authoritative source on future skills tells us that human behavioural capability is the top priority, and the assessment industry responds with more AI-powered cognitive testing and skills gap analysis, there is a misalignment that deserves scrutiny.
The case for behavioural assessment is not a case against technology. It is a case for precision. It is the argument that if we are going to measure human capability, and if we are going to build development programmes on the results, we should start by measuring what actually drives performance in human terms: the observable patterns of how people work, communicate, adapt, and lead.
## The Path Forward: Assessment That Starts With the Person
The assessment market in 2026 is not short on innovation. It is short on the right question. The right question is not "what skills does this person lack?" or "how AI-ready is this workforce?" The right question is: "What behavioural patterns does this person already demonstrate, and how can those patterns be channelled, strengthened, and developed for whatever comes next?"
This is not a minor reframing. It changes the entire architecture of how organisations approach talent development. It shifts the assessment from a sorting mechanism to a development catalyst. It gives the individual a starting point grounded in self-recognition rather than inadequacy. And it provides the organisation with a behavioural baseline that remains valid even as technical skill requirements shift quarter by quarter.
Tomorrows Compass exists to ask this question with scientific rigour and practical clarity. The 12 Skills framework is not a competitor to AI-powered skills taxonomies. It is the behavioural foundation that those taxonomies require but do not provide. Without it, AI readiness programmes will continue to produce the outcomes the data already shows: large investments, sophisticated measurement, and persistent dissatisfaction with the results.
The evidence is clear. The $400 billion is being spent. The tools are being built. The taxonomies are being refined. What remains missing is the human baseline, the behavioural assessment that tells each person not what they lack, but what they bring, and how to build from there.
That is the work that matters now. Not more measurement. Better measurement. Measurement that starts with the person and earns its place in every development decision that follows.
---
Sources:
- Bersin, J. (2026). New Research: How AI Transforms $400 Billion of Corporate Learning. Josh Bersin Company. https://joshbersin.com/2026/02/new-research-how-ai-transforms-400-billion-of-corporate-learning/
- SHL. (2026a). Revealed: SHL's Skill of the Year 2026. https://www.shl.com/resources/by-type/blog/2026/revealed-shls-skill-of-the-year-2026/
- SHL. (2026b). Six AI Shifts Reshaping HR: Questions HR Leaders Should Be Asking. https://www.shl.com/resources/by-type/blog/2026/six-ai-shifts-reshaping-hr-questions-hr-leaders-should-be-asking/
- World Economic Forum. (2025). Future of Jobs Report 2025. https://www.weforum.org/stories/2025/12/work-transformation-skills-agility-growth/

## About the Author
Dr. Ercole Albertini
Founder, Tomorrows Compass
Dr. Eric Albertini is co-founder of Tomorrows Compass, with over 25 years at the intersection of leadership strategy, people development, and organisational transformation. His doctoral research synthesised more than 15 global competency frameworks into a practical model for future-readiness, which became the foundation of the Tomorrows Compass assessment. He has built learning centres of excellence for one of South Africa's leading financial institutions, designed skills-based development programmes delivered across Africa, and published research on integrating spirituality into leadership development. Eric writes about what it takes to build leaders and organisations that don't just survive disruption, but thrive in it.
Discover where you stand
168 questions. ~25 minutes. A personalised report across 12 research-backed capabilities.
Take the Free Assessment