Tomorrows Compass vs IMD Future Readiness Indicator
The IMD Future Readiness Indicator and Tomorrows Compass Discover both use the phrase "future readiness" in their names. They are not competing tools. They are not even comparable tools. They answer fundamentally different questions, at fundamentally different scales, and a buyer who confuses them will end up buying the wrong instrument for their actual question. That mistake is more expensive than any direct cost of the assessment.
What IMD measures
The IMD Future Readiness Indicator is published by the IMD World Competitiveness Center, which sits within IMD Business School in Lausanne. The Indicator ranks countries. It assesses national competitiveness across innovation capacity, talent infrastructure, technological adaptability, and macro-economic positioning, and produces an annual league table that is widely cited in trade-policy circles, foreign investment decision-making, and macro-economic strategy work.
The Indicator is built from a composite index of public statistics, longitudinal indicators, and surveys of senior executives in each covered country. The methodology is openly published, the rankings are debated robustly in the literature, and the Indicator is rightly respected as a serious macro-economic instrument. When the question is "where does Singapore stand on innovation capacity relative to South Korea, and how has that changed over the last five years," IMD's Indicator is the right tool.
What IMD's Indicator is not, by design, is a personal development tool. It is not an enterprise capability diagnostic. It is not a coaching aid. It does not measure individual professionals against a behavioural framework. It does not produce development priorities for a specific person preparing for a specific role transition. The unit of analysis is the country.
What Tomorrows Compass measures
Tomorrows Compass Discover is a 215-item behavioural assessment that measures individual professionals. It scores 12 future-readiness capabilities organised into three skillsets (Dynamic Adaptability, Strategic Problem Solving, and Agile Collaboration), and assigns each capability one of four strength bands: Development Priority, Baseline Strength, Established Strength, or Signature Strength. It synthesises capability scores with the delegate's Enneagram personality type to produce a personalised development blueprint. See the 12 skills for the full model.
Discover is a personal and team-level instrument. Its unit of analysis is the individual professional. Its output is a development plan: this capability is currently a Development Priority in a domain that matters for your role, and here is the type-specific development key for the next twelve months. It is not a national competitiveness ranking, and it would be useless as one. Averaging individual capability scores across a country would lose every signal that makes the per-capability bands actionable.
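The point about averaging losing signal can be made concrete with a minimal sketch. The band thresholds and scores below are invented for illustration only (they are not Tomorrows Compass's actual cut-offs): two cohorts share an identical mean score yet have entirely different band profiles, and a country-level average would report them as interchangeable.

```python
from collections import Counter
from statistics import mean

# Invented band thresholds, for illustration only.
def band(score: int) -> str:
    if score < 40:
        return "Development Priority"
    if score < 60:
        return "Baseline Strength"
    if score < 80:
        return "Established Strength"
    return "Signature Strength"

cohort_a = [50, 52, 48, 51, 49]   # uniformly mid-range
cohort_b = [20, 85, 15, 90, 40]   # polarised: same mean, very different needs

for name, cohort in (("A", cohort_a), ("B", cohort_b)):
    # Same mean, but the per-capability band distribution tells a
    # completely different development story for each cohort.
    print(name, round(mean(cohort)), Counter(band(s) for s in cohort))
```

Cohort A is five people who all need the same moderate stretch; cohort B is a mix of Development Priorities and Signature Strengths that demands an entirely different intervention. The mean score, 50 in both cases, cannot distinguish them.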
Why the confusion happens
Both tools use the phrase "future readiness" because both, in some sense, are about adaptability under change. The shared terminology is unfortunate but understandable: "future readiness" is a useful frame at multiple scales (national, organisational, team, and individual). The same words apply because the underlying concept is genuinely scale-invariant. A country can be more or less future-ready. A workforce can be more or less future-ready. A leadership team can be more or less future-ready. A specific senior leader can be more or less future-ready. All four statements are coherent. They are just not measuring the same thing or operating at the same scale.
The practical question for a buyer is which scale they are operating at:
If you are advising a national government on investment in skills infrastructure, on trade-policy positioning, or on long-horizon competitive strategy, IMD's Indicator is the right instrument. Discover would not even pretend to do this work.
If you are running a leadership development programme, building cohort capability across a senior team, coaching a high-potential through a career transition, or making a succession decision about a specific candidate, Tomorrows Compass Discover is the right instrument. IMD's Indicator has nothing useful to say about any of these decisions.
The two instruments live in genuinely different professional communities. IMD's Indicator is consumed by macro-economic policy advisors, trade economists, foreign-investment strategists, and government competitiveness units. Discover is consumed by L&D leads, executive coaches, organisational development practitioners, and HR business partners working on talent decisions. The fact that both communities use "future readiness" as a frame is a vocabulary collision, not a category overlap.
Can they be used together?
In principle, yes. A consultancy advising a national talent strategy might use IMD's macro-data to position the country relative to peers, alongside aggregated Tomorrows Compass cohort data to triangulate where individual-level talent capability sits relative to that national positioning. A national skills agency commissioning research on workforce future-readiness could combine the IMD national context with Discover's individual-level data to produce a richer picture than either tool alone would yield.
In practice, this is rare. The two tools are sold through different channels to different professional communities and integrate poorly because they were never designed to be used together. The combined-use case exists in theory but is unusual in commercial practice. Most buyers correctly choose the tool that matches their scale and ignore the other one entirely, which is the right move when the scales are genuinely different.
The mistake to avoid is treating the IMD Indicator as if it could substitute for individual-level capability measurement, or treating Discover as if it could speak to national competitiveness. Both substitutions break, and both produce confused decisions.
What gets lost in the confusion
When a buyer ends up with the wrong instrument for their question, the cost is not the price of the assessment. The cost is the decision that gets made on the wrong information. A national skills agency that uses individual capability data to make a country-level competitiveness claim will overweight the signal from whichever cohort happened to be assessed and miss the macro factors the IMD Indicator is built to surface. A leadership development programme that uses national competitiveness data to design individual development plans will produce generic curricula disconnected from any specific leader's gaps.
Both errors are surprisingly common because the shared "future readiness" terminology hides the scale mismatch until late in the engagement. The clearest way to avoid the trap is to ask the unit-of-analysis question up front: are we trying to understand a country, an organisation, a team, or a specific person? Once that is settled, the right instrument follows automatically. Asking it late, after the assessment is already commissioned, is how buyers end up with data that does not answer the question they actually have.
A worked example
Consider a Cape Town-based consulting firm advising a South African government department on workforce-readiness investment for the next five-year planning cycle. The department needs two distinct things from the engagement.
The first need is national context. Where does South Africa rank on future-readiness indicators relative to comparable upper-middle-income economies? What gaps exist relative to peer countries on talent infrastructure, innovation capacity, and technological adaptability? This is an IMD question. The Future Readiness Indicator places South Africa in the league table, identifies the specific dimensions where the country is gaining or losing ground, and gives the department the macro-economic framing it needs for its policy submission.
The second need is individual-level capability data. The department wants to fund a leadership development programme for senior public-sector officials, and it needs to know which behavioural capabilities those officials should be developing to lead through the next five years of policy implementation. This is a Tomorrows Compass question. Aggregated Discover data across a cohort of senior officials produces a capability-gap profile. Perhaps Embracing Uncertainty and Cross-Cultural Collaboration show up as cohort-wide Development Priorities; that finding informs the curriculum design and the individual development plans for each official.
The consulting firm uses both instruments, on different parts of the engagement, with different stakeholders. The IMD Indicator informs the strategy document submitted to the minister. The Discover data informs the curriculum design and individual coaching plans. Each instrument is doing the job it was built for. Neither was forced to substitute for the other.
Methodology
IMD's Indicator uses a composite index built from public statistics, longitudinal indicators, and surveys of executives. The methodology is openly published, peer-reviewed, and refined annually. It is a mature instrument in the macro-economic measurement tradition.
Tomorrows Compass uses Phase A absolute scoring on a 215-item behavioural assessment with a 5-flag validity engine that produces a public response-quality verdict alongside the capability scores. The full methodology, including the maturity statement and the transition plan from Phase A through Phases B and C, is published on the methodology page. The current maturity statement is Provisional: instrument locked, pilot data collection in progress.
Comparing the methodological maturity of the two is a category error. IMD's Indicator is a mature macro-economic measurement instrument. Discover is a younger individual-level behavioural assessment with a transparently published maturity stage and a public transition plan. They are not in the same category and should not be compared as if they were. See Beyond Buzzwords for the full framework-credibility deep-dive.
Take the assessment
If you are operating at the country scale, IMD is the right tool. If you are operating at the individual or team scale, take Tomorrows Compass Discover. Best Future Skills Assessments in 2026 is the companion landscape piece; it places both instruments in the wider context of future-readiness measurement and helps a buyer triangulate which scale they are actually operating at before committing to an instrument.
All methodology specifics referenced in this article reflect Tomorrows Compass's own framework, estimates, and modelling. Pilot validation is in progress; figures should be read as directional rather than peer-normed. Updated as our pilot data matures.

About the Author
Dr. Ercole Albertini
Co-Founder, Tomorrows Compass
Dr. Eric Albertini is co-founder of Tomorrows Compass, with over 25 years at the intersection of leadership strategy, people development, and organisational transformation. His doctoral research synthesised 15+ global competency frameworks into a practical model for future-readiness, which became the foundation of the Tomorrows Compass assessment. He has built learning centres of excellence for one of South Africa's leading financial institutions, designed skills-based development programmes delivered across Africa, and published research on integrating spirituality into leadership development. Eric writes about what it takes to build leaders and organisations that don't just survive disruption, but thrive in it.
Discover where you stand
215 items. ~35 minutes. A personalised report across 12 research-backed capabilities.
Take the Free Assessment