Dashboards glow green. Adoption rates look strong. Training completions hit 100%. Stakeholder acceptance is peaking. And yet – the change stalls. The problem isn’t the data. It’s the belief that metrics alone can steer human change.
In this blog, we argue for an evidence-led, not metrics-led, approach. Numbers are essential, but they are only one part of a broader decision system that must include purpose, context and lived experience.
When Metrics Mislead
We’ve seen many programs celebrate 100% training completion, only to discover six months later that the behaviours the training targeted hadn’t shifted. In some instances, they’d gone backwards. The dashboard was green, but the change never stuck.
Several patterns show up repeatedly in programs that “hit the numbers” but miss the mark:
- Goodhart’s law in action. When a measure becomes a target, it stops being a good measure. If success is defined as 100% training completion, teams will optimise for attendance, not for skill acquisition or behaviour change. The dashboard turns green while performance stays flat.
- Lagging indicators masquerading as progress. Adoption and utilisation are important but they typically trail the real determinants of success such as clarity of purpose, role readiness, manager advocacy and the removal of friction in core processes.
- Local optimisation, global harm. Individual workstreams can show strong throughput while the organisation experiences more complexity and change fatigue. Siloed metrics reward output, not outcomes.
- Averages that hide the story. A single satisfaction score can blend wildly different experiences. Averages flatten out the pain points that actually derail adoption: the specific teams, locations or processes where the change collides with reality (see the short sketch after this list).
- Ethical and quality blind spots. Algorithmic insights drawn from partial or biased data can reinforce inequities, overwhelm specific groups or reward short-term wins at the expense of trust.
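To make the “averages hide the story” point concrete, here is a minimal sketch in Python (pandas, with made-up numbers and hypothetical team names) of how a respectable overall score can sit on top of one team whose experience is collapsing:

```python
import pandas as pd

# Illustrative pulse-survey responses (1-5 satisfaction); team names are hypothetical
responses = pd.DataFrame({
    "team": ["Finance"] * 4 + ["Warehouse"] * 4 + ["Sales"] * 4,
    "satisfaction": [5, 5, 4, 5,    # Finance: change is landing well
                     1, 2, 1, 2,    # Warehouse: change collides with daily work
                     4, 4, 3, 4],   # Sales: broadly fine
})

# The headline number looks tolerable...
print("Overall average:", round(responses["satisfaction"].mean(), 2))

# ...but disaggregating by team shows exactly where adoption is at risk
print(responses.groupby("team")["satisfaction"].agg(["mean", "count"]))
```

The same disaggregation works for any flattened score, whichever cut (team, location or process) matters for your change.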
Start With Purpose, Not a Dashboard
Change metrics should flow from a clearly articulated purpose: Why does this change matter, for whom and how will the organisation be better as a result? Without a purpose statement, teams backfill targets that are convenient to count rather than meaningful to achieve.
A practical way to anchor purpose is to draft a short “theory of change”:
- If we shift these behaviours or capabilities,
- then these business processes will perform differently,
- which will produce these measurable outcomes for customers, employees and the enterprise.
Only then should you choose the critical few metrics that trace that causal chain.
Build an Evidence Stack, Not Just a Dashboard
Instead of defaulting to a sea of KPIs, construct a simple, balanced ‘evidence stack’ that integrates quantitative and qualitative insight.
1) Leading indicators (Adoption & Capability)
- Early signals such as sentiment checks and readiness surveys
- Can people perform the new tasks? (role-based proficiency checks)
- Do leaders enable the change? (manager advocacy and coaching activity)
- Access to job aids, process clarity and system readiness
2) Real-time indicators (Experience & Friction)
- Pulse surveys, adoption dashboards and live feedback
- Where does change collide with workload, tools or policy? (heat maps of pain points)
- Is this an impacted stakeholder’s “first time using X” or “handover to Y”? (short pulse checks tied to specific moments)
- Listening channels: focus groups, skip-level conversations, floor walks
3) Lagging indicators (Outcomes & Impact)
- What are the performance shifts in the targeted process? (cycle time, quality, error rates)
- What are the customer or stakeholder outcomes linked to the change?
- Are impacted stakeholders drifting back to old ways, reworking or building workarounds? (adoption and sustainability signals)
The rule of thumb: measure fewer things, more precisely, closer to where work happens.
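One way to keep the critical few honest is to write down, next to each metric, the decision it exists to change. Here is a minimal sketch, assuming a code-friendly tracker; the metric names, owners and thresholds are illustrative assumptions, not recommendations:

```python
from dataclasses import dataclass

@dataclass
class Metric:
    """One entry in the evidence stack: what we measure, and the decision it informs."""
    name: str
    layer: str          # "leading", "real-time" or "lagging"
    owner: str          # single accountable owner
    decision: str       # the action this metric routinely triggers
    act_threshold: str  # the signal that tells us to intervene (or stop)

# A hypothetical critical-few selection
evidence_stack = [
    Metric("Role-based proficiency check pass rate", "leading", "Capability lead",
           "Extend coaching for roles below target", "< 80% pass rate"),
    Metric("Open pain points per process step (friction heat map)", "real-time", "Workstream lead",
           "Reroute support to the hottest step", "> 5 unresolved pain points"),
    Metric("First-time-right rate in the target process", "lagging", "Process owner",
           "Scale the pilot or revisit the design", "No lift after 2 cycles"),
]

for m in evidence_stack:
    print(f"[{m.layer}] {m.name} -> {m.decision} when {m.act_threshold}")
```

The format matters less than the discipline: every entry names a layer, a single owner, the decision it informs and the signal that triggers action.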
Upgrade the Decision Cadence
Data is only useful if it changes what people do next. Establish a decision cadence that links evidence to action:
- Weekly “sense–decide–act” huddles at the workstream level that review the evidence stack, agree the smallest meaningful intervention and assign a single owner.
- Monthly integration forums that surface cross-workstream trade-offs, reroute resources to hotspots and retire activities that aren’t moving the needle.
- Clear stop rules for experiments (what threshold tells us to halt or scale?) to avoid change for change’s sake.
Two simple tools make this cadence stick:
- Pre-mortems: Before launch, ask “If this fails in six months, what likely went wrong?” Instrument the risks you surface.
- Red teaming: Nominate a small group to challenge prevailing assumptions and test whether reported improvements show up in real work.
Replace Vanity Metrics With Decision Metrics
Vanity metrics make a program look good to sponsors. Decision metrics tell you what to do. A few swaps to consider:
- From training completions to post-training task success rates in real workflows.
- From communication volume to message recall and “explain-it-back” tests with frontline teams.
- From system log-ins to end-to-end process outcomes (e.g., first-time-right).
- From generic satisfaction to capability and confidence by role, tied to specific tasks.
If a metric doesn’t routinely change a decision – scope, sequence, resourcing, coaching – it’s probably ornamental.
Marry Analytics With Context
Advanced analytics, process mining and AI can surface patterns humans miss: where work stalls, who’s stuck, which steps add no value. The temptation is to jump straight from signal to mandate. Resist that. Pair every analytic with “thick data” – the qualitative context that explains why the pattern exists.
For example, process mining might show that approvals bunch up with two managers. A few quick interviews reveal a policy conflict and unspoken risk concerns. The fix isn’t more reminders; it’s a policy tweak and a risk playbook. Analytics pointed to the right room; people supplied the key.
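As a rough sketch of the quantitative half of that example, the snippet below (Python and pandas, over a hypothetical approval event log with case_id, activity, approver and timestamp columns) shows how a bottleneck can be localised to specific approvers. The interviews that explain why still have to happen in person:

```python
import pandas as pd

# Hypothetical approval event log -- in practice this would come from the workflow system
log = pd.DataFrame({
    "case_id":   [1, 1, 2, 2, 3, 3, 4, 4],
    "activity":  ["submitted", "approved"] * 4,
    "approver":  [None, "Mgr A", None, "Mgr A", None, "Mgr B", None, "Mgr C"],
    "timestamp": pd.to_datetime([
        "2024-03-01 09:00", "2024-03-08 17:00",   # Mgr A: 7+ days
        "2024-03-02 09:00", "2024-03-11 12:00",   # Mgr A: 9+ days
        "2024-03-03 09:00", "2024-03-04 10:00",   # Mgr B: ~1 day
        "2024-03-03 11:00", "2024-03-04 09:00",   # Mgr C: <1 day
    ]),
})

# Time from submission to approval, per case
waits = (log.pivot(index="case_id", columns="activity", values="timestamp")
            .assign(wait_days=lambda d: (d["approved"] - d["submitted"]).dt.days))
approvers = log.dropna(subset=["approver"]).set_index("case_id")["approver"]

# Average wait per approver -- the signal; interviews supply the explanation
print(waits.join(approvers).groupby("approver")["wait_days"].mean())
```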
Watch for Unintended Consequences
Every indicator shapes behaviour. To avoid gaming, publish not only the numbers but also the intent behind them. Consider complementary checks:
- Throughput vs. quality. If you incentivise speed, track error rates alongside it.
- Adoption vs. load. If adoption adds new steps to people’s work, monitor workload and change collision for the affected teams.
- Equity and inclusion. Disaggregate results by team, location or demographic where appropriate to ensure the burden of change isn’t landing on a few.
A Short Playbook for Evidence-Led Change
- Name the purpose. One paragraph, plain language.
- Draft the theory of change. If–then–which.
- Choose the critical few. No more than 8–10 metrics across the evidence stack.
- Instrument the moments that matter. Replace quarterly surveys with targeted pulses and observation where work is done.
- Run value slices. Pilot end-to-end in one segment; measure deeply; scale what works.
- Set a decision cadence. Weekly huddles, monthly integration, explicit stop rules.
- Tell the story, not just the score. Supplement dashboards with short narrative briefs: what we’re seeing, what it means, what we’ll do next.
The Mindset Shift
Data has elevated the discipline of change management. It has given practitioners a shared language with executives and product teams. But numbers do not absolve leaders from judgment, nor do they replace the human work of earning trust.
The real shift is subtle but decisive: from metric‑first to purpose‑first; from proving success to improving outcomes.
Data should inform better choices – not blind us to the people those choices affect. What do you think – are we still too obsessed with the numbers?
Want to learn more? It all starts with a conversation. Speak to us here.
