September 16, 2025

Change Management Is Not a Communications Exercise

Most digital transformations fail not because the technology doesn't work, but because the human side is underfunded and misunderstood. Here's what effective change management actually looks like — and where most programmes go wrong.

The majority of digital transformation failures I've been called in to diagnose share a common thread. It's rarely the technology. The platform works, the integrations are sound, the development team delivered what was asked of them. What failed was the organisation's ability to adopt it.

The post-mortem usually identifies "change management" as the gap. But when you dig into what change management actually looked like in those programmes, the pattern is consistent: a communications plan drafted six weeks before go-live, a series of training sessions the week before launch, and a senior leader's message in the company newsletter. That's not change management. That's change announcement.

The distinction matters, because organisations that conflate the two consistently underinvest in the work that actually determines whether a transformation sticks.

Why Transformations Fail at the Human Layer

Digital transformations require people to do their jobs differently. Sometimes the changes are operational — new systems, new processes, new ways of collaborating. Sometimes they're more fundamental — new skills, new accountabilities, new incentive structures. In either case, the change is asking something real from real people, and those people will respond to what they experience, not to what they're told.

A few failure patterns I encounter repeatedly:

Adoption treated as a launch event. Technology projects have go-live dates. Change doesn't. The assumption that people will shift their behaviour on the date the system goes live — and will maintain that shift with minimal support — is one of the most expensive misbeliefs in transformation programmes. Adoption is a process, not a milestone.

Resistance misread as obstruction. When people resist a change, the instinct is often to escalate — more senior mandate, stronger communication, tighter compliance monitoring. Sometimes that's right. More often, the resistance is telling you something useful: the change, as designed, doesn't work for the people it affects; there's a legitimate concern that hasn't been addressed; or the change makes sense from the top but creates a real problem at the front line. Treating resistance as signal rather than noise is one of the most valuable skills in transformation leadership.

Insufficient attention to the middle. Senior leaders sponsor the change. Frontline teams adapt to it. Middle management — team leads, department heads, function managers — often gets the least deliberate attention and has the most influence over whether the change actually happens. They are the ones who translate strategy into daily behaviour for the people who report to them. If they're not genuinely on board — not just compliant but convinced — the transformation tends to stop at their level.

Benefits realisation divorced from change. Business cases for transformation programmes typically model the benefits of the new state. Those benefits only materialise if people actually work in the new state. Treating the technology delivery and the people change as separate workstreams — with separate owners and separate success metrics — is a structural guarantee that the connection will be weak.

What Effective Change Management Actually Looks Like

The programmes I've seen deliver durable change tend to start earlier, go deeper, and stay longer than the ones that don't.

It starts in the discovery phase. Understanding how the change will land — which groups will be most affected, where the legitimate concerns are, what the current state actually looks like for the people who'll be asked to change — is not something you can do retrospectively. Change impact analysis done early shapes design decisions that would otherwise be locked in by the time you realise they're wrong.

It's built on an honest stakeholder picture. Stakeholder mapping in most programmes is too optimistic. People are classified as "supportive" or "neutral" based on what they say in meetings with senior leaders, not on how they're likely to behave when the change is real. A more useful approach distinguishes between structural resistance (the change creates a genuine problem for this person or team that needs to be addressed) and personal resistance (concerns that can be resolved through better information and involvement). The former requires design changes; the latter requires engagement. Conflating them leads to the wrong response.

Change agents are identified and supported. Formal communications only go so far. The most effective channel for change is people hearing from trusted colleagues that the new way of working is better — or at least manageable. Identifying who the informal influencers are in each affected group and deliberately engaging them early is consistently one of the highest-return activities in a change programme. It's also one of the most frequently skipped.

Training is tied to the actual work, not the system. Most transformation training is system training: here is how to use the new tool. Effective change training is different: here is how your job works now, here is what's different, here is how to handle the situations you'll actually encounter. The gap between "I know how to use the system" and "I know how to do my job with the system" is where most adoption failures live.

Support doesn't end at go-live. The first few months after a major change goes live are when the adoption trajectory is set. Teams that get visible, accessible support in this period — a central team tracking issues, clear escalation paths, rapid response to blockers — tend to build momentum. Teams that feel abandoned after go-live tend to find workarounds, revert to old habits, or both.

Measuring Whether Change Has Stuck

This is an area where most programmes are weaker than they should be. Measuring adoption is not the same as measuring change.

Adoption metrics — how many people have logged into the system, how many training sessions were attended — are useful leading indicators but tell you nothing about behaviour change. The metrics that matter more are harder to collect: are people actually doing their jobs differently? Are the intended process changes happening? Are the early benefits materialising?

A framework I use with clients:

Leading indicators — system logins, training completion, early usage of new processes. These tell you whether people are engaging with the change.

Behavioural indicators — observed changes in how work is actually done, manager assessments of team practices, workflow data from the new systems. These tell you whether the change is being applied.

Outcome indicators — the business metrics the transformation was meant to move. These tell you whether the change is working.

Programmes that track only leading indicators tend to declare success prematurely. Programmes that wait for outcome indicators alone lose the ability to course-correct before it's expensive to do so. Running all three in parallel gives you the picture you actually need.
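For teams that want to operationalise this, the three tiers can be reviewed side by side in even a very simple tracker. A minimal sketch in Python — the metric names, values, and targets below are illustrative placeholders, not drawn from any specific programme:

```python
from dataclasses import dataclass

@dataclass
class Metric:
    name: str
    tier: str        # "leading", "behavioural", or "outcome"
    value: float
    target: float

    @property
    def on_track(self) -> bool:
        # Simplified: assumes higher is better for every metric.
        return self.value >= self.target

def adoption_report(metrics):
    """Group metrics by tier so all three layers are reviewed together,
    rather than declaring success on leading indicators alone."""
    report = {"leading": [], "behavioural": [], "outcome": []}
    for m in metrics:
        report[m.tier].append((m.name, m.value, m.target, m.on_track))
    return report

# Hypothetical snapshot three months after go-live.
metrics = [
    Metric("training_completion_pct", "leading", 92.0, 90.0),
    Metric("weekly_active_users_pct", "leading", 71.0, 80.0),
    Metric("new_process_usage_pct", "behavioural", 55.0, 70.0),
    Metric("first_contact_resolution_pct", "outcome", 62.0, 60.0),
]

for tier, rows in adoption_report(metrics).items():
    for name, value, target, ok in rows:
        print(f"{tier:12s} {name:30s} {value:5.1f} / {target:5.1f} {'on track' if ok else 'BEHIND'}")
```

The point of the structure, not the code, is the discipline: a snapshot like the one above makes it immediately visible when leading indicators look healthy while behavioural indicators lag — exactly the gap where premature declarations of success come from.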

Building Internal Change Capability

Many organisations import change management capability from consultancies or contractors for each transformation, then start from scratch the next time. This is expensive and slow, and it means the organisation never builds the muscle.

The alternative — investing in an internal change capability that persists across programmes — compounds in value. People who understand how the organisation changes, where the informal power lies, what has worked and what hasn't, are assets that outside partners can't replicate. Building that capability requires deliberate investment, but organisations that have done it consistently have a structural advantage in how quickly and reliably they can move.

This doesn't require a large team. I've seen organisations with a single skilled internal change lead outperform organisations with large external change workstreams, simply because the internal person understood the culture well enough to diagnose problems and adapt in real time, rather than applying a standard methodology and hoping for the best.

The Practical Starting Point

If you're running a transformation programme, or about to start one, a few questions worth asking early:

  • When does change management start in this programme, and who owns it?
  • Have we mapped the stakeholder landscape honestly — including who has structural reasons to resist?
  • Who are the informal influencers in the most affected groups, and how are we engaging them?
  • What does the support model look like in the first ninety days after go-live?
  • How will we know, six months after launch, whether the change has actually stuck?

Digital transformation is ultimately a change in how people work. The technology enables it; it doesn't deliver it. The programmes that treat people change as a core workstream — resourced and managed with the same rigour as the technical delivery — are the ones that produce the business outcomes that justified the investment in the first place.

If you're working through how to structure the change dimension of a transformation programme, I'm happy to share what I've seen work — and what I've seen fail.