June 10, 2025

Getting Your Organisation Ready for AI

Most organisations try to adopt AI before they're genuinely ready for it. Here are the four foundations — data, skills, governance, and culture — that need to be in place first, and how to assess honestly where you stand.

There's a pattern I keep encountering across different sectors and organisation sizes: significant investment in AI tools, mediocre returns, and a conclusion that AI "isn't delivering." In almost every case, when you trace the failure back to its root, the AI itself isn't the problem. The organisation wasn't ready for it.

Readiness isn't a gatekeeping concept — it's not about being perfect before you start. But there's a meaningful difference between organisations that have done the preparatory work and those trying to shortcut it, and that difference shows up consistently in outcomes. The organisations getting real value from AI in 2025 didn't get lucky with their vendor selection. They built the right foundations first.

Here are the four things I look at when assessing whether an organisation is genuinely positioned to get value from AI investment.

Data: The Non-Negotiable Foundation

I've written before about data quality as the precondition for AI in marketing. The same principle applies across every function. AI models — whether they're being used for analytics, automation, or content generation — are only as good as the data they're working with. They will find patterns in whatever you give them, including your errors and inconsistencies.

But data readiness is about more than quality. Three dimensions matter:

Accessibility. Can the data that AI would need to do useful work actually be reached? Many organisations have rich data locked in legacy systems, inconsistently structured across business units, or behind integration gaps that were never worth fixing before AI made them costly. An AI initiative is often the first time anyone has needed to consolidate data that's been siloed for years. Discovering those gaps mid-project is expensive.

Completeness. AI works best when it has sufficient volume and coverage to learn from. Thin data — a short time series, a product catalogue with inconsistent attributes, a CRM with patchy contact history — produces unreliable outputs. Before investing in AI capability, it's worth being honest about whether the underlying data is rich enough to support it.

Governance. Knowing what data you have, where it lives, who owns it, and what you're permitted to use it for is foundational. GDPR and equivalent regulations have made data governance a compliance matter, but it's also a practical one: AI projects that surface data governance issues mid-flight lose months to remediation.

The honest assessment: most organisations overestimate their data readiness. A brief audit — which data sources exist, which are clean and integrated, which are accessible — is a worthwhile investment before any AI procurement conversation.
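
To make that concrete, here is what a first pass over a single source might look like: a minimal sketch in Python with pandas, where the file name, column names, and thresholds are illustrative assumptions rather than recommendations.

    import pandas as pd

    # Illustrative thresholds; tune these to your own risk tolerance.
    MAX_NULL_RATE = 0.10   # flag columns that are more than 10% empty
    MIN_ROWS = 5_000       # flag sources too thin to learn from

    def audit_source(name, df, date_col=None):
        """Profile one data source for volume, patchiness, and coverage."""
        null_rates = df.isna().mean()
        report = {
            "source": name,
            "rows": len(df),
            "too_thin": len(df) < MIN_ROWS,
            "patchy_columns": null_rates[null_rates > MAX_NULL_RATE].index.tolist(),
        }
        if date_col is not None:
            # How far back, and how recently, does the history actually run?
            dates = pd.to_datetime(df[date_col], errors="coerce")
            report["date_coverage"] = (dates.min(), dates.max())
        return report

    # Hypothetical CRM export; substitute the sources your initiative depends on.
    crm = pd.read_csv("crm_contacts.csv")
    print(audit_source("crm", crm, date_col="last_contacted"))

Running something this lightweight across every source a proposed initiative depends on tends to surface the thin and patchy data early, while it's still cheap to know about.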

Skills: Literacy at Every Level

The skills gap in AI adoption is usually framed as a technical problem — not enough engineers, not enough data scientists. That's real, but it's the easier part to solve. The more limiting constraint I encounter is a lack of AI literacy in the people who need to direct, evaluate, and oversee AI systems.

Effective AI deployment requires at least three skill profiles across the organisation:

Technical capability. The people who build, configure, and maintain AI systems. Depending on what you're building, this might mean data engineers, ML engineers, or developers comfortable with AI APIs and tooling. For many organisations, this is a mix of internal capability and specialist partners.

Domain-AI translation. People who understand both the business problem and enough of the AI landscape to connect them. This is often the hardest profile to find — and the most important. A marketing director who can evaluate whether an AI vendor's claims are credible, or a finance analyst who can frame a forecasting problem correctly for a data science team, creates enormous leverage. You don't need many of these people, but you need some.

Oversight and evaluation capability. AI systems produce outputs that need to be checked, challenged, and refined. The people doing that work — reviewing recommendations, catching edge cases, deciding when to trust the model and when to override it — need enough understanding of how the system works to do that well. This is a skill that most organisations aren't deliberately building.

The risk of ignoring the skills question is that AI tools get deployed without the human judgment around them that makes them safe and effective. That's where most of the high-profile AI failures I've seen originate — not bad models, but inadequate oversight.

Governance: Lightweight but Real

AI governance has developed a reputation for being either excessive bureaucracy or security theatre. Neither extreme is useful. What's actually needed is a small number of decisions made explicitly and early, so that AI projects don't have to relitigate them each time.

The decisions that matter most:

Risk appetite. What can AI systems decide autonomously, and what requires human approval? A useful mental model: start with AI that recommends and humans that decide, then selectively expand AI autonomy in areas where the failure modes are well understood and low-risk. Knowing your risk appetite upfront means you can design oversight into AI systems from the start, rather than bolting it on after something goes wrong.
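
As a sketch of that recommend-then-decide pattern, assuming a hypothetical Recommendation shape, action names, and confidence threshold:

    from dataclasses import dataclass

    @dataclass
    class Recommendation:
        action: str
        confidence: float  # the model's own score, 0.0 to 1.0

    # Autonomy only where failure modes are well understood and low-risk;
    # everything else defaults to a human. Action names are hypothetical.
    LOW_RISK_ACTIONS = {"send_renewal_reminder"}
    AUTO_APPROVE_THRESHOLD = 0.95

    def decide(rec):
        if rec.action in LOW_RISK_ACTIONS and rec.confidence >= AUTO_APPROVE_THRESHOLD:
            return "execute"              # AI acts autonomously
        return "queue_for_human_review"   # default path: a person decides

    print(decide(Recommendation("send_renewal_reminder", 0.97)))  # execute
    print(decide(Recommendation("adjust_credit_limit", 0.99)))    # queued for review

The design choice that matters is the default: anything not explicitly on the well-understood, low-risk list routes to a person.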

Data usage rights. Which data can be used to train or inform AI systems? Which customer or employee data requires explicit consent before being processed by an AI? These questions need answers before a project starts, not after a vendor asks for a data export.

Accountability. When an AI system produces a bad output — a biased recommendation, an incorrect analysis, a harmful communication — who is responsible? Clear accountability structures prevent the diffusion of responsibility that tends to follow AI incidents.

Vendor due diligence. Not all AI vendors handle data the same way. Some train shared models on your data; some don't. Some have meaningful security certification; others have a marketing page. A lightweight vendor assessment process that asks the right questions (as I outlined in my martech AI piece) is worth codifying rather than reinventing per project.
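
That codification can be as simple as a shared question set that every project runs against. A minimal sketch; the questions are examples of the kind worth asking, not a complete due diligence standard.

    # Example questions only; extend with your own legal and security requirements.
    VENDOR_CHECKLIST = [
        "Is our data used to train shared or cross-tenant models?",
        "Can training on our data be contractually disabled?",
        "Where is our data processed and stored, and by which sub-processors?",
        "Which security certifications are current (e.g. ISO 27001, SOC 2)?",
        "What happens to our data on contract exit?",
    ]

    def assess_vendor(name, answered):
        """Flag every question without a documented, satisfactory answer."""
        for question in VENDOR_CHECKLIST:
            status = "OK " if answered.get(question) else "GAP"
            print(f"[{status}] {name}: {question}")

    # Hypothetical vendor with only one question answered satisfactorily.
    assess_vendor("ExampleVendor", {VENDOR_CHECKLIST[1]: True})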

Culture: The Hardest Foundation to Build

Data, skills, and governance are difficult but tractable. Culture is harder, and it matters more than organisations typically acknowledge.

A few cultural conditions that consistently distinguish organisations that adopt AI well from those that don't:

Willingness to change processes, not just add tools. AI is most valuable when it changes how work gets done, not just when it automates the existing steps. Organisations that treat AI as a layer on top of unchanged processes tend to capture a fraction of the available value. The ones that ask "if AI can do this, what should we do differently?" tend to capture much more.

Tolerance for incremental progress. AI adoption is an iterative process. The first version of an AI system rarely performs as well as the tenth. Organisations that demand transformative results from a pilot — and pull the plug when the pilot delivers incremental improvement — rarely build the compounding capability that later produces the transformative results. Setting realistic expectations at the senior level is an underrated governance task.

Psychological safety around failure. AI experiments produce failures. Models give wrong answers, edge cases surface, users find unexpected ways to interact with systems. Organisations where teams are penalised for surfacing those failures tend to suppress them, which means the failures compound rather than being corrected. A culture where people are willing to report that something isn't working is a prerequisite for iterating intelligently.

Leadership credibility. AI adoption that isn't visibly supported by senior leadership tends to stall at the pilot stage. Not because people are obstructionist, but because competing priorities will always crowd out initiatives that don't have a clear mandate from the top. I've seen technically strong AI implementations wither because the executive sponsor moved on. That's a culture and governance failure, not a technology one.

A Practical Assessment

Before your next AI investment, a few questions worth answering honestly:

  • Which data sources would this initiative depend on? Are they clean, accessible, and permissioned?
  • Who in the organisation will evaluate and oversee the outputs? Do they have the literacy to do that well?
  • What are the failure modes, and who is accountable if they occur?
  • Is the relevant leadership visibly and actively supporting this?
  • Are we prepared to change the process, or are we automating the existing one?

If the answers reveal gaps, that's not a reason to stop — it's a roadmap for where to start. The organisations that build these foundations before committing to large AI platforms tend to move faster and waste less once they do commit, because the preparatory work eliminates the delays that derail less-prepared efforts.

If you're working through an AI readiness assessment or trying to build the case internally for foundational investment before platform investment, I'm happy to share frameworks from what I've seen work.