April 8, 2024

Harnessing the Power of AI in Marketing Technology Companies

AI's impact on marketing technology isn't evenly distributed. Some capabilities are genuinely transformative — others are still more promise than practice. Here's what I've seen actually work.

AI in Marketing Technology: What's Working, and What's Still Hype

Every martech vendor has "AI" on its homepage. Every strategy deck mentions machine learning. The term gets used so broadly now that it's largely lost its signal value — which creates a real problem when you're trying to make actual investment decisions.

In my work with marketing technology companies and the brands that deploy their tools, I've had a front-row seat to what AI is doing for marketing operations in practice — and where the gap between pitch and reality is widest. Here's my honest read.

Where AI Is Genuinely Adding Value

Customer segmentation and personalization. This is the most mature application, and the one where ROI is easiest to measure. AI-driven segmentation — drawing from behavioural data, purchase history, and browsing patterns — is materially better than the manual or rules-based approaches it replaces. The organizations getting the most from it aren't just building finer-grained segments; they're using AI to adjust messaging and timing at a level of granularity that simply wasn't operationally feasible before.

Where I see this done well, it tends to share a common architecture: a unified customer data platform (or something that functions like one), a clear set of outcomes to optimise for, and a team that's willing to challenge its assumptions about what customer groups actually want. The AI provides analytical horsepower, but the commercial judgement still has to come from people who understand the business.
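
To make the mechanics concrete: the clustering step behind behaviour-based segmentation can be sketched in a few lines. This is an illustrative pure-Python k-means over two invented features (recency and purchase frequency); a production setup would use richer features and a proper library, and every name here is hypothetical.

```python
import random

def kmeans(points, k, iters=20, seed=7):
    # Minimal k-means: alternate between assigning points to their nearest
    # centroid and recomputing each centroid as the mean of its members.
    rng = random.Random(seed)
    centroids = rng.sample(points, k)
    assignments = [0] * len(points)
    for _ in range(iters):
        assignments = [
            min(range(k),
                key=lambda c: sum((p[d] - centroids[c][d]) ** 2
                                  for d in range(len(p))))
            for p in points
        ]
        for c in range(k):
            members = [p for p, a in zip(points, assignments) if a == c]
            if members:  # guard against an empty cluster
                centroids[c] = tuple(sum(vals) / len(members)
                                     for vals in zip(*members))
    return centroids, assignments

# Hypothetical features per customer: (days_since_last_purchase, orders_per_year)
customers = [(2, 30), (5, 24), (3, 28),    # recent, frequent buyers
             (90, 2), (120, 1), (80, 3)]   # lapsing, infrequent buyers
centroids, labels = kmeans(customers, k=2)
```

The sketch also illustrates the division of labour described above: the algorithm finds the groups, but deciding what to do with a "recent, frequent" segment versus a "lapsing" one is still commercial judgement.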

One pattern I see consistently: the data infrastructure needs to be in reasonable shape before AI can do useful work here. Organizations that jump to AI personalization before their data is clean and integrated tend to get disappointing results and conclude that AI "doesn't work" — when the actual problem is upstream. Fixing the data problem is unglamorous, but it's the precondition for everything else.
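
As a concrete version of that upstream check: before any personalization model touches the data, it's worth running something like the sketch below. The field names and thresholds are invented for illustration; the two checks map onto the problems described above — fragmented identities (integration) and sparse fields (cleanliness).

```python
def data_health_report(records, id_field="customer_id", max_null_rate=0.05):
    # Flag two upstream problems that routinely undermine AI personalization:
    # duplicate identities and fields that are mostly empty.
    issues = []
    ids = [r.get(id_field) for r in records]
    dup_count = len(ids) - len(set(ids))
    if dup_count:
        issues.append(f"{dup_count} duplicate {id_field} values "
                      "(identity resolution needed)")
    fields = {f for r in records for f in r}
    for f in sorted(fields):
        null_rate = sum(1 for r in records if r.get(f) in (None, "")) / len(records)
        if null_rate > max_null_rate:
            issues.append(f"field '{f}' is {null_rate:.0%} empty")
    return issues

# Hypothetical CRM export with one duplicate ID and a sparse email field.
records = [
    {"customer_id": 1, "email": "a@example.com", "last_purchase": "2024-03-01"},
    {"customer_id": 2, "email": None, "last_purchase": "2024-02-11"},
    {"customer_id": 2, "email": None, "last_purchase": "2024-02-11"},
    {"customer_id": 3, "email": None, "last_purchase": "2024-01-20"},
]
issues = data_health_report(records)
```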

Predictive analytics for campaign planning. Using historical data to forecast what's going to resonate — when, and with which audience — is an area where well-trained models genuinely outperform human intuition at scale. Churn prediction has become particularly strong: identifying customers likely to disengage early enough to do something about it consistently delivers measurable retention improvements.

Beyond churn, I've seen effective applications in lifetime value forecasting — knowing which newly acquired customers are likely to become high-value, and adjusting acquisition spend accordingly — in identifying seasonal or contextual triggers that precede conversion, and in flagging the point at which promotional depth starts cannibalising margin. These aren't glamorous use cases, but they're ones where the signal is clear and the feedback loop is tight enough to iterate meaningfully.

The pattern that works: narrow scope, clean data, specific success metric defined before launch. The pattern that fails: broad mandate ("use AI to improve our analytics"), no baseline, no clear owner for acting on the outputs.
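
A minimal sketch of what the narrow-scope pattern can look like in code, assuming a logistic churn score over three engagement features. The weights and threshold are placeholders — in practice they come from a model fitted to historical churn labels — and all names are hypothetical.

```python
import math

def churn_risk(days_since_last_open, opens_last_90d, tenure_months):
    # Logistic score over three engagement features. These coefficients are
    # illustrative placeholders, not fitted values.
    z = 0.05 * days_since_last_open - 0.15 * opens_last_90d - 0.02 * tenure_months
    return 1 / (1 + math.exp(-z))

def flag_at_risk(customers, threshold=0.5):
    # Return customer IDs whose score crosses the action threshold.
    return [cid for cid, feats in customers.items()
            if churn_risk(*feats) >= threshold]

# Hypothetical customers: (days_since_last_open, opens_last_90d, tenure_months)
customers = {
    "c1": (60, 0, 6),    # long silence, no opens: high risk
    "c2": (3, 25, 24),   # highly engaged: low risk
}
at_risk = flag_at_risk(customers)
```

The success metric is then unambiguous before launch: retention among flagged customers versus a holdout that received no intervention.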

Automation of repetitive execution. Email sequencing, A/B test management, bid optimisation in paid media — these are areas where AI has largely replaced human decision-making, for good reason. The decisions are high-frequency, data-driven, and don't require contextual judgement. Freeing marketing teams from this work so they can focus on strategy and creative direction is one of AI's cleaner wins, and most organizations are still underexploiting it.

Where I see the most headroom: organizations that have automated one or two channels but haven't connected them. The next step isn't usually adding more automation per channel — it's building the orchestration layer that coordinates across email, paid, and owned channels so they're working toward the same customer outcome rather than operating as independent optimisation engines.

Evaluating Vendor AI Claims

This deserves its own section, because the market has made it genuinely difficult to distinguish real capability from marketing language. A few tests I apply when evaluating AI claims in martech tools:

Ask for the counterfactual. Any vendor can show you results from customers using their AI. The relevant question is what results look like compared to a controlled baseline — without the AI, or using a rules-based equivalent. Good vendors can answer this. Vendors who can't usually have a reason.
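
The counterfactual question can be made precise. A sketch, assuming a simple two-arm holdout (AI-driven versus rules-based control) and a pooled two-proportion z-test; the numbers are invented.

```python
import math

def lift_vs_baseline(conv_ai, n_ai, conv_base, n_base):
    # Compare AI-arm conversions against a rules-based control arm.
    # Returns (relative_lift, z_score): the lift number is meaningless
    # without knowing whether the difference clears noise.
    p_ai, p_base = conv_ai / n_ai, conv_base / n_base
    p_pool = (conv_ai + conv_base) / (n_ai + n_base)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_ai + 1 / n_base))
    z = (p_ai - p_base) / se
    return (p_ai - p_base) / p_base, z

# Hypothetical holdout test: 10,000 customers per arm.
lift, z = lift_vs_baseline(conv_ai=540, n_ai=10_000, conv_base=450, n_base=10_000)
```

A vendor who can report results in this shape — with a real control arm and the sample sizes behind it — has done the work; a relative lift quoted without a baseline usually signals that they haven't.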

Ask who owns the model. There's a meaningful difference between a vendor that has built proprietary models on aggregated customer data across their entire install base and one that is essentially passing data through a third-party API. Neither is automatically better, but the implications for data privacy, model drift, and long-term differentiation are quite different.

Ask what the AI is actually doing. "AI-powered" often means a logistic regression trained on engagement data, which is perfectly useful but not the same as a large language model or a reinforcement learning system. Understanding the actual mechanism helps you evaluate whether the capability matches your use case — and whether the pricing reflects the technology.

Ask about failure modes. Any AI system will occasionally produce bad recommendations. How the vendor surfaces those failures, allows you to correct them, and uses corrections to improve the model is often more informative than how they describe the upside.

The Areas Where I'd Apply More Caution

AI content generation. Genuinely useful for first drafts, for scaling content volume, and for generating variants to test against each other. Not a replacement for editorial judgement, brand voice, or anything requiring genuine originality. The marketing teams I've seen get the most from generative AI treat it as an accelerant for human creators rather than a substitute for them. The teams that treat it as a cost-cutting shortcut tend to end up with cheaper-looking output that performs accordingly.

The more nuanced risk is brand erosion. AI-generated content can be difficult to distinguish from human-written content at a glance, which makes it easy to approve it without the scrutiny you'd apply to a writer's work. Organizations that don't build robust editorial review into their AI content workflows tend to discover the downside when something goes wrong — usually a piece that's technically competent but tonally off, or that makes a claim nobody would have signed off on if they'd read it carefully.

AI chatbots for customer support. Better than they were two years ago, but still brittle at the edges. Handled well, they reduce tier-1 load and provide faster responses at lower cost. Handled poorly, they damage customer relationships in ways that are expensive to repair. The implementation quality varies enormously. Organizations that treat chatbot deployment as a "switch it on and save money" exercise tend to regret it fairly quickly.

The implementations I've seen work have a few things in common: clear scope (the chatbot is explicitly designed to handle a specific set of query types, not everything), a low-friction escalation path to a human agent, and a feedback loop that routes difficult or mishandled conversations back to someone who can improve the model's behaviour. The ones that fail usually skip one or more of those — most often the escalation path, which is where the customer relationship damage tends to occur.
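
Those three ingredients — explicit scope, an escalation path, and a review loop — are cheap to encode. A toy sketch, in which every intent name, the threshold, and the keyword "classifier" are stand-ins for whatever the platform actually provides:

```python
SUPPORTED_INTENTS = {"order_status", "returns", "shipping_cost"}  # explicit scope
CONFIDENCE_FLOOR = 0.75  # below this, don't guess

def route(message, classify, review_queue):
    # Answer only in-scope, high-confidence queries; everything else goes to
    # a human, and the escalated message is logged for model review.
    intent, confidence = classify(message)
    if intent in SUPPORTED_INTENTS and confidence >= CONFIDENCE_FLOOR:
        return ("bot", intent)
    review_queue.append(message)  # feedback loop: someone reviews these
    return ("human", intent)      # low-friction escalation path

# Toy keyword matcher standing in for a real intent model.
def toy_classify(message):
    if "order" in message.lower():
        return ("order_status", 0.92)
    return ("billing_dispute", 0.40)

review_queue = []
routes = [route(m, toy_classify, review_queue)
          for m in ("Where is my order?", "I was double charged")]
```

The structural point is that escalation is the default, not the exception: the bot has to earn the right to answer, which is the opposite of the "switch it on and save money" posture.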

What Makes the Difference

In my experience, the gap between organizations that get measurable value from AI in marketing and those that don't comes down to three things:

Data quality. AI amplifies both the strengths and the flaws of the data it's working with. Getting this right is the least glamorous part of any AI initiative, and the most important. "Good enough" data quality for human analysis is often not good enough for AI — the models will find the patterns in whatever you give them, including your errors and inconsistencies.

Specific problem framing. "Use AI to improve marketing" is not a strategy. "Use AI to reduce email unsubscribe rates by identifying fatigued segments before they churn" is a project. The more precisely you define the problem, the more tractable the solution — and the clearer the success metric. I've seen AI pilots fail not because the technology didn't work, but because nobody agreed on what success meant before it launched.

Measurement discipline. Too many AI marketing initiatives fail to establish a proper baseline before launch, making it impossible to demonstrate or learn from the results. This matters both for proving ROI internally and for iterating intelligently. I've seen strong AI implementations lose organizational support simply because nobody tracked the right things from the start. Set your measurement framework before you turn anything on.

There's a fourth factor worth naming: organizational capability. AI in marketing requires people who understand both the business and the tools well enough to configure, evaluate, and iterate on them. Organizations that treat AI as a plug-and-play solution and don't invest in building this capability internally tend to find themselves permanently dependent on vendors to make changes, which limits how quickly they can move and how much they can learn.

Looking Ahead

The next meaningful shift in AI for marketing is moving from tools that analyse and recommend to systems that act — running campaigns, adjusting strategy, and closing the loop between insight and execution with limited human intervention. I've written more about this direction in my piece on agentic AI. The data foundations and AI literacy being built now will determine which organizations are positioned to take advantage of it when it matures.

If you're evaluating AI capabilities in your marketing technology stack — or trying to separate genuine value from vendor claims — I'm happy to share what I'm seeing across the market.