I wrote recently about why influencer marketing measurement is broken. The short version: most companies track reach, impressions, and affiliate codes, then wonder why the numbers don’t match the impact they can feel but can’t prove. The attribution models we’ve inherited from performance marketing weren’t built for how creator influence actually works.

If you want a practical starting point alongside this framework, I also built a creator marketing ROI calculator that models direct revenue, HDYHAU-influenced signups, and blended CAC together.

But identifying the problem is the easy part. The harder question is: what should you actually measure instead? And the answer, frustratingly, is that it depends on what you’re trying to achieve.

That might sound like a cop-out. It isn’t. It’s the single most important insight I’ve picked up from running creator programmes across multiple companies. The reason so many teams struggle with influencer ROI isn’t that they’re measuring badly. It’s that they’re measuring the wrong things for the goal they’re actually pursuing. A brand awareness campaign and a direct acquisition campaign require completely different measurement approaches. Treating them the same is where everything falls apart.

The goal determines the measurement

Here’s what I mean. Say you’re a SaaS company running a creator campaign with the primary goal of getting your name in front of a new audience. You’re targeting a niche community, working with a few trusted creators in that space, and the objective is awareness. People knowing you exist and associating your brand with something credible.

In that scenario, obsessing over affiliate code redemptions is pointless. You’re measuring a conversion metric for a campaign that wasn’t designed to convert. Of course the numbers will look underwhelming. But that doesn’t mean the campaign failed. It means you’re holding the wrong scorecard.

A Morning Brew and Tracksuit study from earlier this year surveyed over 250 marketers about how they’re using creators. The tension they found is telling: 82% of marketers say they use creators primarily for brand awareness, but when asked about their overall marketing goals beyond creators, 54% cite direct sales as a top priority. Only 29% actually use creators to drive sales. There’s a disconnect between what companies want creators to do (drive revenue) and what they actually deploy them for (top of funnel). And 42% of respondents cited measurement as their single biggest barrier.

What creators are deployed for vs what companies actually measure

That gap isn’t going to close until teams get more intentional about matching their measurement approach to their campaign goal. So let me walk through how I’d think about this at each level of the funnel.

When the direction is right but the execution is wrong

Before I get into the measurement frameworks, I want to talk about a pattern I’ve seen play out more than once, because it perfectly illustrates what happens when measurement thinking goes sideways.

Imagine a company that’s had real success with a partner and creator programme. They’ve built relationships with creators in their niche, they’ve seen the ecosystem effects compound, and leadership is bought in. The channel works. So the decision gets made to go big. Double or triple the budget, commit serious money to creator partnerships.

That’s the right instinct. But here’s where it goes wrong.

Instead of scaling the programme by building out more tiers, diversifying across creators of different sizes, and investing in the measurement infrastructure to track impact properly, the budget gets concentrated. Maybe it’s $300,000 going to ten big-name partners, with the expectation that each one will deliver a measurable CAC payback within six or seven months. The partners are essentially being asked to do the same thing they’ve always done, but every month, at a much higher price point, with direct conversion expectations attached.

Why linear scaling doesn’t work for creator programmes: concentrated vs diversified approaches

I’ve watched this scenario unfold, and it creates problems on multiple levels.

First, you’re putting all your eggs in one basket. Ten partners, no matter how influential, is a concentrated bet. If two or three underperform (and in any portfolio of creator partnerships, some always will), your entire programme looks like it’s failing. A diversified approach with creators at different audience sizes and engagement levels gives you far more resilience and far more data to learn from.

Second, you’re applying conversion metrics to what’s fundamentally an awareness and trust channel. Yes, the best partners will drive trackable conversions. But expecting each one to hit a specific CAC target on a monthly basis is forcing creator marketing into a performance marketing box. It’s the exact measurement mistake this entire article is about. The value of a top-tier creator isn’t just in the affiliate codes they redeem. It’s in the credibility transfer, the community trust, the conversations they spark that you’ll never see in your analytics. Those things are real. They just don’t show up on a spreadsheet with a seven-month payback column.

Third, it puts the marketing team in an impossible position. The person running the programme knows the channel works differently from how it’s being measured, but they’re being held to metrics that don’t match the reality. So they end up caught between what they know to be true about how influence operates and what they’re being asked to prove in a board report. That’s a recipe for a good marketer burning out and a good channel getting killed.

The frustrating thing is that the underlying belief is correct. Creator and partner marketing does work. Investing more in it is the right call. But “invest more” doesn’t mean “concentrate budget on a handful of big names and demand direct conversion metrics.” It means build the programme wider, measure it smarter, and give it time to compound.

This is where the tension between marketing and the rest of the business often lives. Someone outside the marketing function, whether that’s a founder, a CFO, or an ops lead, looks at the numbers and sees a clean line: we spent $X on partners, we got Y conversions, so if we spend 10X we’ll get 10Y. It’s logical. It’s how you’d think about it if you were modelling a paid ads channel in a spreadsheet. But marketing doesn’t scale linearly like that, and creator programmes especially don’t. There’s a whole layer of nuance underneath those top-line numbers. The programme design, the creator mix, the audience overlap, the content cadence, whether the partnerships are mature enough to bear that level of investment. You can’t see any of that in a pivot table.

This is why you need experienced marketing practitioners making these calls, not just people who are good with numbers. I don’t say that to gatekeep. I say it because I’ve lived the consequences. Part of the job as a head of marketing is exactly this: balancing the strategic decisions about where to increase spend, when the timing is right, and whether the programme is ready for it. Sometimes doubling down on partners absolutely makes sense. But you need the right mix in your spend, the right measurement framework underneath, and the right stage of maturity in your programme to support it. Everything has nuance, and the nuance is where the value of a good marketer lives.

If you’re a founder or exec reading this, here’s my honest advice: trust your marketing team on the how. If they’re telling you the programme needs diversification and that the measurement approach needs to match what the channel actually does, listen to them. The instinct to go big is good. The instinct to make it look like a paid ads spreadsheet will undermine everything.

Measuring brand awareness campaigns

When the goal is awareness, you’re trying to answer a deceptively simple question: are more of the right people hearing about us because of this creator activity?

The instinct is to look at reach and impressions. And look, those numbers aren’t useless. They give you a rough sense of distribution. But they’re directional at best. A post that reaches 500,000 people tells you nothing about whether any of them were your target customer, whether they noticed your brand, or whether they’ll remember it tomorrow.

Here’s what actually works for measuring awareness impact.

Brand search lift is one of the most underrated proxies available to you. The logic is simple: if a creator talks about your product and it resonates with their audience, some portion of those people will go search your brand name. You can track this in Google Search Console or Google Trends. Run a creator campaign, then watch what happens to branded search volume in the days and weeks that follow. If you see a lift that correlates with the content going live, that’s a real signal. It means people were moved enough by what they saw to actively seek you out.
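In practice, the "watch what happens to branded search volume" step can be as simple as comparing the daily average before and after the content goes live. Here's a minimal sketch; the numbers are hypothetical daily branded-search impressions you'd export from Google Search Console, and the launch-day index is an assumption for illustration:

```python
from statistics import mean

# Hypothetical daily branded-search impressions exported from
# Google Search Console. The creator content went live on day 15.
daily_impressions = [210, 198, 225, 204, 190, 215, 208,   # baseline week 1
                     199, 221, 207, 195, 212, 203, 218,   # baseline week 2
                     310, 342, 295, 280, 265, 250, 244]   # week after launch

launch_day = 14  # index of the first post-launch day (illustrative)
baseline = mean(daily_impressions[:launch_day])
post_launch = mean(daily_impressions[launch_day:])

# Relative lift in average daily branded searches after the content went live
lift = (post_launch - baseline) / baseline
print(f"Baseline: {baseline:.0f}/day, post-launch: {post_launch:.0f}/day, "
      f"lift: {lift:.0%}")
```

A real version would want a longer baseline window and some allowance for weekly seasonality, but even this crude comparison is enough to spot the 24-to-48-hour bumps described below.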

At Narrative, this was one of the clearest indicators we had. When a well-aligned creator published a piece about us, we’d see a noticeable bump in brand searches, often within 24 to 48 hours. The size of the bump didn’t always correlate with the creator’s audience size, which reinforced everything we learned about smaller creators driving outsized impact. A niche photographer with 15,000 followers in exactly the right community would often generate a bigger search lift than a creator with ten times the reach but a broader, less relevant audience.

Brand tracking tools like Tracksuit can measure awareness, consideration, and preference over time across your category. This isn’t cheap, and it’s more practical for companies with meaningful budgets, but it gives you a longitudinal view that point-in-time metrics can’t. You’re not asking “did this one post work?” You’re asking “is our brand awareness trending up over the period we’ve been investing in creators?” That’s a much more useful question for a channel that compounds over time rather than converting in a single click.

Qualitative signals still matter enormously. Are people mentioning your brand unprompted in community groups, forums, or subreddits? Are you showing up in conversations you didn’t start? At Narrative, I’d regularly monitor photographer communities on Facebook and Reddit. Seeing someone recommend our product without being prompted, without being part of our creator programme, was one of the strongest signals that the ecosystem was working. You can’t put that in a dashboard, but it’s real, and it matters.

Measuring consideration and traffic

The middle of the funnel is where things get more trackable but also more deceptive. This is the “people are checking you out” phase. Visiting your site, reading your content, looking at pricing, maybe signing up for a newsletter.

The standard approach is UTM-tagged links, and they work fine for what they measure. If a creator includes a tracked link and someone clicks it, you’ve captured that visit. The problem, as I’ve written about before, is that most creator-driven traffic doesn’t come through a tracked link. People see a creator’s content, then Google your brand name later, or type your URL directly into their browser. Your analytics shows “direct” or “organic search” traffic. The creator gets no credit.

This is the dark funnel problem. The portion of the buyer journey that happens in private channels and untrackable spaces. Slack conversations, WhatsApp groups, DMs, word of mouth at events, someone leaning over to a colleague and saying “have you tried this tool?” All of that is real influence. None of it shows up in your attribution model.

So what do you do about it?

HDYHAU tracking (How Did You Hear About Us) remains the single highest-ROI measurement tool you can implement. I’ve talked about this extensively in the measurement post, but it’s worth repeating here because it bridges the gap between awareness and consideration like nothing else can. A simple self-reported field on your signup or contact form gives you signal that no amount of analytics configuration can replicate. At Narrative, HDYHAU data consistently showed that creators and community were driving a far larger share of our pipeline than our analytics dashboard suggested.

Referral traffic patterns are worth watching even beyond UTM links. Look at your referral sources after a creator campaign goes live. Are you seeing traffic from platforms where the content was published? Even without UTM tracking, a spike in referral traffic from YouTube or Instagram in the hours after a video goes live tells you something.

Engagement quality on your site is another angle. When creator-referred visitors land on your site, do they behave differently from paid traffic? At Narrative, we noticed that visitors who came through creator content tended to view more pages, spend more time on the site, and convert at a higher rate than paid ad traffic. We couldn’t always attribute them to a specific creator, but the behavioural pattern was consistent and unmistakable. If you’re running creator campaigns and your overall site engagement metrics improve during that period, that’s signal.

Measuring conversion and revenue impact

This is where most companies start, which is the problem. They jump straight to “how many signups did this creator drive?” without building the awareness and consideration measurement underneath. The result is that creators look expensive on a per-conversion basis because you’re only capturing the small fraction of their impact that flows through a directly trackable path.

But you do still need to measure conversion impact. Here’s how I’d approach it.

Affiliate codes and tracked links are the baseline. They’re imperfect (I’ve explained why at length) but they capture real signal. Someone who uses a creator’s discount code almost certainly came from that creator. That’s valuable data. Just don’t make the mistake of treating it as the total picture. It’s one data point in a broader measurement stack.

CAC by channel is where the strategic insight lives. At Narrative, we tracked our customer acquisition cost at the channel level and overall. This let us see whether adding creator investment to the mix was improving or worsening our blended CAC. Over time, what we found was that increasing our creator spend actually brought blended CAC down, even though the creator channel in isolation looked expensive when measured purely on affiliate conversions. The reason is that creators were warming up audiences that later converted through other channels. They made everything else work better.
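The arithmetic behind that finding is worth making concrete. In this sketch (all figures hypothetical, and "creators" counts only directly trackable affiliate conversions, which understates the channel), the creator channel looks expensive in isolation while the blended number tells a different story:

```python
# Hypothetical monthly spend and tracked customer counts per channel.
# Creator conversions here are only the affiliate-code signups, so the
# in-isolation creator CAC is inflated by everything that goes untracked.
channels = {
    "paid_ads": {"spend": 50_000, "customers": 250},
    "creators": {"spend": 20_000, "customers": 40},
    "organic":  {"spend": 5_000,  "customers": 150},
}

for name, c in channels.items():
    print(f"{name}: CAC = ${c['spend'] / c['customers']:.0f}")

# Blended CAC: total spend across all channels / total customers acquired
blended = (sum(c["spend"] for c in channels.values())
           / sum(c["customers"] for c in channels.values()))
print(f"blended CAC = ${blended:.0f}")
```

The useful exercise is tracking that blended number over time as creator spend scales: if creators are warming up audiences that convert elsewhere, the blended CAC trend captures it even when channel-level attribution can't.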

This is something Rand Fishkin has talked about a lot. The idea that some channels are “assist” channels that rarely get last-click credit but dramatically improve the performance of the channels that do. Creator marketing is, in my experience, one of the strongest assist channels that exists. But you’ll never see that if you measure each channel in isolation.

LTV comparison is an angle that’s overlooked way too often. Are the customers who come through creator content retaining better than customers from other channels? Do they have higher lifetime value? In most cases I’ve seen, the answer is yes, because they arrive with trust already established. They weren’t persuaded by an ad. They were recommended by someone they trust. That leads to a fundamentally different customer relationship. If your creator-referred customers have 20–30% higher LTV, your CAC calculations look completely different.
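One way to see how a higher LTV "changes the equation" is to compute the maximum CAC you can afford at a fixed LTV:CAC target. A sketch, assuming an illustrative $600 baseline LTV, a 25% retention premium for creator-referred customers, and a common 3:1 target ratio (all three numbers are assumptions, not figures from this article):

```python
target_ratio = 3.0                  # common LTV:CAC target (assumption)
baseline_ltv = 600                  # hypothetical LTV of a paid-ads customer
creator_ltv = baseline_ltv * 1.25   # assume creator-referred customers retain ~25% better

# Affordable CAC ceiling at the target ratio, per cohort
max_cac_paid = baseline_ltv / target_ratio
max_cac_creator = creator_ltv / target_ratio
print(f"Affordable CAC: paid ${max_cac_paid:.0f}, creator ${max_cac_creator:.0f}")
```

The same retention premium that looks like a rounding error in a monthly report raises the affordable acquisition cost proportionally, which is exactly why judging creator CAC against a paid-ads benchmark misleads.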

HDYHAU-to-revenue mapping closes the loop. If you’re collecting HDYHAU data at signup and you can match that to downstream revenue in your CRM or billing system, you now have a direct line from “this customer told us they found us through Creator X” to actual revenue. It’s not scalable in a fully automated sense, but it’s the most accurate view of creator revenue impact you can get. At Narrative, this data was what gave us confidence to keep investing in the programme when the affiliate numbers alone might have had us pulling back.
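The mapping itself is just a join on customer ID between two exports you probably already have. A minimal sketch with made-up data (the customer IDs, sources, and revenue figures are all hypothetical):

```python
from collections import defaultdict

# Hypothetical exports: HDYHAU answers from the signup form and
# revenue per customer from billing, keyed on the same customer ID.
hdyhau = {
    "cus_01": "Creator X video",
    "cus_02": "Google search",
    "cus_03": "Creator X video",
    "cus_04": "Friend recommended",
}
revenue = {"cus_01": 1_200, "cus_02": 400, "cus_03": 900, "cus_04": 650}

# Sum downstream revenue by self-reported acquisition source
revenue_by_source = defaultdict(int)
for cus_id, source in hdyhau.items():
    revenue_by_source[source] += revenue.get(cus_id, 0)

for source, total in sorted(revenue_by_source.items(), key=lambda kv: -kv[1]):
    print(f"{source}: ${total}")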

The full-funnel view

The real unlock isn’t perfecting measurement at any single layer. It’s building a stack that gives you signal across the full funnel, from awareness through to revenue, and looking at it as an integrated picture.

Ekimetrics, who work with brands like Estée Lauder on marketing measurement, frame this as three layers: desirability (did creator content drive people to search for your brand?), consideration (did it drive them to your site or product page?), and conversion (did it drive actual sales?). Each layer uses different data sources and metrics. The insight isn’t in any single number. It’s in how the layers connect.

I’d simplify their framework into what I think of as a measurement stack for creator programmes:

The four-layer measurement stack for creator marketing

Layer 1: Awareness signals. Brand search lift, social mentions, community chatter, share of voice in your category. Are more people becoming aware of you?

Layer 2: Interest signals. Site traffic patterns, HDYHAU data, content engagement, email signups. Are people actively investigating you?

Layer 3: Conversion signals. Affiliate codes, tracked link conversions, HDYHAU-to-revenue mapping, CAC trends. Are people buying?

Layer 4: Retention signals. LTV by acquisition channel, NPS from creator-referred customers, repeat purchase rates. Are creator-referred customers sticking around?
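If it helps to see the four layers as one artefact rather than four separate reports, here's what a spreadsheet-style monthly scorecard might look like in code. The metric names and values are illustrative, not a prescribed schema:

```python
# A minimal monthly scorecard across the four layers. Every figure here
# is a hypothetical example; swap in whatever signals you actually track.
stack = {
    "awareness":  {"branded_search_lift": 0.22, "unprompted_mentions": 14},
    "interest":   {"hdyhau_creator_share": 0.31, "site_visits": 8_200},
    "conversion": {"affiliate_signups": 46, "tracked_creator_cac": 480},
    "retention":  {"creator_ltv_vs_avg": 1.25},
}

# Print one line per layer so the full-funnel picture reads top to bottom
for layer, signals in stack.items():
    summary = ", ".join(f"{k}={v}" for k, v in signals.items())
    print(f"{layer}: {summary}")
```

The point isn't the code; it's that a single record per month, with a couple of signals per layer, is enough structure to see whether the layers are moving together.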

You don’t need a massive analytics infrastructure to start measuring this way. A lot of it can be done with Google Search Console, an HDYHAU field, your existing analytics, and a spreadsheet. It’s not elegant, but it works. I know because we made it work at Narrative with a team of three.

What about Marketing Mix Modelling?

MMM (Marketing Mix Modelling) is the sophisticated version of what I just described. Instead of eyeballing correlations between creator activity and business outcomes, MMM uses statistical models to quantify the incremental contribution of each marketing channel, including creators.

The industry is moving in this direction fast. Google’s open-source MMM tool Meridian, released in early 2025, has lowered the barrier significantly. And companies like Ekimetrics have built specific capabilities around integrating influencer data into MMM frameworks.

For most companies reading this, full MMM is probably overkill right now. But it’s worth knowing that the measurement infrastructure is catching up. The challenge has always been that influencer marketing generates less structured data than paid media. A Meta ad gives you clean impression, click, and conversion data. A YouTube sponsorship gives you a rough view count and maybe some affiliate clicks. MMM platforms are getting better at ingesting the messy, multi-touch data that creator campaigns produce. Give it another year or two, and I think this will be much more accessible for mid-market companies.

In the meantime, the layered measurement stack I described above is essentially a manual version of what MMM automates. Start there. Build the data habits. When the tooling catches up, you’ll be ready.

The measurement mindset shift

The biggest change that needs to happen isn’t about tools or frameworks. It’s about how marketing teams think about creator measurement in the first place.

Most teams evaluate creators the way they evaluate paid ads, as a performance channel where every dollar should trace to a measurable outcome within a defined window. That framing guarantees disappointment because it ignores the 80% of creator value that lives in awareness, trust, social proof, and long-tail content that keeps working months or years after it’s published.

The companies getting the most from creator marketing are the ones that have shifted their measurement mindset from “prove every dollar” to “build confidence directionally.” They use layered measurement to build a clear picture of whether their creator investment is working. Not by tracking every conversion, but by seeing consistent signal across awareness, traffic, conversion, and retention metrics over time.

That’s a harder case to make in a board meeting than “we spent $X on Creator Y and got $Z in revenue.” But it’s a more honest one. And in my experience, the companies that learn to measure creator marketing on its own terms, rather than forcing it into a performance marketing box, end up investing more confidently and getting better results.

I had a conversation recently with someone running marketing at a large B2B enterprise SaaS company, and he shared an approach I thought was really smart. His team runs community events and meetups, the kind of activity that’s nearly impossible to tie directly to new customer acquisition. Instead of trying to force those events into a CAC calculation (and inevitably making them look expensive), they fund them out of a customer retention budget under a completely different team. The logic is that these events strengthen relationships with existing customers, who then become advocates who bring in new business. It works because they’ve recognised that trying to measure community and ecosystem activity purely through a new-customer lens misses the point. Existing users attending these events alongside potential users is one of the most powerful sales tools you can have, because a happy customer telling a prospect “this product changed how I work” is worth more than any case study you could produce.

That kind of creative thinking about where activity sits in your budget, and how you measure its impact, is exactly the mindset shift I’m talking about. Not every marketing activity needs to sit in the same measurement box. The best companies find ways to account for the full value of what they’re doing, even if it means the budget comes from a line item that doesn’t say “acquisition.”

I talked in the ecosystem marketing guide about how ecosystem effects compound over time. The same is true for measurement. The longer you track these signals, the clearer the patterns become. Month one might be noise. Month six starts to look like a trend. Month twelve, you’ve got a dataset that fundamentally changes how you allocate budget.

Where I’d start if I were you

If you’re reading this thinking “OK but what do I actually do Monday morning,” here’s the practical version.

First, get clear on the goal of each creator campaign you’re running. Not the overall goal of your creator programme, but the specific goal of each campaign or partnership. Is it awareness? Consideration? Direct conversion? The measurement approach follows from that decision.

Second, implement HDYHAU on your signup flow today if you haven’t already. This takes fifteen minutes and will give you more actionable data than any analytics tool you could buy.

Third, start watching branded search volume around your creator campaigns. Set up a simple process, even if it’s just checking Google Search Console weekly, to see whether creator activity correlates with search lift. Over time, this becomes one of your most reliable leading indicators.

Fourth, track CAC at the channel level and overall. The blended CAC trend over time, as you scale creator investment, is more telling than any single campaign’s conversion numbers.

Fifth, look at the quality of customers, not just the quantity. LTV by acquisition source. Retention rates. NPS scores. If creator-referred customers are fundamentally better customers, that changes the entire ROI equation.

And sixth, be patient. Creator marketing compounds. The measurement needs time to show the compounding too. Give yourself at least six months of consistent tracking before making big strategic calls about whether the channel is working.

This is the measurement gap I’m trying to close with a project I’m working on. The data to measure creator marketing properly exists. It’s just scattered across a dozen tools and requires too much manual work to stitch together. More on that soon. But in the meantime, the framework above will get you further than most companies ever get with their creator measurement. Start messy. Refine as you go. The signal is there if you’re looking in the right places.