[Image: An overhead view of a dining table with four pizzas and various drinks.]

Attribution Modeling: Assignment of Value or Measurement of Value

Tim Wilson, Senior Director of Analytics, Feb 14, 2022

Most marketers are familiar with the quote attributed to John Wanamaker:

"Half the money I spend on advertising is wasted; the trouble is, I don't know which half."

In modern terms, this is “the attribution problem.” And, for the past two decades, marketers (and many analysts) have believed that the increasing reliance on digital marketing would finally solve it: since digital allows individual users to be tracked across multiple interactions, all the way to conversion, surely that should provide sufficient data to quantify the value of each of those interactions.

Unfortunately, underpinning this expectation is a profound confusion between “assignment of value” and “measurement of value.” Google’s recent announcements regarding “data-driven attribution” muddy the waters further, in that they imply measurement when, in reality, they are still largely doing assignment…but with the complexity hidden inside a black box.

Is this simply semantics?

While it may feel like “assignment of value” is pretty much the same thing as “measurement of value,” they are profoundly different things.

Assignment of value is a reflection of an organization’s choices as to the rules they want to use to distribute credit across channels, campaigns, or touchpoints. “Last touch” is a choice that is neither more nor less “right” than “first touch” or “linear” or “time decay” or even “data-driven.”

Measurement of value is actually quantifying the incremental impact of a channel, campaign, or touchpoint. In theory, this would be the “true value,” but absolute truth is unknowable, so in practice it is an “estimate of the true incremental value” delivered.
These are two very different things.
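To make the “assignment” side concrete, here is a minimal sketch (in Python, using a hypothetical touchpoint path and channel names invented for illustration) of how common rule-based models simply split a conversion’s credit according to a pre-chosen rule:

```python
# A single (hypothetical) converting customer's touchpoint path. Illustrative only.
path = ["paid_search", "email", "display", "paid_search"]
conversion_value = 100.0

def last_touch(path):
    # All credit to the final touchpoint.
    return {path[-1]: 1.0}

def first_touch(path):
    # All credit to the first touchpoint.
    return {path[0]: 1.0}

def linear(path):
    # Credit split evenly across every touchpoint.
    share = 1.0 / len(path)
    credit = {}
    for touch in path:
        credit[touch] = credit.get(touch, 0.0) + share
    return credit

# Each "model" is just a different rule for splitting the same $100.
for name, model in [("last touch", last_touch), ("first touch", first_touch), ("linear", linear)]:
    dollars = {ch: round(share * conversion_value, 2) for ch, share in model(path).items()}
    print(f"{name}: {dollars}")
```

Notice that nothing in that calculation asks what would have happened without any given touchpoint; the rule decides the answer before the data does.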

The parable of the pizzeria

There is a somewhat famous parable about a pizzeria and how it promoted itself. I don’t know the original source of this tale, but Rand Fishkin recounted it eloquently in a recent post, and the following is directly lifted from his telling:

One day, Lorena, owner of Lorena’s Pizzeria, hired three capable go-getters to paper the neighborhood with promotional material. She provided each of her new marketers a stack of color-coded flyers (red, green, and white) with the pizzeria’s menu and a unique discount code. Lorena reasoned that if business went up, not only could she attribute sales (via the discounted pricing) to the promotion, she could determine how much each of her three new employees was contributing via the three discount codes.

After a month of distributing flyers, Lorena reviewed the sales numbers, called in her papering team, fired the two passing out the red and white pamphlets, and gave the green-flyer-distributing employee a massive bonus. After all, the sales data showed that green flyers had contributed almost 50% of the pizzeria’s monthly sales! Conversely, the red and white discount codes were used in fewer than 5% of orders each.

At the end of the year, Lorena’s accountant reviewed the business’ receipts and came to her with mixed news. Total transactions were up ~10%, but because the discount code was used so often, overall, revenue was flat. Lorena was shocked. How could the pizzeria’s sales be up only 10% when nearly half the transactions used the new, green, discount-coded flyers?

She had to know, so the following day, Lorena left the pizzeria in disguise and trailed the green-flyer distributor. What did she find? The marketer barely took twenty steps out of the restaurant’s door, and quietly slipped a green flyer to anyone whose footpath suggested they were on their way to the pizzeria.

This parable illustrates how “attribution” is often an assignment of value rather than a measurement of value. Lorena attributed sales based on a choice she made about how to assign value: crediting the sale to each marketer based on the use of their unique discount code.

To measure value, she would have needed to know how many of the customers who presented the green flyer would have purchased anyway, even if they had not been handed a flyer. In this extreme scenario, her assignment of value was clearly much higher than the actual incremental value the green flyer marketer delivered. And, it’s entirely possible that the red and white flyer marketers actually raised awareness and consideration of her pizzeria—driving some subsequent (incremental!) diners who no longer had the flyer, but for whom the flyer sparked them to give Lorena’s a try.

While this is an extreme and somewhat silly example, it reflects the reality of how many marketers and analysts think about attribution. In this case, Lorena actually identified an underlying flaw in how she had chosen to attribute credit. And, essentially, the model she chose was a “last (tracked) touch” model: the color of the flyer is analogous to a campaign tracking code appended to URLs on a website.

Could Lorena have chosen a better attribution model? She had no additional touchpoints being tracked, so she was stuck (an unpleasant reality: just because we have data doesn’t mean we have all the data we would like to have). If she had other data regarding how her customers had been exposed to other marketing activities, she could have chosen different models.

But which one would be “right”? Ponder the question for a moment and you will see: none of them would explicitly attempt to measure the incremental value of any of her marketing. That can only come from comparing the actual number of slices ordered to the number of slices that would have been ordered had there been no green flyer.

In statistical terms, this is called the “counterfactual.” To perfectly measure that would require the creation of an alternate universe, which is…not practical. Luckily, there are imperfect options that can be good enough (and, certainly, better than mere “assignment”).
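One imperfect-but-practical option is a holdout: randomly withhold the flyer from some passers-by and compare order rates between the two groups. Here is a minimal sketch of that calculation, with entirely made-up numbers (they are not from the parable):

```python
# Hypothetical holdout test; these counts are illustrative, not from the parable.
flyer_group   = {"people": 1000, "orders": 120}   # randomly handed a green flyer
holdout_group = {"people": 1000, "orders": 105}   # randomly NOT handed a flyer

flyer_rate   = flyer_group["orders"] / flyer_group["people"]       # 12.0%
holdout_rate = holdout_group["orders"] / holdout_group["people"]   # 10.5%

# The holdout group stands in for the counterfactual: what would have
# happened with no flyer. The difference is the incremental effect.
incremental_rate   = flyer_rate - holdout_rate
incremental_orders = incremental_rate * flyer_group["people"]

print(f"Incremental order rate: {incremental_rate:.1%}")          # 1.5%
print(f"Estimated incremental orders: {incremental_orders:.0f}")  # ~15
```

In this made-up scenario, a discount-code (assignment) view would credit all 120 orders to the flyer, while the holdout (measurement) view suggests only about 15 of them were actually incremental.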

Beyond recognizing that assignment is not measurement

The point of this post is primarily to call out that it is dangerous to think that any attribution model, whether last touch, first touch, linear, U-shaped, J-curve, inverse J-curve, “data-driven,” or something else, is a true measurement of incremental value. These are all mechanisms that assign value.

Measuring value comes from using randomized controlled trials (RCTs), which are the gold standard on that front, or media/marketing mix modeling (which attempts to estimate the “counterfactual” as a base value, or intercept, of the model).

Describing those is beyond the scope of this post, but below is a summary of the distinction:

[Image: summary of the distinction between assignment of value and measurement of value]
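To make the second of those approaches slightly more concrete, the sketch below fits a deliberately oversimplified, single-channel toy model to synthetic data: the intercept estimates the baseline (what would have been sold with no marketing at all), and the coefficient estimates the incremental contribution per dollar of spend. The data and setup are assumptions for illustration, not a real mix model:

```python
import numpy as np

# Synthetic weekly data (illustrative only): flyer spend and total orders.
rng = np.random.default_rng(0)
weeks = 52
flyer_spend = rng.uniform(0, 500, size=weeks)
baseline_orders = 200  # orders that would happen with no marketing at all
orders = baseline_orders + 0.1 * flyer_spend + rng.normal(0, 10, size=weeks)

# Ordinary least squares: orders ~ intercept + beta * flyer_spend.
# The intercept estimates the baseline; beta estimates incremental orders per $.
X = np.column_stack([np.ones(weeks), flyer_spend])
(intercept, beta), *_ = np.linalg.lstsq(X, orders, rcond=None)

print(f"Estimated baseline: {intercept:.0f} orders/week")
print(f"Estimated incremental orders per $ of flyer spend: {beta:.3f}")
```

Real mix models are far more involved (adstock, saturation, seasonality, many channels), but the structural point stands: the baseline, not an assignment rule, carries the “what would have happened anyway.”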

Ultimately, these can be used together: once there is valid measurement of the incremental value from a channel, different attribution models can be explored to see if there are any that reasonably seem to approximate those results, and those can be used (with periodic checking and validation) as a quicker, easier, (but noisier) estimate of the measured value.
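As a hypothetical illustration of that validation step (all figures below are invented), the check can be as simple as seeing how far each model’s assigned credit lands from the measured incremental value per channel:

```python
# Per-channel comparison; all figures are hypothetical.
measured_incremental = {"search": 40_000, "social": 25_000, "email": 10_000}  # from RCT / MMM
assigned_credit = {
    "last touch": {"search": 60_000, "social": 10_000, "email": 5_000},
    "linear":     {"search": 38_000, "social": 27_000, "email": 10_000},
}

for model, credit in assigned_credit.items():
    # Total absolute gap between assigned credit and measured incremental value.
    gap = sum(abs(credit[ch] - measured_incremental[ch]) for ch in measured_incremental)
    print(f"{model}: total deviation from measured value = ${gap:,}")
```

In this made-up example, the linear model tracks the measured values far more closely, so it could serve as the quicker, noisier day-to-day proxy, revalidated periodically against fresh measurement.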

Eager to learn more? Contact us to talk with our experts about asking “more right” questions to move beyond common attribution problems.
