Ask any academic – they will tell you that controlled testing is the gold standard for determining incrementality (such as an increase in sales caused by an advertising treatment).
I’d respond, “Yes, except when it’s not done properly, which is most of the time!”
Let’s talk about ad effectiveness measurement via surveys. For digital ad serving tests, the typical setup works like this: a pixel embedded in the ad fires when the ad is served and pings the survey panel operator’s servers, which then send a survey invitation. A control cell of respondents is created post hoc by sending invitations to demographically matched consumers who were not served the ad.
Sounds fairly straightforward, but here’s a problem I see all too often: the most important variable to control for is often missing from the study plan. What variable is this? Brand predisposition.
Start to control for brand predisposition
Consumers who are predisposed to a brand pay more attention to its advertising, click its links, consider it, buy it, are loyal to it, etc. Frankly, this characteristic, consumer predisposition, is the most predictive variable of whether or not a given consumer will convert in a campaign time window – more than whether or not they were exposed to an ad. Hence, if you don’t control for prior predisposition, you can get seriously inflated answers.
The reason the bias tends to go in one direction (overstatement of effects) is that marketers target ads to those who are more likely to be interested in their offering. As a result, the exposed cell will have a higher proportion of those predisposed to the brand versus the control cell, producing artificially higher conversion rates among the exposed (in addition to true incrementality) unless you explicitly control for this variable.
The problem doesn’t stop there. My white paper research shows that those predisposed to choosing a given brand are also five times more likely to respond to its advertising! There’s the double whammy: brand predisposition not only affects the baseline of conversions, it also affects responsiveness to advertising. For both reasons, if your exposed and control cells are not balanced on brand predisposition, you are likely to think your advertising campaign is working much better than it really is.
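To see how large this bias can get, here is a minimal sketch with entirely made-up numbers: two predisposition segments with identical within-segment conversion rates, where the exposed cell simply contains a higher share of predisposed consumers than the control cell. The naive lift (overall exposed rate minus overall control rate) is compared to a lift computed within each segment and then weighted by a common mix, which is one simple way to balance the cells on predisposition.

```python
# Hypothetical illustration: naive vs. predisposition-adjusted lift.
# All shares and conversion rates below are invented for demonstration.
# Per segment: share of each cell, and conversion rate in each cell.
segments = {
    "predisposed":     dict(share_exp=0.60, share_ctl=0.30, conv_exp=0.220, conv_ctl=0.200),
    "not_predisposed": dict(share_exp=0.40, share_ctl=0.70, conv_exp=0.055, conv_ctl=0.050),
}

# Naive lift: compare overall conversion rates, ignoring the segment mix.
conv_exp = sum(s["share_exp"] * s["conv_exp"] for s in segments.values())
conv_ctl = sum(s["share_ctl"] * s["conv_ctl"] for s in segments.values())
naive_lift = conv_exp - conv_ctl

# Adjusted lift: difference within each segment, weighted by a common
# (here: control-cell) mix, so both cells are balanced on predisposition.
adj_lift = sum(
    s["share_ctl"] * (s["conv_exp"] - s["conv_ctl"]) for s in segments.values()
)

print(f"naive lift:    {naive_lift:.4f}")  # inflated by the mix difference
print(f"adjusted lift: {adj_lift:.4f}")    # the within-segment ad effect
```

With these illustrative inputs, the naive lift comes out several times larger than the adjusted lift, even though the ad effect within each segment is small – the gap is pure composition bias.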
Adjust your results to reasonable reach levels
Now here’s a second adjustment you probably have to make that you might not have thought of. When you trigger survey invitations based on ad serving, your exposed cell is equivalent to 100% reach, which never occurs in real life. Hence, the absolute magnitude of the lift does not correspond to what was actually spent on the ad campaign.
You will need to adjust results down to a realistic reach level in order to calculate a measure of ad response like ROAS (return on ad spend), defined as incremental revenue generated by the advertising divided by the incremental advertising spend. If you project lift in sales from a cell that was 100% exposed and then divide by actual spending, you again inflate the measured effectiveness of ad exposure. On the other hand, 100% exposure in the test cell has one virtue: comparing each tactic’s exposed cell against its respective control cell at the same (full) reach is a fairer way to compare the potential of one ad tactic or creative to another.
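The reach adjustment itself is simple arithmetic. Here is a short sketch, with every input an illustrative assumption (audience size, reach, lift, revenue per conversion, and spend are all made up), showing the reach-adjusted ROAS next to the inflated figure you would get by projecting the 100%-reach lift onto the whole audience:

```python
# Hypothetical sketch: scale a 100%-reach lift down to actual campaign
# reach before computing ROAS. Every input below is an assumption.
audience_size          = 1_000_000  # target consumers
actual_reach           = 0.35       # fraction actually reached by the buy
lift_per_person        = 0.01       # incremental conversion rate measured at 100% reach
revenue_per_conversion = 40.0       # average revenue per incremental conversion
ad_spend               = 120_000.0  # incremental advertising spend

# What the test measures: incremental revenue if everyone were exposed.
revenue_at_full_reach = audience_size * lift_per_person * revenue_per_conversion

# Scale down to the reach actually bought, then divide by spend.
incremental_revenue = revenue_at_full_reach * actual_reach
roas = incremental_revenue / ad_spend

# The mistake to avoid: dividing the full-reach projection by actual spend.
inflated_roas = revenue_at_full_reach / ad_spend

print(f"reach-adjusted ROAS: {roas:.2f}")
print(f"inflated ROAS:       {inflated_roas:.2f}")
```

At 35% reach, the inflated figure overstates ROAS by roughly a factor of three – exactly the inverse of the reach fraction.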
These points are more than cautions for the tests you design; they are also things the marketer should clarify when receiving lift study results from media partners like the walled gardens. You need to know what you are getting and whether the results are comparable – or what you have to do to make them comparable.
Methods closer to randomized controlled trials, where the IDs are enumerated ahead of time and sorted into test and control cells, are better but harder to execute. Google’s ghost ads method is closer to an RCT. However, Google’s method makes it impossible to control for, or even measure, brand predisposition, so there is still no guarantee that predisposition bias is eliminated.
Yes, properly designed tests can be a gold standard for determining the incrementality of an ad treatment, but as they found out in California in 1849, mining for gold is hard work.