Measuring how well an ad performs is a perennial industry issue. Simply counting clicks doesn’t necessarily correlate with the desired outcome (typically an install). A more sophisticated approach is what’s required.
For Grant Simmons, VP of Client Analytics at mobile data attribution company Kochava, it's a matter of taking a data-led, test-and-learn approach.
Incrementality has become a buzzword of late, and many of those asking about it are unaware of the commitment it requires in both time and money.
While the holy grail of marketing may be to answer the question "Did my advertising have an impact?", there is more than one way to answer it. Incrementality measures the benefit of ads by determining which consumers took action because of the advertising.
Too often, we have equated a consumer interaction with an ad with direct correlation to an action (event), but correlation does not equal causality.
Incrementality testing (aka lift or causality testing) is often confused with attribution, but a click does not drive an install, despite how the relationship is commonly discussed in the ecosystem. There are many factors that drive an install, and we'll never know all of them.
Measuring true incrementality often involves segmenting an eligible audience and carving out a holdout, or control, group. This group is suppressed and receives no advertising. You then advertise to the rest of the audience and compare conversion rates between the two groups.
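The comparison itself is straightforward once the groups exist. Below is a minimal sketch of the lift calculation; the function name and all figures are hypothetical, purely to illustrate the arithmetic.

```python
# Hypothetical holdout-test results; all names and numbers are illustrative only.
def incremental_lift(test_conversions, test_size, control_conversions, control_size):
    """Compare conversion rates between an exposed (test) group and a
    suppressed holdout (control) group to estimate relative incremental lift."""
    test_rate = test_conversions / test_size          # conversion rate with ads
    control_rate = control_conversions / control_size  # organic conversion rate
    # Relative lift: the increase in conversion rate attributable to the ads.
    return (test_rate - control_rate) / control_rate

# Example: 1,200 installs from 100,000 exposed users vs. 1,000 installs
# from a 100,000-user holdout gives roughly 20% incremental lift.
print(round(incremental_lift(1200, 100_000, 1000, 100_000), 3))
```

In practice you would also run a significance test on the two rates before trusting the lift number, since small differences can be noise.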
This is where incrementality testing starts to get tricky. Techniques such as matched subgroups and ghost ads are needed to create a holdout (your control group) that is equivalent to your test population. There are also opportunity costs in creating a known universe of devices. And while it's entirely possible, it might not be practical for most businesses.
In lieu of incrementality testing, there are a number of more cost-effective alternatives, closer to sophisticated A/B testing, for determining campaign impact through performance. Independent and verified data sets, such as those available through an independent data marketplace, offer a viable alternative for gauging the success of your advertising. With such a data set, you can measure performance (as opposed to lift) using an audience with a known history as a baseline.
These options include:
- Time series analysis: This type of analysis involves alternately turning marketing off and back on to establish a baseline and to see incremental lifts from networks. Although effective, there is an opportunity cost in turning off all marketing temporarily.
- Comparative market analysis: Analysts define a designated marketing area (DMA) to find geographical pockets that behave similarly. They then surge the marketing in one DMA and refrain from the other. There is a strong chance of seeing conversion rate differences between the two DMAs but also an opportunity cost in surging marketing in one DMA and withholding efforts in the other.
- Time-to-install quality inference: This analysis compares the time of engagement against user-quality graphs to distinguish causal from non-causal installs. While there is no opportunity cost, this type of analysis is less precise than the others.
- Forensic control analysis: This type of analysis is a modeling exercise in which a control group is created that mirrors the exposed group after a campaign has run. The response and performance are weighted up or down based on an algorithm (created from predictive variables). While there is no opportunity cost, copious amounts of data are required to create the model universe.
Test & learn
What's difficult to obtain in incrementality testing is a known universe of devices split between consumers exposed to ads and those who were not. While incrementality testing is possible, its feasibility is another story. Know your threshold for testing, and consider some of the options outlined above to measure success. Overall, adopting a test-and-learn mentality is what leads to successful marketing.
Grant Simmons, VP of Client Analytics, Kochava