The ins and outs of incrementality testing
Dec 12, 2018
Incrementality testing has become known as the best way to evaluate your ad spend and eliminate cannibalization.
In fact, it has been described as “the best measurement for ad success” by Retargeter, and Liftoff claims it can “transform your marketing campaigns.” With such a glowing reputation, more and more marketers are looking into incrementality as the next way to improve their performance.
Incrementality is a complex strategic process, but it's one that's worth your time. In this article, we’ll take a close look at what incrementality testing is, the basics of how it works, and how this method can improve marketing strategy. To achieve this, we’ll share key insights from three industry experts who can show us the universal value of incrementality testing.
These experts cover three different app categories: gaming, e-commerce, and social. Jam City is a leading mobile entertainment company with popular games such as Cookie Jam and Harry Potter; Poshmark is a fashion marketplace where you can buy and sell clothes; and Viber is Rakuten’s cross-platform instant messaging app.
So, let’s take a closer look at what the experts have to say, and see how incrementality tests can be used to inform your marketing spend.
What exactly is an incrementality test?
Incrementality testing is a mathematical approach to advertising measurement that quantifies incremental lift, showing you the true impact of your marketing campaigns.
Let's take a look at a simplified example: if you segment two audience groups (let’s say, Group A and Group B) that show the same behavior, and then only run campaigns for Group B, you can see the impact of those ads on your conversion rates vs. a control group.
If Group A (a control group without ads) had 100 installs, and Group B (the exposed group with ads) had 120 installs, here’s how you would find the uplift and the percentage of incremental installs:
- Lift is the increase from Group A to Group B (20 installs, 20% increase)
- Incrementality is the percentage of Group B that converted due to marketing spend (20 installs, 16.7% of Group B total)
You can now accurately calculate whether your total spend is worth an additional 20% of installs. If it’s too costly, you could theoretically pause ad spend and expect to keep 83.3% of installs.
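The arithmetic above can be written out directly. This is a minimal sketch using the example numbers from this section:

```python
# Group A (control, no ads) = 100 installs; Group B (exposed to ads) = 120 installs.
control_installs = 100
exposed_installs = 120

incremental = exposed_installs - control_installs      # 20 installs driven by ads
lift = incremental / control_installs                  # increase over the control group
incrementality = incremental / exposed_installs        # share of exposed group caused by ads

print(f"Lift: {lift:.1%}")                                        # 20.0%
print(f"Incrementality: {incrementality:.1%}")                    # 16.7%
print(f"Installs kept if spend paused: {1 - incrementality:.1%}") # 83.3%
```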
With these results, you should apply a confidence level to estimate how certain you are that this test group is representative of what you’ll see on a larger scale. Note that as you scale up your campaigns, there’s always a chance that your findings won’t reflect future results, so your plan needs to be adaptable.
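One common way to put a confidence level on a result like this is a two-proportion z-test. The sketch below assumes hypothetical, equal group sizes of 10,000 users each; the example above gives install counts only, so these sizes are an assumption for illustration:

```python
import math

def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
    """Z-test for the difference between two conversion rates.
    Returns (z statistic, two-sided p-value)."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)           # pooled conversion rate
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = math.erfc(abs(z) / math.sqrt(2))          # two-sided p-value
    return z, p_value

# Hypothetical: 100 vs. 120 installs out of 10,000 users per group.
z, p = two_proportion_z_test(100, 10_000, 120, 10_000)
print(f"z = {z:.2f}, p = {p:.3f}")
```

At these hypothetical sizes, the p-value comes out around 0.18, so a 20% lift would not yet be statistically significant at the conventional 95% level. This is exactly why the confidence step matters before scaling a decision.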
But before you start thinking about how to move forward, there are a few key things to keep in mind when setting up an incrementality test.
Control your variables
Firstly, it’s vital that your groups are statistically equivalent and have identical conditions, such as the same testing period.
Without the right segmentation, you might attribute uplift incorrectly due to variables that you didn’t control. As with any scientific study, having more variables leaves greater room for doubt, potentially making your results unreliable.
Outline your primary outcome
It’s important to decide on your primary outcome before implementing a test.
In our earlier example, the conversion event was installs, but perhaps you want to measure the influence ads have on in-app purchases instead. If your app sells shoes, each shoe purchase is a conversion event, so your primary outcome would be the uplift in shoe sales.
The impact on your primary outcome will inevitably shape your future marketing spend, so it’s important to know what you’d like to measure and why. If Group A bought 10 pairs of shoes and Group B bought 20, the primary outcome now shows a 100% lift and 50% incrementality. From this, you could also determine that your incremental users are highly likely to make a purchase. Suddenly, by changing the primary outcome, that ad spend appears far more valuable.
With these points in mind, let’s take a look at two different methods when implementing an incrementality test, each with their own risks and rewards.
Pausing spend vs. blasting: Two test models to consider
Incrementality testing is all about finding a baseline figure that measures the impact of your campaigns, then developing a hypothesis based on your results.
During Adjust’s Mobile Spree conference, Jam City’s Director of User Acquisition, Winnie Wen, explained how to implement a standard incrementality test:
“First and foremost, you can pause all your marketing spend. I know this sounds really scary, but what’s important to us is to get a baseline for organic installs. Whatever baseline metrics we’re trying to achieve, just pause whatever activity and get that baseline. Slowly introduce each marketing channel and measure the uplift of the value of the KPIs you’re trying to get a read on.”
Alternatively, you can “blast” your marketing spend to find a ceiling - essentially the maximum impact a channel can deliver.
Michelle Huynh, Director of Growth at Poshmark, provided an example of this method and how it helped them move forward with growth targets:
“TV is a relatively new channel for us and it’s offline, so it’s pretty hard to measure. We took one week’s worth of budget and decided to spend it all in one day. It was a little bit risky, but we were looking at the blended cost and an hourly chart that we have, looking at the week-over-week just to see if there’s any spike. We were able to prove that TV users are incremental, and a little bit expensive, but this helped us move forward a lot. We were able to scale TV spend efficiently and it’s now one of our biggest channels.”
Although it required a somewhat risky move on Poshmark’s part, this example shows how incrementality testing can help you progress toward your long-term goals. To really make use of incrementality, it’s important to continually refine your hypothesis with your latest results.
Actionable insights: Uplift and scaling marketing spend
Once you’ve successfully implemented an incrementality test and established your baseline, what’s next? When looking at the results of a lift test, Michelle pointed out a common mistake that occurs when analyzing incrementality test data:
“Any time when you run a lift test or incrementality test, you should always look at your marginal cost. That’s very important if you’re thinking about quality and quantity. Chances are most people are looking at a blended average, but that’s not the right way to look at it. When you’re running an incrementality test, you want to see your marginal cost. If it’s very expensive, it might not make sense for your business to scale up that channel. Essentially, you might be accelerating your burn rate without even knowing it.”
By looking at the marginal cost, you can determine mathematically whether scaling up any particular channel is the best option for your business. Knowing the marginal cost of incremental users lets you scale ad spend with confidence.
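To make Michelle’s point concrete with hypothetical numbers (none were given in the talk): the blended average divides spend across all installs, while the cost per incremental user divides it across only the installs your ads actually caused. Reusing the earlier example groups:

```python
spend = 1_000.0          # hypothetical campaign spend
control_installs = 100   # organic baseline (no ads)
exposed_installs = 120   # installs with ads running

blended_cpi = spend / exposed_installs              # spend spread over ALL installs
incremental = exposed_installs - control_installs
cost_per_incremental = spend / incremental          # spend over ads-driven installs only

print(f"Blended CPI: ${blended_cpi:.2f}")                        # $8.33
print(f"Cost per incremental user: ${cost_per_incremental:.2f}") # $50.00
```

The blended figure makes the channel look six times cheaper than it really is on the margin, which is how teams end up "accelerating their burn rate without even knowing it."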
But, if you don’t have a baseline that shows your organic reach, how can you be sure that a campaign isn’t cannibalizing organic growth? This is another reason incrementality tests hold incredible value for mobile marketers.
Viber’s Moshi described how the team discovered this firsthand:
“Back in 2016, we decided to go and invest our marketing budget and spread across a specific country. [...] Within the first month we spent a set amount - acquiring more than half a million installs. We felt it went well, so we doubled the amount, and acquired more than a million installs. But then we stopped and analyzed the entire picture, and found something remarkable: we didn’t increase the total number of installs, we just redistributed the pie [...] We were just acquiring our organic traffic. And this triggered us to go on a journey to find incrementality.”
Once A/B testing against your organic baseline reveals cannibalized traffic, Moshi had a few suggestions on how to move forward:
Firstly, he suggests assigning a team member to focus on organic installs: “many UA team members are only incentivized by their paid results, and this method can prompt managers to cannibalize organics.” As such, having someone focused on organic helps to realign the company’s overall acquisition targets.
This person needs to be given the authority to stop ad spend on cannibalized traffic, which is only possible once your paid strategy is aligned with your company’s growth goals. This also ensures that there is an actionable plan leading to genuine growth, rather than relying on paid traffic to reach your targets.
When looking to eliminate cannibalization, Moshi explained the value of using parameters to detect existing users with new details:
“The marketing partners’ technology behind the campaign audience optimization is sophisticated and able to take into account hundreds of different factors to identify the desired audience … Thus, sending the marketing partner the right feedback about the desired audience, while not sending the feedback on loyal returning users, is the smart way to make this separation between your audiences and put your tracker to work.”
Adjust has a huge set of dynamic macros (placeholders) that deliver data to clients via real-time callbacks. These give you the tools to understand your audience and combat cannibalization.
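As a rough illustration of Moshi’s suggestion - all names and fields below are hypothetical, not Adjust’s actual API - a callback handler might forward conversion events to the ad network only for users who are not already known loyal users, so the network’s optimization doesn’t chase audiences you would have reached anyway:

```python
# Hypothetical sketch: forward postback events for new users only,
# suppressing feedback on loyal returning users.
known_loyal_users = {"device-123", "device-456"}  # e.g. from your own user database

def should_forward_to_network(event: dict) -> bool:
    """Decide whether a conversion event should be sent back to the ad network."""
    return event["device_id"] not in known_loyal_users

events = [
    {"device_id": "device-123", "event": "purchase"},  # loyal user: suppress
    {"device_id": "device-789", "event": "purchase"},  # new user: forward
]
forwarded = [e for e in events if should_forward_to_network(e)]
print(len(forwarded))  # 1
```

The real separation would be driven by the placeholder data your measurement partner delivers in its callbacks, but the principle is the same: only feed the network signals about the audience you actually want more of.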
As we wrap up, let’s review the incrementality tips and tricks provided by industry experts:
- Find your baseline with A/B testing, measuring incremental lift
- Have a clearly outlined primary outcome
- Look at the marginal cost, not just the blended average
- Use your results to scale up appropriate channels and stop cannibalizing traffic
- Adapt your hypothesis: results may differ when applied to a larger audience
- Ensure your company’s growth targets don’t reward cannibalization
- Utilize your mobile measurement partner’s macros
It’s hard to argue against the benefits of incrementality testing. Not only is it the most accurate way to deduce the actual cost of acquired users, but it can also prevent you from cannibalizing your organic traffic. The process provides valuable insight into your marketing spend and allows you to achieve an important goal for all UA managers: acquiring the right set of users at the right price.
If you’d like further insight from Adjust’s Mobile Spree conference, check out the resource hub for a breakdown of every major event. You can also learn more about incrementality from Adjust's Product Research Manager, Michael Paxman, with his post in AdExchanger.