
Navigating a new era of mobile marketing measurement with incrementality testing

In the bustling realm of mobile app marketing, it's not always a walk in the park to tell your organic user activity apart from the installs and events that owe their existence to your marketing spend. This is especially true as the industry adapts to stricter privacy requirements and the resulting loss of user-level identifiers. Paying for installs that were destined to happen anyway? That’s no one’s ideal scenario!

In an era where traditional measurement methods fall short due to data privacy changes and evolving technological landscapes, there's a growing demand for next-generation solutions that can accurately measure and optimize app marketing campaigns. Luckily, there’s an easy solution. Forward-thinking growth marketers are harnessing the power of incrementality to dissect the true impact of their strategies.

Separating fact from fiction with incrementality

Think of incrementality as the Sherlock Holmes of marketing: it sleuths out the growth that can be exclusively credited to your strategic marketing maneuvers, beyond the existing brand charisma. It's like having a flashlight for your campaigns, shedding light on exactly what happened because of your marketing magic and what it cost you.

Whether it's a particular channel, tactic, or an all-encompassing campaign, incrementality is all about proving the worth of a single factor by strategically isolating it from factors beyond your control. It also uncovers what isn’t working so well. Armed with this wisdom, you can fine-tune your strategies and scale the most lucrative channels and campaigns.

In a post-device ID world where user privacy is essential, this is especially valuable. Rather than leaning on user-level data, marketers can leverage incrementality testing for accurate, detailed insights into campaign performance with the help of aggregated data from similar apps, replacing the traditional segmented user groups.

Kate Minogue, Head of App and Gaming, Marketing Science EMEA at Meta, spoke with Adjust’s Alex Pham on the value of incrementality, saying, “What we want to avoid is a situation where you're attributing all of your impact to an ad that was served to the most people, or an ad that happened to be at the very, very end of a touchpoint but didn't really make a difference. That's where experiment design and incrementality come in. It's using scientific methodology to really validate and assess what would've happened if this ad wasn't present.”

A/B testing vs. incrementality testing

At its core, incrementality testing follows the same concept as A/B testing by comparing a test group to a control group. Let's take a look at a simplified example.

Imagine you are running a marketing campaign for your app and want to test its incremental lift on installs. Group A acts as your control group; it’s the benchmark for installs, and these users have not been exposed to your ads. Group B consists of users who have been exposed to your ads. Group A generated 100 installs while Group B generated 120 installs. With this information, you can calculate two key insights.

How to calculate incremental lift: Lift is the increase from Group A to Group B. In this example, that is equal to 20 installs, or a 20% increase. The formula for this calculation is installs from Group B minus installs from Group A, divided by the installs from Group A. This number is then multiplied by 100 to get the percentage of incremental lift.

Formula for measuring incremental lift: incremental lift (%) = ((installs from Group B − installs from Group A) / installs from Group A) × 100

How to calculate incrementality: Incrementality is the percentage of test Group B’s conversions that happened because of your marketing spend. In this example, those 20 additional installs make up 16.7% of Group B’s total installs. The formula for this calculation is the number of incremental installs (installs from Group B minus installs from Group A) divided by installs from Group B. This number is then multiplied by 100 to get the percentage of incrementality.

Formula for measuring incrementality: incrementality (%) = ((installs from Group B − installs from Group A) / installs from Group B) × 100
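To make the arithmetic concrete, here is a minimal Python sketch of both calculations applied to the install numbers above. The function names are illustrative only, not part of Adjust’s product.

```python
def incremental_lift_pct(control: int, test: int) -> float:
    """Percentage increase of the exposed group (B) over the control group (A)."""
    return (test - control) / control * 100

def incrementality_pct(control: int, test: int) -> float:
    """Share of the exposed group's conversions attributable to marketing spend."""
    return (test - control) / test * 100

group_a_installs = 100  # control group: not exposed to ads
group_b_installs = 120  # test group: exposed to ads

lift = incremental_lift_pct(group_a_installs, group_b_installs)          # 20.0
incrementality = incrementality_pct(group_a_installs, group_b_installs)  # ~16.7

print(f"Incremental lift: {lift:.1f}%")                          # 20.0%
print(f"Incrementality: {incrementality:.1f}%")                  # 16.7%
print(f"Organic share of Group B: {100 - incrementality:.1f}%")  # ~83.3%
```

The last line is where the 83.3% figure below comes from: it’s simply the share of Group B’s installs you would expect to see even without the campaign.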

Turn incrementality data into insights

In the above example, your campaign has a positive incremental lift. With this information, you can now accurately weigh whether your total spend is worth an additional 20% of installs. If it’s too costly, you could theoretically pause ad spend and expect to keep 83.3% of installs (the organic share of Group B’s total).

However, you will not always see a positive incremental lift. You may also find that a campaign’s incremental lift remains neutral. In this case, a campaign may be generating sales but does not have incremental value. Consider changing the creative or updating your target market to gain a positive incremental lift.

Finally, you might see a negative incremental lift. While it’s not common, a marketing campaign can have a negative impact. For example, an overly aggressive retargeting campaign could turn off potential users, or paid marketing efforts could cannibalize organic wins. To eliminate cannibalization, partner with a mobile measurement partner (MMP) and set parameters that detect existing users. In the case of a negative incremental lift, you should halt the campaign and rethink its concept.
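As a rough rule of thumb, the sign of the lift maps to a next step. The small helper below sketches that mapping; the threshold and the recommendation wording are illustrative assumptions, not Adjust guidance.

```python
def recommend_action(lift_pct: float, neutral_band: float = 1.0) -> str:
    """Map an incremental lift percentage to a suggested next step.

    `neutral_band` is an assumed tolerance (in percentage points) within which
    lift is treated as effectively flat.
    """
    if lift_pct > neutral_band:
        return "Positive lift: weigh the extra conversions against spend; scale if the economics work."
    if lift_pct < -neutral_band:
        return "Negative lift: halt the campaign and rethink creative, targeting, and frequency."
    return "Neutral lift: conversions aren't incremental; refresh creative or adjust the target market."

print(recommend_action(20.0))   # positive
print(recommend_action(0.3))    # neutral
print(recommend_action(-5.0))   # negative
```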

“Incrementality is important across the board in your full media strategy,” says Kate Minogue. “What we see on our acquisition side is that if an advertiser is unsure about their attribution logic, or how accurate the [attribution models] they've chosen are, then they can use incrementality to give an explicit view of the real impact and to validate that attribution.”

How to measure incrementality with ease

Before setting up a marketing incrementality test, there are a few key things to keep in mind. Most importantly, decide on your primary metric and objective up front: What would you like to measure, and why?

Meta’s Kate Minogue advises: “Always, always, always have a hypothesis when you're running a test. And I'd even go a step further than that and say before you run the test, have a plan for what you're going to do if you get positive, negative, or even flat results.”

Let’s say you want to measure an ad’s influence on in-app purchases (IAPs) instead of using installs as the conversion event, as in our earlier example. For instance, if your app sells shoes, each shoe purchase is a conversion event, so your primary outcome would be the uplift in shoe sales.

If the control group (Group A) makes 10 in-app purchases and Group B makes 20 purchases, the primary outcome now shows 100% incremental lift and 50% incrementality. Based on this, you can determine that your incremental users are highly likely to make a purchase. By changing the primary outcome in this use case, incrementality reveals that ad spend is far more valuable than the install numbers alone suggest.
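Running the same two formulas against these purchase numbers confirms the figures; the values below are purely illustrative.

```python
# Assumed values from the in-app purchase scenario above.
control_purchases = 10   # Group A: not exposed to ads
test_purchases = 20      # Group B: exposed to ads

lift_pct = (test_purchases - control_purchases) / control_purchases * 100         # 100.0
incrementality_pct = (test_purchases - control_purchases) / test_purchases * 100  # 50.0

print(f"Incremental lift: {lift_pct:.0f}%")          # 100%
print(f"Incrementality: {incrementality_pct:.0f}%")  # 50%
```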

Michelle Huynh, Director of Growth at Poshmark, provided an example of how incrementality measurement helped them move forward with growth targets:

“TV is a relatively new channel for us. We took one week’s worth of budget and decided to spend it all in one day. It was a bit risky, but we were looking at the blended cost and an hourly chart, looking at the week-over-week just to see if there was a spike. We then proved that TV users are incremental and a little bit expensive, but this helped us move forward a lot! We were able to scale TV spend efficiently, and it’s now one of our biggest channels.”

It’s hard to argue against the benefits of incrementality testing. Not only is it the most accurate way to deduce the actual cost of acquired users, it also helps you avoid cannibalizing your organic traffic. The process provides valuable insight into your marketing spend and lets you achieve a goal shared by all app marketers: maximizing the impact of that spend without sacrificing organic wins.

Interested in learning more about incrementality with Adjust, and how to utilize this knowledge to grow your app? Check out our guide to user acquisition, our guide to media mix modeling, or reach out to your Adjust rep to discuss incrementality.
