Inside InSight: Bringing statistical clarity to budget decisions with incrementality testing

Performance marketing, budget, and campaign optimization decisions rarely hinge on a single metric. Attribution identifies the source behind each conversion. Incrementality complements this by estimating the incremental effect of marketing changes, comparing actual results to expected baseline performance. Together, they connect measurement with causal impact.

Recognizing the increasing importance of incrementality testing as a measurement methodology for app marketers and advertisers, Adjust launched InSight in 2024. Part of our Recommend pillar, InSight builds and improves upon conventional incrementality measurement methods by using machine learning and AI to power data-driven, ROI-positive decisions.

In an environment where user journeys are increasingly multi-channel and complex, the need for robust and reliable incrementality testing is clear. In this blog, we take a look at InSight and explore the ways testing meaningfully uncovers significant uplift and organic cannibalization.

Incrementality in operation

Incrementality is often discussed conceptually. In practice, it is about evaluating real budget changes and determining whether they produced measurable impact. It can mean putting more budget into a different partner, but it can also mean the timing of your campaign, e.g., how your ROI in one week compares to the next.

When a campaign budget increases or a new campaign launches, performance typically moves. The question is whether that movement reflects incremental growth or redistribution of existing demand. InSight isolates that effect and expresses it with a 95 percent confidence interval, helping teams separate signal from noise.

This is where statistical significance becomes central: lift percentage shows magnitude, while significance shows reliability. Together, these indicate whether a result justifies scaling or requires further testing.
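To make the magnitude-versus-reliability distinction concrete, here is a generic sketch of how lift and a 95% confidence interval can be derived from daily actuals against a modeled baseline. This is not Adjust's implementation, and the `lift_with_ci` function and all numbers below are hypothetical illustrations.

```python
# Generic sketch: percent lift over a baseline forecast, with a ~95%
# confidence interval on the daily differences. Not Adjust's method;
# all data here is hypothetical.
from statistics import mean, stdev
from math import sqrt

def lift_with_ci(actual, baseline, t_crit=2.262):
    """Percent lift over baseline with a ~95% CI on daily differences.

    t_crit defaults to the two-sided 95% t value for 9 degrees of
    freedom (10 daily observations); adjust it for other sample sizes.
    """
    diffs = [a - b for a, b in zip(actual, baseline)]
    n = len(diffs)
    mean_diff = mean(diffs)
    se = stdev(diffs) / sqrt(n)        # standard error of the mean difference
    lo, hi = mean_diff - t_crit * se, mean_diff + t_crit * se
    base = mean(baseline)
    lift_pct = 100 * mean_diff / base
    significant = lo > 0 or hi < 0     # CI excludes zero -> significant at ~95%
    return lift_pct, (100 * lo / base, 100 * hi / base), significant

# Hypothetical 10-day window after a budget increase
actual   = [120, 131, 128, 140, 135, 129, 138, 142, 133, 137]
baseline = [110, 115, 112, 118, 116, 114, 117, 120, 115, 118]
lift, ci, sig = lift_with_ci(actual, baseline)
```

In this toy data the interval sits entirely above zero, so the lift is both large (magnitude) and statistically significant (reliability); a wide interval straddling zero would show lift that looks large but cannot be trusted.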

What statistically significant lift looks like

Adjust recently compiled a benchmark from completed InSight tests in the U.S. across iOS and Android, and the results highlight why statistical validation matters when budgets change.

On iOS, budget increases resulted in statistically significant incremental lift in roughly one third of cases for the largest platforms, and in around one fifth of cases across several other major networks. Lift appears frequently following spend increases, but statistically validated lift is more concentrated.

Android results introduce an additional dynamic. Following budget increases, statistically significant organic cannibalization appeared in a meaningful share of cases across multiple platforms. In some instances, roughly one quarter of budget increases produced significant cannibalization effects.

[Image: incrementality test example]

Viewed side by side, these patterns illustrate the value of causal modeling. On one platform, increased spend produced net-new growth with statistical confidence. On another, it shifted existing demand rather than expanding it. Both outcomes are actionable. Significant lift supports confident scaling, while organic cannibalization signals the need for reallocation or refinement.

From statistical signal to budget action

Data becomes truly valuable when it informs a decision. That is why InSight does not stop at modeling lift or cannibalization. It translates statistical output into operational clarity.

Recent updates to the Results and Recommendations view reflect how performance teams actually work. When marketers open a test, they first scan the headline indicators. Incrementality percentage, statistical significance, and model accuracy metrics such as MAPE provide an immediate read on whether the outcome is stable and actionable.
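As a quick illustration of the model accuracy metric mentioned above, MAPE (mean absolute percentage error) measures how far the baseline forecast deviated from actuals on average; lower values indicate a more stable baseline. This is a generic sketch with hypothetical numbers, not a reflection of InSight's internal scoring.

```python
# Minimal sketch of MAPE (mean absolute percentage error).
# Hypothetical numbers for illustration only.
def mape(actual, forecast):
    """Mean absolute percentage error between actuals and a forecast."""
    errors = [abs(a - f) / a for a, f in zip(actual, forecast)]
    return 100 * sum(errors) / len(errors)

# A forecast within a few percent of actuals yields a low MAPE,
# signaling a stable baseline model.
actual   = [100, 105, 98, 110, 102]
forecast = [97, 108, 100, 106, 104]
score = mape(actual, forecast)
```

A low MAPE tells the reader the baseline is trustworthy enough that the measured lift reflects the budget change rather than forecast error.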

This high-level view answers a simple question: Is this result reliable enough to act on?

From there, teams move into the advanced data table. It is here that granular breakdowns influence budget shifts, bid strategy adjustments, and partner mix decisions. Statistical significance becomes the threshold that separates directional movement from defensible action.

For teams running frequent budget changes, this shortens the distance between analysis and execution.

Incrementality as part of the Adjust ecosystem

InSight does not operate in isolation. It connects with the broader Adjust suite, allowing teams to streamline spend measurement, centralize reporting, and monitor performance signals in one environment.

This ensures that incrementality testing complements attribution rather than competing with it. Attribution provides channel-level visibility, while InSight validates causal contribution. Using both as part of measurement analysis gives teams all the information needed to scale with greater confidence.

To learn more about how Adjust can grow your app business, sign up for free or request a demo today.
