What is a data discrepancy?
A data discrepancy occurs when two or more comparable data sets for the same metric or time period show inconsistent values across platforms or data sources. In mobile marketing, data discrepancies commonly appear when comparing reports from ad networks, analytics tools, and mobile measurement partners (MMPs). For example, one dashboard might report 5,000 installs while another shows 7,000 for the same campaign.
These discrepancies do not necessarily indicate incorrect data. Instead, they reflect differences in how platforms measure and report user activity. Understanding these differences is crucial for accurately interpreting campaign performance and making informed decisions.
Why do data discrepancies happen?
Multiple platforms, such as ad networks, app stores, and MMPs, track the same user journey. Each applies its own definitions, attribution rules, data collection methods, and processing logic, which means the same activity may be recorded differently.
For example, install counts can vary depending on when the event is recorded, such as at download or first app open (install). Attribution differences can also affect reporting, with some platforms assigning credit to the most recent engagement and others linking events to the original install source.
Other factors, including time zone settings, user identification methods, privacy frameworks like App Tracking Transparency (ATT) and SKAdNetwork (SKAN), and differences in tracking implementation, can further influence how data is reported.
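To see how a factor as simple as time zone settings can produce different numbers, consider a minimal sketch (with hypothetical timestamps) of counting the same installs under two reporting time zones:

```python
from datetime import date, datetime, timezone, timedelta

# Hypothetical install timestamps, stored in UTC, near a day boundary.
installs_utc = [
    datetime(2024, 3, 1, 22, 30, tzinfo=timezone.utc),
    datetime(2024, 3, 1, 23, 45, tzinfo=timezone.utc),
    datetime(2024, 3, 2, 1, 15, tzinfo=timezone.utc),
]

def count_on_day(installs, day, tz):
    """Count installs whose local date (in the given time zone) matches `day`."""
    return sum(1 for ts in installs if ts.astimezone(tz).date() == day)

# A platform reporting in UTC vs. one reporting in UTC-8 (e.g. Pacific Standard Time)
utc_count = count_on_day(installs_utc, date(2024, 3, 1), timezone.utc)
pst_count = count_on_day(installs_utc, date(2024, 3, 1), timezone(timedelta(hours=-8)))

print(utc_count, pst_count)  # 2 vs. 3 installs for "March 1" — same data, different cutoffs
```

Both platforms record the identical events; the install that lands just after midnight UTC simply falls on a different "day" depending on the reporting time zone.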
Common types of data discrepancies
Data discrepancies can generally be grouped into a few categories, although a single discrepancy may span more than one category.
- Measurement discrepancies arise when platforms define or count metrics in distinct ways, such as installs or conversions.
- Attribution discrepancies occur when platforms assign credit using different models, windows, or timing rules.
- Reporting discrepancies result from how data is processed and displayed, including time zones, filtering, or delays.
- Tracking discrepancies stem from incomplete or uneven data collection due to implementation issues, consent limitations, or integration gaps.
How to identify data discrepancies?
Identifying data discrepancies begins by comparing the same metric across platforms over the same time period. The goal is to detect mismatches and determine whether they are expected or require further investigation. Marketers can do this by reviewing equivalent metrics, aligning reporting periods and time zones, and checking attribution settings or filters that affect how results are calculated. Unusual spikes, drops, or irregularities in reporting can also indicate potential discrepancies.
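This comparison can be sketched as a simple check: compute the relative gap between two platforms' counts for the same metric and flag it when it exceeds a tolerance. The numbers and the 10% threshold below are illustrative; acceptable variance differs by team and channel.

```python
def discrepancy_pct(a: float, b: float) -> float:
    """Relative difference between two reported values, as a percentage of the larger one."""
    return abs(a - b) / max(a, b) * 100

# Hypothetical install counts for the same campaign over the same aligned period.
reports = {"ad_network": 7000, "mmp": 5000}

TOLERANCE = 10.0  # percent; an assumed team-specific threshold
pct = discrepancy_pct(reports["ad_network"], reports["mmp"])
needs_investigation = pct > TOLERANCE

print(f"{pct:.1f}% discrepancy, investigate: {needs_investigation}")
```

A gap of roughly 29% here would clearly warrant checking attribution settings, time zones, and filters before trusting either figure.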
How to reduce data discrepancies?
Data discrepancies cannot be eliminated entirely, but they can be minimized by improving how data is measured and managed. This includes standardizing metric definitions, aligning attribution models and windows, and ensuring tracking is implemented correctly. Clear data governance practices, such as naming conventions and documentation, also help maintain data quality. Centralizing reporting can further simplify comparisons by providing a single, consistent view of performance.
Why data discrepancies matter
Data discrepancies directly affect how marketers make decisions. When the same metric reports conflicting values, it becomes harder to determine which campaigns or channels are driving results. This uncertainty can lead to misinterpreted data and inefficient budget allocation, with teams over-investing in some channels while underestimating others. Even small variations can make consistent comparisons difficult, especially when metrics such as cost per install (CPI) or return on ad spend (ROAS) vary by source.
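A quick worked example shows how source-dependent install counts distort CPI. The spend and install figures below are hypothetical, reusing the 5,000 vs. 7,000 installs from earlier:

```python
def cpi(spend: float, installs: int) -> float:
    """Cost per install: total campaign spend divided by attributed installs."""
    return spend / installs

spend = 10_000.00            # same spend for the same campaign
cpi_ad_network = cpi(spend, 7000)  # using the ad network's install count
cpi_mmp = cpi(spend, 5000)         # using the MMP's install count

print(f"${cpi_ad_network:.2f} vs ${cpi_mmp:.2f}")  # $1.43 vs $2.00
```

The identical campaign looks 40% more expensive per install on one dashboard than the other, which is why aligning on a single source of truth matters before comparing channels.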
Without a clear understanding of these discrepancies, teams may spend time reconciling reports instead of acting on insights, slowing decision-making and reducing confidence in data.
Data discrepancies and Adjust
With Adjust Measure, marketers can track performance from first touch to conversion by attributing installs and events to the sources that drive them. By connecting data from ad networks, app stores, and other sources, Adjust provides a reliable, unbiased source of truth for evaluating results, even when external platforms apply different definitions or attribution logic. Datascape brings this data into a single environment for analysis, allowing teams to explore trends, compare results, and identify where discrepancies originate.
Adjust also supports data quality through its Fraud Prevention Suite, which proactively filters out invalid activity such as fraudulent installs or manipulated clicks, helping marketers make decisions based on real data.
To learn more about how Adjust can help you analyze and interpret your data with confidence, request a demo today.