Why filtering short click to install times isn't the answer

Andreas Naumann

Posted Apr 18, 2019

Our latest benchmarks revealed that Click Injection accounts for almost half (47%) of all mobile ad fraud. Yet it's still widely believed that the "best way" to fight Click Injection fraud is to reject attribution for any install that happens within a few seconds of a click.

The idea is that it's impossible to download and open an app (often over 100MB in size) within such a short space of time. But there's a problem with this approach. In this article, we'll explore why, look at better ways to detect fraudulent clicks, and discuss how to prevent click fraud.

Two KPIs for fraud filters: false positives and false negatives

When you create a histogram with 1-second wide buckets, a source that delivers Click Injection traffic will usually show an uncharacteristic bump in click-to-first-open time within the first 3-5 seconds, like in this graph:

Let’s compare this with an example of normal traffic, where there's no bump to be seen:

Looking at both graphs, it's obvious that past the initial few seconds there is nothing to differentiate genuine installs from Click Injection traffic - the only telltale sign is the uncharacteristically high number of installs converting quickly from click to first open.
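To make this concrete, here's a minimal Python sketch of the histogram described above. The timestamps are hypothetical, and the bucket width and 5-second cutoff are illustrative choices, not an Adjust rule:

```python
from collections import Counter

def cti_histogram(click_ts, open_ts, bucket_s=1):
    """Bucket click-to-first-open deltas (in seconds) into bucket_s-wide bins."""
    return Counter(int(o - c) // bucket_s for c, o in zip(click_ts, open_ts))

# Hypothetical epoch-second timestamps: three suspiciously fast conversions,
# two plausible ones.
clicks = [100, 200, 300, 400, 500]
opens = [103, 204, 303, 520, 650]
hist = cti_histogram(clicks, opens)

# Installs converting within the first 5 seconds vs. the rest
fast = sum(n for bucket, n in hist.items() if bucket < 5)
print(fast, sum(hist.values()) - fast)  # 3 2
```

On real data, a Click Injection source shows a disproportionate count in the lowest buckets, while normal traffic does not.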

Mitigation ‘experts’ might claim that conversions with click-to-first-open times under 15 seconds (or any other threshold derived from the app's size and the perceived minimum time needed to complete the full conversion funnel) are the only fraudulent activity that occurs. But there are two problems with this approach. First, you're cutting out some legitimate installs within that timeframe. Second, and more importantly, you'd be ignoring the vast majority of false negatives that occur across the attribution window.

An advertiser who trusts such a ‘fraud mitigation expert’ might believe that a Click Injection scheme is entirely mitigated by refusing to pay for installs that take less than X seconds to convert from click to first open - that filtering these out essentially does the job.

Ultimately, that large initial bump only makes up roughly 3-5% of all fraudulent installs. So if you reject or charge back these installs, you're only taking care of about 3-5% of total Click Injection fraud, failing to account for the remaining 95-97%. This is because the more devious (and more common) methods that exploit Content Providers never result in short click-to-install times.

In cases like this, the fraudster effectively receives a double payout: first from the high volume of undetected click fraud, and second from the recurring investment of advertisers who think the channel is working for them. Filtering by click-to-install time is no substitute for proper detection because, as shown above, it simply doesn't work. We urge clients to shut down such sources entirely rather than start reporting or charging back an uncertain mix of true and false negatives. The problem of reporting false positives as well is just the icing on the cake.

Fraud detection: Watch your conversion rates to fight click fraud

It’s good practice to keep an eye on conversion rates when thinking about fraud detection, as this can be used as an indicator of fraudulent tactics used to impact campaign results. (The fraud scheme with the strongest impact on CR is practically any form of Click Spam.)

A healthy display campaign will normally have a conversion rate of 1-20%, depending on the quality of its targeting, its creatives and the product advertised. Anything lower and the campaign might be targeted outside of the advertiser's specifications, or use creatives that are non-sanctioned or deceptive to the user. Extremely low CRs of 0.1% or less leave little room for conclusions other than Click Spam.
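As a rough first-pass screen, this heuristic can be expressed in a few lines of Python. The 0.1% threshold and the example numbers are illustrative assumptions, not a hard rule:

```python
def is_click_spam_suspect(clicks, installs, min_cr=0.001):
    """Flag a source whose conversion rate (installs / clicks) falls below min_cr (0.1%)."""
    if clicks == 0:
        return False  # no click data, nothing to judge
    return installs / clicks < min_cr

# Hypothetical sources: a healthy display campaign vs. a suspect one
print(is_click_spam_suspect(clicks=50_000, installs=2_500))   # False (CR = 5%)
print(is_click_spam_suspect(clicks=8_000_000, installs=640))  # True  (CR = 0.008%)
```

A flagged source isn't proof of fraud on its own - it's a signal to drill into more granular data, as the walkthrough below does.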

Let’s examine a fictitious campaign from a top-level breakdown. If you want to reproduce this for your own campaigns, I suggest you follow these steps on the most granular source breakdown available, in order to identify the individual culprits. Here, click-through rate (CTR) is defined as clicks divided by impressions, and conversion rate (CR) as installs divided by clicks:

SOURCE             Impressions      Clicks     CTR     Installs  CR
Organics                     0           0     n/a       87,358  n/a
Social Network A       839,101       7,468     0.89%        899  12.04%
Social Network B    12,856,232     112,435     0.87%      2,125  1.89%
Video Network A     24,437,616   1,656,235     6.78%    165,235  9.98%
Video Network B     48,169,945  13,577,024    28.19%     49,681  0.37%
Perf. Network A              0     685,170     n/a          715  0.10%
Perf. Network B              0  59,695,791     n/a        6,618  0.01%
Perf. Network C              0   6,331,581     n/a       27,800  0.44%
Perf. Network D      1,725,367      36,266     2.10%      1,038  2.86%
Total / Average     88,028,261  82,101,970     4.85%    341,469  3.46%

To help our calculations, we’re going to assume a CTR of 1% for the networks that were unable to deliver impression metrics - not exceptionally good, but not bad either. This leads us to the following table:

SOURCE               Impressions      Clicks     CTR     Installs  CR
Organics                       0           0     n/a       87,358  n/a
Social Network A         839,101       7,468     0.89%        899  12.04%
Social Network B      12,856,232     112,435     0.87%      2,125  1.89%
Video Network A       24,437,616   1,656,235     6.78%    165,235  9.98%
Video Network B       48,169,945  13,577,024    28.19%     49,681  0.37%
Perf. Network A       68,517,000     685,170     1.00%        715  0.10%
Perf. Network B    5,969,579,100  59,695,791     1.00%      6,618  0.01%
Perf. Network C      633,158,100   6,331,581     1.00%     27,800  0.44%
Perf. Network D        1,725,367      36,266     2.10%      1,038  2.86%
Total / Average    6,759,282,461  82,101,970     5.23%    341,469  3.46%

As we can see, our campaign’s reach has skyrocketed from 88 million ad impressions (an impressive number to begin with) to 6.7 billion - equivalent to serving one impression to 89.52% of the world’s population. It simply doesn’t add up.
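The adjustment above is easy to reproduce: for a source that reports no impressions, implied impressions are simply clicks divided by the assumed CTR. A quick check in Python (click counts taken from the tables above; the 1% CTR is the stated assumption):

```python
ASSUMED_CTR = 0.01  # 1% for networks that reported no impressions

clicks = {
    "Perf. Network A": 685_170,
    "Perf. Network B": 59_695_791,
    "Perf. Network C": 6_331_581,
}
# implied impressions = clicks / assumed CTR
implied = {src: int(round(c / ASSUMED_CTR)) for src, c in clicks.items()}
print(implied["Perf. Network B"])  # 5969579100 -- nearly 6 billion implied impressions

# Add the impressions the remaining sources actually reported (first table)
reported = 88_028_261
print(reported + sum(implied.values()))  # 6759282461 -- the 6.7bn total above
```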

Let’s calculate the cost of the campaigns and derive the effective CPC and CPM prices (eCPC and eCPM) of the sources involved. That way we can verify which campaign contributors are delivering legitimate traffic. These are the maximum prices a network partner would be able to pay the traffic source, or publisher, for displaying a creative or enticing a user to click on an ad, and still break even.

To get closer to the actual price for a source's inventory, you should deduct the margin a network expects to take as revenue (which could be anywhere between 1% and 30%). If there are several layers of networks and exchanges, each of them will deduct their own margin from the price.

SOURCE               Impressions      Clicks     CTR     Installs  CR      eCPC       eCPM        Cost
Organics                       0           0     n/a       87,358  n/a     $0.000000  $0.000000   $0.00
Social Network A         839,101       7,468     0.89%        899  12.04%  $0.300951  $2.678461   $2,247.50
Social Network B      12,856,232     112,435     0.87%      2,125  1.89%   $0.047250  $0.413224   $5,312.50
Video Network A       24,437,616   1,656,235     6.78%    165,235  9.98%   $0.249414  $16.903756  $413,087.50
Video Network B       48,169,945  13,577,024    28.19%     49,681  0.37%   $0.009148  $2.578423   $124,202.50
Perf. Network A       68,517,000     685,170     1.00%        715  0.10%   $0.002609  $0.026088   $1,787.50
Perf. Network B    5,969,579,100  59,695,791     1.00%      6,618  0.01%   $0.000277  $0.002772   $16,545.00
Perf. Network C      633,158,100   6,331,581     1.00%     27,800  0.44%   $0.010977  $0.109767   $69,500.00
Perf. Network D        1,725,367      36,266     2.10%      1,038  2.86%   $0.071555  $1.504028   $2,595.00
Total / Average    6,759,282,461  82,101,970     5.23%    341,469  3.46%   $0.076909  $2.690724   $635,277.50
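The effective prices in the table follow directly from the definitions eCPC = cost / clicks and eCPM = cost / impressions × 1,000. A quick sanity check in Python against the Social Network A row (all figures come from the table; the helper function is just for illustration):

```python
def effective_prices(cost, clicks, impressions):
    """Return (eCPC, eCPM): cost per click and cost per 1,000 impressions."""
    return cost / clicks, cost / impressions * 1_000

# Social Network A: $2,247.50 spent, 7,468 clicks, 839,101 impressions
ecpc, ecpm = effective_prices(cost=2_247.50, clicks=7_468, impressions=839_101)
print(f"eCPC ${ecpc:.4f}, eCPM ${ecpm:.4f}")  # eCPC $0.3010, eCPM $2.6785
```

These are break-even prices: the most a network could pass through to the publisher before margins are deducted, as described above.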

Now let’s compare the eCPC and eCPM prices the social network sources could earn per click or impression with the potential earnings of a publisher running any of the low-converting performance networks. With just a couple of data points, it becomes clear the numbers make no sense from a marketer’s point of view. Let’s look at why.

  • Social Network A: Seems to have found a well-defined audience for the campaign, earning quite a competitive price for their ad impressions/clicks.
  • Social Network B: The inventory Social Network B delivers here is still potentially monetizing well. This should be the benchmark for regular performance-based traffic. Targeting is likely no more granular than hitting the correct country and language for the offer.
  • Video Network A: Good CPC performance and exceptional CPM performance - in this example, the channel with the highest cost delivered the best-performing campaign on this side of the funnel. In this case, it is paramount to also check the quality of post-install metrics like retention, sales, IAPs etc. A campaign with this level of contact-to-install performance should have a positive ROI.
  • Video Network B: The combination of exceptionally high CTR and sub-par CR on this campaign entry is reflected in the curious spread between eCPC and eCPM, resulting in a respectable CPM but an unmanageable CPC.

It is likely that this campaign is being manipulated by triggering clicks without the actual intention or interaction of the user - in other words, clicks fired automatically at 50% view or on the video end card. This results in a much higher CTR than expected and a much lower CR than is typical for a well-targeted video campaign. The end result is an unclear number of poached organics through a mild amount of Click Spam.

Performance Networks A/B: This is the phenotype of Click Spam in action. The reach calculated from the conservatively assumed CTR is staggering and shows no correlation to the campaign spend ($16k reaches 6 billion pairs of eyes, while $413k reaches “only” 24.5 million). The respective CPC prices of roughly 2/10 and 2/100 of a US cent should make it exceptionally clear that no publisher would run this campaign to monetize their app or website.

Whenever you are being told you should not pay attention to the clicks as you are not paying for them… pay extra attention to the clicks.

Performance Network C: The prices to monetize content with are still sub-par compared to the competition, and the reach is clearly exaggerated. But this is one of the cases where it would surely pay off to dig deeper into more granular data to figure out which individual sources should be kept and which dropped.

Performance Network D: This source is competitive. Optimization should open up potential improvements in all directions, while a slight increase in CPI should positively influence volume. This example is from a network that not only delivers quality traffic consistently, but is also one of the few that is completely transparent about its sources, opening up the names of sub-partners and direct in-app inventory alike.

It pays to look out for the transparent networks with a strong pedigree if you want to genuinely scale your UA campaigns.

If you want to learn more about how to prevent click fraud, follow the link to our Mobile Publisher Fraud whitepaper. You can also read up on Adjust’s Fraud Prevention Suite and the Coalition Against Ad Fraud (CAAF).
