
Fighting back fraud: Why you should watch your conversion rates

Andreas Naumann
Fraud Specialist

It’s good practice to keep an eye on conversion rates (or CR) when thinking about catching fraud, as they can serve as an indicator of fraudulent tactics that distort campaign results. (The fraud scheme with the strongest impact on CR is practically any form of click spam.)

A healthy display campaign will normally have a conversion rate of 1-20 percent, depending on the quality of its targeting, its creatives, and the product advertised. Anything lower and the campaign might be targeted outside of the advertiser’s specifications, or be using non-sanctioned or deceptive (to the user) creatives. Extremely low CRs of 0.1 percent or less leave little room for conclusions as to the cause, besides click spam.

Let’s examine a fictitious campaign from a top-level breakdown. If you want to reproduce this for your own campaign, I suggest you follow my steps on the most granular source breakdown in order to identify the individual culprits. Here, click-through rate (or CTR) is defined as clicks divided by impressions:

| Source | Impressions | Clicks | CTR | Installs | CR |
|---|---|---|---|---|---|
| Organics | 0 | 0 | n/a | 87,358 | n/a |
| Social Network A | 839,101 | 7,468 | 0.89% | 899 | 12.04% |
| Social Network B | 12,856,232 | 112,435 | 0.87% | 2,125 | 1.89% |
| Video Network A | 24,437,616 | 1,656,235 | 6.78% | 165,235 | 9.98% |
| Video Network B | 48,169,945 | 13,577,024 | 28.19% | 49,681 | 0.37% |
| Perf. Network A | 0 | 685,170 | n/a | 715 | 0.10% |
| Perf. Network B | 0 | 59,695,791 | n/a | 6,618 | 0.01% |
| Perf. Network C | 0 | 6,331,581 | n/a | 27,800 | 0.44% |
| Perf. Network D | 1,725,367 | 36,266 | 2.10% | 1,038 | 2.86% |
| Total / Average | 88,028,261 | 82,101,970 | 4.85% | 341,469 | 3.46% |
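To make those two metrics concrete, here is a minimal sketch in Python of how CTR and CR are computed for a single source, using the Social Network A row from the table above (the helper functions are purely illustrative, not part of any analytics tool):

```python
# Minimal sketch: CTR = clicks / impressions, CR = installs / clicks,
# both expressed as percentages.

def ctr(clicks: int, impressions: int) -> float:
    """Click-through rate as a percentage; NaN when no impressions were reported."""
    return clicks / impressions * 100 if impressions else float("nan")

def cr(installs: int, clicks: int) -> float:
    """Click-to-install conversion rate as a percentage."""
    return installs / clicks * 100 if clicks else float("nan")

social_a = {"impressions": 839_101, "clicks": 7_468, "installs": 899}
print(f"CTR: {ctr(social_a['clicks'], social_a['impressions']):.2f}%")  # ~0.89%
print(f"CR:  {cr(social_a['installs'], social_a['clicks']):.2f}%")      # ~12.04%
```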

To help our calculations along, we are going to assume a CTR of one percent for the networks that were not able to deliver any impression metrics. One percent is not exceptionally good, but also not bad. This will lead us to the following table:

| Source | Impressions | Clicks | CTR | Installs | CR |
|---|---|---|---|---|---|
| Organics | 0 | 0 | n/a | 87,358 | n/a |
| Social Network A | 839,101 | 7,468 | 0.89% | 899 | 12.04% |
| Social Network B | 12,856,232 | 112,435 | 0.87% | 2,125 | 1.89% |
| Video Network A | 24,437,616 | 1,656,235 | 6.78% | 165,235 | 9.98% |
| Video Network B | 48,169,945 | 13,577,024 | 28.19% | 49,681 | 0.37% |
| Perf. Network A | 68,517,000 | 685,170 | 1.00% | 715 | 0.10% |
| Perf. Network B | 5,969,579,100 | 59,695,791 | 1.00% | 6,618 | 0.01% |
| Perf. Network C | 633,158,100 | 6,331,581 | 1.00% | 27,800 | 0.44% |
| Perf. Network D | 1,725,367 | 36,266 | 2.10% | 1,038 | 2.86% |
| Total / Average | 6,759,282,461 | 82,101,970 | 5.23% | 341,469 | 3.46% |
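For the performance networks that reported no impressions, the impression figures above are simply back-calculated from the assumed one percent CTR. A quick sketch of that calculation, again using figures from the table:

```python
# Back-calculating implied impressions from an assumed CTR:
# impressions = clicks / CTR. Using Perf. Network B from the table above.
ASSUMED_CTR = 0.01  # 1%, our conservative assumption for non-reporting networks

clicks_perf_b = 59_695_791
implied_impressions = clicks_perf_b / ASSUMED_CTR
print(f"{implied_impressions:,.0f}")  # ~5,969,579,100 implied impressions

# Summed across all sources, this pushes total reach from ~88 million
# to ~6.76 billion impressions, which is what makes the numbers implausible.
```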

As we see here, our campaign’s reach has skyrocketed from 88 million ad impressions (an impressive number to begin with) to 6.7 billion, which would amount to 89.52 percent of the world’s population. A campaign that reached nearly every mobile device on the planet that was active in the last four months, twice in the same month, would have cost a mere... $636,000. These results simply don’t add up.

Let’s calculate the cost of the campaigns and derive the effective CPC and CPM prices (eCPC and eCPM) of the sources involved. That way we can verify which campaign contributors are delivering legitimate traffic. These are the maximum prices a network partner would be able to pay the traffic source, or publisher, for displaying a creative or enticing a user to click on an ad, and still break even.
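The exact spend per source isn’t broken out in the tables above, so as a rough sketch let’s assume the campaign is billed on a cost-per-install basis at a flat CPI of about $2.50, an assumption that is roughly consistent with the spend figures quoted later in this piece. Under that assumption, the eCPC and eCPM calculations look like this:

```python
# Sketch: deriving effective CPC and CPM (break-even prices) from a source's spend.
# Spend is assumed to be installs * CPI; the flat $2.50 CPI is an assumption,
# not a figure reported by any of the networks.
ASSUMED_CPI = 2.50  # USD per install

def effective_prices(installs: int, clicks: int, impressions: int):
    """Return (spend, eCPC, eCPM): break-even price per click and per 1,000 impressions."""
    spend = installs * ASSUMED_CPI
    ecpc = spend / clicks if clicks else None
    ecpm = spend / impressions * 1000 if impressions else None
    return spend, ecpc, ecpm

# Video Network A (reported impressions) vs. Perf. Network B
# (impressions implied by the 1% CTR assumption)
print(effective_prices(165_235, 1_656_235, 24_437_616))    # ~$413k spend, ~$0.25 eCPC, ~$16.90 eCPM
print(effective_prices(6_618, 59_695_791, 5_969_579_100))  # ~$16.5k spend, ~$0.0003 eCPC, ~$0.003 eCPM
```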

To get closer to the actual price for a source’s inventory, you should deduct the margin a network expects to retain as revenue (which could be anywhere between one and 30 percent). If there are several layers of networks and exchanges, each of them will deduct its own margin from the price.
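As a rough illustration (the margins here are assumptions, not any particular network’s take rate), the price that actually reaches the publisher shrinks multiplicatively with every layer:

```python
# Rough illustration: each intermediary layer deducts its own margin,
# so the publisher's share shrinks multiplicatively.
def publisher_price(ecpc: float, margins: list[float]) -> float:
    """Price left for the publisher after each intermediary deducts its margin."""
    price = ecpc
    for margin in margins:
        price *= (1 - margin)
    return price

# e.g. an eCPC of $0.25 passing through a network taking 30% and an exchange taking 15%
print(f"${publisher_price(0.25, [0.30, 0.15]):.3f}")  # ~$0.149 reaches the publisher
```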

Now let’s compare the eCPC and eCPM prices that the social network sources could earn per click or impression with the potential earnings of a publisher running any of the low-converting performance networks. Again, it becomes obvious from just a couple of data points that this makes no sense from a marketer’s point of view. Let’s look at why.

Social Network A: Seems to have found a well-defined audience for the campaign, earning quite a competitive price for its ad impressions and clicks.

Social Network B: The inventory Social Network B delivers on here is still potentially monetizing well. This should be the benchmark for regular performance-based traffic as well. Targeting is likely no more granular than hitting the correct country and language for the offer.

Video Network A: Good CPC performance and exceptional CPM performance; in this example, the channel with the highest cost delivered the best-performing campaign on this side of the funnel. In this case, it is paramount to also check the quality of post-install metrics like retention, sales, and IAPs. A campaign with this level of contact-to-install performance should have a positive ROI.

Video Network B: The combination of exceptionally high CTR and sub-par CR on this campaign entry is reflected in the curious spread between eCPC and eCPM, resulting in a respectable CPM but an unmanageable CPC. It is likely that this campaign is being manipulated by triggering clicks without the actual intention or interaction of the user; in other words, clicks are fired automatically at 50% video view or on the video end card. This produces a much higher CTR and a much lower CR than is typical for a well-targeted video campaign. The end result is an unclear amount of poached organics through a mild amount of click spam.

Performance Networks A/B: This is the phenotype of click spam in action. The reach calculated from our conservatively assumed CTR is staggering and shows no correlation to the campaign spend ($16k reaches 6 billion pairs of eyes, while $413k reaches “only” 24.5 million pairs of eyes). The respective CPC prices of two-tenths of a US cent and two-hundredths of a US cent should make it exceptionally clear that no publisher would run this campaign to monetize their app or website.

Whenever you are being told you should not pay attention to the clicks as you are not paying for them… pay extra attention to the clicks.

Performance Network C: The prices to monetize content with are still sub-par compared to the competition, and the reach is clearly exaggerated. But this is one of the cases where it would surely pay off to dig deeper into more granular data to figure out which individual sources should be kept and which dropped.

Performance Network D: This source is competitive. Optimization should open up potential improvements in all directions, while a slight increase in CPI should positively influence volume. This example is from a network that not only delivers quality traffic consistently, but is also one of the few that is completely transparent about its sources, opening up the names of sub-partners and direct in-app inventory alike.

It pays to look out for the transparent networks with a strong pedigree if you want to genuinely scale your UA campaigns.

Ready to start fighting back against fraud? Education is the first step towards getting rid of it for good. Download our mobile fraud guide to find out what else you need to be looking out for to keep your data clean.

