Preventing mobile user acquisition fraud is not easy.
It’s not just that fraud schemes are a moving target (though they certainly are!). It’s also that any intervention you set up will often, out of necessity, affect the dataset powering UA decisions. A bad algorithm won’t just be ineffectual but could be directly destructive.
One approach to fraud prevention is “device ranking”. We’ve gotten a lot of questions about what it is, how it works, and perhaps most saliently, why we’re taking a stand against it in our product vision.
In this post, we’d like to explain how device ranking works, and why it’s a terrible idea.
What is device ranking?
Essentially, these systems create a long blacklist of mobile devices that were somehow, at some point, implicated in a fraud scheme.
This type of device profiling was very effective on the desktop Web for a long time, which is probably where these platforms take their cues from. Tagging individual devices works well when you’re looking for, say, retail or credit card fraud. In that scenario, an individual fraudster is typically using a specific device, and you can uniquely identify that device over time.
Mobile user acquisition fraud (UA fraud) is a completely different beast. In these schemes, the individual device is irrelevant to the fraud at hand.
There are three main problems to consider with device ranking.
1. Device ranking is generally ineffectual at preventing even the simplest approaches to fraud.
Mobile app developers typically test their apps during development by running a simulator - software that runs the app in a simulated mobile environment on a desktop computer, behaving as it would on a real device.
This same type of software can be used to fake clicks, installs, or even post-install conversions. This is the simplest type of fraud: “click” an ad within a simulator, install the app in the simulator, and cash in the payout. The process can be highly automated and usually runs in a datacenter or on a cloud service.
Here’s the kicker: every time you restart the simulator, the device will be “new”. For starters, it’ll have a brand new, randomly generated device ID. It might even get a randomly assigned device type, deciding on the fly whether to pose as a German-language iPhone 6 or a Japanese Samsung running Android.
So you might be able to catch a simulated device by looking at the device itself… once. Next time it’s used to fake an engagement, your profile won’t match.
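To make this concrete, here’s a minimal Python sketch (entirely hypothetical - not any vendor’s real implementation) of why a device blacklist can’t keep up with a simulator that resets its device ID on every run:

```python
import uuid

# Hypothetical sketch: each simulator restart yields a fresh, randomly
# generated device ID, so a blacklist built from past fraud never
# matches the next fake install.
blacklist = set()

def fake_install():
    """Simulate one fraudulent install from a freshly reset simulator."""
    return str(uuid.uuid4())  # brand-new device ID every time

caught = 0
for _ in range(1000):
    device_id = fake_install()
    if device_id in blacklist:
        caught += 1              # effectively never happens
    blacklist.add(device_id)     # blacklisting after the fact is useless

print(f"{len(blacklist)} distinct devices, {caught} caught")
# → 1000 distinct devices, 0 caught
```

The blacklist grows without bound while catching nothing, because no “device” ever appears twice.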
2. Device ranking is more likely to reject legitimate organic conversions than prevent fraudulent activity.
As we’ve discussed in our study, fraud in mobile user acquisition campaigns takes two primary forms. Either the conversion is completely simulated, as described above, or real devices are straight-up “hijacked” to fake engagement with ads that were never displayed.
This second approach is what we call “click spam”.
In the former case, profiling individual devices is pointless: fraudsters will simply reset the simulator every single time, thus invalidating any profiles. The only profile that these devices could match is that they’re a “new” device. You could blacklist all “new” devices, but then you’d catch far more legitimate devices - such as those that have been recently purchased or reset to factory settings.
In the latter case, the device doesn’t factor into it - it’s only hijacked for a short amount of time. It would be a mistake to give credit for the conversion, but that doesn’t mean the device is illegitimate or that the user’s behavior should be rejected!
If you reject devices simply because a fraudster hijacked them, you’ll underestimate the performance of the channels that truly drove the conversions.
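Here’s a hedged sketch of how this misattribution plays out under last-click attribution (the `attribute` helper and network names are hypothetical, for illustration only): a click-spam operation fires a fabricated click from a real user’s hijacked device, and when that user later installs, the last click wins the credit.

```python
from datetime import datetime, timedelta

def attribute(clicks, install_time, window=timedelta(hours=24)):
    """Return the source of the most recent click within the attribution
    window before the install, or 'organic' if there is none."""
    eligible = [(t, src) for t, src in clicks
                if timedelta(0) <= install_time - t <= window]
    if not eligible:
        return "organic"
    return max(eligible)[1]  # last click wins

install = datetime(2017, 1, 1, 12, 0)
clicks = [
    (install - timedelta(hours=20), "legit_network"),  # ad the user really saw
    (install - timedelta(minutes=5), "spam_network"),  # fabricated "click"
]
print(attribute(clicks, install))  # → spam_network
```

The fraudster steals the credit, but the install itself is perfectly legitimate - rejecting the device would compound the error by discarding a real user.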
Either way, the device just doesn’t have any lasting relationship with the fraudulent activity. So trying to reject devices is hitting the wrong angle altogether. Most likely, it will undermine the legitimate conversion data that marketers use to make decisions.
3. Device ranking is a significant intrusion into every user’s privacy.
Analytics doesn’t have to conflict with respect for users’ privacy, but device profiling requires heavy-handed, extensive monitoring of every single user that a vendor tracks.
In “device ranking” schemes, your activity in any app that you use contributes to a vendor-wide profile - that you aren’t allowed to know about, and which you have no control over. And there’s no way to profile or rank devices in a privacy-conscious way: you can’t know which devices to filter before you’ve already monitored them.
This is especially jarring after the release of iOS 10, where Apple made the Limit Ad Tracking setting even stricter. Most people understand that their activities are aggregated and analyzed by the services they use.
But would you really explain to people outside the industry that your app contributes to a huge monitoring operation profiling each and every single device across many of the apps that are installed? A simple “pub test” proves that it’s not palatable.
So what’s the takeaway?
Looking at individual devices has its merits and use cases. We can imagine, for example, that a big mobile game publisher might want to keep track of specific devices and gauge the likelihood that they’ll be involved in cheating. Similarly, in retail and credit card fraud, the device and the user are frequently one and the same, and it makes sense to look at the device to find the fraudster.
But it’s clear to us that fraud has nothing to do with the individual devices. Trying to prevent fraud by looking at devices is like trying to find an errant sheep by analyzing wool sweaters.
There are solutions to fraud prevention that already work. These include a combination of distribution modeling, filtering for “anonymous” IPs, and clever analysis of device types.
Instead, marketers should be aware of how fraudsters typically operate, which is to say: at scale, and with many “devices” under their belt. These masses of devices are not individually detectable, but the statistics behind specific sub-publishers or campaigns can stand out. An abnormally high proportion of certain device types is one example; another is click-to-install times that are more uniformly distributed than the average. None of these signals relates to individual devices, and there’s no need to broadly profile individual users.
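As a rough illustration of the click-to-install-time idea, here’s a hypothetical sketch (the `suspicion_score` heuristic, the synthetic data, and the thresholds are our own assumptions, not a production detector): legitimate installs cluster shortly after the click, while fabricated clicks from click spam spread almost uniformly across the attribution window.

```python
import random
import statistics

# Assumed 24-hour attribution window, for illustration only.
ATTRIBUTION_WINDOW = 24 * 3600  # seconds

def suspicion_score(install_times):
    """Rough uniformity measure: the median click-to-install time divided
    by half the window. Legit traffic installs fast (score near 0);
    click spam drifts toward the middle of the window (score near 1)."""
    return statistics.median(install_times) / (ATTRIBUTION_WINDOW / 2)

random.seed(42)
# Synthetic legit traffic: most installs within minutes of the click.
legit = [random.expovariate(1 / 600) for _ in range(5000)]
# Synthetic click spam: install times effectively uniform over the window.
spam = [random.uniform(0, ATTRIBUTION_WINDOW) for _ in range(5000)]

print(f"legit score: {suspicion_score(legit):.3f}")  # well below 1
print(f"spam score:  {suspicion_score(spam):.3f}")   # close to 1
```

The point is that the signal lives in the aggregate distribution of a sub-publisher’s traffic, not in any individual device.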
In UA fraud, the devices are not the culprit, nor are they a means of finding the fraudster. Ranking them misses the point entirely.