Building a solution by defining the problem: Mobile fraud theory, part 1
Posted Aug 1, 2018
Fraud is always changing.
At least, that’s what it might feel like. However, in our view, mobile ad fraud is quite limited in what it can do. Though fraudsters are always developing new exploits to steal an app’s marketing budget (or to try to get around our own Fraud Prevention Suite), that doesn’t mean the underlying systems vary. This distinction is crucial when trying to develop a foolproof solution to the problem.
At Adjust, we think about fraud differently. We view exploits (like SDK Spoofing) as ‘methods’ - the ways in which a fraudster can operate in order to commit theft. But, at its roots, mobile ad fraud can only work in one of two established structures, or ‘types’.
This might come across as an unfamiliar mindset, even unnecessary to some. However, in our view, the problem of mobile ad fraud - once defined - can be dealt with much more effectively. Instead of arguing over semantics, the industry as a whole can forge ahead, dealing with fraud in a co-operative way.
So, in this article we categorize ad fraud in new terms, giving some clarity to a tangled issue. Scroll on to read more.
There are only two types of fraud
In every case of fraud, a fraudster spoofs one (or both) of the two types of ‘signal’ used in attribution: ad engagements, like views or clicks, and app activities, like installs, sessions and events.
As such, we’ve created a distinction between types of fraud that spoof ad engagements or the user’s in-app activity.
The former is known as Spoofed Attribution. The latter is called Spoofed Users.
Why make this distinction?
Whenever we discover a new method of fraud, we begin our investigation by figuring out which type of signal it exploits.
For instance, one method of Spoofed Attribution began as ‘Click Spamming’, but as time went on we discovered more advanced methods, ‘Click Injection’ among them. Though both methods steal attribution, they work in different ways.
However, understanding that the two work within the same system made it easier to apply solutions that dealt with both. By basing the two on a single definition - Spoofed Attribution - it became much simpler to reason about fraudsters stealing attribution, without mixing them up with other schemes.
Combining the previously mentioned spoofed signals gives us a matrix:

                          Real user     Spoofed user
  Real ad engagement      Type I        Type III
  Spoofed ad engagement   Type II       Type IV
Everything in Type I is considered genuine traffic, where real users are driven to interact with an app by an advertisement they have actually engaged with.
Type II describes Spoofed Attribution - where a fraudster spoofs ad engagements for real users, with the aim of stealing credit for a user that either organically interacted with the app or was driven by a legitimate advertisement. This type is also known as ‘stolen attribution’ or ‘poaching’.
Types III and IV define Spoofed Users: this type of fraud focuses on simulating a user’s in-app activity. By spoofing installs and events for non-existent users, fraudsters can steal ad budgets aimed at rewarding app-based conversions. ‘Botting’, ‘bots’ and anything related to ‘fake users’ are all associated with this type of fraud.
Currently, fraudsters can easily fake ad engagements for any users they’ve fabricated. So, whenever we see spoofed app activity it’s always coupled with fraudulent engagement data. As such, we’ll group types III and IV together for the sake of simplicity.
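The two-signal matrix can be written down as a simple classifier. This is purely an illustration of the typology described above, not Adjust’s actual detection logic; the function name and labels are our own:

```python
def classify_traffic(engagement_spoofed: bool, user_spoofed: bool) -> str:
    """Map the two spoofable attribution signals onto the fraud types.

    Types III and IV are grouped as 'Spoofed Users', since spoofed
    app activity is, in practice, always paired with spoofed engagements.
    """
    if user_spoofed:
        return "Spoofed Users (Type III/IV)"
    if engagement_spoofed:
        return "Spoofed Attribution (Type II)"
    return "Genuine traffic (Type I)"

print(classify_traffic(False, False))  # Genuine traffic (Type I)
print(classify_traffic(True, False))   # Spoofed Attribution (Type II)
print(classify_traffic(True, True))    # Spoofed Users (Type III/IV)
```

Note that the classifier checks the user signal first: as soon as the user is fake, the engagement signal no longer matters for classification, which is exactly why Types III and IV can be treated as one.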
When discussing fraud, it’s useful to think of these ‘types’ (such as Spoofed Attribution) as the ‘what’ and ‘methods’ (like Click Spam) as the ‘how’.
So, what do these ‘methods’ look like in practice? Let’s split them into their respective types, so we can understand with more clarity what each method does.
Methods of Spoofed Attribution include Click Spam and Click Injection. With Spoofed Users, where the app activity itself is faked, we see Simulators, Device Farms and SDK Spoofing. Mapped onto the matrix above, Click Spam and Click Injection fall under Type II, while Simulators, Device Farms and SDK Spoofing fall under Types III and IV.
Now let’s cover each type in a little more detail.
As we mentioned earlier, spoofed ad engagements started out with simple Click Spamming and its variations, like ‘click stacking’, ‘views as clicks’ or ‘preloading’. These methods function by sending as many clicks as possible to an attribution company, gaining attribution for users by randomly matching device IDs or fingerprints.
Advanced methods (like Click Injection) create fake clicks during the download of an app, claiming attribution with an impossible-to-beat ‘last click’.
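A minimal sketch shows why Click Injection is so effective under a last-click attribution model. This is a hypothetical toy model (the source names, attribution window and data shapes are our own assumptions); real attribution logic is far more involved:

```python
from datetime import datetime, timedelta

def last_click_winner(clicks, install_time, window=timedelta(days=7)):
    """Return the source of the most recent click inside the
    attribution window before the install (last-click model)."""
    eligible = [c for c in clicks
                if timedelta(0) <= install_time - c["time"] <= window]
    if not eligible:
        return "organic"
    return max(eligible, key=lambda c: c["time"])["source"]

install = datetime(2018, 8, 1, 12, 0, 0)
clicks = [
    # A legitimate ad the user actually engaged with, hours earlier.
    {"source": "legitimate_network", "time": install - timedelta(hours=5)},
    # Click Injection: a fake click fired during the app download,
    # seconds before the install - an unbeatable 'last click'.
    {"source": "fraudulent_network", "time": install - timedelta(seconds=10)},
]
print(last_click_winner(clicks, install))  # fraudulent_network
```

The same model also illustrates Click Spam: firing enough random clicks makes it likely that some click lands inside a real user’s attribution window, poaching credit from organic installs.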
The first cases of Spoofed Users we detected involved simulators running Android apps on cloud computing services, pretending to be real users. On iOS, we specifically identified device farms in Southeast Asian countries where real devices and actual humans created non-genuine app activities.
Recently, we’ve seen a much more devious method: SDK Spoofing. This cuts the cost of creating fake user interactions by faking only the requests made from an app to the servers of attribution companies and app publishers, instead of actually running the app. Fraudsters have broken encryption and hashed signatures, which has led to an arms race between fraudsters and researchers.
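The signatures mentioned above can be sketched with a simple HMAC scheme: the SDK signs each request, and the server rejects payloads whose signature doesn’t match. This is a generic illustration of request signing (the secret, field names and scheme are our assumptions, not any attribution provider’s actual mechanism):

```python
import hashlib
import hmac
import json

SECRET = b"app-specific-secret"  # hypothetical secret shared by SDK and server

def sign(payload: dict) -> str:
    """Canonicalize the payload and compute its HMAC-SHA256 signature."""
    body = json.dumps(payload, sort_keys=True).encode()
    return hmac.new(SECRET, body, hashlib.sha256).hexdigest()

def verify(payload: dict, signature: str) -> bool:
    """Constant-time comparison of the expected and received signatures."""
    return hmac.compare_digest(sign(payload), signature)

install = {"device_id": "abc-123", "event": "install", "ts": 1533110400}
sig = sign(install)
assert verify(install, sig)           # genuine request passes
tampered = dict(install, device_id="spoofed-999")
assert not verify(tampered, sig)      # spoofed request is rejected
```

The weak point is the secret itself: anything embedded in a shipped app can eventually be extracted and reused to forge valid-looking requests, which is exactly the arms race the article describes.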
We see that simulators, cheap labor and bots can all be used to create fraudulent app activities. They are all different methods used to commit the same type of fraud.
Why define fraud?
We’ve spent the length of this article defining ad fraud - but to what purpose?
Essentially, as we’ve worked to stamp out each individual method, we’ve identified certain patterns. The more we think about and define what these patterns mean, the easier it becomes to stamp out fraud in the future, and the better prepared we are to educate everyone on what fraud really looks like.
We’ve found that questions such as “What is the method of this fraud?”, “How did they get this user activity into our system?” or “How does Click Spamming really work?” run much deeper, and give us more to work with, than simply asking “Is this fraud?”
If you begin by stating that there is a problem, and then look at the individual methods applied, you can build a much more assertive understanding. Starting with “this is the method”, moving to “this is the countermeasure”, and finally “this is the yes-no filter”, creates a proactive process to fight it.
If you don’t get beyond the methodology, you only have one answer to one question: “yes, this is a problem”. In today’s world, that isn’t enough.
Fraud is constantly changing. However, we know the limitations, and by working towards the source, asking questions, and figuring out that it can only develop within a certain system, we can create stronger solutions for the benefit of everyone.
This article is the first in a series looking at fraud in depth. In part 2, we move beyond definitions by taking on the distinction between detection and prevention. Keep your eyes on the blog or sign up to our mailing list below to get the new post as soon as we publish.