
Detection, prevention and what makes a good fraud fighting filter: Mobile fraud theory, part 2


Moving beyond the types of fraud we discussed in part 1 of our theory series, it’s time we looked at how the industry blurs the distinction between fraud solutions, and exactly how we apply our methodology to create a stronger system.

Prevention can be as murky a topic as fraud itself, and there remains a large amount of confusion between prevention, detection and rejection - often to the detriment of advertisers who want to run campaigns without interference.

Fraud detection and prevention are also two essential elements of Adjust’s Fraud Prevention Suite - so let’s define what both of these are. If you missed part 1 of our series, click here to catch up.

What is fraud prevention?

Now that we have established a common language to talk about fraud we can begin to talk about preventing it. So, first of all, what do we mean when we say prevention?

Adjust defines fraud prevention as rejecting attribution to known methods of fraud. In that, we fundamentally differ from other players in the mobile ad ecosystem.

As mentioned, we believe that fraud prevention should always follow the path of first detecting a type of fraud, then researching the method used before finally creating a logical filter for its unique characteristics.

In contrast to our approach, the industry often uses detection, prevention and rejection interchangeably in the same context. This creates uncertainty - partly from a lack of expertise, and partly from the malicious intent of some players to keep the market in an ongoing state of confusion.

When applying our definition of prevention we see that it’s only attribution companies who are in a position to apply filters effectively. Third-party tools can only show detection metrics after the fact, unless an attribution company allows them to interfere with the attribution.

This can incentivize them to muddy the terminology.

However, other attribution companies do not embrace our definition of prevention either.

Initially, none of the other players were willing to perform any filtering of attribution for fear of cutting into the revenues of their network partners, which would damage relationships and potentially endanger important client referrals. Only after Adjust pioneered the practice and established a standard for communicating rejections to its partners did some competitors follow suit.

Even though some competitors have added filters to their “fraud prevention” suites, there are still central differences between our technology and theirs. At the core, it comes down to two things:

  1. The willingness to constantly research new methods and find their unique markers.
  2. The willingness to assume liability for rejected attributions, removing the client from the discussions with networks.

To understand how Adjust differs from its competitors we need to take a closer look at fraud detection and fraud filters.

Understanding what makes fraud detection different

We’ve just learned that fraud prevention has to begin with fraud detection, as we cannot fight what we cannot see. It’s crucial to define “detection” as the process of statistical analysis of ad engagements and app activities. Please bear in mind that statistical analysis is different from logical analysis, which we’ll cover in the filter section below.

Fraud detection is categorized the same way as fraud itself, in that it can also be split into two distinct types: detection looks either for non-existent engagements or for spoofed users.

For spoofed attribution, detection often focuses on traffic quality markers such as the origin IPs of clicks, conversion rates and the overall click-to-install time (CTIT) distribution. Detection for spoofed users looks at user behavior: do users purchase or retain? Do their interactions follow the pattern of a normal (human) user?
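
To make the statistical nature of detection concrete, here’s a minimal Python sketch of a CTIT check - the data, the 10-second cutoff and the flagging logic are invented for illustration and are not Adjust’s actual detection rules:

```python
from statistics import median

# Hypothetical (click_timestamp, install_timestamp) pairs, in seconds.
# A real pipeline would read these from attribution logs.
engagements = [
    (1000, 1003), (2000, 2004), (3000, 3002),  # suspiciously short CTITs
    (4000, 4900), (5000, 6200),                # plausible, human-looking CTITs
]

ctits = [install - click for click, install in engagements]

# A heavy concentration of near-zero click-to-install times is a classic
# marker of click injection; a long, flat tail suggests click spamming.
short = sum(1 for ctit in ctits if ctit < 10)
print(f"median CTIT: {median(ctits)}s, share under 10s: {short / len(ctits):.0%}")
```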

Detection is a great way to identify the presence of a type of fraud and to start researching the method used. This is because detection isn’t concerned with how the fraud got into the data set in the first place.

A simple example would be a client looking at their dashboard and seeing a conversion rate of 0.05% from Network A. They do not know which engagements aren’t real, but they know the chances of fraud being the cause are high.

Another example would be an in-house BI team looking at purchase rates for users from different publishers. They don’t know which exact users aren’t real but they can clearly spot publishers with abnormally low LTV.
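
Both examples boil down to comparing a per-source metric against a sanity threshold. A minimal sketch, assuming hypothetical traffic numbers and an illustrative cutoff:

```python
# Hypothetical per-network totals; the 0.1% cutoff is illustrative only.
networks = {
    "Network A": {"clicks": 2_000_000, "installs": 1_000},
    "Network B": {"clicks": 50_000, "installs": 1_200},
}

for name, stats in networks.items():
    rate = stats["installs"] / stats["clicks"]
    # 0.05% conversion on two million clicks points strongly at click spamming.
    verdict = "suspicious" if rate < 0.001 else "ok"
    print(f"{name}: conversion rate {rate:.2%} ({verdict})")
```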

Ranking devices with a machine learning algorithm is also a form of fraud detection. A neural network can cluster users by a large number of attributes, allowing it to detect patterns that humans would not easily pick up. Scoring users can tell us whether users from certain partners show abnormal behavior. How those users came to be tracked as real in the first place is not yet addressed.
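
As a rough illustration of this kind of scoring, the sketch below uses scikit-learn’s IsolationForest as a lightweight stand-in for the neural network described above; the feature vectors are invented:

```python
from sklearn.ensemble import IsolationForest  # stand-in for a heavier model

# Hypothetical per-device features: [sessions per day, purchases, days retained].
devices = [
    [5.0, 2, 30], [4.0, 1, 21], [6.0, 3, 45],   # plausible human behavior
    [40.0, 0, 1], [38.0, 0, 1], [42.0, 0, 1],   # bot-like: hyperactive, no retention
]

# fit_predict returns -1 for outliers and 1 for inliers.
scores = IsolationForest(contamination=0.5, random_state=0).fit_predict(devices)
for features, score in zip(devices, scores):
    print(features, "anomalous" if score == -1 else "normal")
```

Note that the model only flags abnormal behavior - it says nothing about how those devices were attributed in the first place, which is exactly the limitation described above.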

These examples show why Adjust sees detection as a necessary first step of fraud prevention, and not a cure-all solution. In fact, we believe that promoting standalone detection can have severe negative effects on the ecosystem.

The problem of detection as a standalone solution

Imagine your credit card company telling you about a fraudulent cash withdrawal on the other side of the world. But instead of doing anything to stop it, they merely send you an update for every further transaction they classify as fraud. Blocking the card is left to you.

Advertisers that use fraud detection tools are in a very similar position. Receiving “after the fact” alerts for abuse creates an ecosystem prone to knee-jerk reactions, and also fosters an unfortunate acceptance of “base-level fraud”.

At the end of every month, advertisers have to dig through reports for every network partner and their sub-publishers, flagging those that crossed an arbitrary threshold of detected fraud. Then they have to convince the network of those numbers and agree on refunds or discounts for future campaigns. This process is cumbersome and highly unpopular with both sides. The only other way advertisers can act on the information is to blacklist certain sub-publishers or to stop working with a partner altogether. In the end, advertisers almost always come to accept “a few” percent of fraud before resorting to either option, given how arbitrary both are.

It goes without saying that the only “acceptable” amount of fraud should be 0%. Currently, if you brought this up at an event you’d be laughed at, but that’s only because sub-par fraud measures are so rampant in our industry.

For networks this process is equally frustrating, as their publishers often get paid out long before advertisers try to reclaim money for fraudulent traffic. It also doesn’t give networks a chance to programmatically remove malicious publishers, as aggregated reports are hard to consume and lack actionable data.

The only measure that can actually prevent payouts to fraudulent publishers is active filters that communicate their rejections to networks and advertisers in real time. Let’s look at what these are in more detail.
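
To show what such a real-time rejection notice might carry, here is a sketch in Python - the field names are hypothetical, not Adjust’s actual callback format:

```python
import json
from datetime import datetime, timezone

def build_rejection_callback(click_id: str, partner: str, reason: str) -> str:
    """Assemble a hypothetical real-time rejection notice for a network.

    Field names are illustrative, not a real attribution provider's schema.
    """
    return json.dumps({
        "click_id": click_id,
        "partner": partner,
        "rejection_reason": reason,  # e.g. "ctit_anomaly", "datacenter_click_ip"
        "rejected_at": datetime.now(timezone.utc).isoformat(),
    })

print(build_rejection_callback("abc123", "Network A", "ctit_anomaly"))
```

Because the notice arrives per engagement and in real time, a network can act on it programmatically - for instance, by withholding payout for that click - rather than digging through a monthly report.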

What makes a good fraud filter?

Without filtering out fraudulent data, legitimate traffic sources can lose attributions, distorting the client’s data set and devaluing downstream metrics. As such, filtering attributions to reject fraud should be the sole outcome of any fraud prevention system.

However, not all filters are created equal. Many solutions betray a lack of understanding of fraud methods through a poor set of filtering criteria. Moving beyond the black-box mentality requires an overhaul of what most fraud filters set out to do.

Below are the four pillars that form the foundation of Adjust’s filtering system:

  • A low false positive rate
  • A low false negative rate
  • Logical conditions
  • Transparent reasoning

We define a fraud filter as a set of logical conditions that produces no false positives or negatives, or at least a very low rate of them. The logical conditions need to be explainable and transparent, meaning the question “why exactly was this attribution rejected?” has to have a clear, understandable and relatable answer. A good filter should also be based on logical facts and rely on mechanisms that the fraudster does not control and therefore cannot circumvent.
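
A minimal sketch of such a filter, with two illustrative conditions - the thresholds, the datacenter lookup and the rejection messages are assumptions for the example, not Adjust’s production rules:

```python
from typing import Optional

# Illustrative datacenter prefix list (uses an RFC 5737 TEST-NET range).
DATACENTER_PREFIXES = ("203.0.113.",)

def is_datacenter_ip(ip: str) -> bool:
    """Toy lookup; a real system would use a maintained IP-intelligence database."""
    return ip.startswith(DATACENTER_PREFIXES)

def check_attribution(ctit_seconds: float, click_ip: str, install_ip: str) -> Optional[str]:
    """Return a human-readable rejection reason, or None to attribute normally."""
    # Click injection marker: the "click" fires after the install has already
    # begun, so the click-to-install time collapses to near zero.
    if ctit_seconds < 10:
        return f"CTIT of {ctit_seconds:.0f}s is below the plausible minimum"
    # Spoofed-click marker: the click was reported from a datacenter IP that
    # doesn't match the device that later installed the app.
    if click_ip != install_ip and is_datacenter_ip(click_ip):
        return f"click originated from datacenter IP {click_ip}"
    return None  # no condition matched; attribute as usual

print(check_attribution(2, "203.0.113.7", "198.51.100.4"))
```

The conditions hinge on signals the fraudster doesn’t control - the timing of a genuine install, the network a click actually came from - and every rejection returns a reason a human can read back.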

Adjust’s view on filtering fraud

Since we reject ad fraud, we have to be willing to assume responsibility for every attribution stopped by our system, and to defend each and every one to our partners.

When fraud is stopped automatically, our clients no longer have to argue with networks over it - but we take on the burden of getting it right, every time.

Fraud prevention should not just be a marketing ploy, or a means to muddy the water - it’s a serious responsibility. If done correctly, anti-fraud solutions will help to advance the entire mobile ad ecosystem. If done without the proper attention to detail and the research necessary, it will end up as the snake oil of our industry.

This is the second part of our series taking a fresh perspective on ad fraud. If you’d like to read part 1, click here. Part 3 covers a wholly new topic - whether machine learning can match human filtering logic.
