A common sense approach to mobile ad fraud

Samuel Harries

Aug 1, 2019

With reports estimating annual losses from ad fraud at anywhere between $6.5 and $19 billion, it’s essential for marketers to learn the red flags that indicate possible attacks. This vigilance comes at no extra cost and will help you deduce which tools are needed to protect your marketing spend. In this webinar, Andreas Naumann, Head of Fraud at Adjust, is joined by Matt Fisher, Product Manager, Analytics & Attribution at Liftoff, and Dennis Mink, VP Marketing at Liftoff. Drawing on their collective experience in mobile ad fraud detection and prevention, they outline how marketers can fight back with a common sense approach.

Webinar highlights: Combatting fraudsters with common sense

From focusing on CTIT (Click to Install Time) to asking the right questions, this webinar provides several insights on how your awareness can limit a fraudster’s opportunities. Here are three essential highlights.

Monitor CTIT to detect signs of Click Spam and Click Injection

When looking at CTIT, there are two extremes that can indicate an attack. As Matt points out, “Low CTITs are indicative of click injection [because] they are firing up a click after the install has been completed to claim an attribution. On the other end, really long CTITs are indicative of click spam, where fraudsters send thousands of clicks in order to steal organics or another source that drives legitimate traffic.”

While these metrics are great for detecting an issue, Andreas explains why you wouldn’t use them to filter click fraud: “There [are] definitely going to be installs with a conversion time longer than an hour, it will happen quite often. It's just that the statistical relevance needs to be there.” He adds that you should see “80% converting in under an hour for apps with reasonable sizes, but if you have 80% of your installs outside of that hour, it’s a dead giveaway that something is completely wrong.”

To learn more about click spam and click injection from Andreas, click here. You can also discover how our Fraud Prevention Suite protects your app from click fraud, here.

Pre and post bid best practices

Matt defines best practices when trying to stay vigilant against mobile ad fraud pre and post bid. Pre-bid, he encourages marketers to “Ignore any requests that are missing device IDs, App Store ID, or otherwise malformed or truncated BRs. Despite sounding like a no-brainer, you’d be surprised at how many people aren’t doing this.” He explains that this makes up around 13% of “no bids” filtered from Liftoff’s systems: “We filter them out from the very top of the funnel so don’t even consider them as a legitimate bid request. It doesn’t even process on our systems.”

On the post-bid side, Matt explains the importance of optimizing for CPA: “If you are optimizing for downstream events, it’s more difficult to fake those actions and events. You are automatically limiting your spend on what would otherwise be fraudulent traffic.” Looking at the right granularity in your performance metrics also gives you a greater chance of limiting fraudulent activity: “A lot of people look at entire publishers or app IDs, and what we’ve found is that it’s important to look at exchange-publisher ad format combinations because there are different vulnerabilities per ad format and exchange that fraudsters can exploit.”

Buying and IO terms

When signing an insertion order or buying from a network, it’s vital that you ask questions that determine their susceptibility to fraud. Transparency is key, and it’s important to define positives rather than negatives. For example, as Andreas points out, “If you say you don't want click spamming, spoofing or click injection traffic, you open yourself up to abuse by not listing every possible fraud type. While signing this paper, you might not know them all.” As a solution, Andreas suggests definitions focused on positive outcomes: “I would require my advertisement to be seen by real people, stating that I want ads to be displayed for this long, and this much must be visible.”

To learn all the insights this webinar has to offer, watch the video or read the full transcript below. Our mobile marketing glossary also has definitions for several fraud-related terms, such as click spam, click injection, and SDK spoofing.

Full transcript

All right, awesome. Why don't we get rolling? We've got a great discussion today on mobile ad fraud. Both Andreas and Matt are going to sort of dig into, like, what's going on, what are we seeing in the industry, and they're going to be talking about good sort of strategies and tactics, most of which don't require any tools, just understanding what to look for, how you can go about identifying fraud, and steps you can take to mitigate fraud. So with that being said, Andreas, let me turn it over to you.

Actually, you know what, I'm sorry. My one job here, right, to introduce you guys. For everybody that's on the webinar, please, ask questions throughout. So within the go to webinar control panel, there is an area where it says questions, post your questions there. I will be fielding questions throughout. So if you have a question, I'll throw it out to Andreas or Matt as we're going along. We don't need to save them all up. And so I would encourage you to do that. And I see...Sam, how are you? Ariel, Uval [SP], nice to meet you guys. Ariel does that [inaudible 00:00:58]. So, thanks for the comments Ariel. Okay, with that being said, Andreas, I'll turn it over to you.

Sure. So, no introductions? We jumped right in? Perfect. Love it. So one of the things that I usually start with is this whole question of what actually is fraud. And there's a lot of buzzwords going around, and there's a lot of differences in nomenclature. And many people use different words that actually mean the same thing. And I usually don't like to just list things by name without explaining them a whole lot. But one thing is for certain, and that is something that I like starting with: fraud is everything except what you as a buyer actually want to buy. If you specify you want to buy audience engagement through advertisement and you provide the creatives to be used for that, then the important thing is that the advertisement meets with customers, and that those customers get enticed by that advertisement, and they take the desired action. And all the action taken and all the performance metrics that you can read out after the install happens, all of those are not necessarily the only things that are important, because you need to make sure that advertisement drove those decisions; otherwise, you will end up buying the fraud. Can we jump to the next slide? Awesome.

So, usually, what I do is talk about technical fraud. However, there's also this thing called compliance fraud, or non-compliance. So I want to make sure that we talk about both of them, non-compliance briefly, because in my function at Adjust, this is something that I don't get in touch with a whole lot. We usually don't know the contracts between buyer and seller, meaning we don't know about the contractual rules, and it is those rules that are being breached in non-compliance. It's more or less a contract breach that is going on there, obviously fraud, but something that I can't speak to a whole lot. On the other hand, we have technical fraud, and that is what we can help with quite a lot, and that is where I know best what is going on. And technical fraud, to us, is anything that tries to manipulate how we collect data or which data we collect, and tries to either fake the whole user journey or the ad engagement part of the user journey. And, if we want to go down technical fraud, then there's more differentiation between the different fraud types.

And what we start with obviously is the good part where everything is real, so we have a real ad engagement and we have real in-app engagement or in-app activity. So we actually have a legit advertisement leading to users taking action. Then we have the part where the user action is actually real. So we have a real user, a real device, and they take a real action, but they have not been convinced to do so by advertisement. So the ad engagement has been spoofed or faked, meaning that the user didn't react to advertisement. They took the action that they took because of some other form of convincing. It might be a TV ad, it might be word of mouth, it might be billboards, whatever, but it hasn't been digital mobile advertisement.

Numbers three and four are a bit more complicated. So, here, the in-app activity has been spoofed, and that can be the install or that can also be post-install. The idea here is that we might or might not have a real beginning of the journey. So, if we look at number four, we have a real beginning, a real user actually engaging with advertisements. But after that engagement, at some point, spoofed data enters the scene, or the user uses a bot that manipulates the app, something like that. Or in the worst case, which is sector three, everything is completely fabricated, completely spoofed, where we have fraud that basically creates performance and ad engagement out of thin air. And that is the most malicious out there, because if everything is fabricated, then the sky is the limit, and not even that really; everything else is a little more manageable.

Can we jump one ahead, please? When we look at the technical fraud schemes, there has been quite some evolution in the last couple of years. Especially in the last three years, things have gotten quite a lot more sophisticated. And when we're talking about spoofed ad engagements, usually click spam is the biggest one out there. It has been around since 1997, at the very least, and it has lived on the internet in different iterations, but it has never really died. So it started off as a CPC fraud scheme where clicks were directly paid, so making more clicks made more money. But, later on, when performance marketing hit the scene, people realized that spamming as many clicks onto as many unsuspecting users as possible nets you a random chance of conversion, of those users taking action afterwards. And if the numbers game gets big enough, then that becomes quite a good payout over time. And, like I said, this has been around for 20 years minimum, and the real…

So we [inaudible 00:07:27] from click spam to click injections. And the genius bit here is that click injections are not cashing in on a random chance anymore; click injections basically net close to a 100% conversion and success rate, and that is because the fraudsters do not fabricate a click anymore in the hopes that somebody will convert, but the click is being fabricated after the user already made the choice to download and use an app.

So there's several exploits, or actually it's two exploits, on the Android operating system that allow that. One being what is called the content provider, the other one called...I'm blanking out here, that's terrible; the other one is the package-added broadcast that is being exploited. And, the details, we don't have to go very deep, but the thing is the malicious app on a user's device gets notified when a user makes the decision to download an app. So when they click the install button in Google Play, or when the app finally was downloaded onto the device, those malicious apps can inject the click right then and there, and attribution providers might fall for that one because usually last click attribution wins. And the idea is that those clicks come in before the first opening of the app, before the first session, and therefore lead to attribution.

Andreas, I thought Google closed that exploit?

It is closed on the newest versions of the Android operating system, but it is still open on the older ones because there are no retroactive updates for those; especially older non-Google devices don't see a lot of updates after about the first year, 18 months. However, Google provided what is called the Google Play Install Referrer API, which is a mouthful, but it's a very important tool to verify that an install happened on Google Play, and to verify in which session that happened. That's the referrer part. And it also offers additional timestamps to look at, and those would be the install begin time and install finished time. And having those timestamps, we can make sure that we do not attribute to clicks injected after the user clicked the install button in Google Play.
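
The timestamp check Andreas describes can be sketched as one small rule: a click recorded after the Play Store's install-begin timestamp cannot have driven the install, so it should never win attribution. This is a minimal illustration in Python; the field names are hypothetical, not Adjust's actual schema.

```python
from dataclasses import dataclass

@dataclass
class Attribution:
    click_ts: float           # when the claimed ad click was recorded (epoch seconds)
    install_begin_ts: float   # installBeginTimestampSeconds from the Play referrer API
    install_finish_ts: float  # installCompleteTimestampSeconds

def is_injected_click(attr: Attribution) -> bool:
    """A click that arrives after the user already pressed 'Install' in
    Google Play cannot have caused the install -- treat it as injected
    and exclude it from last-click attribution."""
    return attr.click_ts >= attr.install_begin_ts
```

A click landing between install-begin and the first app open is exactly the click-injection window described above; with these timestamps it is rejected mechanically.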

When we jump to the next bits, here we have the evolution from what we call fake installs. But usually this is also referred to as bot installs. Spoofed installs also gets thrown around for both things. This is our internal nomenclature, so this might be off from what you have read otherwise. But the idea here is that fake installs in the past have basically been one of two things: either installs from a device farm, where an actual human operates several devices with the intent of stealing advertisement budgets, or the other one, which is a lot more scalable, creating installs on fake devices and emulations. So what we have there is somebody using software in a server environment or in a virtualized environment and having that software emulate dozens and dozens, and hundreds and thousands of devices. And the beauty of that emulation is that usually the emulation software will have the data available for thousands of unique devices and device types, and then all the fraudster needs to do is generate accounts and generate advertising IDs, and then you can make those devices download apps, install apps, use apps. And that has been quite scalable, and it was very well developed, but then detection methods obviously came into play. So this became less and less profitable.

And then spoofed installs, as we call them, came onto the scene. And what we have here is basically the spoofing of the communication between the client and the server. Usually an MMP or AAP, you know, depending on which nomenclature you subscribe to, would have an SDK in an app, and that would be on a device. And this SDK would send specific data points home to the SDK owner; so in the case of Adjust, they would go to our servers. And there's usually one or two other measurement SDKs in an app that would also send data home, and the fraudsters would go and intercept that data that is being sent, analyze it, figure out how it is put together, and then start with what are called replay attacks.

So they would use part of that data that they have recorded before, and they would basically figure out which data is being sent, which is technically not that hard because everything is sent in one URL with several parameters attached to it. And you have to figure out what each parameter is, and in some cases, this is increasingly easy. You will have the manufacturer name in a parameter, and you will have the device name and the operating system. All of that is pretty much human readable. And then there's a couple that are harder to figure out. But what you would be doing as an attacker is take the static parts of the URL, keep those, find out what the dynamic parts are, what is set with actual device data, and then figure out what device data you can put in there, and how you can manipulate it, and then you can create replay attacks. And that is even more scalable and less costly than the other types of fraud that we've discussed before.

So this has been taking off quite a bit. And the only good way to defend against that, as we found, is to make sure that the SDK is a lot more secure and data can't be injected. We ourselves went for a cryptographic solution that signs every request that our SDK sends, and therefore we can figure out if this is the correct thing actually happening, if it's the right app with our SDK, and if it's running on a real device and sending data. And, yeah, that's how we defend against that, but this is one of the bigger risks.
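
The signing defense can be illustrated with a standard HMAC over the SDK payload. This is a generic sketch, not Adjust's actual scheme; the secret and payload format are assumptions, and in practice the signed payload would also carry a timestamp or nonce so that a captured request can't simply be replayed verbatim.

```python
import hashlib
import hmac

# Hypothetical shared secret, provisioned into the SDK at build time.
SECRET = b"app-specific-secret"

def sign_payload(payload: bytes, secret: bytes = SECRET) -> str:
    """Sign an SDK request body so the server can tell genuine SDK traffic
    from fabricated requests. A real scheme would include a timestamp or
    nonce inside `payload` to defeat straight replays of captured data."""
    return hmac.new(secret, payload, hashlib.sha256).hexdigest()

def verify_payload(payload: bytes, signature: str, secret: bytes = SECRET) -> bool:
    """Server side: recompute the signature and compare in constant time."""
    return hmac.compare_digest(sign_payload(payload, secret), signature)
```

Any attacker who tampers with a parameter without knowing the secret produces a signature mismatch, which is what lets the server reject spoofed device data.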

Can we jump ahead? Awesome. So I just brought a couple of numbers. I usually don't do this. I'm usually the one that says, "Please don't go around showing people numbers like this," because people always get the sense of, "Okay, SDK spoofing is the biggest risk in gaming, 30% of games companies see this. This is a huge problem." And this is not necessarily how fraud works. Whenever you see numbers that say X percent of fraud in a country, in a vertical, in a timeframe, whatever, that really just means what people have found and what the status quo was in that moment. That doesn't have any use as a predictor for the future, because this is not happening because the fraudsters play Clue. The fraudsters don't say, "I'm going to do SDK spoofing in India. And I'm going to target ecommerce apps." This is not how it works. They go for: who gives me the biggest budget, who's not looking at what type of traffic they're getting, and where can I make the most money without a lot of opportunity costs. And therefore, those numbers will fluctuate quite a bit depending on where people spend money carelessly or out of necessity. But where there are big budgets and less control, the fraudsters will make more money, and that is what they will target. They will not target countries, or verticals, or make decisions in that way.

However, I guess it's still interesting to see what has been happening in the past so that we can retrospectively look at trends. But it has no predictive quality.

Andreas, point well taken. Methods are always changing, and shifting, and going after budgets. And, you know, I would imagine there might even be a seasonal impact to the types of fraud that we're seeing and the app categories that are hit harder at different times of year. Have you ever actually taken a look at...? So I understand that this is a snapshot, but have you ever actually trended this? You know, like, over the last year, you know, month over month, like, have you seen it shifting, dollars shifting from one type of fraud or app category to another over time?

I mean, what you can see very well over time is how new fraud schemes start out, probing the waters, and then at some point, when somebody really cracks the code, they take off. That is visible, but since there's not a lot of...nothing stayed the same for a longer amount of time. I've been doing this since 2016 with Adjust, and we have continuously evolved our fraud prevention suite, and we have added more and more filters and more and more capabilities, and therefore the numbers shift all the time, because the market shifts and our tooling shifts. So, seeing a real trend out of that short term, like, over the course of a month, yes; longer term, not really.

Yeah. Okay. By the way, audience if you have questions, please ask. Actually, I do see one question, if I could throw this out there. Let's see, this is coming from James. How much of this fraud regardless of method ultimately is detected and gets refunded to advertisers, and how much of it goes without notice?

That is a hard thing to answer. First off, those numbers are from Adjust's fraud prevention suite, which means all of this has been prevented and none of this has been paid to the fraudsters, because our approach is preventative. So our fraud detection happens before attribution, meaning everything that is deemed fraudulent never becomes a deduction from a budget. But this also means that this is only happening, or this is only recorded, for clients that actually use our product. So we don't have detection on the clients that don't use it. That would be quite expensive, to be honest. So we have a blind spot there. We're not seeing everything. We're just seeing everything from what we look at, and that is about 60% to 62% of the traffic running through our system every day that goes through the fraud prevention suite. So this is already a big chunk of everything we see. But we don't see what we don't see, so I can't speak to that. And naturally, with other numbers that you might see in the market, they might actually be detection numbers across whole traffic, but then the question really is how much marketers can claw back from what they now know is fraud. That is usually a pretty hard thing to do, especially if time has passed.

Great. So Andreas, I think, did a great job of setting up and defining what fraud is and the different types of fraud that we're seeing in the marketplace. Let me switch gears a little bit and talk about how you can actually start to prevent fraud. The way we think about addressing fraud internally at Liftoff is as a two-stage process. So we have pre-bid and post-bid fraud prevention. And, you know, a lot of people are more familiar with the post-bid realm, where we're evaluating performance metrics and flagging anything that looks too good to be true. And I really wanted to emphasize the importance of the pre-bid side, because there's a lot of low-hanging fruit there. And the idea is that if you can cut out more fraud at the very top of the funnel, you prevent wasted spend on fraud and you have less fraud downstream.

So next slide, please. So to kind of give a little more color to that, I will give you some examples of what we look at on the pre-bid side, and I encourage everyone to take these learnings and replicate them in house as well. So, right off the bat, you absolutely need to ignore any bid requests that are missing a device ID or app store ID, or that are otherwise malformed or truncated. It sounds like a very obvious no-brainer, but you'd be surprised how many people don't do this. So, right off the bat, you know, ignore anything that is missing information or truncated. This represents about 13% of all of the no-bids that we filter out from our systems.
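
Matt's first pre-bid rule can be expressed as a simple gate in front of the bidder. This is a sketch with illustrative field names, not Liftoff's actual schema; the 36-character check assumes UUID-style advertising IDs (IDFA/GAID).

```python
def is_malformed(bid_request: dict) -> bool:
    """Pre-bid gate: reject any request missing a device ID or app store ID,
    or carrying an obviously truncated identifier. Field names here are
    illustrative, not a real exchange schema."""
    device_id = bid_request.get("device_id", "")
    store_id = bid_request.get("app_store_id", "")
    if not device_id or not store_id:
        return True  # missing required identifiers
    # IDFA/GAID are 36-character UUID strings; anything shorter looks truncated.
    if len(device_id) < 36:
        return True
    return False
```

Requests failing this gate never reach bid evaluation at all, which matches the "cut them out from the very top of the funnel" approach described here.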

Meaning...and so, Matt, so when you say filter out from our systems, you're saying we just will never bid, we'll just ignore these bid requests coming from ad exchanges if there's no device ID, or app store ID, or it’s a truncated ID. We're just like, "No, we ignore it entirely. Most likely, it's fraudulent."

Absolutely. We filter them out from the very top of the funnel, so we don't even consider them as a legitimate bid request. It doesn't even get processed in our system, so we don't even, you know, prepare to place a bid and spend on this. So, actually, we cut them out from the very top.

So, additionally, what you want to look at is any, like, what we're calling low-confidence or suspicious bid requests. So these are bid requests coming from devices that have too many publisher apps or send too many bid requests per day. You know, there's a realm of reasonable activity, both in the number of apps that we're seeing and the number of bid requests. And if that number gets too high, you know, it is a sign of spoofed or faked activity, that fraudsters are using servers or illegitimate sources to create this traffic.

Additionally, we look at geo movement of devices. You know, there's no reason why, for example, a device should be seen in multiple countries in a single day. We also look at the association of devices with anonymous IPs. These are Tor exit nodes, VPN servers, anything that obfuscates the true identity of the device. You know, it's pretty intuitive that if the device needs to obfuscate its identity, then there's really strong reason to be suspicious of that traffic.
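
The geo-movement heuristic boils down to a per-device counter over a time window. A toy sketch, with an illustrative threshold of two countries per day (the real limit and data source are not disclosed in the webinar):

```python
from collections import defaultdict

class GeoMonitor:
    """Flag devices seen in an implausible number of countries per day.
    The threshold is illustrative, not a vendor's actual setting."""

    def __init__(self, max_countries_per_day: int = 2):
        self.max_countries = max_countries_per_day
        self.seen = defaultdict(set)  # (device_id, day) -> set of country codes

    def observe(self, device_id: str, day: str, country: str) -> bool:
        """Record one sighting; return True once the device exceeds the
        plausible number of countries for that day."""
        key = (device_id, day)
        self.seen[key].add(country)
        return len(self.seen[key]) > self.max_countries
```

The same counting pattern works for "too many publisher apps" or "too many bid requests per day": accumulate per device, then flag anything outside the realm of reasonable activity.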

Next slide, please. So, you know, we also look at the metadata of these bid requests. We look at app versions. You know, we can filter out an app version that is too old or does not exist. Similarly with the SDK version, or the LAT rate, which is limited ad tracking; you know, possibly we don't bid on limited ad tracking devices. We also look at, you know, distinct ratios of the IP and other aspects of the bid request. So, for example, we can look at the ISP that the bid request is coming from. And for a given country, ISPs have different relative rates of prevalence. And if we see too many bids come in that break the distribution that we expect, that's something that we can block, not bid on, and filter out as well.
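
The ISP-distribution check can be sketched as a share comparison: if an ISP's share of observed bid requests far exceeds its expected prevalence in that country, flag it for blocking. The tolerance factor and data shapes below are illustrative assumptions, not Liftoff's actual thresholds.

```python
def isp_share_anomalies(observed_counts: dict, expected_share: dict,
                        tolerance: float = 3.0) -> set:
    """Flag ISPs whose share of bid requests exceeds `tolerance` times
    their expected share for the country. `observed_counts` maps ISP name
    to request count; `expected_share` maps ISP name to expected fraction.
    An ISP absent from `expected_share` is treated as expected 0%, so any
    volume from it is flagged."""
    total = sum(observed_counts.values())
    flagged = set()
    for isp, count in observed_counts.items():
        share = count / total
        if share > tolerance * expected_share.get(isp, 0.0):
            flagged.add(isp)
    return flagged
```

The same "does the observed distribution break expectations?" pattern applies to app versions and SDK versions mentioned above: anything too old, nonexistent, or wildly over-represented gets filtered pre-bid.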

So, you know, I gave some examples, and you can see on the graph on the right there a time series of the plethora of reasons why we don't bid, or why we no-bid, on certain bid requests. As you can see, the landscape is constantly changing. So, for example, down at the bottom there, you know, too many bid requests and too many pubs accounted for about 5% to 10% of all of our no-bids, and around October that kind of fell off. And, you know, that's indicative of, you know, the landscape changing and the type of traffic that we're seeing. Additionally, you know, as we clean up and cull our sources, we will see less or, you know, varying amounts of this suspicious traffic coming through.

So the takeaway here is, you know, get aggressive, get creative, and make sure that you have really good coverage across the board and cut out as much suspicious and fraudulent traffic from the very beginning as possible.

Just to interrupt, one sec, Matt. I just want to make sure someone posted, but they're not sure if they're the only ones experiencing the sound keeps cutting off for them. But if the sound is cutting off for you, can you just like post a question within the question and say, "Yes," so I'll know if it's a widespread problem. No one else has said anything. Okay. John Davis, thank you, sound is good. Appreciate that. No issues, Justin. Okay. Yeah, we're good. Let's keep going. And I do...yeah, no problem. Sounds good. So, Caitlin, it looks like you're the only one with the sound issue. And then also, just note we do have a few questions that I'll be weaving in shortly.

So on the post-bid side, the number one thing you can do to minimize fraud is to optimize for CPA or anything that is a downstream event. The idea here is that if you are optimizing for downstream events, it is more difficult to fake those actions and those events, and you should automatically be minimizing your spend on what would otherwise be fraudulent traffic. The other thing you want to look at is anything that has, you know, too-good-to-be-true performance metrics. Here, it's actually really important to look at the right granularity. When you're evaluating these performance metrics and blacklisting, a lot of people look at entire publishers or, you know, app IDs. And what we found is it's actually really important to look at exchange-publisher-ad format combinations, because there are different vulnerabilities per ad format and exchange that fraudsters can exploit. And, additionally, you want to be as precise with your blacklist as possible, because you want to cut out only the bad part of the traffic, or the affected traffic, and preserve the spend and reach you can get by leaving what is otherwise good.

So, Matt, just to clarify, so you're saying that to identify, like, too good to be true performance, you need to look at the exchange, the publisher and the ad format. So can you say like, for example…can you just give us an example?

Yeah, absolutely. So instead of just looking at, let's say, you know, “New York Times”...instead of evaluating the performance metrics across all ads shown on the “New York Times” app, you want to look at New York Times on purchase [SP]. So “New York Times,” DoubleClick, for interstitial, and look at performance metrics just for the combination of those three. And I'll give you a few examples of metrics that we look at. So we need to look for extremely low conversion rates, less than 0.1%, you know, really high click-through rates, and abnormal CTITs, so that's click-to-install time. So that's the delay between when a user clicks on an ad and when the download is registered.
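
Matt's granularity point translates directly into the grouping key used for blacklisting: aggregate by (exchange, publisher, ad format) rather than by publisher alone, so a single bad format on one exchange can be cut without losing the rest. A sketch, using the 0.1% conversion-rate threshold mentioned above and an illustrative minimum click volume:

```python
from collections import defaultdict

def metrics_by_combo(events):
    """Aggregate clicks and installs per (exchange, publisher, ad_format),
    so blacklisting can target only the affected slice of traffic."""
    stats = defaultdict(lambda: {"clicks": 0, "installs": 0})
    for e in events:  # each event: dict with exchange, publisher, ad_format, clicks, installs
        key = (e["exchange"], e["publisher"], e["ad_format"])
        stats[key]["clicks"] += e["clicks"]
        stats[key]["installs"] += e["installs"]
    return stats

def suspicious_combos(stats, min_clicks=1000, max_cvr=0.001):
    """Flag combinations converting below 0.1% (the 'extremely low'
    threshold above), once enough clicks have accrued for the rate to
    be statistically meaningful. min_clicks is an illustrative floor."""
    return {combo for combo, v in stats.items()
            if v["clicks"] >= min_clicks and v["installs"] / v["clicks"] < max_cvr}
```

With this shape, a fraudulent interstitial slot on one exchange gets blacklisted while the same publisher's healthy banner traffic keeps spending.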

We care about the two extremes here. So really low click-to-install times, or CTITs, are indicative of click injection, as Andreas mentioned earlier, where the fraudsters are looking for an actual signal that an app was installed, firing off a click signal after that install has been completed to claim attribution. On the other end, really long CTITs are often indicative of click spam. So, you know, the [inaudible 00:29:40] are sending out thousands of clicks in the hope of stealing the attribution from either organic traffic or another source that is legitimately driving that traffic. And because they're just spamming those clicks seemingly at random, that click-to-install time tends to be a lot longer.

Andreas, curious, what do you guys use for, like...what's your, you know, threshold? Like, we use ten seconds, anything under 10 seconds, you know. It's just like, "Hey, it's too fast, you know, the install is happening or the click-to-install time is way too fast," or greater than one hour. What do you guys use at Adjust?

We filter out on the so-called install begin time that is available through the Google referrer API. So when we know that the user clicked the install button in Google Play at a certain time, then we reject engagements that come in after. The user will very unlikely click willingly, with intent, on advertisement for an app that they've already downloaded. So that is pretty much a no-brainer. However, what we are doing is completely filtering out; Matt is giving good advice, and those are two different things. So looking for the things that he lists here makes total sense, because they will tip you off that something is wrong. You wouldn't use those metrics to filter. There are definitely going to be installs with a conversion time longer than an hour. And that will happen quite often, actually. That is totally fine. It's just that the statistical relevance needs to be there. If it's happening in the majority of cases, that is bad. You should see 80% converting in under an hour, at least for apps with reasonable sizes, up to two gigabytes, two and a half gigabytes; but if you have 80% of your installs outside of that hour, that is a dead giveaway that something is completely wrong. So this is the common sense approach. So if you see this, and you see it a lot, then something is wrong. It doesn't mean that you can filter out on it. But it means you are tipped off and you should definitely take action.
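
Andreas's 80%-under-an-hour rule is explicitly a tip-off, not a per-install filter, so it is naturally a check on the CTIT distribution of a traffic source rather than a rejection of individual installs. A minimal sketch, with the hour threshold and 80% share taken from the discussion above:

```python
def ctit_looks_suspicious(ctits_seconds, threshold_s=3600, healthy_share=0.8):
    """Common-sense check, not a filter: for a reasonably sized app, roughly
    80% of installs should convert within an hour of the click. If the share
    inside the window falls below that, the source deserves investigation --
    individual long CTITs are normal and are never rejected on their own."""
    if not ctits_seconds:
        return False
    within = sum(1 for t in ctits_seconds if t <= threshold_s)
    return within / len(ctits_seconds) < healthy_share
```

A source where most CTITs land outside the hour is the "dead giveaway" pattern of click spam; a handful of stragglers in otherwise fast-converting traffic is fine.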

Absolutely. Next slide, please. So, we're going to do a deep dive on conversion rates, and Andreas can walk us through that. But, in general, you want to look for conversion rates north of 1%. You know, in the previous slide, I shared that extremely low conversion rates, less than 0.1%, are where, you know, we draw the line for being fraudulent. And, in the next slide, this is the distribution of conversion rates or, you know, CTI, CR, CVR, there's many names for this metric. But, as you can see, this is real data that, you know, Liftoff is buying. And, as you can see, most of the publishers have an aggregate conversion rate of around, you know, 1% to 3%, and that's a very healthy metric. It's really only when you get to the far left of that distribution, almost off that chart there, that you really run into problems. And I'll give it to Andreas to walk us through how to think about that extremely low metric and why it's actually problematic.

Oh, can we actually go two slides back? I just want to stick with the conversion rates for a bit, because I get asked for benchmarks quite often. And, like, properly benchmarking conversion rates is hard because it comes down to quite a lot of factors. The audience size, the product that you're selling, the creative that you're trying to sell that product with, the relevance to the audience that you reach; all those factors make your conversion rates do one thing or another. What we're saying here, and I completely agree with Matt, is that a conversion rate higher than 1% is what you are looking for, and naturally you should get, again, quite wary when things start looking too good to be true. So there's not going to be...a third of all people that click your banner are not going to install the app, unless they're incentivized to do so, and that quite aggressively.

But 1%, 2%, 3%, 5% is all manageable depending on audience, targeting, quality of the product, and quality of the creative. When we hit the realm of 0.5% to 1%, I would argue this is definitely a campaign that could use some improvement. This can actually happen deliberately: if I take a very big audience, I don't necessarily target only people who will certainly take an action, I target as broadly as I can because I also want brand awareness as an effect, and then going down in conversion rate is quite possible. You would basically have a cluster-bomb approach to your marketing, and that also has its merits. But if you're not intending to do that, this is definitely a point where you should already be aware that the campaign needs improvement. Between 0.1% and 0.5% is definitely something to look into, because that is a really terrible conversion rate. It might point to the campaign being wrongly targeted, or to a malfunction or a faulty setup; something is at fault, because the performance is really not picking up anymore.

And below 0.1%, I would argue it's quite hard to get there with human traffic, because, again, we believe people have seen the advertisement and clicked on it, and they should be clicking it with intent. This is not forced clicks; we're taking all of that out of the equation. So people see the advertisement and say, "Wow, this is interesting, I'm going to click it." And if fewer than 1 in 1,000 actually take action after clicking an advertisement that so many people found enticing, that is weird. If you have a creative that says, "Hey, click here and you get 50% off," and then the store page says, "No, you're too late," this might happen. But other than that, this is not something that usually happens.
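The bands Andreas walks through can be summarized in a small classifier. The labels and thresholds follow the discussion above; the function itself is an illustrative sketch, not a tool either company ships:

```python
def assess_cti(clicks: int, installs: int) -> str:
    """Rough health bands for a source's click-to-install rate,
    following the thresholds discussed in the webinar."""
    cr = installs / clicks
    if cr > 0.30:            # a third of clickers installing: not organic behavior
        return "too-good-to-be-true"
    if cr >= 0.01:           # north of 1% is what you're looking for
        return "healthy"
    if cr >= 0.005:          # 0.5%-1%: campaign can use some improvement
        return "needs-improvement"
    if cr >= 0.001:          # 0.1%-0.5%: terrible conversion, look into it
        return "investigate"
    return "likely-not-human"  # below 0.1% is hard to reach with human traffic
```

As stressed later in the talk, bands like these are tip-offs for a conversation with the network, not automatic rejection rules.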

And now I would like to jump ahead. Thank you. And this is basically the same thing, just with more numbers, which makes it more obvious what we're talking about when we talk about very low conversion rates. So what do we have here? I know it's a lot to take in, and I don't pride myself on drowning people in numbers, but what we have here is an actual cut-out of a statistics page from our dashboard, one that I've actually looked at and used to train one of the people on my team. The idea here was to look at a month of aggregated data.

So, as you can see, this is quite an interesting client, because they spend nearly half a million a month on marketing. And they work with different sources: obviously the big self-attributing ones, two video networks, three different performance networks, and one RTB network. All of them behave quite differently, and there are different prices. And before anybody gets any ideas, I'm coming clean right here, right now: I massaged the data so that the CPIs would come out nice and clean. I played around a bit with the numbers to make this easier to digest. What we can see is that different data is available from different channels, and the performance networks actually didn't show any impressions at all, which I would usually argue is a bad sign to begin with, because you want to be aware of your full-funnel metrics whenever you buy traffic or performance somewhere. You wouldn't do brand advertising without being told how many eyes you reached and how many impressions you created, so why would you accept that for performance advertising? I don't see it, and that actually comes into play in a bit.

We have the conversion rates, and those vary quite a lot between the different channels, and the same goes for the click-through rates. That is very telling. We also see the cost, which is important, and then the reach as well, through the impressions. From that, we can make a number of assumptions and calculate a couple of things. If we can jump to the next slide... usually, when I do this as a talk, I take half an hour to go through this table, so apologies if this is too fast. What we have calculated here for all the sources is the effective CPC and effective CPM pricing, meaning the break-even cost that the advertiser is actually paying to their supply. The publishers who are actually employed need to ask for less than these prices to make a living, because the network that you're buying from also wants to earn money; there's a margin on top of this, or the margin has to come out of that media cost.
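The effective prices Andreas refers to are straightforward to compute from the columns on the slide. A minimal sketch (the numbers in the comment reuse a figure quoted later in the talk):

```python
def effective_cpc(cost: float, clicks: int) -> float:
    """Break-even cost per click: the most the supply side could be
    earning per click before the network takes its margin."""
    return cost / clicks

def effective_cpm(cost: float, impressions: int) -> float:
    """Break-even cost per thousand impressions."""
    return cost / impressions * 1000

# e.g. $66,000 spent for ~80 million impressions works out to an
# effective CPM of about $0.83.
```

Comparing these per-source against what a publisher could earn on a self-sign-up network is exactly the sanity check performed on the next slides.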

And when you look at the media cost, you can already get an idea of who is paying which prices and what you can determine from that. Facebook is usually paid on a CPM basis, UAC mostly on CPC pricing, the video networks and performance networks usually get a CPI price, and with RTB networks it varies: you can have media-cost campaigns where you basically run an agency model, or a fixed CPI price where they do the arbitrage, and there are different ways. For the different prices, you get different results. What is already painfully visible here in the effective CPC pricing is that for the performance networks, the publishers are working for a hundredth of the money they could be earning if they worked with the RTB platform, or with the video networks, or as a self-sign-up publisher in the GDN or the Facebook Audience Network. From the publisher's point of view, anything is better than working with the performance networks. And usually the idea is that the publisher gets paid a CPM or CPC price so they can plan their earnings properly, rather than doing the arbitrage themselves, getting paid on a CPI, and carrying all the risk. That is usually not how it works.

The next step is that we extrapolate the... thank you, the click-through rate. And we have been quite generous here with 1%. Currently, healthy click-through rates are somewhere around 0.6% to 0.8%, I want to say, so 1% is a really solid click-through rate. If we chose a smaller one, we would get even more impressions. And now this becomes quite interesting, because if we look at the reach we would have to buy, that is, how many impressions would actually need to happen if we benchmark against the other networks that do have impressions, then one performance network sticks out like a sore thumb, because we have spent $91,000 over the course of a month, and for those $91,000, we would have to reach nearly 31 billion impressions.

Unlikely is what I would call this, trying to be non-offensive. But if we compare that with what Google or Facebook can deliver, which are arguably the two biggest ones on this list: for $66,000, which is about two-thirds of the spend on the performance network, we can get close to 80 million people seeing that advertisement. So for just a third more of the cost, I can't even do the math offhand, but it's hundreds of times the reach. Why would anybody ever spend with Facebook if that were true? So we're coming back to what Matt said way earlier: if it is too good to be true, then you should be going around asking questions.
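The back-of-the-envelope math here is worth making explicit. Under Andreas's assumed 1% CTR, the implied impressions are simply clicks divided by CTR, and comparing reach per dollar across the two sources quantifies how implausible the performance network's numbers are. A sketch using the figures quoted in the talk:

```python
def implied_impressions(clicks: int, ctr: float = 0.01) -> float:
    """Impressions a source must have delivered, assuming a
    (generous) 1% click-through rate."""
    return clicks / ctr

# Figures as given on the slide: ~$91k on the performance network
# implies ~31 billion impressions, versus ~80 million impressions
# for ~$66k on a self-attributing giant.
perf_per_dollar = 31_000_000_000 / 91_000   # ~340,000 impressions per dollar
bench_per_dollar = 80_000_000 / 66_000      # ~1,200 impressions per dollar
ratio = perf_per_dollar / bench_per_dollar  # roughly 280x the claimed reach per dollar
```

No publisher pool can plausibly deliver hundreds of times the reach per dollar of the largest ad platforms, which is what makes this a dead giveaway rather than a bargain.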

And now the big question here, and it's not really a question, because it's obviously fabricated what is going on here, but the question you would go and ask this network is: how can your publishers reach 31 billion people at a price point that is less than a thousandth of what those publishers could be earning if they worked with a self-sign-up network? That doesn't make any sense. So do those calculations. Get an overview of the effective media cost for a publisher running on the traffic sources you're buying from, and of how many people you would actually have to reach to get the performance you're getting. Because the amount of installs is not bad, you're getting twice the installs for a third more of the cost, but at what cost, really? It is quite obvious that this channel is cannibalizing organics quite drastically, I would argue.

In case I have lost anybody in the audience... Kevin, are you there? Sorry. You see the questions, so if anybody is lost, we can do another session and draw this out a bit.

Yeah, don't you worry about it, Andreas. We have a lot of questions. People are thoroughly confused by you. No, no, just kidding. Not at all. Really great content. I'm just holding back on questions.

I'm definitely done with this part. So this would be an actual break.

Okay, cool. So we've got 12 minutes left. Just so everyone knows, I think there are a few more slides, so we should try to balance the other content you want to share with answering questions.

Yes, sir. So I would maybe rush through the best-practice things on the next slides. I guess those are the next ones. Yes. So, when signing an IO, or just generally trying to buy from a network, and that goes double for the performance networks, I would make sure to ask questions, a lot of them: figure out their business model, their mode of operation, who the publishers are, how the relationships work, how everything works, fully. Transparency should be there. You shouldn't be buying, what is it, a pig in a poke. You shouldn't be doing that.

When setting rules for what is and isn't accepted traffic, I'm much more a fan of defining positives than negatives. If you go around and say, "I don't want click spamming, I don't want spoofing of any kind, I don't want click injection traffic," you open yourself up to abuse by not listing all the things, and most likely, while signing that paper, you don't know all the things. So I would go ahead and define: "I want my advertisement to be seen by real people, I want advertisements to be displayed for at least this long, and this much must be visible," all the good measurement guidelines that you can read up on from the IAB. There's a lot of good content in there to use to make a positive list of what it is you want, and then make sure that if you're not getting what you want, there's at least room for discussion.

But if, for instance, we see conversion rates drop below 0.1%, you could argue that this is not advertisement seen by and directed at humans, so it is fraud, and then chargebacks need to happen. I would also say that starting small and building up, developing the relationship together with the budget, is a good way to test the waters. And, yeah, check with your peers what they know and what they've heard, be wary, be on your guard, and never stop asking questions.

And on the next slide, I have another piece on third-party fraud tools, because I get asked about those a lot. I'm actually not saying that they're bad at all; there are a lot of complementary ones that work well. I've got to be honest about this: the fraud prevention tool that we built, that I built, does a lot of things really, really well, but it doesn't do all the things. So whenever there are additional things to be had, they should be had, and I'm trying to build more and more, but I only have so much time and so many resources. So I don't begrudge anybody getting additional help. But, again, whatever you buy should be transparent. You should know how it works, because you will have to defend the decisions that this company makes to your supply partners, and you want to make sure that your supply partners are treated fairly and not badly.

If you've worked with your supply partners for years and years and all of a sudden they get scrubbed for things that actually weren't fraudulent, it's going to put quite some strain on the relationship. And when a relationship like that breaks, it's very cost- and time-intensive to rebuild. So I would argue against black-box approaches. If you don't know how it works and your partners don't know how it works, then there's not a lot of trust in it, and not having trust in a business relationship doesn't benefit anybody. You should really figure out what it is they're doing and how it works, and if it's transparent enough that the people being judged know what is happening to them, then it's a worthwhile investment, I would say. And that's it for me, I guess. Matt, you have another slide right there. Oh, no, questions already.

Questions, questions. That was great. That was really good. So for those who are still here: Andreas and I met, I think, three years ago now. Or possibly two years ago? A three-and-a-half-hour bus ride, we sat next to each other, and the whole time I was like, "Hey, teach me everything about fraud." Ever since then, it's always very enjoyable to listen to you talk about it, always very insightful. I love that you take a very broad, very high-level perspective: you can go into the weeds, and you can pull yourself right back out.

So, okay, enough of that. I'm going to throw some questions out here. I know we've got a number of questions; some of them I've kind of lost the context for, so sorry, guys. Okay, a few things. Going back to the click-to-install rates: does the 1% click-to-install rate benchmark take into account view-through conversions, like on display, or engaged-view conversions, like you'd find on YouTube?

It depends on how they're set up. If a view-through is actually treated as a view-through, then no. If the view-through actually triggers a click, which it shouldn't, then it will fall into the same measurement.

Okay, great. This question is from... I apologize if I don't pronounce your name properly, Huya [SP]. Is it true that advertisements in Android stores have a bigger conversion rate, in the range of 45%, than you see elsewhere?

I can't say. When it comes to, let's say, Google traffic, we don't see the split between the different sources. We don't see what is GDN and what is YouTube; to us, it's just one channel with no granularity, so I can't speak to those conversion rates. I know that Google search obviously converts a lot higher, so that usually pushes the overall conversion rate of the channel a lot higher than you would expect, but we don't see any granularity. Matt, do you see more?

That's actually the first time I've heard a claim like that. Just hearing the figure of 40%, I kind of raise my eyebrows; that seems suspiciously high. So, regardless of channel and source, when you start creeping up into that range, I'd be hesitant, especially of anything claiming a 40% conversion rate.

I understood it as 40% more than other channels, but, yeah, a 45% conversion rate or upwards of it is just, I don't know... maybe a transactional email will get a 45% open rate; that's about the only place you'd ever see it. All right, a few more questions. This one is from Andrew Fong: do you remove view-based installs when looking at CTIT?

Yes, because, as Andreas said, there should be no click, so CTIT is not really defined for view-based installs; it's a non-starter. So, yeah, absolutely, I would definitely remove any view-based installs when looking at that metric.

Great. This question comes from Nathan Levine: is there any way to understand what percentage of inventory is being cannibalized by a fraudulent network from our organics or other paid channels?

Oh, that has a very unpopular answer. Yes, you can figure that out, but you have to stop channels for a good while and then start them again to see how they impact all the other channels: whether organics go up or down, whether other channels go up or down. Cannibalization and uplift can be measured, but it's a very drastic approach. You have to actually stop whole campaigns for a week and then start them up again.

Yeah, we've had a number of customers over the years do exactly that, large companies: okay, we're going to turn off a network and see what impact it has, and then they'll methodically go through that approach. So I know that's been a popular approach. This question comes from Ariel Russack [SP]: do you have any dynamic algorithm to define what normal campaign behaviour should be, or do you use set rules for everyone equally, like blocking everything below 10 seconds on CTIT, for instance?

For most of the stuff that we do, we use deterministic filters, so rule sets that aren't arguable. If the click comes in after the user clicked the install button in Google Play, then there's really no argument about that. Same for, let's say, filtering out anonymized traffic: if 90% of the traffic from one source comes from the Tor network or from VPNs, then it's a no-brainer to turn this filter on and filter all of those out. The one place where we have a probabilistic approach is what we call distribution modelling, which targets click spam traffic. Those rules of thumb, that the majority of installs should happen within 60 minutes of conversion time, and that you should have conversion rates above 1%, are all nice, but you can't make hard rules out of them; otherwise you will create tons of false positives, ruining relationships and crushing your campaigns. So that is where we have a more sophisticated approach, a proper algorithm that takes care of things. Explaining that is usually a 45-to-60-minute talk in its own right [inaudible 00:55:54], and I can't explain it in two sentences without selling myself short.
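The anonymized-traffic rule Andreas gives as an example of a deterministic, source-level filter can be sketched as follows. The function name and threshold default are illustrative only, not Adjust's actual code:

```python
def filter_anonymized_source(total_clicks: int, tor_or_vpn_clicks: int,
                             threshold: float = 0.9) -> bool:
    """Deterministic source-level rule: if roughly 90% of a source's
    clicks arrive through the Tor network or known VPN exits, filtering
    the whole source is, as Andreas puts it, a no-brainer."""
    return total_clicks > 0 and tor_or_vpn_clicks / total_clicks >= threshold
```

The contrast with distribution modelling is that this rule needs no statistical model: no legitimate source produces traffic that is almost entirely anonymized.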

Yeah, to answer quickly: what we do at Liftoff is a hybrid approach. We have a similar probabilistic model where we look at what we're calling performance metrics: what's the probability of a certain source having a CTIT or conversion rate of x. And we supplement that with hard lines in the sand; below or above those thresholds, we take action. Within the majority of the range of the distribution where these performance metrics can lie, there's some amount of gray area, but we definitely have distinct thresholds that we apply. For example, for CTIT, one distinct threshold we look at is less than 10 seconds, and we treat that differently than anything less than a minute, and differently again from anything that comes in the bulk range in the middle.
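Matt's tiered thresholds can be written out as a small decision function. The tier names and the specific actions are my own shorthand for the "hard lines in the sand" he describes, not Liftoff's actual pipeline:

```python
def ctit_tier(ctit_seconds: float) -> str:
    """Tiered treatment of click-to-install time (CTIT):
    sub-10-second installs are treated more severely than sub-minute
    ones, and the bulk of the distribution is left to the
    probabilistic model."""
    if ctit_seconds < 10:
        return "reject"       # hard line: almost certainly click injection
    if ctit_seconds < 60:
        return "suspicious"   # scrutinize, weigh with other signals
    return "gray-area"        # judged probabilistically against the distribution
```

This mirrors the hybrid design: deterministic cut-offs at the extremes, statistics in the middle.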

Great. All right, one last question, looking for short answers here. This is coming from Ian Geisler, and it's an interesting one. Thinking about transparency, to what extent do you think fraudsters are able to use or go after Google UAC campaigns?

That is hard to say. I don't know enough about what goes into Google UAC campaigns, and I don't know enough about what Google is doing on their end. They definitely have the manpower to take care of it, and I would argue the news in the last couple of months showed that Google took quite a lot of action, but I'm also just a consumer on that end. I don't have any insight into what is going on there.

Unfortunately, we don't either here at Liftoff, because it's a service that we don't use, so I can't provide a better answer for that.

Yeah. It's a black box, who knows. All right. Listen, I apologize to those who had questions we weren't able to get to; we've got two talkers here, and I'm here next to Matt. Andreas, great seeing you. I look forward to seeing you in San Francisco in a month or so. Thank you for joining us. I thought this was really informative. Everyone, have a great week. We will be sending out an email with a link to the recording, so if you want to watch it again or share it with others, feel free.
