
Demystifying incrementality for user engagement with Facebook

Return on ad spend (ROAS) is on every marketer’s mind, but when it comes to measuring ad effectiveness, you can’t afford to ignore incrementality — the uplift in business outcomes driven by advertising. Understanding incremental ROAS allows you to allocate your budget more wisely by funneling funds toward ads that drive better conversions and result in more downloads — rather than spending money to find users who would have converted anyway.

Facebook Gaming's Kate Minogue and Adjust's Alexandre Pham came together in this mobile marketing webinar to discuss what incrementality really is and why it's important to mobile marketers, and to share best practices for considering incrementality in your day-to-day. As a bonus, they'll help you understand how to factor incrementality into your LTV calculation. Other highlights include:

  • Conversion Lift — Learn how to use Facebook's Conversion Lift tool to understand the impact of incrementality on your campaigns. (Bonus: learn about Cross-Platform Conversion Lift, which allows marketers to look beyond Facebook at other digital channels.)
  • Real Impact — Understanding incrementality is integral to accurately measuring the effectiveness of your campaigns, evaluating your KPIs, and understanding what you can improve upon.
  • Incrementality and Re-engagement — No one wants to pay for the same user twice unless it’s really impactful. Learn how incrementality can help you perfect your re-engagement strategies.


The full transcript

Alex: Hey everyone, welcome to another Adjust webinar. My name is Alex, director of partnerships for Adjust. And today we have Kate Minogue from Facebook joining us to chat about everybody's favorite topic, incrementality. So Kate has been in the space for several years, I will let her introduce herself and explain what she does at Facebook currently.

Kate: Hi everyone. Thanks, Alex, for having me. So I lead the marketing science team for gaming here at Facebook. We work with some large advertisers in the gaming space to help them to use data more effectively to understand how successful their marketing strategies are. So one of the cornerstones of the work that we do is around educating advertisers on the topic of incrementality, and ensuring that they're measuring real business impact.

Alex: Nice, that seems like a very important topic for all marketers, and I hope we can kind of demystify the topic today, and go in depth into what you're doing currently at Facebook with your clients.

Alex: Looking at the agenda roughly, we're going to talk about the word itself, what is it, what's the buzz around it in the past several years. We'll also go into why this is important for marketers, why this should basically be part of their mindset rather than just a few tests that they run here and there. We'll also take specific examples of how it can help you assess your re-engagement campaigns, followed by a couple of best practices from both Facebook Gaming and Adjust around that topic. We'll also talk about how the whole incrementality topic influences your LTV calculation, and finish with several case studies that you have to share around the topic.

Alex: So let's dive right in with this. Kate, if you can make us a quick introduction on the topic from your point of view, that'd be great.

Kate: Absolutely. So here we talk about incremental ROAS or return on ad spend. So particularly in the mobile gaming space, our advertisers are really focused on return on ad spend as their champion metric. What we prefer to talk about is the incremental return on ad spend. So what that refers to is the uplift in performance that was driven by an ad, so where we compare a group that were exposed to that ad to a group that weren't, and we're able to isolate what was the true impact of that ad, and how much of that return on ad spend is really due to the campaign activity that you did.

Kate: So we talk about the term incremental, and I know it's familiar to some and it's very unfamiliar to others, sometimes I worry if we've made up some of these words, but anyway. Incrementality is the uplift in business outcomes caused by advertising. So if you consider that there is a subset of your population of target users that will convert even if they don't see any ads, that might be people that hear about your mobile app from word of mouth, or go looking for it themselves. Then there are the customers that convert after they've seen an ad. But there's an overlap between those two: those that would have converted anyway, but we do show them an ad. And those are the ones where we could say there's a little bit of wastage. So what we're always trying to identify, and what is truly incremental, is those that would not have converted without seeing the ad, and were driven to convert because of the ad. That's the truly incremental impact. And it's exhibited by the dark circle segment on the right in this slide.

Alex: Nice, can you tell us a bit more about how you guys are making and designing those tests on the Facebook side?

Kate: Yeah, absolutely. So at Facebook we've got a product called Conversion Lift, which we use to isolate exactly that incremental impact that we talked about. There's a self-serve version of this, or a managed version that a Facebook account manager would help you set up, but in both cases this is what happens in the background. You identify the audience, the users that you're going to include in your campaign. Then Facebook randomizes that audience and separates it into a test group and a control group. We then serve the ad impressions only to the users in the test group, and no ad impressions are served to the control group. After that, we record the conversions in both groups, using the MMP SDK in most cases, and then we measure the impact and uplift between the two groups.

Kate: So the way that we do that is we understand what happened in the control group, and we liken that to what would have happened anyway. And then we compare that to the conversion rate in the treatment group, or the test group, and say that only the difference can really be attributed to that campaign. Now, sometimes these experiments are run on one campaign, sometimes they're run on all of your Facebook activity. But the logic stays the same, it's about switching off the impressions that you want to measure the impact of just for the control group.

Kate: This is how we then calculate that incremental return on ad spend, that champion metric that's really important to advertisers. We take the revenue delivered by the test group, those that did see the ads, and subtract the revenue delivered by the control group. This example assumes both groups were the same size; if they're not, we scale up to make sure they're comparable. We then divide that difference by the amount that was spent on the ads. So in this example, the test group drove $20,000 in revenue, a comparably sized group that didn't see the ads drove $10,000 in revenue, and the ad spend was $5,000. What that shows is that we had a 2X uplift in performance and revenue driven by those in the test group, and that's our incremental ROAS figure, the 2X.
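To make the arithmetic concrete, here is a minimal sketch of that incremental ROAS calculation in Python. The function name and the group sizes are illustrative assumptions, not part of any Facebook or Adjust tooling.

```python
# A minimal sketch of the incremental ROAS arithmetic described above.
# The function name and group sizes are illustrative, not part of any
# Facebook or Adjust tooling.

def incremental_roas(test_revenue, control_revenue, ad_spend,
                     test_size, control_size):
    """Incremental ROAS = (test revenue - scaled control revenue) / ad spend."""
    # Scale the control group's revenue so it is comparable to the test group,
    # in case the two groups are not the same size.
    scaled_control_revenue = control_revenue * (test_size / control_size)
    incremental_revenue = test_revenue - scaled_control_revenue
    return incremental_revenue / ad_spend


# The example from the transcript: $20,000 (test) vs. $10,000 (control),
# on $5,000 of ad spend, with equally sized groups.
print(incremental_roas(20_000, 10_000, 5_000, test_size=100_000, control_size=100_000))
# -> 2.0, i.e. a 2X incremental ROAS
```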

Alex: All right, thanks Kate for the quick explanation of incrementality, I think that sets the stage well for the rest of this presentation. As you can see, this is a concept that is definitely important when you are running any big campaigns. The concept of splitting into a test group and a control group, and having the right approach in how you look at your results, is also very important.

Alex: So that brings me to our second question, why is this something important for marketers?

Kate: Yeah, of course. So this all comes down to both accurate measurement and also the understanding of real impact. So we always endeavor to have the most accurate KPIs and metrics possible, so that we have a really good read on where our spend is going, which users are exposed to our advertising, and where we think it made a difference. Advertisers have a really hard job to do with all of the different channels that they work on, the different campaigns that they have, to know which ones are worth their time and investment and which ones aren't. So the more accurate that measurement piece is, the less we can worry that we are spending in the wrong places. So that's the first part.

Kate: The second part is around that accuracy being for real impact. So what we want to avoid is a situation where you're attributing all of your impact to an ad that was simply served to the most people, or an ad that just happened to be at the very, very end of a chain of touchpoints but didn't really make a difference. That's where experiment design and incrementality come in: using scientific methodology to really validate and assess what would have happened if this ad wasn't present, versus maybe the correlation that we see in some other methodologies. It allows us to look for areas for improvement; as we always say, what isn't measured properly isn't improved. So it helps us to identify where we can improve, comparing things based on their real impact, their real incrementality. And it's about being more strategic. So it's about not just accepting things as they are, but constantly iterating, looking for more impact, looking for more efficiency and effectiveness.

Alex: Great. On that thought I have a quick question for you: from your experience with customers at Facebook, how difficult do you think it is to get clients to understand the concept and the value, and to build the tests that will show them the true impact?

Kate: I think that definitely varies. The concept, while it is complicated in some respects, advertisers do largely understand it. Once we've talked through it with them, the concept of incrementality does resonate. People have heard about randomized controlled trials from medicine and a lot of different industries in the past, so I think the academic concept of incrementality resonates. Where we probably struggle a little bit more is in getting that investment for experimentation, getting even that time investment, or that opportunity cost of taking a control group. Because the reality is, if you're really confident that your campaign is performing well, you might not want to take a control group and stop people from actually seeing the ads. So that's one challenge that we have.

Kate: Another challenge is obviously around people being able to dedicate part of their budget to something that might not deliver the ROI; instead, you're dedicating that budget to learning for the future. So a lot of this comes into that planning and experimentation comfort level. And if advertisers or user acquisition teams are not empowered to do that, or if they have pressure to move fast, then that's where the appetite falls a little bit by the wayside. Most of these advertisers are obviously using an MMP for their day-to-day reporting and attribution, and that's the right thing, that's the right tool. We're definitely not suggesting that anybody would move to a lift test or an experiment for every single campaign, because it would take too long, it would give you a lot of point-in-time views of how things performed, and it would limit the opportunity for things we know are successful.

Kate: So I think that's where we maybe hit a bit of friction, it's more "we'd like to do this, but I can't wait two to four weeks", or "I don't want to hold out 20%", or whatever that might be. And the other one, I guess, rather than the concept being difficult, is when they see a discrepancy between the numbers in the lift study and the numbers that they see in their MMP, or even in our Facebook Ads Manager. That can be a little bit difficult, either to digest or to sell in to their leadership as to why those two numbers are different. We can talk about that in a little bit more detail later once we've gone through the different sections, but I think those are the biggest challenges that we face.

Alex: Yeah, definitely. Those discrepancies will always happen, and this is something we always work on: recognizing them and making a plan to discern why they're happening. I think it's great that Facebook is helping customers approach that topic. You are working with them to make them understand incrementality on Facebook, and I'm sure that the learnings you give them, and the approach that you teach them, are definitely helpful on the other marketing channels they are using, and overall can improve other sides of the business.

Kate: Yeah, and we also have a product which is a little bit more complicated than our general Conversion Lift for Facebook campaigns, a product called Cross-Platform Conversion Lift, which allows us to look at other digital channels and their impact alongside Facebook as well. That works slightly differently, because obviously we can't randomize and take a control at the user level for other channels, so we take it on more of a geographical holdout. So matched markets, or matched zip codes, depending on the place. But we are working on products like that which can help expand this out even more to the rest of their media strategy.

Kate: But I think we're seeing advertisers increasingly, once they have run a Lift test on Facebook and understand Facebook's uplift, go to their other publishers and ask for the same thing. Because once they kind of understand that it is the purest way to show real impact, then they really just want to see it everywhere.

Alex: So we can see that incrementality is a concept that is important all across the marketing lifecycle, both on the user acquisition side and on the re-engagement piece. So we'd like to focus now a little bit more on that and see how we can help you assess your re-engagement campaigns more specifically. So Kate, if you have some more direction in this area, we can work from there.

Kate: Absolutely. So exactly like you said, Alex, incrementality is important across the board in your full media strategy. What we see on the user acquisition side is that if an advertiser is unsure about their attribution logic, or how accurate the settings they've chosen are, then they can use incrementality to get an explicit view of the real impact and to validate that attribution.

Kate: But in reality, particularly in mobile gaming, we see re-engagement is where incrementality is really, really important on a day-to-day basis. The reason for that is that there's a clear motivation not to pay for the same user twice unless it's really impactful. So what advertisers want for re-engagement campaigns is proof of a profitable uplift. Take the example where I show an ad to all of my most active users, and they continue to be active: that might not be that surprising, but it also might have had nothing to do with the ad. So it's trying to isolate "there was a lot of activity after my ad" versus "there was a lot of activity driven by my ad". And we often use the example of user acquisition versus re-engagement, which isn't a test that you would actually run, but just as an example: user acquisition probably has a very low baseline of activity, it's users that are not familiar with your app at all. But that might mean it's easier to create an impact.

Kate: Whereas on the flip side, re-engagement might have users already active in the app with a higher baseline of activity, but much harder to make an impact on. So it's about comparing that impact like with like, rather than just the activity. That's what makes re-engagement such a good use case for incrementality and experiment design. It's still useful on the user acquisition or app install side, but what we see in mobile gaming is that advertisers are quite confident with the attribution logic in their MMPs. For the most part, because of the heavy skew toward mobile app install ads, their last-touch models are quite accurate. And when we have done validation of mobile app install ads against the attribution model, we see that incrementality lines up pretty well with the attribution models that are chosen. That changes a lot though when it comes to attribution for re-engagement; there is a much bigger difference, and maybe last touch doesn't show us everything about the impact that ad is having.

Kate: I think as well, and obviously it varies from MMP to MMP, but re-attribution is newer and it's an area where advertisers maybe still aren't 100% confident, or don't have the validation that they need. So that's where, for re-engagement, we've seen there's a massive appetite.

Alex: Clients often ask us, actually, what the best settings are for their re-attribution, everything that their re-engagement campaigns are based on, and how the different marketing channels should work with each other, so I think a little refresher on how re-attribution works with Adjust is important.

Alex: So there's one main mechanic behind it: a user on our side can only be linked to one single source, so it's important for our customers to understand and define correctly when a user is re-engaging. That's why we have the concept of an inactivity period, which is the period of time during which we still consider a user active and linked to an existing tracker. Past that inactivity period, let's say that period is set at seven days, if the user did not open the app within seven days, there was no session whatsoever within seven days, then the user becomes eligible for re-attribution, and if he or she clicks on a retargeting or re-engagement ad and generates a new session within the re-attribution window, we'll be able to allocate that user to the new re-engagement source.
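As a rough illustration of that logic, here is a simplified sketch in Python. This is not Adjust's actual implementation; the seven-day inactivity period and one-day re-attribution window are example settings only.

```python
# A simplified sketch of the re-attribution logic described above, not
# Adjust's actual implementation. The seven-day inactivity period and
# one-day re-attribution window are example settings only.
from datetime import datetime, timedelta

INACTIVITY_PERIOD = timedelta(days=7)      # user must have been inactive this long
REATTRIBUTION_WINDOW = timedelta(days=1)   # ad click to new session

def is_reattributed(last_session, ad_click, new_session,
                    inactivity=INACTIVITY_PERIOD, window=REATTRIBUTION_WINDOW):
    """True if the new session should be credited to the re-engagement source."""
    # 1. No session for the full inactivity period before the new session.
    was_inactive = (new_session - last_session) >= inactivity
    # 2. The new session falls inside the re-attribution window after the click.
    within_window = timedelta(0) <= (new_session - ad_click) <= window
    return was_inactive and within_window


print(is_reattributed(
    last_session=datetime(2020, 5, 1),
    ad_click=datetime(2020, 5, 10, 12, 0),
    new_session=datetime(2020, 5, 10, 13, 0),
))  # True: more than 7 days inactive, re-opened within the window
```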

Kate: Yeah, and just to jump in there, I think it's so valuable for you to talk through this, and I think it's something that advertisers still sometimes don't grasp well. When I look at that inactivity period, it reminds me of some campaigns that were run with advertisers on what we call early re-engagement. That is where we try to reduce that massive drop-off that happens a couple of days after users install an app. So a campaign is run to keep them engaged in their first, say, week after installing the app, and they haven't been inactive at all. So that's one thing advertisers need to be really, really conscious of: if there hasn't been that period of inactivity but they've set an inactivity period, then they're not going to see the reality in the Adjust UI.

Kate: The second thing then is that for those re-attribution windows, or whatever logic you've set up, I really think it's valuable to have run a Lift study to help guide you towards those settings, rather than just setting them arbitrarily, or even based on best practices that you've heard from others in the industry. If you do have the ability to run your own Lift study to assess whether you've made the right decision, what you'll get is an understanding of how much impact that campaign really had, and then you can compare it to two or four different attribution windows to see which one was the closest, and then set that attribution logic for your next campaigns, with the comfort and the knowledge that it is a good estimate of that incremental impact.

Alex: For sure, we can always throw out some best practices to clients, but I think it always comes down to this: the inactivity period and the attribution window will always depend on the dynamics of your game, the dynamics of your app, and so you can experiment around them and make sure that the segments you are looking at re-engaging are not within that inactivity period. It sounds obvious, but this is something where we still see a lot of clients having difficulties.

Alex: Talking about best practices, this lets me move to the fourth part of the presentation. We can share here a couple of examples of what we've seen working well with our customers. So Kate, I'll let you start with a couple of the ones that you have with your Facebook clients.

Kate: Cool. So the reality is that this kind of a test, this kind of experiment, is not easy. You're also not guaranteed to get the results or the statistical confidence that you want the first time you run one of these tests. So there are a few things to consider, beyond just running a test, that will really set you up for success.

Kate: The first one of those is that we really need to get an understanding of the baseline; the core of all of Lift is: what is the baseline? What would have happened anyway in the control group? And it's great if you actually know that before you set up the study, because it can help you determine how much signal you would want. It's actually easier in re-engagement than it is in user acquisition, because obviously with user acquisition there might be more of an unknown in the conversion rate. Whereas for a retention marketing campaign, you potentially have, in your customer data, an idea of what percentage of those users are going to come back if you don't run the campaign. If you have that data, it will really help you understand whether you're looking at a campaign with a very high baseline and maybe a lower uplift, or the other way around. Because what you're measuring is that uplift versus the baseline. If you don't have it before you run the first test, an estimate will do, and then you will find it out from the control group in the first test, which will help your future tests be better.

Kate: The second thing is: always have a hypothesis. Always, always, always have a hypothesis when you're running a test. And I'd even go a step further than that and say before you run the test, have a plan for what you're going to do if you get positive, negative, or even flat results. So I always advise advertisers to have a next step outlined for those three scenarios, because that will help with any ambiguity when you see your test results. For your primary outcome, you need to really be able to measure what matters to you most, the uplift in the metrics that matter to you most, but also to understand what your action is going to be after that test.

Kate: The third thing, and this is where a lot of tests fall down, is to ensure the test has strong power. Some people will call this power, others might call it statistical significance, or confidence, but it's about knowing that your results are repeatable, reliable, and trustworthy. So if I see a certain uplift, how much faith can I have that I would see it again in future if I reran that campaign? To get that power, you need to have enough signal, enough conversion events, and a big enough audience to get a robust result. If I said to you I'm basing my actions or my results on what one person did, you'd laugh at me. You'd say that doesn't make any sense, you can't generalize from what one person did. This is the exact same thing. The power is how confident you are that you can generalize the results, and that confidence always goes up the more evidence and the more signal you have. So the more conversion events, or the higher the change between baseline and test, the more confidence you're going to have.
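For readers who want to put a number on "enough signal", here is a rough sketch of a pre-test power calculation using the statsmodels library. The baseline conversion rate and the hoped-for uplift are illustrative assumptions, not figures from the webinar or from Facebook's Lift tooling.

```python
# A rough sketch of a pre-test power calculation using statsmodels. The
# baseline conversion rate and hoped-for uplift are illustrative assumptions,
# not figures from the webinar or from Facebook's Lift tooling.
from statsmodels.stats.power import NormalIndPower
from statsmodels.stats.proportion import proportion_effectsize

baseline_rate = 0.05       # expected conversion rate in the control group
expected_test_rate = 0.06  # conversion rate you hope the campaign drives

effect_size = proportion_effectsize(expected_test_rate, baseline_rate)
users_per_group = NormalIndPower().solve_power(
    effect_size=effect_size,
    alpha=0.05,   # significance level, roughly a 95% confidence threshold
    power=0.8,    # 80% chance of detecting the uplift if it is real
    ratio=1.0,    # equally sized test and control groups
)
print(f"~{users_per_group:,.0f} users needed in each group")
```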

Kate: The last point then for me, and I know Alex, you've got some great best practices that you guys have worked on with advertisers as well, the last point for me is be prepared to iterate. I mentioned the scenario where maybe you don't get that statistical confidence the first time around; maybe you didn't know what your baseline was going to be, so when you calculated the power, or the population size you needed for your test, you might have had a wrong estimate. In those situations, just be prepared to run the test again with the new information you have. But even if you do get a strong statistical uplift, be prepared to test again. Keep testing, keep iterating, run A/B tests, try different things to always improve on your strategy. Because marketing moves too quickly, customer behaviors move too quickly, technology moves too quickly to take one result and assume that it will always hold. Testing should become part of your team's DNA and be something that you're always, always doing.

Alex: Yep, that's super helpful. That brings me to a couple of other best practices that we see from our side. Obviously, as Adjust we are measuring a lot of raw activity, raw data from their apps, and a lot of customers are asking us if we can also help with their LTV calculation, and if we can help them assess the incremental impact of their campaigns. One concept that is important is really looking at the marginal cost. Some customers will look at how much money they spent additionally, and then how many users they got. But the important KPI here is how much money did you spend to get one additional user, or one additional action? Whether your goal was getting additional users, an additional sale for e-commerce, or an additional purchase for a game, you need to really understand what the real cost was, the marginal cost of getting one additional action within your app.
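A minimal sketch of that marginal-cost view, with purely illustrative figures: what did each additional action cost, rather than each attributed action.

```python
# A minimal sketch of the marginal-cost idea above: what each *additional*
# action cost, rather than each attributed action. All figures are
# illustrative assumptions.

def marginal_cost_per_action(ad_spend, test_actions, control_actions,
                             test_size, control_size):
    """Spend divided by the number of incremental actions the ads drove."""
    scaled_control = control_actions * (test_size / control_size)
    incremental_actions = test_actions - scaled_control
    return ad_spend / incremental_actions


# Example: $5,000 of spend, 1,200 purchases in the exposed group vs. 1,000
# in a same-sized control group -> 200 incremental purchases at $25 each.
print(marginal_cost_per_action(5_000, 1_200, 1_000, 50_000, 50_000))
```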

Alex: Then, looking at your results, it's important to adapt those results to the appropriate channel, really...your test as much as you can and run, for example, a test only for a certain platform, a certain country, a certain device. So you can get very granular, with the end goal being to scale up the channels that work well for you, while also always looking at your baseline on the organic side and not cannibalizing your activity there.

Alex: Keep in mind as well that the results from your hypothesis might differ if you scale up those tests. So if you have results from a certain sample of users, you still run the risk that those results will be different if you apply them to a much larger audience.

Alex: Now if we take a little step back, I think it's also important that the different teams within your company are all aligned on those incrementality tests. Because if you only run them on your...activity and want to reward, let's say, the paid UA teams around it, while the ASO team is also doing work on their side to look at how they can increase the organic number of users, then there might also be some cannibalization: you might reward a certain team or activity while others also have an impact on the overall calculation.

Alex: The last best practice we have is also to use all of your mobile measurement partner's macros. I think what makes MMPs very strong in what they are doing is the granularity and the depth of the data points they can provide for customers. So don't hesitate to make full use of these, and go as deep as possible so that your calculations are precise and impactful, and so you can see which campaigns didn't work.

Kate: Yeah, these are really great, I think the second last one there is one of my favorites because when we talked earlier about the challenges that we have with this, the reality is that some teams are not incentivized, or don't have visibility over growth targets that are based on incrementality. And if that's not something that the company is focused on, then it's very easy to get into a cycle of having campaigns that cannibalize each other and that overlap with each other, and that isn't perceived as a problem. I think the macros sound really great and something that everybody should definitely be taking advantage of.

Kate: On your third point there, around ensuring people are aware that results might differ when applied to a larger audience: also be cautious if, after the fact, you're breaking down the results into different segments where maybe the control group wasn't representative across all of them. So make sure that if you do further segment your test for analysis after the fact, there is a representative control group in each of those segments, because we have seen situations where people try to do that post hoc analysis and maybe one of the segments didn't have a representative control group for a statistically significant uplift. So if you do have hypotheses at the start about different audiences, set up your test to represent that. It comes back to the real importance of that hypothesis from day one, which is make sure that you know what you want to know, and that you've set up your test to give you as much information about that question as possible. Because analysis after the fact might not give you accurate answers.

Alex: Regarding what you were saying about making the test statistically significant, do you, on your side, run tests with a certain confidence level? How do you talk about this with customers, and do they question you, like, okay, we've run that test, how confident are you that this result is going to be impactful in the long run?

Kate: Yeah, definitely. So our Lift tool...90 to 95% confidence, we can adjust it, but we do have quite a high threshold for the confidence that we will accept. And we have advertisers that demand that as well. So particularly advertisers that are really bought into experiment design often have that internal mindset about how confident is confident enough. Our results will show if the confidence is maybe over 70 or over 80%, but you'll usually get an amber flag saying we don't necessarily think this will be repeatable, because I guess 80% sounds high, but it also means one in five times you wouldn't see the same result, which is often too high a risk for an advertiser. How much confidence you require will probably depend on how much money you're putting behind it and the decisions that you're making. But we try to stick to the 90 to 95% levels, because then I think we can really be confident that it is a repeatable result.

Kate: And in a lot of cases that might be: we are confident that there will be a positive uplift the next time you run this, and we have this level of confidence, but that magnitude might change, and obviously that's tied to a whole multitude of variables, like you say, scaling to a different audience and things like that. But what it does say is that we're 90 to 95% confident, or even more, that there was a positive uplift from this activity. And we have a lot of conversations with advertisers about that. If we get below that, if it is in the 70 to 80% range, we will talk to advertisers about it, but really have the conversation about whether we should run the test again, look at a larger-scale test, or try to find more signal, whatever that might be, to get a higher confidence. If it's below 70%, we say let's just not even look at this test; we need to run something again, those results are not reliable.
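To illustrate the kind of confidence figure being discussed, here is a generic two-proportion z-test using statsmodels. This is standard statistics with made-up counts, not the methodology behind Facebook's Lift tool.

```python
# A generic two-proportion z-test as one way to put a confidence figure on an
# observed uplift. This is standard statistics with made-up counts, not the
# methodology behind Facebook's Lift tool.
from statsmodels.stats.proportion import proportions_ztest

conversions = [1_200, 1_000]    # test group, control group
group_sizes = [50_000, 50_000]

z_stat, p_value = proportions_ztest(conversions, group_sizes, alternative='larger')
confidence = 1 - p_value
print(f"Confidence that the uplift is real: {confidence:.1%}")
# Compare against the ~90-95% threshold discussed above before acting on it.
```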

Alex: Yeah, I think that's a great approach, and it makes everything very mathematical, because in the end this is all economics and mathematics applied to marketing.

Kate: Exactly.

Alex: So talking about mathematics and economics, this brings us to LTV calculation, the favorite topic of every marketer. Could you help us understand what you guys are doing around this, and how you help customers understand how incrementality really influences their LTV calculation?

Kate: Absolutely. So LTV has been such a major topic for us in marketing science and gaming for the last one to two years. It's something that's very top of mind for our advertisers, and something that mobile gaming is really ahead of the curve on in terms of how they predict lifetime value and how they use it as a KPI for marketing. But where there's still a massive opportunity to evolve and to improve is in the understanding of how LTV and incrementality relate to each other.

Kate: Now I'm conscious we often talk about incrementality on one hand and LTV on the other, we don't always bring that into the same conversation, so I thought it would be useful to maybe close that link today. So maybe first just to make sure that everybody has the same understanding for the discussion, I'll show you the equation that we often use to describe LTV as our advertisers understand it. So any lifetime value equation is made up of two things. The first is your historical information, so at some point in time, let's say X, we look back and say how much value has this user already delivered? And that's a function of maybe their purchases to date, or the engagement that they've had, the days that they've been active through that period.

Kate: The next thing that's added to it is the prediction of the future value. So from that day X on to infinity, or to whenever the lifetime is likely to end, what is our expectation of both purchases, or other value events, and retention of that user that is going to lead to their future value? And both of those combined make up the holistic lifetime value, or LTV.

Kate: To consider how re-engagement relates to that, we add a third part to the equation. So we still have our historical information, we still have our prediction, but we've got a new third part to the sum. And that is what happens after re-engagement. Take an example where you just have those first two parts: you start at point X, let's say it's day seven after the install, and you've predicted how much this user is going to be worth over their lifetime. But then on day 30 you run a re-engagement or retention marketing campaign, you believe that it has a positive impact on that user's engagement in your app, but you don't update your lifetime value prediction, potentially because in your company you only make a prediction for the measurement of user acquisition campaigns. What you're left with is a lifetime value number that hasn't changed, but a campaign that you believe really had an impact.

Kate: In a perfect world, your model would be dynamic enough to actually adapt to the activity that has happened. So when that retention marketing campaign is run, if that user's behavior changes as a result and they become more valuable or more engaged, then you get that third part of the equation, which is the incremental value that was delivered after the re-engagement campaign, and you're left with a new LTV, and maybe even an incremental LTV figure.
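Written out as a simple sketch, the three-part sum Kate describes looks like this. The names and figures are illustrative; a real model would predict the future term rather than take it as an input.

```python
# A minimal sketch of the three-part LTV sum described above. Names and
# figures are illustrative; a real model would predict the future term
# rather than take it as an input.

def lifetime_value(historical_value, predicted_future_value,
                   incremental_reengagement_value=0.0):
    """LTV = value delivered to date + predicted future value
           + incremental value driven by re-engagement (0 if none measured)."""
    return historical_value + predicted_future_value + incremental_reengagement_value


# Day-7 prediction: $4 of value so far, $6 expected over the remaining lifetime.
baseline_ltv = lifetime_value(4.0, 6.0)
# After a day-30 re-engagement campaign measured at +$2 of incremental value:
updated_ltv = lifetime_value(4.0, 6.0, incremental_reengagement_value=2.0)
print(baseline_ltv, updated_ltv)  # 10.0 12.0
```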

Alex: Great. Now that we've looked at what incrementality is about, why it is important for marketers, and how it can help you in assessing your campaigns, I think our audience would really be eager to look at real case studies from your experience, Kate. So do you have something you can share that is related to your work in the mobile gaming space?

Kate: Absolutely, yeah. So I could talk about a meta-analysis that we did last year, where we looked at over 100 of these Conversion Lift experiments in which gaming advertisers in the mobile space ran engagement campaigns on Facebook, to get an understanding both of what the results look like overall, and whether this is a good strategy for advertisers, and also whether there are learnings we can take on the best strategies. What we found when we looked at all of those campaigns was that engagement campaigns on Facebook were driving an average of 3X incremental ROAS, and a 40% increase in app opens from existing users of the app. Which was really, really positive, particularly on that ROAS or lifetime value metric, which is so important to our advertisers; showing that there is an uplift in the value delivered, rather than just the activity, was really promising to see.

Kate: So like I said, we looked at a lot of different campaigns, but we also looked at the difference between what app marketers had optimized for. So within those 100 different gaming engagement campaigns, some of them had used the mobile app engagement objective with link clicks, whereas others were optimizing for purchase. And our general narrative is to optimize for the action that you care the most about. If purchases or value are the thing that's most important to you, then you should optimize for that. But there are nuances when it comes to re-engagement. Because again, it all comes back to what was going to happen anyway, and are we having a material impact on that? So it was important for us to be able to split those results into whether they were optimizing for link clicks or purchase, and ask: is purchase always the right solution, or is link clicks often useful as well?

Kate: So we found that the campaigns that optimized for link clicks drove an average of 2.3 times more app opens, and the campaigns that optimized for purchase drove an average increase of 2X in incremental ROAS. Now this doesn't sound too surprising, it aligns with that narrative of optimizing towards what you want. So if you want more people to open the app, then you should optimize towards link clicks. If you want more people to purchase, then you should optimize towards purchase. What I would say is, if you have a cohort of users that have a high purchase rate and you're unlikely to be able to increase it, or if the audience is so small that you're not going to get a good signal, then that's where link clicks can come in handy. So that might be a high-value re-engagement campaign, where just reminding them to open the app is as powerful; or on the purchase side, it might be a valuable strategy for re-engaging non-paying users and trying to convert them to paying users.

Kate: So it very much is horses for courses and making sure that you are testing it, looking at not just what would have happened anyway and taking credit for high purchase rates of high purchasers, but instead looking at what impact are these campaigns having, and should I try a different objective for different cohorts.

Alex: Nice, that's very interesting for understanding exactly what goals are driving these gaming companies. Have you seen any, let's say, differences in objectives or approach across different app verticals within the gaming space? Did you see maybe some more core strategy games having certain goals and approaches in mind, or, with the more recent rise of hyper-casual gaming, maybe some differences in how they deal with the topic?

Kate: Yeah, so I guess for now re-engagement and hyper-casual just are not a match. We do not see that vertical spending a large amount of budget or time on retaining users. Their strategy obviously is much more about high-volume acquisition. What we have seen that was quite interesting was that early re-engagement works very well in the social casino space, where they maybe have quite a steep cliff a couple of days after acquisition. So re-engaging users early in their lifetime, say a week after they've installed the app, even if there's inactivity, proved quite useful, and maybe that's where, if you have non-payers, using purchase conversions can help to convert them to payers. But again, always remember what you'll see in your MMP if you don't have an inactivity period.

Kate: In the casual space there is a lot of ... and similarly on the core or strategy side, there's a lot of lapsed-user re-engagement, so trying to get users back; particularly for older games, quite a bit of activity takes place in trying to re-engage existing players, VIP players, or those that have lapsed for a set period of time. So there is definitely variance across the different sub-genres of gaming. And then obviously outside of gaming, in terms of use cases for incrementality, there's a lot of comparing prospecting to retargeting, and where is your ... exactly like you described, that marginal costs, marginal gains piece, where is the money better spent; so the retargeting piece is a big one, particularly in e-commerce. And then in other verticals, maybe like online travel, it's what are the gains above, say, the search marketing that is such a big part of their business. But even within mobile gaming, there are quite a few different campaigns that make sense. A lot of advertisers are trying to achieve always-on engagement, but the challenges of measurement are one of the main reasons they haven't been able to get there yet.

Alex: Thanks Kate for all those examples, I think they make the whole discussion much more concrete. I think we've covered a lot. As you can see, this is a concept that is going to evolve both for user acquisition and re-engagement. We definitely believe, both at Adjust and at Facebook, that this is something all marketers should have a look at. There's a lot to work on here, it's not as intuitive as just checking data on a dashboard. It's really hand-in-hand work with your partners, and in this case Facebook is doing a great job there, really consulting their clients around that topic. But we believe the payoff is there, there is value for customers who are doing it correctly and...a lot of benefits for your business.
