Wooga: Paying higher CPIs can improve LTV and drive profits for your game
Senior Content Manager
May 16, 2018
Acquiring users at a low cost per install (CPI) and then driving them to engage at high levels used to be the business mantra UA managers followed to grow installs and profits. However, the realization that the average app loses its entire user base within a few months (!) is waking app marketers up to the hard truth that a focus on quantity over quality is a sure-fire way to burn money, not make it.
Thanks to a rethink of user acquisition that starts with a scorecard to ensure network partners are likely to deliver highly engaged users, Wooga, the Berlin-based maker of story-driven casual games, is moving the needle on key metrics and monetization for its new game Tropicats. We caught up with Yury Bolotkin, the Wooga UA manager executing what he calls “kick-ass UA,” to find out more about the strategy and why paying more for installs is paying dividends. We don’t give it all away here - Yury will share his learnings and lessons at Mobile Spree (Berlin, June 7).
You are following a radically new approach to improve metrics for your relatively new game. What’s the story here?
We launched Tropicats in November 2017, and initially we struggled to market this game successfully. We wanted to make sure the game was profitable, and that starts with acquiring profitable users. We began with a joint effort bringing together the development and marketing teams. The development team focused on improving in-game metrics such as retention and days played; the User Acquisition team, where I am, invested effort in improving how we run campaigns. In February we also got an initial budget to manage, which we invested in app optimization campaigns running on Facebook and AdWords. The idea was to target paying users from the start and bring them into the game for the long term.
This approach transformed our marketing campaign completely, and we saw it pay off. The data showed the game was earning more money, and the overall metrics of the game improved. Encouraged by this success, we decided to pay user acquisition costs that were double, triple, and even 5x the lowest CPIs, because we knew it would get us more paying users who would then spend more money and time in the game. You get what you pay for, and we were getting highly valuable users, so we decided to invest.
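The economics behind paying a higher CPI can be sketched with a back-of-the-envelope calculation. The numbers below are invented for illustration (they are not Wooga's figures): the point is simply that margin per install depends on the gap between lifetime value (LTV) and CPI, not on the CPI alone.

```python
# Hypothetical illustration: a 5x CPI can still be the better buy
# if the users it acquires have a proportionally higher LTV.
# All figures below are invented, not from Wooga.

def margin_per_install(cpi: float, ltv: float) -> float:
    """Profit contribution of one acquired user: LTV minus acquisition cost."""
    return ltv - cpi

# Cheap installs: low CPI, but low-value users.
cheap = margin_per_install(cpi=0.50, ltv=0.60)    # ~0.10 per install

# Expensive installs: 5x the CPI, but targeted paying users.
premium = margin_per_install(cpi=2.50, ltv=4.00)  # 1.50 per install

print(cheap, premium)  # the premium install earns roughly 15x the margin
```

Under these toy assumptions, scaling spend on the "expensive" channel is the rational move even though its CPI looks five times worse on a cost dashboard.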
That’s a smart move, but one you make when analytics and expectations are aligned. What were the data points and KPIs you watched to make this decision at precisely the right time?
At launch, we had a moderate budget in comparison to our other game, June’s Journey, which had an enormous budget from the beginning. Granted, we launched Tropicats with a somewhat limited budget, but we were also seeing that the users we did acquire weren’t spending a lot in the game. Examining D7 retention (the share of users still active seven days after install), which is our main benchmark, made this clear. We decided to monitor the situation rather than scale our budget, to see how long it would stay this way. After all, it was November, pre-holidays, the period before competition for users gets tough around Christmas.
We decided not to make significant changes in our strategy. But then January came, and we had a discussion internally. We formed a small team, and I became responsible for the game’s UA budget. I had to decide which campaigns to run on which networks, and which strategy would let us spend the budget profitably for the game’s monetization. We made a fresh start with new campaigns on Facebook and AdWords targeting paying users. We kept the networks we used for UA but switched our approach. Instead of trying to find the best-paying users based on their profiles or interests, we decided to rely on the networks’ algorithms. These are the networks that claim their machine learning can do this better than humans, so we said, “okay, let’s try it and see if it turns out better for us.”
What did you need to do or learn to give up control to the algorithm?
It’s tough at first. It’s like you have to put away your ego and tell yourself: “Okay, I’m ready to take the gamble and rely on artificial intelligence to call the shots here.” And you also have to be ready to give credit to the artificial intelligence if it can do that. So, yes, that leaves you with less control over the campaign. But you also have more time to spend on other things like analyzing the data and thinking about new approaches you can take to campaigns and projects because you have the free time and the freedom to map out ideas you didn’t have the time to explore before.
The AI is targeting users it identifies as the best paying users, so you can devote your effort to deep-diving into the numbers and find the clues that you can pass on to the development team, for example, to help them improve the overall mechanics of the game. You can also gather the insights that will allow you to spend some budget on new platforms and try new sources.
But you don’t just experiment. You have developed a detailed list of questions you put to these ad partners to help you decide whether to allocate ad spend. Could you provide a high-level description of what you are looking for and how you know if your ad network partner makes the grade?
Because we have confidence in the algorithms on AdWords and Facebook to target the best paying users, I feel I can allocate a bit more budget toward testing—and even bid higher on other networks and experiment. I’m doing this on a few video networks that didn’t scale before because we were very moderate with our bids and didn’t know what LTVs would be. Now I can allocate more budget and make a higher margin when we get better quality users.
We always make sure these networks fill out a questionnaire we designed internally for this purpose. It’s basically a Google form that asks some great questions - nearly 60 in total - and gives us real food for thought. My personal favorites are:
1. Where does your traffic come from?
2. Do you offer self/managed service?
3. Are you able to receive post-install events? Is your algorithm able to optimize based on post-install events?
4. Do you support impression tracking through an impression link?
5. How do you report cost data? (API, link, etc.)
6. Which ad formats do you support?
We also added questions to check the partner’s technical setup and spot potential disconnects with how we do UA. These include asking how they are integrated with their publishers and at what point a click/impression URL is fired in their system. We have also developed a system to grade the answers and rank the partner, which I have tried to automate. It’s possible, but I prefer to go through the answers and evaluate them for myself. Now we have more than 20 evaluated responses from networks all over the world, and this really helps us spend with confidence and experiment to get the best results.
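The grading system Yury describes could be automated along these lines. This is a minimal sketch under invented assumptions: the criteria, weights, and 0-5 ratings below are hypothetical stand-ins for Wooga's actual (manual) evaluation, chosen to mirror the favorite questions listed above.

```python
# Hypothetical partner-grading sketch; criteria and weights are invented,
# loosely mapped to the questionnaire topics mentioned in the interview.

WEIGHTS = {
    "traffic_transparency": 3,   # "Where does your traffic come from?"
    "post_install_events": 3,    # can their algorithm optimize on these?
    "impression_tracking": 2,    # impression-link support
    "cost_reporting": 2,         # cost data via API, link, etc.
    "ad_formats": 1,             # breadth of supported formats
}

def grade_partner(answers: dict) -> float:
    """Weighted score normalized to 0-100.

    `answers` maps each criterion to a 0-5 rating assigned while
    reading the partner's questionnaire response; missing answers
    score zero.
    """
    max_score = 5 * sum(WEIGHTS.values())
    score = sum(weight * answers.get(name, 0) for name, weight in WEIGHTS.items())
    return round(100 * score / max_score, 1)

example_network = {
    "traffic_transparency": 4,
    "post_install_events": 5,
    "impression_tracking": 3,
    "cost_reporting": 4,
    "ad_formats": 3,
}
print(grade_partner(example_network))  # → 80.0
```

A ranked shortlist then falls out of sorting the 20+ evaluated responses by this score, though, as Yury notes, a human read of the free-text answers still catches things a rubric misses.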
Be sure to register here to attend Mobile Spree, where you can connect with Yury and learn more about how you can quiz ad partners and interrogate the data. With nearly a decade of experience in digital marketing working for networks including Fyber and GBN, Yury is excited about exploring new perspectives and fresh ideas that will allow him to optimize UA spend and grow results.