Retention rates and other lies

Paul H. Müller

Sep 23, 2019

For many Adjust clients the retention rate, specifically “Day 1”, is a vital indicator of campaign performance, yet discussion on the definition and calculation of this widely-adopted metric is scarce.

While this KPI may at first glance appear rather simple and self-explanatory, in reality it requires some rather complex considerations, and can be defined in many different ways that lead to very different results.

For example, an Adjust client that had recently switched over from another attribution provider saw their average D1 retention drop dramatically.

The client concluded that, clearly, there must be something wrong with the way Adjust calculates the retention rate.

It turns out both were “right”; they simply had differing definitions of what a ‘day’ is.

How hard can it possibly be?

When we think about a user that has been “retained” on the day after he installed, we think about someone who installed our app on his way home from work and then opened it again the next morning. Maybe he played the same game twice on his way to work, or used the same food delivery app two nights in a row. This is the intuitive way to think about a user opening our app on two consecutive days, where “day” means the calendar day for this user, specific to his location and time zone.

This is where the trouble begins.

If all your users lived, worked and traveled only within a single time zone, the above definition would be trivial to implement. But that’s not how the real world works. Users are spread all over the globe, and time zones are some of the weirdest things programmers have to deal with. Heaven forbid your users move around between time zones.

This means it is almost impossible to tell when the calendar day has rolled over to the next for each individual user, and therefore when that user should be considered retained for D1.

This is also the reason that no current MMP or mobile analytics provider uses this definition. It is simply too complex to implement.

But if “day” doesn’t mean what you think it means, how does your attribution provider define it?

The difference a day makes

The simplest way to solve this issue is to use a centralized clock, creating a universal “standard day”.

This is the approach most commonly used by analytics providers, which typically pick UTC as the time zone that defines the global “day”. It is easy to implement and only requires the server to know what day it was in UTC when a user opened the app.
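To make the mechanics concrete, here is a minimal sketch of what such a “standard day” check looks like (Python, with illustrative names; not Adjust’s or any particular provider’s actual implementation):

```python
# Minimal sketch of the UTC "standard day" approach: a user counts as
# D1 retained if a session falls on the UTC calendar day directly
# after the UTC calendar day of the install.
from datetime import datetime, timezone

def is_d1_retained_utc(install_ts: datetime, session_ts: datetime) -> bool:
    install_day = install_ts.astimezone(timezone.utc).date()
    session_day = session_ts.astimezone(timezone.utc).date()
    return (session_day - install_day).days == 1
```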

But how does this definition differ from the intuitive assumptions we looked at before?

As long as we talk about European users, the difference between a UTC day and the actual calendar day is only a few hours. Depending on daylight saving time and the country, we rarely differ by more than ±2h.

But what about the rest of the world?

In New York, the actual day trails our global UTC day by between 4 and 5 hours, not great, not terrible. Thankfully, people will typically be asleep during those hours.

However, San Francisco sits either 7 or 8 hours behind our UTC day. This means that a user who opens an app before 5 pm local time and then opens it again afterwards (even within the same hour) will count as having retained on day 1. Clearly not what we have in mind when thinking about retention.

China, on the other hand, is 8 hours ahead of the UTC day, shifting “midnight” to 8 am. So a user opening an app early in the morning and then again at lunch will count as retained.
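Feeding these examples into the sketch above makes the distortion visible. The timestamps here are hypothetical, chosen only to straddle the UTC day boundary:

```python
# Two sessions under an hour apart in San Francisco, and a morning/lunch
# pair in China, both straddle a UTC day boundary and count as D1 retention.
from datetime import datetime
from zoneinfo import ZoneInfo  # Python 3.9+

sf = ZoneInfo("America/Los_Angeles")
cn = ZoneInfo("Asia/Shanghai")

# San Francisco (UTC-7 in summer): install 4:40 pm, reopen 5:20 pm the same afternoon.
print(is_d1_retained_utc(datetime(2019, 9, 23, 16, 40, tzinfo=sf),
                         datetime(2019, 9, 23, 17, 20, tzinfo=sf)))  # True

# China (UTC+8): install 7:30 am, reopen at lunch on the same local day.
print(is_d1_retained_utc(datetime(2019, 9, 23, 7, 30, tzinfo=cn),
                         datetime(2019, 9, 23, 12, 0, tzinfo=cn)))   # True
```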

So clearly this definition has some serious issues and, in these examples, does not realistically reflect “D1 retention”. While we can still compare campaign performance according to retention rate within a time zone, a global comparison using this definition would be deeply flawed.

But wait, there’s more.

In a well-intended attempt to offer customization, many measurement companies allow their clients to pick a time zone other than UTC to define their “global day” for cohorts.

The idea is that clients can pick a day that cuts their user base’s “real days” in the most realistic way possible.

But, coming back to our example of a client switching to Adjust, this can be used to create deeply flawed KPIs.

Imagine a company based in China with most of their users in North America. They define their global day as China Standard Time.

Suddenly “midnight” is defined as 12pm in New York and 9am in San Francisco.

By artificially splitting the most active periods of app usage into two separate days, we get extremely inflated retention rates. Users playing two rounds of a game a few minutes apart during their lunch break will be considered “retained”.
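A sketch of how this plays out, assuming the provider simply swaps UTC for the client-chosen cohort time zone (again with hypothetical timestamps):

```python
# With Asia/Shanghai as the configured "global day", two game rounds a few
# minutes apart over lunch in New York fall on different cohort days.
from datetime import datetime
from zoneinfo import ZoneInfo

def is_d1_retained_in_tz(install_ts: datetime, session_ts: datetime,
                         cohort_tz: ZoneInfo) -> bool:
    install_day = install_ts.astimezone(cohort_tz).date()
    session_day = session_ts.astimezone(cohort_tz).date()
    return (session_day - install_day).days == 1

ny = ZoneInfo("America/New_York")
lunch_install = datetime(2019, 9, 23, 11, 55, tzinfo=ny)  # 11:55 pm in Shanghai
lunch_reopen = datetime(2019, 9, 23, 12, 10, tzinfo=ny)   # 12:10 am the next day in Shanghai
print(is_d1_retained_in_tz(lunch_install, lunch_reopen, ZoneInfo("Asia/Shanghai")))  # True
```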

It should be clear that this way of counting retention is not just wrong, it is dangerous. MMPs offering this approach, without properly explaining to clients the drawbacks and potential data issues, can mislead marketers into burning through budgets on users who appear far more valuable than they really are.

Right on time

At Adjust, we’ve spent the last few years researching alternative approaches to solve this issue.

Instead of defining “day” by changing dates on a calendar, we defined “day” as “period of 24 hours”.

This way, we can compare two timestamps of a user opening an app and determine whether more than 24 hours have passed between them.

So no matter the time zone or local time of a user, 24 hours must pass before we will count him as retained for D1.

All users follow the same definition without any edge cases, creating a comparable and meaningful KPI.
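As a sketch, assuming D1 covers sessions at least 24 and less than 48 hours after install (the exact upper bound is our assumption for illustration, not a confirmed detail of Adjust’s implementation):

```python
# Minimal sketch of the rolling 24-hour definition: only the elapsed time
# between install and session matters, never the calendar or time zone.
from datetime import datetime, timedelta
from zoneinfo import ZoneInfo

def is_d1_retained_rolling(install_ts: datetime, session_ts: datetime) -> bool:
    elapsed = session_ts - install_ts
    return timedelta(hours=24) <= elapsed < timedelta(hours=48)

# The San Francisco pair from earlier no longer counts as D1 retention.
sf = ZoneInfo("America/Los_Angeles")
print(is_d1_retained_rolling(datetime(2019, 9, 23, 16, 40, tzinfo=sf),
                             datetime(2019, 9, 23, 17, 20, tzinfo=sf)))  # False
```

Because only elapsed time matters, the same pair of app opens evaluates identically wherever the user happens to be.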

Easy, right?

A necessary struggle

For clients coming over to Adjust from other MMPs, adjusting to this methodology can be tricky to begin with. The client mentioned earlier saw their D1 retention drop from around 50% to around 25%.

This makes sense - around half of the users retaining on D1 were doing so by calendar logic, but weren’t returning 24 hours after install. Still, nobody likes to see numbers drop - and understandably the client was concerned.

This is why it’s important to align analysis and marketing alike under a common goal - the overall success and health of your app’s ecosystem - rather than chasing big numbers. Much like our fraud suite, whose adoption can see a drop in installs as we ferret out the fakes you’d previously been paying for, switching to our cohort methodology can entail a drop in D1 retention. This is not a bad thing - it gives you more objective and actionable statistics - but if the UA manager’s bonus depends on artificially inflated retention, then an awkward conversation is sure to follow.

Ultimately, it’s the informed marketer that makes the best decisions. Understanding the logic behind every KPI pays dividends, as does using ones that factor in things like time zones and geography.

High numbers are great, but accurate numbers are even better, and that’s why we take the approach we do to retention calculations.
