'Opens' is a terrible metric - here's what to calculate instead

Of all the app metrics out there, “opens” is, to us, a vanity metric. It’s a relic from a time when phones were simpler, users were more attentive, and app analytics were mostly a distraction. So how exactly is it broken, and what can replace it?

I’ve spoken about this issue at a few different conferences, because our approach - measuring sessions rather than opens - is important context for understanding reports like our Mobile Benchmarks.

The “open” is frequently used as a token of engagement, measuring the number of times a user opens the app. In practice, it is typically derived from context switches - the app being pulled into the foreground, swapped to the background, or simply “opened”.

This worked fine when apps were generally used in solid chunks of time - you’d open the app, do your thing, and only switch away when you were done. One “open” then corresponded very nicely to a solid chunk of time spent engaged with an app.

Then the platforms started allowing users to multitask, let apps run in the background, and directly encouraged rapid switching, e.g. through the iOS double-tap. Over time, app developers constructed their apps around the premise that users no longer necessarily gave apps their attention over a consistent chunk of time.

The demise of the open metric

So what did this mean practically, in terms of measuring engagement via opens?

The first issue is that the logic is inconsistent between platforms. iOS and Android, for example, have very different ideas about how and when your code is called as the app is launched. On iOS, we’re familiar with didFinishLaunching and related methods; on Android, we talk instead of activities that are triggered at different times. Simply defining an “open” as whenever either of these types of methods is called immediately creates inconsistencies.

The second issue comes from the fact that different apps have totally different behaviors around opens. Players switch away from Clash of Clans in a totally different way than they do from Flappy Bird, where pauses are treated totally differently. So the number of opens that get triggered depends on the mechanics of the game, not on the users’ engagement.

The third issue is that a proliferation of re-engagement techniques - push notifications, instantaneous product retargeting, and so forth - can result in totally different patterns of opens across different apps.

What this means is that the “open” metric isn’t comparable. Its meaning changes between platforms - so you can’t compare your iOS and Android apps to see which is doing better. You can’t compare two different apps that you’ve released, because their mechanics disrupt your KPI. And you can’t compare different cohorts across time, as any of these factors can change between builds or optimizations to your campaigns.

This makes it nearly impossible to derive any useful insight into your performance.

What are the alternatives?

If we’re looking to measure engagement as an expression of how interested a user is in the offering of a particular app, and as a proxy for how likely they are to become valuable users, “opens” is clearly broken.

I regard two particular metrics as key for understanding these values.

The first one is the proper session. The problems with “opens” are quite similar to the issues web tracking faced in connecting frequent, discrete activities. If we instead calculate sessions as spans of activity containing any number of opens, we get a more accurate sense of how often and how deeply users engage with your content, without mixing in any of the problems above.

In our case, we’ve defined a session as a discrete span of activity separated from any other activity by a 30-minute break. This can be a little more complex to measure. In our SDK, we check on each open when the last activity was recorded, reporting a new session only if the interval was long enough.
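To make the idea concrete, here is a minimal sketch of that session logic in Java. This is not Adjust’s actual SDK code - the class and method names are hypothetical - but it shows the core check: each open starts a new session only if at least 30 minutes have passed since the last recorded activity.

```java
import java.util.function.LongSupplier;

// Hypothetical sketch of 30-minute session logic (not actual SDK code).
class SessionTracker {
    static final long SESSION_GAP_MS = 30L * 60 * 1000; // 30-minute break

    private final LongSupplier clock;   // injected clock, so tests can fake time
    private Long lastActivityAt = null; // timestamp of the previous open, null at first
    private int sessionCount = 0;

    SessionTracker(LongSupplier clock) {
        this.clock = clock;
    }

    // Call on every app open (foreground event).
    void onAppForeground() {
        long now = clock.getAsLong();
        // A new session starts only on the first open, or when the gap
        // since the last activity is at least 30 minutes.
        if (lastActivityAt == null || now - lastActivityAt >= SESSION_GAP_MS) {
            sessionCount++;
        }
        lastActivityAt = now;
    }

    int sessionCount() {
        return sessionCount;
    }
}
```

With this definition, a user who opens the app five times over ten minutes generates one session, not five opens - which is exactly the property that makes the metric comparable.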

The second metric that more effectively mirrors users’ engagement is the amount of time spent. Intuitively, the more time users spend in the app, the more engaged they are.
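Time spent can be accumulated with the same foreground/background events used above. The following is a hedged sketch, again with hypothetical names: it records a timestamp when the app comes to the foreground and adds the elapsed interval to a running total when it goes to the background.

```java
import java.util.function.LongSupplier;

// Hypothetical sketch of time-spent accumulation (not actual SDK code).
class TimeSpentTracker {
    private final LongSupplier clock;  // injected clock, so tests can fake time
    private Long foregroundedAt = null; // null while the app is backgrounded
    private long totalTimeMs = 0;

    TimeSpentTracker(LongSupplier clock) {
        this.clock = clock;
    }

    // Call when the app comes to the foreground.
    void onForeground() {
        foregroundedAt = clock.getAsLong();
    }

    // Call when the app goes to the background; adds the elapsed interval.
    void onBackground() {
        if (foregroundedAt != null) {
            totalTimeMs += clock.getAsLong() - foregroundedAt;
            foregroundedAt = null;
        }
    }

    long totalTimeMs() {
        return totalTimeMs;
    }
}
```

Because the result is expressed in plain milliseconds, it sums naturally across any number of foreground stints, no matter how the platform or the app’s mechanics chop them up.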

Both metrics are part of our Mobile Benchmarks report. Since they both rely on very well-defined units - seconds and minutes - they’re immediately comparable. If you start measuring either proper sessions or time-spent today, you can immediately compare those numbers to the vertical benchmark in our report.
