
Demystifying cohort KPIs, part 1: Understanding retention and sessions

Cohort analysis is an extremely useful tool for understanding how the behavior of users in your app shifts after you’ve made a change or an optimization, whether that’s on the product side or in your marketing outreach. Rather than viewing your app’s performance in aggregate, cohort analysis groups users by shared characteristics, such as install date or acquisition source, and measures how each group behaves over time. This reveals patterns in retention, engagement, and monetization that traditional reporting often misses, helping mobile marketers make faster, smarter decisions.

Adjust’s extensive list of cohort key performance indicators (KPIs) makes it easy to uncover meaningful insights into how your campaigns and in-app experiences perform. From analyzing day 1 retention to tracking sessions per user, these metrics provide the context needed to optimize onboarding flows, refine user acquisition (UA) strategies, improve user lifetime value (LTV), and more.

This article is the first in a three-part series that breaks down our cohort KPIs. In part 1, we focus on retention and session-based metrics: what they mean, how to interpret them, and how to use them to drive strategic app growth.

What is cohort analysis in mobile apps?

Cohort analysis is a method of grouping app users and tracking how their behavior changes across key points in the user journey. It helps you see how specific groups respond to product updates, campaigns, or onboarding flows, so you can measure impact more precisely.

By comparing behavior at set intervals, like day 1, day 3, or week 4, you can identify what’s driving engagement or drop-off within each cohort. For example, users acquired during one campaign might show stronger day 7 retention or trigger more sessions per user than another group, helping you optimize acquisition and in-app experience accordingly.


What is cohort retention in mobile apps?

Cohort retention measures how many users from a defined group, typically based on install date, return to your app after their first session. It reveals whether your app drives repeat engagement and helps assess the long-term value of your acquisition and product strategies. Unlike overall retention, cohort retention isolates behavior over time based on shared starting points, making it ideal for tracking patterns and benchmarking improvements.

Why retention and session KPIs matter for app growth

Acquisition brings users in. But what drives real, lasting success is what happens after the install. That’s where retention and session KPIs come in, not just as metrics, but as strategic tools for shaping product and marketing decisions.

Retention highlights long-term user value

When users continue to engage, they drive more sessions, increase monetization potential, and create higher LTV. High retention also improves UA efficiency by boosting the long-term impact of each install. On the flip side, poor retention means high churn, and even the best-performing campaigns generally deliver weak overall returns as a result.

Sessions reflect engagement depth

Retention tells you who comes back, but sessions show how often and how meaningfully they interact. Frequent, repeat sessions may point to habit formation, while session drop-offs, despite steady retention, can flag UX issues or friction. Segmenting these trends by cohort surfaces insights that help prioritize optimizations across the user journey.

These KPIs become even more valuable when segmented by acquisition source, app version, or platform—something we’ll dive into next.

How to use cohort tables to analyze app behavior

Cohort tables turn user behavior into actionable insights, helping you identify where users drop off, when they return, and which campaigns or features drive long-term engagement.

Cohort data can be visualized in several formats, including heat maps, line charts, and funnel charts.

Reading cohort retention tables

Cohort tables track performance metrics like retention or sessions across time intervals (e.g., days after install, or DAI). Each row reflects a user group (based on install date), and each column shows how that group behaved on a specific day. In retention tables, for example, the day 7 column shows what percentage of users returned seven days after install.

Viewed as a heatmap, patterns emerge, like steady drop-off or sudden spikes, making it easier to spot friction or growth triggers tied to campaigns or UX changes.
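
To make the mechanics concrete, here’s a minimal Python sketch of how such a table is assembled from raw install and session timestamps. The data and column names are hypothetical; this illustrates the logic, not Adjust’s actual export schema.

```python
import pandas as pd

# Hypothetical raw data: one row per session, plus each user's install date.
sessions = pd.DataFrame({
    "user_id":      [1, 1, 1, 2, 2, 3, 3, 3],
    "install_date": pd.to_datetime(["2024-05-06"] * 5 + ["2024-05-07"] * 3),
    "session_date": pd.to_datetime([
        "2024-05-06", "2024-05-07", "2024-05-09",  # user 1
        "2024-05-06", "2024-05-08",                # user 2
        "2024-05-07", "2024-05-07", "2024-05-08",  # user 3
    ]),
})

# Day after install (DAI): day 0 is the install day itself.
sessions["dai"] = (sessions["session_date"] - sessions["install_date"]).dt.days

# Rows = install-date cohorts, columns = DAI,
# values = number of distinct users active on that DAI.
retention_table = sessions.pivot_table(
    index="install_date", columns="dai",
    values="user_id", aggfunc="nunique", fill_value=0,
)
print(retention_table)
```

Dividing each column by the day 0 count turns these retained-user counts into retention rates.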

Understanding DAI logic

DAI logic aligns metrics to the user lifecycle rather than to calendar dates. Day 0 is the install day, day 1 is the next full day, and so on. Since users install at different times, cohort size naturally decreases with each DAI. This is normal and critical to keep in mind when analyzing later-stage metrics.

Spotting drop-offs and standout cohorts

With cohort tables, you can pinpoint:

  • Churn points: A dip between day 1 and day 3 might signal friction in onboarding or value delivery.
  • Sticky cohorts: Higher retention in one campaign could point to better targeting or onboarding.
  • Engagement dips: Fewer sessions per user, despite steady retention, might reveal low product depth.

Example scenario

If a cohort in APAC shows high day 1 retention but steep day 7 drop-off, and only 1 session per returning user, it may indicate weak post-onboarding engagement or poor localization. Segmenting by campaign and creative helps identify what’s working and where to optimize.
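
As an illustration of how you might flag such a cohort programmatically, the sketch below compares day 7 retention against day 1 for two toy cohorts. The 50% drop threshold is illustrative, not an Adjust default.

```python
import pandas as pd

# Toy retention rates (as fractions of cohort size) by DAI for two cohorts.
rates = pd.DataFrame(
    {0: [1.00, 1.00], 1: [0.42, 0.45], 3: [0.30, 0.12], 7: [0.22, 0.05]},
    index=["campaign_A", "campaign_B"],
)

# Flag cohorts whose day 7 retention fell below half their day 1 rate.
steep_dropoff = rates[rates[7] < rates[1] * 0.5]
print(steep_dropoff.index.tolist())  # ['campaign_B']
```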

How Adjust defines and segments user cohorts

Adjust cohorts are built to help you analyze behavior from the moment users install, or are reattributed to, your app. From there, you can apply flexible filters to measure retention, sessions, and revenue across meaningful user groups.

Install-date cohorting

Most cohort reports in Adjust start by grouping users by install date. Metrics like retention rate, sessions per user, and LTV are then tracked by day after install (DAI) to reveal how engagement and value evolve over time.

You can visualize cohorts across three timeframes:

  • Daily: Up to day 120
  • Weekly: Up to week 52
  • Monthly: Up to month 36

Switch between cumulative and non-cumulative views to track total outcomes or changes at each stage of the user lifecycle.
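
The difference between the two views is just a running total. A toy example in Python, assuming per-DAI session counts for a single cohort:

```python
import pandas as pd

# Non-cumulative view: activity that happened on each DAI alone.
non_cumulative = pd.Series([400, 180, 120, 90], index=[0, 1, 2, 3], name="sessions")

# Cumulative view: running total up to and including each DAI.
cumulative = non_cumulative.cumsum()

print(pd.DataFrame({"non_cumulative": non_cumulative, "cumulative": cumulative}))
```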

Acquisition source and tracker-level segmentation

Adjust tracker URLs let you analyze user performance based on the full attribution hierarchy:

  • Network
  • Campaign
  • Ad group
  • Creative

This structure enables precise comparisons, like testing which campaign creative drives higher day 7 retention or which ad group leads to longer session durations. Cohorts also include reattributions, giving a complete picture of both new and returning user behavior.

Platform, country, and device filters

To go even deeper, you can segment cohorts by:

  • Platform (iOS, Android, etc.)
  • Country (based on where the user was first active)
  • Device type (e.g. phone vs. tablet)

These filters help uncover platform-specific or geo-driven patterns. For instance, users on tablets in North America might show higher retention but lower session frequency than phone users in Southeast Asia. With these insights, you can localize campaigns, refine UX, and prioritize high-performing markets.

Key cohort KPIs explained

To understand how users engage with your app over time, it’s essential to measure the right cohort KPIs. These metrics provide the building blocks for uncovering performance trends and identifying opportunities to improve retention, engagement, and user lifetime value. Below, we break down the five key cohort KPIs used in Adjust.

Cohort size

Cohort size refers to the total number of users included in a given cohort, typically based on install or reattribution date. It forms the baseline for calculating most other KPIs, including retention and sessions. In Adjust, cohort size is calculated for each DAI, meaning it reflects how many users in the cohort have reached a specific DAI milestone.

For example, if you’re analyzing users who installed the app during the previous week, only those who have reached day 3 since install are included in the day 3 metrics. If a user installed on Sunday and today is Monday, they won’t appear in the day 2 or day 3 data yet. This is why cohort size tends to shrink the further out you go, as newer users haven’t completed enough time in-app to be counted on later days.

A cohort size table shows how many users from the selected cohort have reached each DAI milestone. The cohort shrinks over time because newer users haven’t yet accumulated enough days post-install.

Understanding how the size of your cohort changes by DAI ensures you're interpreting retention and engagement metrics in the right context.
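
The shrinking-cohort logic is easy to reproduce. A minimal sketch, assuming a list of install dates and a report date (all names and values are hypothetical):

```python
import pandas as pd

# Hypothetical installs over one week; the report is run on 2024-05-13.
installs = pd.DataFrame({
    "user_id": range(1, 8),
    "install_date": pd.to_datetime([
        "2024-05-06", "2024-05-06", "2024-05-07", "2024-05-08",
        "2024-05-10", "2024-05-12", "2024-05-12",
    ]),
})
report_date = pd.Timestamp("2024-05-13")

# A user only counts toward the day-N cohort size once day N
# has elapsed for them.
installs["max_dai_reached"] = (report_date - installs["install_date"]).dt.days

cohort_size_by_dai = pd.Series(
    {dai: int((installs["max_dai_reached"] >= dai).sum()) for dai in range(8)},
    name="cohort_size",
)
print(cohort_size_by_dai)  # shrinks at later DAIs as newer installs drop out
```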

Retained users

Retained users are those who return to the app and trigger at least one session on a given day after install. This raw count gives a clear picture of how many users are coming back, separate from percentages or ratios.

Continuing with our example cohort, a retained users table shows how many users installed on a specific day of the week and returned on subsequent DAIs; dashes mark DAIs that haven’t occurred yet for that install group.

This view helps teams assess the actual volume of retained users across install days and DAIs.

Retention rate

Retention rate is the percentage of users in a cohort who return on a given day after install. It’s calculated by dividing the number of retained users by the cohort size for that DAI.

Formula: Retention rate (Day N) = (Retained users on day N ÷ Cohort size on day N) × 100

This metric reveals how well your app retains users at different stages—whether you're optimizing for early engagement (day 1), mid-term habit-building (day 7), or longer-term value (day 30 and beyond).
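
In code, the formula is a one-liner. A quick sketch with made-up numbers:

```python
def retention_rate(retained_users: int, cohort_size: int) -> float:
    """Day-N retention rate as a percentage, per the formula above."""
    return retained_users / cohort_size * 100 if cohort_size else 0.0

# E.g., if 1,800 of the 4,500 users who have reached day 7 returned on day 7:
print(retention_rate(1_800, 4_500))  # 40.0
```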

Sessions by DAI

Sessions by day after install shows the total number of sessions triggered by a cohort on each DAI. It reflects not just who returns, but how often they use the app when they do.

A sessions-by-DAI table segments session activity by each user’s install day. For example, users who installed on Friday triggered four sessions on day 0, one session on day 1, and two sessions on day 2. To get the full picture of engagement, total sessions are then summed across install days.

This helps identify patterns like drop-offs after onboarding or spikes in engagement around key in-app events.
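
A toy Python sketch (hypothetical data, matching the Friday example above) shows how such a table is tallied, counting session rows rather than distinct users:

```python
import pandas as pd

# One row per session: the user's install day and the session's DAI.
sessions = pd.DataFrame({
    "install_day": ["Fri"] * 7 + ["Sat", "Sun"],
    "dai":         [0, 0, 0, 0, 1, 2, 2, 0, 0],
})

# Total sessions per install day and DAI (rows counted, not distinct users).
by_install_day = sessions.pivot_table(
    index="install_day", columns="dai", aggfunc="size", fill_value=0,
)
print(by_install_day)

# Summing across install days gives the cohort-wide total per DAI.
print(by_install_day.sum(axis=0))
```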

Sessions per user

Sessions per user measures the average number of sessions triggered per retained user on a specific DAI. It helps distinguish between users who casually return and those who actively engage.

Formula: Sessions per user (Day N) = Total sessions on day N ÷ Retained users on day N

By dividing sessions by retained users (instead of cohort size), you isolate engagement depth from overall retention performance. A cohort with moderate retention but high sessions per user may still indicate a strong user-product fit.
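
Again, the arithmetic is simple; the key design choice is the denominator. A sketch with illustrative numbers:

```python
def sessions_per_user(total_sessions: int, retained_users: int) -> float:
    """Average sessions per retained user on day N, per the formula above."""
    return total_sessions / retained_users if retained_users else 0.0

# 5,400 sessions from 1,800 retained users on day 7 -> 3.0 sessions per user.
print(sessions_per_user(5_400, 1_800))
```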

Common pitfalls when interpreting cohort KPIs

Cohort KPIs are powerful but only when read in the right context. Missteps in interpretation can lead to misleading conclusions that affect everything from UA spend to product iterations. Here's what to watch out for.

Confusing retention with engagement

A common error is assuming that more sessions mean better retention. But a user who logs in frequently isn’t necessarily a loyal one; they might binge briefly and then churn. That’s why retention and session KPIs must be analyzed together to understand both loyalty and depth of use.

Misreading incomplete cohort data

Because cohort metrics are calculated by DAI, not all users will have reached every DAI. This is especially important for interpreting metrics at later intervals. Comparing day 7 metrics across newer and older cohorts without factoring in cohort maturity can skew your analysis.

Mismatched timeframes (D1 vs W1)

Day-based and week-based retention aren't interchangeable. Day 1 retention captures users who return on the first day after install, while week 1 typically reflects cumulative return behavior across the first week. Misaligning the two can confuse internal benchmarks or cross-team reporting.
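
A toy example makes the gap concrete, assuming one common definition of week 1 (any return during days 1-7; definitions vary by tool):

```python
# Toy per-user return days for a three-user cohort (day 0 = install).
returns = {"u1": [1, 5], "u2": [6], "u3": []}
cohort_size = len(returns)

# Day 1 retention: returned exactly on day 1.
d1 = sum(1 in days for days in returns.values()) / cohort_size

# Week 1 retention: returned at any point during days 1-7,
# a cumulative measure, hence usually higher.
w1 = sum(any(1 <= d <= 7 for d in days) for days in returns.values()) / cohort_size

print(f"D1 = {d1:.0%}, W1 = {w1:.0%}")  # D1 = 33%, W1 = 67%
```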

Sample size distortions

Comparing a 100-user cohort to one with 10,000 users can surface noisy trends. Small cohorts are more volatile, meaning retention or session rates can swing widely with just a few user actions. Always view KPIs alongside cohort size to assess reliability.
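
One quick way to gauge that reliability is a rough confidence interval around the retention rate. A sketch using the normal approximation, as a back-of-envelope check rather than anything Adjust reports:

```python
import math

def retention_interval(retained: int, cohort_size: int, z: float = 1.96):
    """Approximate 95% confidence interval for a retention rate."""
    p = retained / cohort_size
    margin = z * math.sqrt(p * (1 - p) / cohort_size)
    return max(0.0, p - margin), min(1.0, p + margin)

# The same 30% day 7 retention is far noisier in a 100-user cohort
# than in a 10,000-user one:
print(retention_interval(30, 100))        # roughly (0.21, 0.39)
print(retention_interval(3_000, 10_000))  # roughly (0.29, 0.31)
```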

Best practices for cohort analysis

Cohort analysis is most impactful when used consistently and strategically. To move beyond reporting and toward action, teams need to look at the right data, at the right time, through the right lens. Here are four best practices to get the most out of your cohort KPIs:

Monitor cohort KPIs weekly

Weekly cohort reviews help you monitor shifts in retention and engagement while minimizing noise from daily fluctuations. This is especially useful after rolling out updates, launching campaigns, or changing onboarding flows. In Adjust, cohort tables can be viewed at daily, weekly, or monthly intervals to fit your team’s review cadence.

Segment by acquisition source and user behavior

Don’t stop at install-date cohorts. Segmenting by tracker, such as campaign, ad group, or creative, lets you pinpoint which sources bring in high-quality users. Layer on behavior-based filters (like completed onboarding or triggered key events) to identify which actions correlate with better retention or higher LTV.

When segmenting by tracker or event, ensure your cohorts are large enough to produce reliable results. Tiny segments may show exaggerated swings in retention or engagement that don’t reflect true patterns.

Combine with qualitative insights and A/B testing

Cohort metrics tell you what’s happening. Pairing them with qualitative research, like in-app surveys or support feedback, can help explain why. Mapping cohort KPIs to A/B test variants allows teams to evaluate performance changes over time. For example, if variant B shows higher day 7 retention and more sessions per user, you have evidence that your iteration delivers stronger results.

Each insight you gather from cohort KPIs should point toward a potential optimization like refining a push notification, streamlining onboarding, or reallocating UA spend.

Use event-based cohorting for deeper analysis

Event-based cohorting, grouping users by actions like completing onboarding or reaching a paywall, unlocks deeper insights into behavior after specific milestones. This technique is especially valuable for analyzing feature adoption, monetization funnels, and re-engagement campaigns. We’ll dive into this more in part 3 of this series.

Align cohort timeframes to your app lifecycle

Adjust your DAI or cohort range to match your app’s lifecycle—shorter for gaming, longer for subscriptions—so your insights align with user value windows.

Making the most of your cohort retention data

Cohort KPIs, especially retention and session metrics, are among the most actionable performance signals in mobile growth. They reveal where the product meets user expectations, which acquisition strategies drive lasting engagement, and when user value begins to taper off.

Solutions like Adjust Datascape bring these signals into sharper focus, letting teams explore retention trends across creatives, geos, and app versions—all in one view. With intuitive visualizations and flexible filters, Datascape makes it easier to compare cohorts and uncover what drives real impact. The faster you connect the dots, the faster you optimize.

In part 2, we’ll move from engagement to monetization, exploring how KPIs like LTV and ARPU complete the growth equation.

Ready to turn user behavior into growth insights? Request a demo to see how Adjust can help you optimize one KPI at a time.
