Monetization
— 10 min read

Applica’s Experiment History Review Framework

A framework to properly evaluate your historical product experiments. Presented by our CEO, Sviatoslav Hnizdovskyi.

<article-h2>Intro<article-h2>

<step-margin-s>——————————<step-margin-s>

<p-l>In this article, I present an experimentation framework that helps you properly evaluate all the historical product experiments you’ve done and identify the low-hanging fruit and bottlenecks as you approach your next hypotheses.<p-l>

<p-l>I started developing this framework and iteratively improved it while working as a PM at BetterMe, and later as an advisor & consultant to Fabulous, Loona, and Drops before starting my own project - Applica.<p-l>

<p-l>These apps were already quite optimized, and I desperately wanted to find a solution that would allow me to systematically learn from the history of a few hundred A/B tests they performed in the past.<p-l>

<p-l>Some important caveats before we start the deep dive:<p-l>

<step-s>1<step-s><p-l>Why should you do this? The framework can be used to identify the potential impact of your next hypotheses and the parts of the funnel that are already the most optimized.<p-l>

<step-s>2<step-s><p-l>This framework will reveal its full potential if you have a history of at least 80-100 experiments. The more experiments, the better and more informative the insights you gain. We’ll discuss how to prioritize hypotheses if you don’t have an extensive experimentation history in a separate article.<p-l>

<step-s>3<step-s><p-l>I will use LTV (customer lifetime value) as the main optimization metric, but you can also utilize this framework to target other metrics and categories besides monetization.<p-l>

<p-l>For activation, for example, such optimization metrics include: conversion to the aha moment, conversion to the first value exchange, the percentage of new users who successfully reach the habit moment, and short-term retention of new users.<p-l>

<p-l>For engagement: feature discovery, feature stickiness (frequency of usage), the average length of the user session, the number of sessions per user, and the number of user actions per session.<p-l>

<p-l>For retention: long-term retention (Day 7/30; Week 1/4; Month 6/12…), etc.<p-l>

<step-margin-s>——————————<step-margin-s>

<article-h2>Step 1: Categorize historical experiments<article-h2>

<step-margin-s>——————————<step-margin-s>


<p-l>Go through your entire history of experiments and assign each one a category. For example, in monetization optimization, some main categories are subscription pricing, trial length, paywall design, onboarding sequence, and special offers.<p-l>

<p-l>You can expand or contract specific types depending on how many tests you've run in each category. For example, if you've conducted many onboarding experiments, you can break them down into subcategories: number of screens, screen order, screen content, and so on.<p-l>
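<p-l>As a minimal sketch of this bookkeeping, the categorized history can live in a plain table; here is one way to represent and count it in Python (all experiment names and categories below are hypothetical examples, not real test data):<p-l>

```python
# Minimal sketch: each historical experiment gets exactly one category tag.
# All experiment names and categories are hypothetical examples.
experiments = [
    {"name": "Annual price $49 vs $59", "category": "subscription pricing"},
    {"name": "7-day vs 14-day trial", "category": "trial length"},
    {"name": "Paywall CTA wording", "category": "paywall design"},
    {"name": "Shorter onboarding survey", "category": "onboarding sequence"},
]

def count_by_category(experiments):
    """Count how many tests were run in each category."""
    counts = {}
    for exp in experiments:
        counts[exp["category"]] = counts.get(exp["category"], 0) + 1
    return counts
```

<p-l>Once a category accumulates many tests (like the onboarding example above), you would split its tag into subcategory tags and re-run the count.<p-l>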

<p-l>Some possible categories for another step of the funnel, for example “Retention”:<p-l>

<step-s>1<step-s><p-l>Feature 1;<p-l>

<step-s>2<step-s><p-l>Feature 2;<p-l>

<step-s>3<step-s><p-l>Feature N;<p-l>

<step-s>4<step-s><p-l>Long-term commitment mechanisms (Duolingo-like);<p-l>

<step-s>5<step-s><p-l>In-app messages;<p-l>

<step-s>6<step-s><p-l>Emails.<p-l>

<step-margin-s>——————————<step-margin-s>

<article-h2>Step 2: Revisit main metric change for each experiment<article-h2>

<step-margin-s>——————————<step-margin-s>


<p-l>Evaluate the outcome of each experiment. For every test, record the best change in LTV among its variants, the worst change, and the difference between the two. This gives you a sense of how reasonable and effective your hypotheses were.<p-l>

<p-l>For example, imagine you had the following AB test variants and results on paywall button CTA text:<p-l>

<p-l>Default variant: “Continue”, 0% (as it is the default)<p-l>

<p-l>Var A: “Purchase”, -10% to LTV<p-l>

<p-l>Var B: “Add to cart”, +5% to LTV<p-l>

<p-l>In this case, you would write +5% (the best change) in the first column, -10% (the worst change) in the next, and a Delta of 15 percentage points.<p-l>
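<p-l>This per-experiment bookkeeping can be sketched as a small helper; the figures mirror the hypothetical CTA test above:<p-l>

```python
def summarize_experiment(variant_ltv_changes):
    """Given each variant's LTV change in % (control = 0), return the
    best change, the worst change, and their delta in percentage points."""
    best = max(variant_ltv_changes.values())
    worst = min(variant_ltv_changes.values())
    return {"best": best, "worst": worst, "delta": best - worst}

# The paywall CTA example from the text:
cta_test = {"Continue (control)": 0.0, "Purchase": -10.0, "Add to cart": 5.0}
print(summarize_experiment(cta_test))  # {'best': 5.0, 'worst': -10.0, 'delta': 15.0}
```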

<step-margin-s>——————————<step-margin-s>

<article-h2>Step 3: Define the average metric change per test within a category<article-h2>

<step-margin-s>——————————<step-margin-s>


<p-l>Now create a pivot table: determine the average change in LTV per test within each category, for both the best and the worst variants, and count the number of experiments per category.<p-l>
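<p-l>The same pivot can be built in a spreadsheet or in a few lines of code; here is a stdlib-only Python sketch with hypothetical rows produced by Steps 1 and 2:<p-l>

```python
from collections import defaultdict

def pivot_by_category(rows):
    """rows: dicts with 'category', 'best', 'worst' (LTV change in %).
    Returns per-category test counts and average best/worst changes,
    i.e. the Step 3 pivot table."""
    grouped = defaultdict(list)
    for row in rows:
        grouped[row["category"]].append(row)
    pivot = {}
    for category, tests in grouped.items():
        n = len(tests)
        pivot[category] = {
            "tests": n,
            "avg_best": sum(t["best"] for t in tests) / n,
            "avg_worst": sum(t["worst"] for t in tests) / n,
        }
    return pivot

# Hypothetical experiment summaries (illustrative numbers only):
rows = [
    {"category": "paywall design", "best": 5.0, "worst": -10.0},
    {"category": "paywall design", "best": 3.0, "worst": -2.0},
    {"category": "trial length", "best": 1.0, "worst": -1.0},
]
print(pivot_by_category(rows))
```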

<step-margin-s>——————————<step-margin-s>

<article-h2>Step 4: Sort categories by most significant average metric improvement<article-h2>

<step-margin-s>——————————<step-margin-s>


<p-l>Now you are moving on to the central part – prioritization. Start by sorting the list of categories from the largest to the smallest average change in LTV.<p-l>
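<p-l>The ranking itself is a one-liner on top of the Step 3 pivot; the per-category averages below are hypothetical:<p-l>

```python
# Hypothetical per-category averages from the Step 3 pivot (illustrative numbers).
pivot = {
    "paywall design": {"avg_best": 6.0, "tests": 12},
    "trial length": {"avg_best": 2.5, "tests": 5},
    "subscription pricing": {"avg_best": 4.0, "tests": 8},
}

# Rank categories by the largest average LTV improvement per test.
ranked = sorted(pivot, key=lambda c: pivot[c]["avg_best"], reverse=True)
print(ranked)  # ['paywall design', 'subscription pricing', 'trial length']
```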


<step-margin-s>——————————<step-margin-s>

<article-h2>Step 5: Keep track of diminishing returns (plateau) of optimization within each category<article-h2>

<step-margin-s>——————————<step-margin-s>

<p-l>The last step in the macro evaluation of your experimentation history is building a chart to visually assess whether you have reached an optimization plateau in each category.<p-l>


<p-l>Do you see that you are approaching or have already reached a plateau in a category? You might want to consider re-testing your initial assumptions or trying much riskier hypotheses.<p-l>

<step-s>1<step-s><p-l>Retesting initial assumptions. Let’s imagine early on you decided on a particular user onboarding strategy, and by progressing through 10+ AB tests within it you reached its limit of optimization. If you were to try a completely different strategy, let’s say eliminating the onboarding survey and jumping straight into the first session of core product functionality, you might actually see a completely different optimization curve (the example is arbitrary).<p-l>

<step-s>2<step-s><p-l>Trying much riskier hypotheses. If you are stuck, you might want to consider something completely different within a category, something you have never tried before. Let’s say you tested your prices for a yearly subscription, running 5 AB tests in the price range from $49 to $69. In this case, it might be worth running one more test with two completely different variants, $29 and $99, to see how elastic your price actually is.<p-l>

<step-s>3<step-s><p-l>If neither of these methods works, move to the second-best category on your list.<p-l>
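<p-l>One simple way to eyeball a plateau, even without a charting tool, is the cumulative best change per test within a category in chronological order: the running total flattens out when the category is exhausted. A sketch, assuming only winning variants were shipped (so negative results add nothing to the realized metric); the category history below is hypothetical:<p-l>

```python
def cumulative_improvement(best_changes):
    """best_changes: best LTV change (%) of each test in a category,
    in chronological order. Returns the running cumulative sum of
    shipped wins, which flattens when the category hits its plateau."""
    total, curve = 0.0, []
    for change in best_changes:
        total += max(change, 0.0)  # assumption: only winners were shipped
        curve.append(total)
    return curve

# Hypothetical category: big early wins, then diminishing returns.
print(cumulative_improvement([8.0, 5.0, 2.0, 0.5, -1.0, 0.5]))
# [8.0, 13.0, 15.0, 15.5, 15.5, 16.0]
```

<p-l>Plotting one such curve per category makes the plateaus from Step 5 visible at a glance.<p-l>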


<step-margin-s>——————————<step-margin-s>

<article-h2>Step 6: Adjust your impact scoring within ICE/RICE<article-h2>

<step-margin-s>——————————<step-margin-s>

<p-l>Now, after a thorough exploration of all your historical experiments, you can be better informed of whether your future hypotheses within these categories might bring any value.<p-l>

<p-l>The final recommendation is to adjust the Impact parameter of your RICE or ICE scoring according to the strength of the category, not just the strength of the idea itself.<p-l>
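<p-l>One possible way to fold category strength into the Impact score is a linear weight relative to your strongest category; this is just one of many reasonable schemes, and the numbers below are hypothetical:<p-l>

```python
def adjusted_impact(base_impact, category_avg_best, max_avg_best):
    """Scale an ICE/RICE Impact score by the category's historical
    strength (average best LTV change relative to the strongest
    category). A simple linear weighting, one of many possible schemes."""
    weight = category_avg_best / max_avg_best if max_avg_best else 1.0
    return base_impact * weight

# Hypothetical: an idea scored Impact = 8, in a category averaging +3% LTV
# per test, while the strongest category averages +6% LTV per test.
print(adjusted_impact(8, 3.0, 6.0))  # 4.0
```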

<step-margin-s>——————————<step-margin-s>

<article-h2>Summary<article-h2>

<step-margin-s>——————————<step-margin-s>

<p-l>I know from experience that the key to product success is constant experimentation, evaluation, and re-evaluation of the hypotheses in different product parts.<p-l>

<p-l>I believe that Applica’s Experiment History Review Framework can help products with a long history of experimentation reassess their successes to date and prioritize their backlog more efficiently.<p-l>

<p-l>I hope it helps you gain more confidence that your most impactful and promising ideas are tested first. If you think you've reached a plateau, using the framework will not only help you determine where exactly but also help you decide what to do next.<p-l>

<step-margin-s>——————————<step-margin-s>

If you found this blog post useful, subscribe to our newsletter to get even more articles and case studies.
