Insights from Kirill Potekhin, CPO at Adapty, shared on the Podlodka podcast.
Most teams treat onboarding like a project. Build it, test it, launch it, move on.
That mindset costs you money every single day.
High-performing apps don’t “finish” onboarding. They treat it like paid acquisition: something that requires constant testing, iteration, and optimization. The question isn’t whether to test onboarding. It’s how often, what to prioritize, and how to avoid testing yourself into chaos.
First: who should own onboarding?
Before testing frequency can matter, ownership has to. If nobody owns onboarding, nobody optimizes it.
In most B2C subscription apps, onboarding sits with the growth manager or growth PM — the person responsible for in-app monetization: pricing strategy, paywall optimization, and onboarding flows. This makes sense. Onboarding is part of your conversion funnel, not a product feature. It lives between acquisition (owned by UA) and activation (owned by the product). The growth manager connects those dots.
In larger products with multiple features or sub-products, each PM typically owns onboarding for their specific flow, with the CPO or Head of Product setting overall direction.
The key principle: whoever owns go-to-market for a feature should own onboarding for that feature. Responsibility for user discovery and conversion should live in the same hands.
If your onboarding doesn’t have a clear owner, fix that first. Testing without ownership is just chaos.
How often should you test?
The short answer: more often than you think.
The idea that you’ll find the “perfect” onboarding and run it forever is tempting. It’s also unrealistic.
In B2C products, especially, onboarding needs frequent updates. Trends change. User expectations evolve. Competitors introduce new patterns that become table stakes. That interaction where users press their finger on the screen for 5 seconds to “commit” to a goal? Nobody used it a few years ago. Then someone tested it, conversions jumped 3%, and now it’s everywhere in habit apps.
Three percent might not sound like much. But when it touches every single user who installs your app, 3% compounds into meaningful revenue.
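To make that compounding concrete, here is a back-of-the-envelope sketch. The install volume, baseline conversion rate, and revenue per subscriber are invented for illustration, not figures from the article:

```python
# Back-of-the-envelope revenue impact of a 3% relative conversion lift.
# All input values are hypothetical illustration numbers.
monthly_installs = 100_000    # users entering onboarding each month
baseline_conversion = 0.05    # 5% of installs start a subscription
revenue_per_sub = 30.0        # average revenue per subscriber, USD

relative_lift = 0.03          # the 3% lift from the new interaction

baseline_revenue = monthly_installs * baseline_conversion * revenue_per_sub
lifted_revenue = baseline_revenue * (1 + relative_lift)

print(f"Baseline:        ${baseline_revenue:,.0f}/month")
print(f"With 3% lift:    ${lifted_revenue:,.0f}/month")
print(f"Extra per year:  ${(lifted_revenue - baseline_revenue) * 12:,.0f}")
```

At these (made-up) numbers, a 3% relative lift is worth tens of thousands of dollars a year, because it touches every install.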
You can’t predict what will work. Ideas that seem brilliant fail in testing. Small tweaks that feel insignificant drive double-digit lifts. The only way to know is to test.
That doesn’t mean you need to change onboarding every week. A working version can run for one, two, or even three months, depending on your resources. But if you haven’t touched onboarding in six months, something’s wrong.
The cardinal rule: test sequentially, not in parallel
Here’s where most teams break their own testing.
If you change onboarding, pricing, and product features simultaneously, you have no idea what caused your results. Every test poisons every other test.
Run one major test at a time. Change onboarding this month. If that works, lock it in and test pricing next month. Come back to onboarding with a new hypothesis after that.
This forces prioritization. If you’ve run 10 onboarding tests but haven’t touched pricing in months, pricing probably has more headroom. A pricing test might deliver a 10% lift while another onboarding tweak delivers 2%. Test the higher-leverage thing first.
That said, when you’re starting from an unoptimized onboarding baseline, it’s realistic to see 10–20% conversion lifts — sometimes more. If you’ve never run a structured onboarding A/B test, start there.
The 80/20 testing rule
Most small apps run 50/50 A/B tests. Half the traffic sees version A, half sees version B.
Large apps rarely do this. When you’re getting 10,000+ installs per day, a bad test costs real money.
Instead, run 80/20 or 90/10 splits. Keep 80–90% of traffic on your proven control version and send 10–20% to the new variant. You’ll still reach statistical significance, but you’re not risking major revenue loss if the test bombs.
This is especially important for aggressive tests – completely new onboarding structures, radical pricing changes, or counterintuitive ideas that might fail spectacularly.
Always keep your control running. Traffic sources change. Attribution shifts. Audience composition evolves. If you’re not running a proper A/B test, you can’t separate signal from noise.
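As a sanity check on the claim that unequal splits still reach significance, here is a plain two-proportion z-test (standard-library Python, normal approximation). The user counts and conversions below are hypothetical:

```python
import math

def two_proportion_z(conv_a: int, n_a: int, conv_b: int, n_b: int):
    """Two-sided z-test for a difference between two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the normal approximation.
    p_value = math.erfc(abs(z) / math.sqrt(2))
    return z, p_value

# Hypothetical 90/10 split: 9,000 control users at 5.0% conversion
# vs. 1,000 variant users at 6.5% conversion.
z, p = two_proportion_z(conv_a=450, n_a=9_000, conv_b=65, n_b=1_000)
print(f"z = {z:.2f}, p = {p:.4f}")
```

Even with only 10% of traffic on the variant, a lift of this size clears p < 0.05 — the tradeoff is that smaller variant samples take longer to accumulate.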
The single biggest onboarding opportunity most apps ignore
Here’s the insight that consistently surprises growth teams: localization outperforms most onboarding experiments.
Not translation. Full localization — language, imagery, narratives, value propositions adapted per region.
Apps earning $100K–$200K per month from non-English regions that localize their onboarding see a conversion jump of approximately 30% within two weeks. That’s not a tweak. That’s a structural revenue unlock.
The logic is simple: if you’re showing English onboarding to French-speaking users, your conversion is guaranteed to be lower. You don’t need an A/B test to confirm this. You need to act on it.
Start with translation. Today, LLMs handle this well. Human review is ideal, but machine translation dramatically outperforms no localization.
Then test narratives. What a “healthy meal” looks like varies by country. What a “happy family” looks like varies. The problems you emphasize and the value propositions you lead with should differ by region, not just language.
If your app generates meaningful revenue from non-English markets and you haven’t localized onboarding, this is the highest-ROI action available to you right now — higher than any copy test or button color experiment.
What to test (and what to skip)
The mistake most teams make is testing surface-level changes instead of structural ones. Button colors. Headline copy. Image swaps. These generate data without generating meaningful lift.
Test in this order:
1. Structural changes:
- Onboarding length (add screens vs. remove screens)
- Question types (multiple choice vs. open-ended vs. interactive)
- Personalization logic (how you segment users based on answers)
- First paywall placement (after screen 5 vs. after screen 10)
2. Content and messaging:
- Value propositions by segment
- Social proof placement and format
- Imagery and visual style
- Localized narratives
3. Interaction patterns:
- New engagement mechanics (finger press, progress bars, commitments)
- Friction points (strategic pauses vs. fast flow)
- Transitions and animations
Only after you’ve exhausted structural improvements should you optimize the details. Small changes on a broken structure are wasted effort.
📋 Not sure what to test next?
We’ve compiled 81+ onboarding A/B test ideas — covering flows, messaging, permission screens, personalization, social proof, first-task activation, and more — into a free checklist used by mobile growth teams.
Download the onboarding A/B testing checklist →
Can you over-test onboarding?
Yes, but the risk isn’t testing too much. It’s testing too many things simultaneously and losing track of what works.
Some teams test onboarding so aggressively that they conclude it’s better to remove it entirely. I’ve never actually seen that work. Products rarely get simpler over time – usually the opposite.
I have seen cases where simplifying onboarding improved performance, usually by cutting useless screens. If fewer than 90–95% of users complete onboarding, something’s wrong. Sometimes the fix is shortening it. Other times, lengthening it works better.
But fully removing onboarding means users see a paywall immediately. That doesn’t work. You still need to show them why the product solves their problem.
Even free apps benefit from basic guidance. No onboarding is almost always worse than bad onboarding.
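A quick way to check the 90–95% completion bar — and to see which screen is leaking — is a per-screen funnel over your analytics events. The screen names and counts below are hypothetical:

```python
# Hypothetical screen-view counts from an analytics export,
# in the order users see the screens.
funnel = [
    ("welcome", 10_000),
    ("goal_question", 9_600),
    ("personalization", 9_400),
    ("value_prop", 9_300),
    ("paywall", 8_900),
]

starts = funnel[0][1]
for (screen, count), (_, prev) in zip(funnel[1:], funnel):
    print(f"{screen:16s} kept {count / prev:6.1%} of the previous screen, "
          f"{count / starts:6.1%} of all starts")

completion = funnel[-1][1] / starts
print(f"Overall completion: {completion:.0%}")
```

In this made-up export, overall completion is 89% — just under the bar — and the step-level rates show whether one screen is responsible or the loss is spread evenly.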
When to redesign onboarding completely
Most of the time, you’re iterating. But some situations require a full rebuild:
New monetization model. Switching from paid to subscription requires a complete onboarding overhaul. New users see the product as-is. Existing users need a transition message: subscriptions enable ongoing development, new features, and continuous improvement — and here’s what you get in return.
New audience segments. If your product expands beyond its original niche, segment immediately. Ask users what they want to accomplish upfront, then branch the flow. The core product might stay the same, but the value proposition differs by segment — and onboarding should reflect that. Watch your metrics for where new user cohorts drop off or fail to activate. That’s your signal.
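A minimal sketch of “ask upfront, then branch” — the segment names and screen sequences here are invented for illustration, not a prescribed flow:

```python
# Hypothetical mapping from a user's stated goal (the first onboarding
# question) to the branch of screens they see next.
ONBOARDING_BRANCHES = {
    "lose_weight":   ["goal_detail", "meal_plan_preview", "progress_promise", "paywall"],
    "build_muscle":  ["goal_detail", "workout_preview", "progress_promise", "paywall"],
    "eat_healthier": ["goal_detail", "recipe_preview", "paywall"],
}
DEFAULT_BRANCH = ["goal_detail", "generic_value_prop", "paywall"]

def onboarding_flow(stated_goal: str) -> list[str]:
    """Return the screen sequence for a user's first-question answer."""
    return ONBOARDING_BRANCHES.get(stated_goal, DEFAULT_BRANCH)

print(onboarding_flow("build_muscle"))
print(onboarding_flow("something_else"))  # unknown answers fall back to the default
```

The core product stays the same; only the preview and value-proposition screens differ per segment, which is exactly where drop-off metrics will tell you a branch is underperforming.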
The comparison that clarifies everything
| Approach | High-velocity testing | Set-it-and-forget-it |
|---|---|---|
| Testing frequency | Monthly or quarterly tests | Once at launch, rarely revisited |
| Localization | Continuous expansion into new languages/regions | English only or minimal translation |
| Ownership | Clear owner (growth PM, product manager) | Shared responsibility (nobody owns it) |
| Iteration cycle | Sequential tests with clear hypotheses | Random changes without measurement |
| Results | 10–20% annual conversion improvement | Stagnant or declining conversion over time |
| Mindset | Onboarding is part of growth infrastructure | Onboarding is a one-time project |
What high-performing teams do differently
Teams that consistently improve onboarding conversion:
- They assign clear ownership. Onboarding has a named owner who’s accountable for testing and results.
- They test sequentially. One major change at a time, with proper A/B testing and statistical significance.
- They prioritize localization. Translation alone often delivers bigger lifts than months of copy optimization.
- They protect downside risk. Large apps use 80/20 splits to test aggressively without catastrophic revenue loss.
- They never stop. Onboarding optimization is continuous, not a project with an end date.
The teams that struggle treat onboarding like a feature: build it once, ship it, move on. Then they wonder why conversion stagnates while competitors pull ahead.
The bottom line
Onboarding is never finished because user expectations evolve, markets shift, and new patterns emerge constantly.
The apps that win treat onboarding the way they treat paid acquisition: something that demands continuous investment, rigorous testing, and relentless iteration. A 2% lift this quarter, 3% next quarter, 30% from localization — these compound and touch every user who installs your app.
Want to run onboarding experiments without waiting for App Store review? Adapty’s Onboarding Builder lets you ship and test variants without code — including one-click auto-localization — and tracks 20+ metrics so you know exactly what’s working.