Why $1K and $10K are critical milestones
$1K and $10K are decision points where most founders either panic and quit or waste money without learning anything.
The first $1K is about finding creative truth. Can you make something that earns attention in a feed where nobody asked to see your ad? If you can’t, spending more won’t help. If you can, you’ve unlocked the hardest part.
$1K to $10K is about proving the business. You have creatives that work. Now you need to prove they drive real revenue at unit economics that make sense. By $10K spent, you should know if this is worth investing in seriously.
Most people try to skip to $10K without doing the $1K work. They end up with expensive lessons and no clarity.
How to read this guide
This isn’t a universal framework. You won’t find one online that fits you perfectly. If you over-trust a generic checklist, you’ll either panic early or waste money slowly.
Think of this as a way to spend money to learn, then spend money to validate, while staying honest about what Meta is actually telling you.
Choosing your ad platform
Meta needs a lot of things to come together before it works: your creative, your targeting, your event setup, your attribution. But none of that matters if your product isn’t ready. And by “ready” I mean: onboarding, paywall, pricing, retention, value. Everything.
If you can make Meta work, you can make anything work.
If someone is making organic work but can’t make paid work, and they’ve truly done everything well, their unit economics might be failing. They might be leaving money on the table. Organic is great, but paid is where you prove the math.
iOS vs Android for Subscription Apps
iOS makes much more money for subscription apps. The users pay better. LTV is higher. Retention is usually better, too.
But there’s a catch: attribution. iOS tracking is broken. You will see gaps between what Meta reports and what actually happened. Sometimes 50% gaps on certain days.
This brings me to two things you should know:
Blended ROAS shouldn’t scare you. If you’re a small app, blended is actually easier to track. You know your total spend, you know your total revenue. The gap between “attributed” and “real” matters less when your numbers are small enough to see clearly.
Web-to-app solutions exist now. Not talking about web quizzes for lead gen. I’m talking about actual web-to-app flows that can give you even better attribution than Meta’s aggregated event measurement. Worth exploring if iOS attribution is driving you crazy.
Google App Campaigns
Google App Campaigns can work. People think Android is easier because tracking is better, and that’s true. But actually, iOS on Google can be better for subscription apps because iOS users pay better. So don’t write off Google iOS just because tracking is harder. Check blended.
Don’t run install campaigns on Google. They bring near-fraud level quality. Optimize for purchase or ROAS instead.
Apple Search Ads
Often “nice to have.” Sometimes long-tail keywords are amazing. But for most apps, it’s expensive and not infinitely scalable.
If you can make Apple Search Ads work at meaningful volume, your app is ready. Pricing, onboarding, paywall, value — everything is validated. At that point, you’re often just one hero creative away from scaling Meta.
TikTok ads
You’ll see a lot of “I grew with TikTok organic” stories on Twitter. A few things to know:
- 90%+ will fail and you might be one of them.
- Most stories talk about views and installs. Views don’t matter. Installs don’t matter. Purchases matter.
- If they’re not talking about ROAS, don’t take it seriously.
- TikTok can require full-time dedication to crack. It’s not a side quest.
AppLovin
For solo devs and small teams: don’t touch it. It’s corporate territory. I wrote to them with $500/day once. No response.
Phase 0: Before you spend $1 on Meta ads
If you skip Phase 0, your first $1K will still be spent, but it won’t teach you the right lesson.
0.1 — Minimum event tracking (Non-negotiable)
- ✅ Install (still useful as a sanity metric)
- ✅ Trial start (if you have a trial)
- ✅ Purchase / Subscribe (direct signal)
- ✅ Revenue (you need to track actual money coming in, not just events)
When I say you need “revenue tracking,” I mean you need a way to compare what you spent vs what the app actually earned. Not just what Meta claims. Your own source of truth for money. This is what lets you calculate blended ROAS.
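Here’s a minimal sketch of what that source of truth can look like, assuming you can export total spend and total revenue per day. The `DailyTotals` type and the numbers are made up; the point is that blended ROAS only needs your own spend and revenue totals, not Meta’s attribution.

```swift
import Foundation

// Hypothetical daily totals pulled from your own systems:
// spend from Ads Manager exports, revenue from RevenueCat / App Store reports.
struct DailyTotals {
    let date: String
    let adSpend: Double      // total spend across all campaigns
    let totalRevenue: Double // actual money the app earned that day
}

/// Blended ROAS: total revenue / total spend across the whole app,
/// regardless of what Ads Manager claims is "attributed".
func blendedROAS(_ days: [DailyTotals]) -> Double {
    let spend = days.reduce(0) { $0 + $1.adSpend }
    let revenue = days.reduce(0) { $0 + $1.totalRevenue }
    return spend > 0 ? revenue / spend : 0
}

// Example window (illustrative numbers only).
let week = [
    DailyTotals(date: "2025-01-01", adSpend: 120, totalRevenue: 95),
    DailyTotals(date: "2025-01-02", adSpend: 130, totalRevenue: 160),
    DailyTotals(date: "2025-01-03", adSpend: 125, totalRevenue: 140),
]
print("Blended ROAS:", blendedROAS(week)) // above 1.0 means you earned more than you spent
```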
0.2 — Know your baseline funnel (If you can)
- Install → paywall view
- Paywall view → trial/purchase
- Trial → paid
Sometimes you don’t have a baseline yet. If your app is new and you haven’t had enough users to measure this, that’s okay. Meta will teach you these numbers over time.
Using Meta as a diagnosis tool makes a lot of sense. Run campaigns, see what happens, learn your funnel from real data. Just know that if you’re running install campaigns only, you can’t use those numbers as a benchmark for conversion quality. You need to eventually test with trial or purchase campaigns to learn what your funnel looks like with paid traffic.
0.3 — One sentence you should be able to say
“I can pay $X for a customer because I earn $Y back within Z days/weeks.”
You don’t need perfect attribution to start. But you do need a rough sense of the math. If you can say this sentence with some confidence, you’re ready to spend money on learning.
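If it helps to make that sentence concrete, here is a tiny sketch. The type and the inputs are hypothetical; it just restates “I can pay $X because I earn $Y back within Z weeks” as arithmetic.

```swift
/// Rough payback math behind the sentence above. All numbers are illustrative.
struct PaybackModel {
    let costPerCustomer: Double           // $X: what you pay to acquire one customer
    let revenuePerCustomerPerWeek: Double // average revenue a customer earns you per week

    /// Z: how many weeks until a customer has paid back their own acquisition cost.
    var paybackWeeks: Double {
        revenuePerCustomerPerWeek > 0 ? costPerCustomer / revenuePerCustomerPerWeek : .infinity
    }
}

let model = PaybackModel(costPerCustomer: 30, revenuePerCustomerPerWeek: 6)
print("Payback in weeks:", model.paybackWeeks) // 5.0: "I can pay $30 because I earn it back in ~5 weeks"
```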
0.4 — iOS attribution reality: Think blended ROAS
Especially on iOS, Meta will undercount. Sometimes by a lot. You might see 50% gaps on certain days. Don’t judge campaigns by today’s screenshot. Give it time to settle (look at the last 7 to 30 days).
What blended means: Total spend vs total revenue across your whole app. Not just what Ads Manager reports as “attributed.”
If you weren’t getting organic traction before running ads, then what shows up as “organic” after you start running them is really paid; without the spend, those numbers wouldn’t exist.
Quick practical checks:
- If you use an MMP (AppsFlyer, Adjust, etc.), great — but don’t stop there.
- In App Store Connect, filter by app referrer (Facebook / Instagram). Compare downloads there vs installs in Meta.
- Compare Meta’s reported revenue vs your actual revenue in RevenueCat or your internal system.
0.5 — ATT + SDK timing matters
ATT and SDK need to work together. You need the ATT pop-up for better attribution, but the goal isn’t high opt-in rates. Most users won’t opt in. That’s fine.
The real issue: if you delay SDK initialization until late in your app flow (after account creation, after multiple screens), you add “unknowns” to your attribution.
If your numbers look weird, this is one of the first places I’d check.
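A minimal sketch of what “don’t delay the SDK” means on iOS, assuming you use Meta’s FBSDKCoreKit and the standard AppTrackingTransparency prompt. Exact call names vary by SDK version, so treat the FBSDKCoreKit lines as an approximation and check the current docs.

```swift
import UIKit
import AppTrackingTransparency
import FBSDKCoreKit // Meta SDK; exact API names may differ by version

class AppDelegate: UIResponder, UIApplicationDelegate {
    func application(_ application: UIApplication,
                     didFinishLaunchingWithOptions launchOptions: [UIApplication.LaunchOptionsKey: Any]?) -> Bool {
        // Initialize the Meta SDK immediately at launch, not after onboarding
        // or account creation. Delaying this adds "unknowns" to attribution.
        ApplicationDelegate.shared.application(application,
                                               didFinishLaunchingWithOptions: launchOptions)
        return true
    }
}

// Ask for ATT early in the flow (for example, on the first meaningful screen).
// Most users will decline, and that's fine; the prompt still helps attribution quality.
func requestTrackingPermission() {
    ATTrackingManager.requestTrackingAuthorization { status in
        // Reflect the user's choice back to the SDK (property name varies by SDK version).
        Settings.shared.isAdvertiserTrackingEnabled = (status == .authorized)
    }
}
```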
0.6 — Attribution tools help measurement, not magic performance
Tools like AppStack can help reduce discrepancies and give Meta better data. But don’t buy attribution tools expecting them to automatically make your ads cheaper. What they do is reduce discrepancies so you can make decisions faster.
Phase 1: First $1K–$2K (Creative discovery)
The goal of the first $1K is to find 1–2 hero creatives that prove you can earn attention and drive installs efficiently. Profit comes later.
The Meta ads budget myth
There’s an idea that you need $10K/month in Tier 1 markets to know if Meta works. I don’t fully buy that.
Yes, eventually you need a real budget to scale. But for creative testing, you can start with $20–$30/day in Tier 1 countries and learn whether you have something that works. The first $1K is about finding a creative signal, not about scaling.
1.1 — Campaign structure
- Start with a manual install campaign (not Advantage+ for this stage).
- Keep targeting broad (don’t fragment when you have low volume).
- Put 3–5 ads per ad set.
Exception: if you already know a hard fact about your audience (a clearly feminine app, or users over 40 only), don’t let Meta “discover” that with your money.
1.2 — The #1 KPI in creative testing: Spend distribution
For creative discovery, the most important metric is often not CPI, not CPM. It’s where the spending goes.
When Meta refuses to spend on a creative, it’s saying: “I don’t believe this will scale.” Meta has data we can’t see. Trust the spend signal.
This doesn’t replace IPM or CPI. It’s in addition to them. But if the algorithm isn’t spending, that tells you something before the other metrics even matter.
1.3 — The zero-spend problem (Andromeda in 2026)
Lately, you’ll see creatives getting zero spend. Meta picks favorites too fast.
Fix: Force minimum spend so every creative gets tested.
- Use automated rules to stop each ad after $2–$5 spend.
- Once every creative has spent that minimum, run them together again.
- Then watch where the money flows. That’s your creative truth.
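The automated rules themselves live in Ads Manager, but once every creative has hit its minimum spend, reading the result is as simple as this. The `adSpend` data and the names are hypothetical; this is a local sketch of “watch where the money flows,” not a Meta API call.

```swift
// Hypothetical export: spend per creative after the forced-minimum test.
let adSpend: [String: Double] = [
    "hook_pain_v1": 3.2,
    "founder_story": 41.7,
    "ai_ugc_03": 2.1,
    "screen_recording_long": 18.5,
]

let totalSpend = adSpend.values.reduce(0, +)

// Rank creatives by share of spend; the allocation itself is the signal.
for (name, spend) in adSpend.sorted(by: { $0.value > $1.value }) {
    let share = Int((spend / totalSpend * 100).rounded())
    print("\(name): \(share)% of spend")
}
// Creatives that keep pulling most of the budget are the ones the algorithm believes in.
```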
1.4 — IPM benchmarks for Meta ads
IPM (installs per 1,000 impressions) is a creative strength signal for install campaigns.
| IPM range | What it means |
|---|---|
| Below 2 | Don’t scale. Keep testing. |
| 2–5 | “Can work” zone. |
| 5–10 | Strong performer. |
| 10+ | Push hard. |
These are ranges, not religion. Focus on which of YOUR creatives get the most spend.
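For reference, IPM is just installs per thousand impressions. A tiny sketch using the bands from the table above, with made-up input numbers:

```swift
/// Installs per 1,000 impressions.
func ipm(installs: Int, impressions: Int) -> Double {
    impressions > 0 ? Double(installs) / Double(impressions) * 1000 : 0
}

/// Rough read of the bands above (ranges, not religion).
func ipmVerdict(_ value: Double) -> String {
    switch value {
    case ..<2:   return "Don't scale. Keep testing."
    case 2..<5:  return "Can work."
    case 5..<10: return "Strong performer."
    default:     return "Push hard."
    }
}

let score = ipm(installs: 84, impressions: 12_000) // 7.0 with these illustrative numbers
print(ipmVerdict(score)) // "Strong performer."
```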
1.5 — Android-first testing (usually)
Test on Android first (cheaper, better tracking). Take winners to iOS.
- Test around 200 creatives on Android install campaigns.
- A small number rises above.
- Take those winners (around 10) to iOS.
- Usually, 1–2 become “hero creatives.” This is normal. Don’t let the hit rate discourage you. Focus on cost per winner: how much time and money to find one creative that actually scales.
- Those go to conversion-optimized campaigns.
But iOS pays better for subscriptions, so keep your focus on iOS; treat Android-first testing as a cost-saving default, not a religion.
1.6 — Best countries for Meta ad testing
My default bundle for high-revenue markets:
- Tier 1 West: US, CA, UK, IE, AU, NZ
- Europe (high-value): DE, FR, IT, ES, NL, CH, Nordics
- Key setting: Language set to English (catches expats too).
1.7 — Creative production: Variety matters more than volume
This isn’t about AI vs human. You need creatives that are sourced differently, look different, and say different things.
- Competitor watch: What angles are working for them?
- Top viral videos in your category: What hooks and formats are getting traction?
- Founder video: You on camera. Authenticity often beats polish.
- External creators: 100% match for your audience. Fit matters more than fame. Tools like Modash or HypeTrain can help you find them fast, or just search manually on Instagram/TikTok.
- AI tools (Sora, Runway, Higgs Field): Great for hooks and B-roll. For voiceover, identify which ElevenLabs voice Higgs Field used, then use that same voice to narrate your screen recordings or product demos.
- Screen recordings: Show how the app works. Builds intent.
When I say “test 200 creatives,” I mean from different sources with different hooks. Not 200 variations of the same thing.
1.8 — The creative rule I repeat the most
Don’t scream your app. Start with the pain, the emotion, the moment. Hook first. Then, after around 5 seconds, explain the app.
And when you have a good creative, make variations. One winner can become five tests.
1.9 — When to move to phase 2
You’re ready when you have 1–2 creatives that:
- Get consistent spend (Meta keeps choosing them)
- Hit IPM of 5+ in Tier 1 countries
- Show a CPI you can live with (even if not profitable yet)
If you’ve spent $1–2K and nothing is getting spent or your IPM is stuck below 2, don’t move on. Go back to creative. More budget won’t fix weak creative.
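If you like checklists as code, the criteria above collapse into something like this. The type and the CPI threshold are yours to define; the numbers 5 (IPM) and the spend-consistency flag come straight from the list above.

```swift
struct CreativeStats {
    let name: String
    let getsConsistentSpend: Bool // Meta keeps choosing it day after day
    let ipm: Double               // installs per 1,000 impressions in Tier 1
    let cpi: Double               // cost per install
}

/// Ready for Phase 2 when at least one creative clears all three bars.
/// `maxAcceptableCPI` is whatever you can live with, even if not yet profitable.
func readyForPhase2(_ creatives: [CreativeStats], maxAcceptableCPI: Double) -> Bool {
    creatives.contains { $0.getsConsistentSpend && $0.ipm >= 5 && $0.cpi <= maxAcceptableCPI }
}
```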
Phase 2: $2K–$10K (Conversion validation)
You’ve tested creatives. You know which ones work for installs. Now: can they drive actual revenue at economics that make sense? At this stage, you’re spending more per day. $150–200/day is reasonable for validation. You need enough events for Meta to learn, and enough data for you to trust the numbers.
2.1 — Move winners to conversion optimization
Take your hero creatives and run campaigns optimizing for trial or purchase.
Meta is a good mirror to show you if something is going to work. If conversion campaigns aren’t working, it can be your pricing, onboarding, paywall, or value. The app has to be ready too.
2.2 — Advantage+ is not one-size-fits-all
If you can optimize for direct purchase, automation (including Advantage+) can work great because Meta gets a clean signal: “this person paid.”
If you’re trial-first, Advantage+ can be risky. Meta may find cheap trials (often younger cohorts) who don’t convert to paid later. You can “win” the wrong event.
If you still want to use Advantage+ for a trial app, consider bid factors (reduce 18–24 by 30–40% if your app isn’t for Gen Z) or go manual when you need control.
2.3 — Maximize conversion number vs conversion value
You have options: Maximize Number of Purchases or Maximize Value of Purchases.
If you’re just starting, the default will be Maximize Number of Purchases. Maximize Value of Purchases is not available for new advertisers. Facebook needs to see the purchase history first. It will unlock later once you have enough data.
Once eligible for value optimization (which won’t happen in your first $10K), if your app has direct purchases without trials, you can optimize for purchase value, not just purchase events. This tells Meta to find people who will spend more.
This works well for weekly subscriptions and coin-based monetization, where you get revenue signals fast. For annual subscriptions with trials, the signal takes too long. Stick with purchase or trial optimization instead.
2.4 — Trial optimization (Be careful)
Trial conversion is “blind” to what happens after. Meta finds trial starters, not necessarily people who convert to paid.
Two ways to protect yourself:
- Limit your audience: Exclude demographics you know don’t convert to paid. For example, if your app is NOT catering to Gen Z, you might exclude 18–24. But if your app IS for young users, don’t exclude them.
- Use “trial plus” events: Custom events like “trial started but not cancelled within the same hour” (see the sketch after this list).
If you can’t do custom events, A/B test trial vs no-trial paywalls with tools like Superwall.
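Here is a minimal sketch of a “trial plus” event, assuming you log custom events through Meta’s AppEvents. The event name, the one-hour delay, and the cancellation check are all assumptions; adapt them to however your app knows a trial is still alive. In practice you’d want this server-side so it survives app restarts; this only shows the client-side idea.

```swift
import Foundation
import FBSDKCoreKit // AppEvents API; call names may differ slightly by SDK version

/// Log "trial started and not cancelled within the same hour" instead of a raw trial start.
/// This gives Meta a cleaner signal than "someone tapped Start Trial".
func scheduleTrialPlusEvent(isTrialStillActive: @escaping () -> Bool) {
    DispatchQueue.main.asyncAfter(deadline: .now() + 3600) {
        guard isTrialStillActive() else { return } // cancelled within the hour: don't log
        AppEvents.shared.logEvent(AppEvents.Name("trial_plus"))
    }
}
```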
A common trap with trial campaigns in the US: maybe 50% of your budget goes to 18–24 year olds. They may love starting trials. They don’t love paying.
If you’re using Advantage Plus, use bid factors to reduce 18–24 by 30–40%. Or run manual campaigns: one for 18–24, another for 25+. Compare the trial-to-paid conversion, not just the trial volume.
For purchase campaigns, age and gender matter much less. If someone is buying on day one, let Meta find them regardless of demographics.
2.5 — Conversion lift (and the CPI shock)
- Organic install-to-paid around 2% → purchase campaigns can push around 10–20%.
- Organic trial rate around 10% → trial campaigns can push around 30%.
Your CPI on purchase optimization can be 5X to 10X your install CPI. That’s expected. You’re paying for intent.
This CPI jump has a cash flow implication. If your payback period is 60–90 days, you need runway to survive the gap between spending and earning. Make sure you have the cash before you scale into purchase campaigns.
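To make the cash-flow gap concrete, here is a rough sketch with invented numbers: you keep spending daily while revenue only starts coming back after the payback period.

```swift
/// Worst-case cash tied up before payback starts returning money.
/// Illustrative only; real cash flow is lumpier than this.
func cashNeededBeforePayback(dailySpend: Double, paybackDays: Int) -> Double {
    dailySpend * Double(paybackDays)
}

// $200/day with a 90-day payback means roughly $18,000 fronted before the math closes.
print(cashNeededBeforePayback(dailySpend: 200, paybackDays: 90))
```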
Important: Don’t assume your cheapest CPI creative will be best for conversions. Trust the algorithm when optimizing for different goals.
2.6 — Event volume thresholds
- 5+ events per day — minimum
- 50+ events per week (ideally 7/day)
This is the threshold where Meta can actually learn who to target. Below this, you’re flying blind.
2.7 — Don’t overcomplicate signals
There are companies doing “signal engineering” to help you spend more efficiently. Custom events, funnel signals, propensity models. They exist, and they can help.
But here’s the thing: most of them require 20+ trials per day as a prerequisite. If you’re below that, don’t get lost in the math. Focus on creative and basic event tracking.
When you’re spending big, signal sophistication matters. When you’re spending small, creative matters more.
2.8 — Learning phase reality
If nothing comes in 2–3 days, nothing magical will come later.
There will be fluctuations. But with smaller budgets, the learning phase isn’t a magical promise. Good signs appear early. Bad signs don’t reverse themselves.
Don’t build superstition into your process. If you’re seeing purchases in the first few days, that’s your signal. If you’re not, more time won’t fix it.
2.9 — Cost per result settles over time
This is different from 2.8. That section is about whether you’ll see ANY signal. This section is about what happens once you ARE getting conversions.
When you switch to conversion optimization, costs look unstable at first. This is normal.
Costs settle DOWN with volume, not up. As more events come in, Meta learns who to target. Give it 3–7 days before making big decisions about CPA.
Example: First day CPA might be $150–200. The second day might drop to $100. These are examples, not absolutes. Your numbers will be different. The point is: don’t panic on day one.
iOS note: iOS campaigns need one extra day to normalize. The first day is often expensive. Numbers settle within 3 days.
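One way to avoid judging day-one CPA: look at a short trailing window instead of single days. A sketch with made-up daily numbers:

```swift
/// Cost per acquisition over a trailing window, rather than a single noisy day.
func rollingCPA(dailySpend: [Double], dailyPurchases: [Int], window: Int) -> Double {
    let spend = dailySpend.suffix(window).reduce(0, +)
    let purchases = dailyPurchases.suffix(window).reduce(0, +)
    return purchases > 0 ? spend / Double(purchases) : .infinity
}

// Day 1 alone looks scary ($180 for one purchase); the 3-day view is already calmer.
let spend = [180.0, 200.0, 190.0]
let purchases = [1, 2, 3]
print(rollingCPA(dailySpend: spend, dailyPurchases: purchases, window: 3)) // 95
```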
2.10 — How ROAS improves over time
ROAS can improve in two directions:
- Algorithm learning: Meta gets better at finding buyers, so your cost per result drops.
- LTV increasing: Your actual revenue per user grows week over week, month over month, as users stick around and pay more.
Both matter. Don’t just watch the cost side. Watch the revenue side too.
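Watching the revenue side means re-measuring the same cohort as it ages, not just checking cost per result. A sketch with invented cohort numbers:

```swift
/// ROAS of a single acquisition cohort, measured at different ages.
/// Spend is fixed once the cohort is bought; revenue keeps growing as users renew.
struct Cohort {
    let spend: Double
    let cumulativeRevenueByWeek: [Double] // index 0 = end of week 1, and so on

    func roas(atWeek week: Int) -> Double {
        guard week >= 1, week <= cumulativeRevenueByWeek.count, spend > 0 else { return 0 }
        return cumulativeRevenueByWeek[week - 1] / spend
    }
}

// Illustrative: the same $1,000 cohort climbs from 0.4x to 1.1x as LTV accrues.
let januaryCohort = Cohort(spend: 1000, cumulativeRevenueByWeek: [400, 650, 900, 1100])
print(januaryCohort.roas(atWeek: 1)) // 0.4
print(januaryCohort.roas(atWeek: 4)) // 1.1
```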
2.11 — Budget split once you have winners
- 70–80% → Proven winners (BAU)
- 20–30% → Always testing new creatives
2.12 — Paywall segmentation for paid vs organic traffic
- For paid Meta traffic: consider a no-trial paywall so Meta gets purchase signals faster.
- For organic users (lower intent): trial can still be right.
Tools like Adapty, Superwall, or RevenueCat make this segmentation easy. You can create a “Meta users” segment and show them a different paywall. These tools also cover onboarding flows, A/B testing, and analytics. Everything from install to purchase in one place.
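A minimal sketch of the segmentation logic, independent of which tool you use. The `TrafficSource` detection and the paywall identifiers are assumptions; in practice Adapty, Superwall, or RevenueCat give you a placement or audience concept that does the same thing.

```swift
enum TrafficSource {
    case paidMeta // attributed to a Meta campaign (via your MMP, deep link, or web-to-app flow)
    case organic  // everyone else
}

/// Hypothetical paywall identifiers configured in your paywall tool.
func paywallIdentifier(for source: TrafficSource) -> String {
    switch source {
    case .paidMeta:
        return "no_trial_paywall"     // direct purchase: cleaner, faster signal for Meta
    case .organic:
        return "weekly_trial_paywall" // lower intent: a trial can still be the right call
    }
}
```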
2.13 — When you find a winner, don’t let it sit
When one creative is winning consistently, that’s not the time to relax. Make more versions of it. Different hooks, different scripts, different formats. You want variations ready before the original starts to decline.
Scaling Meta ads: When founders freeze
Most good accounts stall at this point. Not because they’re failing, but because the founder gets scared.
If you have 50+ events per week and you’re hesitating
I see no issue with increasing the budget. More events = better for the campaign. Things should settle over time. Don’t feel pressure to constantly touch it.
- Observe breakdowns over days/weeks (age, gender, country).
- If one country performs better, consider separating it as its own ad set. This is just an example of what to watch for, not a universal rule.
The “One creative eating all spend” worry
Companies spend a million dollars on one ad. One dominant creative shouldn’t stop you from scaling. More budget = more chances for other creatives to prove themselves.
Your fear can slow you down
Diminishing returns don’t start at $70/day. For most apps, they start much later.
The $500-for-3-days test
- Increase spend to $500/day for 3 days.
- Day 1 might be shaky. Give it time.
- Watch if the performance settles. If positive, run for 7 days.
Bid cap as a guardrail
Create a second campaign with a bid cap. If your CPR is around $7, set the bid cap at $10–12 (example numbers, yours will differ). You can put a $5,000/day budget and see if the algorithm spends within your profitable range.
Scale while you can
If you have positive economics and wait too long, you end up saying: “I could have turned $50,000 into $150,000. Instead I spent $5,000 well and it brought me $10,000.” Creative fatigue is real. When something works, push it.
Creative production: The real game
The question is: what does it cost you to find a winner?
Focus on cost per winner. Maybe you create 100 AI UGC ads and get one winner. Or maybe you get on camera yourself, make 10 variations, and one of those 10 beats all 100. Compare the cost to find what works for you.
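“Cost per winner” is just total production plus testing cost divided by the number of creatives that actually scale. A sketch comparing the two paths from the paragraph above, with invented costs:

```swift
struct CreativeBatch {
    let name: String
    let productionCost: Double // creators, tools, your own time priced in
    let testingSpend: Double   // ad spend burned to find out which ones work
    let winners: Int
}

func costPerWinner(_ batch: CreativeBatch) -> Double {
    batch.winners > 0 ? (batch.productionCost + batch.testingSpend) / Double(batch.winners) : .infinity
}

let aiUGC   = CreativeBatch(name: "100 AI UGC ads", productionCost: 800, testingSpend: 1500, winners: 1)
let founder = CreativeBatch(name: "10 founder videos", productionCost: 100, testingSpend: 300, winners: 1)
print(costPerWinner(aiUGC))   // 2300 per winner
print(costPerWinner(founder)) // 400 per winner, cheaper in this made-up example
```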
Learn CapCut. It’s how you turn footage into testable ads. AI tools like Sora or Higgs Field are good for hooks and B-roll, but you still need CapCut to finish.
Work with creators who are a 100% match for your audience. Authenticity matters more than follower count.
When you’re “Investable” — around $10K spent
- Repeatable creative production: A process that keeps finding winners, not just one lucky hit.
- Signal stability: 50+ events per week. Results aren’t pure noise.
- Economic clarity: You can explain CAC vs payback and why it makes sense.
- Scaling without superstition: Clear rules for when to duplicate, split geos, kill, or push.
Summary
Meta is a mirror. If it’s not working, it’s usually one of two things: your creative isn’t good enough, or your app isn’t at that level yet (pricing, onboarding, paywall, value).
Fix the fundamentals first. Then scale.
And if your blended ROAS is positive and you have the cash? Don’t let fear slow you down.
About the author
Samet Durgun runs The Growth Therapist from Berlin. After a decade scaling apps, he now works with subscription app founders as a fractional CMO. The focus: diagnosing why growth stalls before recommending fixes. Connect with him on LinkedIn.




