
The Startup Validation Stack: What 12 Projects Taught Me About Evidence

CB Insights analyzed 101 startup post-mortems and found that 42% failed because there was no market need. I have experienced the same failure mode in miniature: I built SureInsure (an insurance analysis tool) to feature-complete before asking a single user whether they wanted the product. Nobody did. The building took three weeks. The validation that would have saved those three weeks takes one afternoon.1

TL;DR

Startup validation follows a specific sequence: desirability (do people want the solution?), feasibility (can the team build the solution?), and viability (can the solution sustain a business?). After launching 12 projects in 9 months — Ace Citizenship, Return (941), ResumeGeni, Banana List, Water, Reps, Design Gallery, Sorting Visualizer, Starfield Destroyer, SureInsure, amp97, and my personal site — I have experienced every validation anti-pattern firsthand. The projects where I validated before building shipped faster and found users. The projects where I built before validating taught me expensive lessons.


My Validation Scorecard

| Project | Validated Before Building? | Outcome |
| --- | --- | --- |
| ResumeGeni | Yes (landing page + waitlist) | Active users, revenue |
| Ace Citizenship | Yes (community research + interviews) | Growing user base |
| Personal site | Partial (content validated, design not) | 100/100 Lighthouse, steady traffic |
| Banana List | No (scratched my own itch) | Useful to me, no market traction |
| SureInsure | No (built to feature-complete first) | Zero users, shelved |
| Sorting Visualizer | No (weekend project) | Portfolio piece, not a product |

The pattern is stark: projects where I invested in validation evidence before writing code found users. Projects where I built first and validated only afterward never did.2


The Validation Sequence

Why Order Matters

Engineers default to feasibility first: “Can we build the thing?” Product managers default to viability first: “Can we monetize the thing?” Both skip the question that kills 42% of startups: “Does anyone actually want the thing?”3

The correct sequence tests the cheapest-to-validate assumption first:

  1. Problem validation (Is the problem real and painful?)
  2. Solution validation (Does the proposed solution address the problem?)
  3. Channel validation (Can the target customer be reached?)
  4. Revenue validation (Will customers pay?)
  5. Scale validation (Do unit economics work at scale?)

Each stage costs more to test than the previous one. Skipping ahead wastes resources testing expensive assumptions that depend on unvalidated cheap ones.
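
To make the gating concrete, here is a minimal sketch of the ordering as code. The stage names follow the list above; the `evidence` dictionary and the helper function are hypothetical stand-ins for whatever artifacts each stage actually produces.

```python
# Validation stages, ordered cheapest to most expensive to test.
STAGES = ["problem", "solution", "channel", "revenue", "scale"]

def next_unvalidated_stage(evidence: dict[str, bool]) -> str | None:
    """Return the first stage that still lacks passing evidence.

    Testing a later stage first is wasted effort: its assumptions
    rest on every earlier stage already holding.
    """
    for stage in STAGES:
        if not evidence.get(stage, False):
            return stage
    return None  # every stage validated

# Example: problem and solution validated, so channel is the next cheapest test.
print(next_unvalidated_stage({"problem": True, "solution": True}))  # -> channel
```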


The Projects Where I Skipped Steps

SureInsure: The Feature-Complete Trap

I built SureInsure — an LLM-powered insurance policy analysis tool — because I found insurance documents confusing. My validation approach: none. I assumed my personal frustration generalized to a market need.

Three weeks of building produced a working tool that could parse insurance policies, highlight coverage gaps, and explain exclusions in plain language. The technology worked. The problem: insurance policy holders do not actively seek analysis tools. The pain is real but latent — people do not know their coverage is inadequate until a claim fails. No amount of product quality solves the distribution problem for a latent pain point.

What validation would have revealed: A dozen conversations with insurance holders would have exposed that nobody searches for “insurance policy analyzer.” The problem exists at claim time, not at policy review time. The channel (search) does not match the problem timing (crisis).4

Banana List: Scratching My Own Itch

I built Banana List (a SwiftUI + SwiftData grocery list app) because I wanted a specific workflow: quick capture, iCloud sync, and nothing else. The validation was my own usage — which is valid for tools I build for myself but produces no market evidence.

Banana List works. I use the app daily. The app serves one user perfectly. The error was not building the app but assuming “I want the product” generalizes to “a market wants the product.” My usage validated feasibility and personal desirability but validated nothing about market desirability or distribution.


The Projects Where I Validated First

ResumeGeni: Landing Page Before Code

ResumeGeni started as a question: would job seekers pay for AI-generated resumes optimized for ATS systems? Before writing a line of application code, I built a landing page describing the value proposition and added a waitlist form.

The evidence:

  - 340 email signups in 2 weeks from targeted Reddit and LinkedIn posts
  - 12 users who replied asking “When can I use the product?”
  - 3 users who offered to pay for early access

The waitlist validated desirability (people want ATS-optimized resumes) and channel (job seeker communities on Reddit/LinkedIn). Only after the evidence passed my threshold did I invest in building the FastAPI backend, HTMX frontend, and LLM integration.5
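
For scale: the pre-build artifact itself is tiny. The sketch below is not the actual ResumeGeni code, just an illustration of how little a FastAPI landing page with a waitlist form requires; the copy and the in-memory signup list are placeholders for whatever store you would actually use.

```python
# Hypothetical minimal waitlist page -- not the actual ResumeGeni code.
# Requires: fastapi, python-multipart (for Form), and an ASGI server such as uvicorn.
from fastapi import FastAPI, Form
from fastapi.responses import HTMLResponse

app = FastAPI()
signups: list[str] = []  # placeholder; swap for SQLite or a spreadsheet in practice

@app.get("/", response_class=HTMLResponse)
def landing() -> str:
    # One value proposition, one form field, nothing else.
    return """
    <h1>ATS-optimized resumes, generated for you</h1>
    <form method="post" action="/waitlist">
      <input type="email" name="email" required>
      <button>Join the waitlist</button>
    </form>
    """

@app.post("/waitlist", response_class=HTMLResponse)
def join_waitlist(email: str = Form(...)) -> str:
    signups.append(email)
    return "<p>Thanks! You're on the list.</p>"
```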

Ace Citizenship: Community Research First

Ace Citizenship (a citizenship test prep app) started with community research, not code. I spent two weeks in citizenship preparation forums, subreddits, and Facebook groups observing:

  - What questions people asked most frequently
  - What existing solutions they complained about
  - What they wished existed

The research revealed a gap: existing prep apps covered content but not test-taking strategy. The strategy gap became the product differentiator. Building started only after the research produced a clear differentiator that existing products did not address.6


The 30-Day Framework (Refined by Experience)

Week 1: Problem Validation

Method: Conduct 10-15 structured interviews with potential customers. Do not describe the solution. Focus exclusively on the problem space.

Questions that actually work:

  - “Walk me through the last time you experienced [problem]. What happened?”
  - “What did you try? What worked and what failed?”
  - “How much time/money do you spend dealing with [problem] today?”

Evidence artifact: Problem frequency and severity matrix. If fewer than 7 of 15 interviewees describe the problem as frequent (weekly+) and painful (spending money/time on workarounds), the problem lacks sufficient market pull.7
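
As a rough illustration of the tally (the interview data below is made up), each interview can be recorded as a frequent/painful pair and counted against the 7-of-15 bar.

```python
# Each interview recorded as (frequent_weekly_plus, painful_paying_for_workarounds).
# Illustrative data, not real interview results.
interviews = [
    (True, True), (True, False), (False, False), (True, True),
    (True, True), (False, True), (True, True), (True, True),
    (False, False), (True, True), (True, False), (True, True),
    (False, True), (True, True), (False, False),
]

both = sum(1 for frequent, painful in interviews if frequent and painful)
print(f"{both} of {len(interviews)} report a frequent AND painful problem")
print("sufficient market pull" if both >= 7 else "problem lacks market pull")
```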

Week 2: Solution Validation

Method: Present a solution concept (wireframes, landing page, or verbal description) to the same interviewees. Measure reaction intensity, not politeness.

Strong signals: “When can I use the product?” “Can I pay for early access?” “Let me introduce you to my colleague who needs a solution.”

Weak signals: “That’s interesting.” “Looks nice.” “I’d probably try the product.” I heard all three for SureInsure from friends. None converted to usage.

Evidence artifact: Commitment rate. If fewer than 3 of 15 take a concrete action (sign up, deposit, referral), the solution does not match the problem strongly enough.8
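
The same arithmetic applies to commitment rate: count only concrete actions, never polite remarks. The reaction labels below are illustrative, not real interview data.

```python
# Reactions from solution-concept walkthroughs; only concrete actions count.
CONCRETE_ACTIONS = {"signed_up", "paid_deposit", "made_referral"}

reactions = ["that's interesting", "signed_up", "looks nice", "signed_up",
             "i'd probably try it", "made_referral", "that's interesting",
             "looks nice", "signed_up", "that's interesting", "looks nice",
             "i'd probably try it", "that's interesting", "looks nice", "looks nice"]

commitments = sum(1 for r in reactions if r in CONCRETE_ACTIONS)
print(f"{commitments} of {len(reactions)} committed")  # 4 of 15 -> clears the 3-of-15 bar
```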

Week 3: Channel Validation

Method: Run two small-scale customer acquisition experiments. Spend $200-500 per channel testing whether the target customer can be reached.

For ResumeGeni, I tested two channels:

  - Reddit job seeker communities: 340 signups at $0 spend (organic posts)
  - LinkedIn targeted content: 45 signups at $150 spend ($3.33 per signup)

Reddit won. The channel validation told me where to invest ongoing acquisition effort.9
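
The comparison reduces to cost per signup, shown here with the numbers above (Reddit is effectively $0 per signup because the posts were organic).

```python
# Channel validation results from above; cost per signup decides where to invest.
channels = {
    "reddit_communities": {"spend_usd": 0, "signups": 340},
    "linkedin_paid": {"spend_usd": 150, "signups": 45},
}

for name, c in channels.items():
    cost = c["spend_usd"] / c["signups"] if c["signups"] else float("inf")
    print(f"{name}: {c['signups']} signups at ${cost:.2f} each")
# reddit_communities: 340 signups at $0.00 each
# linkedin_paid: 45 signups at $3.33 each
```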

Week 4: Revenue and Unit Economics Validation

Method: Pre-sell the product or accept payment for early access.

Evidence artifact: Conversion rate from qualified lead to paying customer. If the rate falls below 2% for B2B or 0.5% for B2C, the value proposition requires revision before investing in production development.10
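
A quick way to check whether a pre-sell result clears the bar, using the thresholds above; the lead and customer counts below are placeholders, not results from my projects.

```python
# Conversion thresholds from the paragraph above.
THRESHOLDS = {"b2b": 0.02, "b2c": 0.005}

def clears_bar(qualified_leads: int, paying_customers: int, segment: str) -> bool:
    rate = paying_customers / qualified_leads
    print(f"conversion: {rate:.1%} vs {THRESHOLDS[segment]:.1%} threshold")
    return rate >= THRESHOLDS[segment]

# Hypothetical B2C pre-sell: 400 qualified leads, 3 paid -> 0.8%, above 0.5%.
print(clears_bar(400, 3, "b2c"))
```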


Validation Anti-Patterns I Have Practiced

The Survey Trap

Surveys measure stated preferences. Customer interviews and commitment behaviors measure revealed preferences. A survey showing 80% “would use the product” translates to roughly 5% actual adoption. I learned the gap between stated and revealed preferences with SureInsure: every friend said “that sounds useful.” Zero friends used the product after launch.11

The Founder-Audience Problem

Founders who validate exclusively within their personal network receive biased data. Friends provide supportive feedback that does not predict market behavior. Cold outreach to strangers produces higher-quality validation data because strangers have no social incentive to be encouraging.

My ResumeGeni validation worked because the signups came from strangers on Reddit, not friends. My SureInsure “validation” failed because I only asked people who knew me.12


Key Takeaways

For founders:

  - Validate desirability before feasibility; the most common failure mode is building a product nobody wants, not building a product that does not work
  - Measure commitment behaviors (signups, deposits, referrals) rather than stated enthusiasm; polite interest does not predict purchasing behavior
  - A landing page with a waitlist costs one afternoon; building to feature-complete costs weeks or months

For engineers joining startups:

  - Ask to see the validation evidence before committing to a technical architecture; the right technical investment depends on which hypotheses have been validated
  - Prototype for learning speed, not production quality; the first version’s purpose is generating evidence, not serving customers at scale


References


  1. CB Insights, “The Top 12 Reasons Startups Fail,” Research Brief, 2021. 

  2. Author’s project validation scorecard. 12 projects launched in 9 months with varying validation approaches. Projects with pre-build validation outperformed projects without. 

  3. Osterwalder, Alexander et al., Testing Business Ideas, Wiley, 2019. Validation sequence methodology. 

  4. Author’s SureInsure post-mortem. LLM-powered insurance analysis tool built to feature-complete without market validation. Zero user adoption. 

  5. Author’s ResumeGeni validation. Landing page produced 340 signups, 12 direct inquiries, and 3 early access payment offers before application code was written. 

  6. Author’s Ace Citizenship research. Two weeks of community observation in citizenship prep forums revealed strategy gap as product differentiator. 

  7. Fitzpatrick, Rob, The Mom Test, self-published, 2013. Customer interview methodology that avoids false positives. 

  8. Alvarez, Cindy, Lean Customer Development, O’Reilly, 2014. Commitment behavior as validation signal. 

  9. Author’s channel validation. Community forum posts (340 signups, $0) vs. professional network paid content (45 signups, $150). Channel economics determined acquisition approach. 

  10. Ries, Eric, The Lean Startup, Crown Business, 2011. MVP and pre-sell validation methodology. 

  11. Ariely, Dan, Predictably Irrational, HarperCollins, 2008. Gap between stated and revealed preferences. 

  12. Maurya, Ash, Running Lean, O’Reilly, 2012. Cold outreach validation methodology. 
