---
id: ins_experimentation-paralysis
operator: Elena Verna
operator_role: Growth advisor
source_url: https://www.lennysnewsletter.com/p/the-new-ai-growth-playbook-for-2026-elena-verna
source_type: podcast
source_title: Elena Verna 3.0 — 10 growth tactics that never work — Lenny's Podcast
source_date: 2026-04-28
captured_date: 2026-05-01
domain: [growth-demand, product]
lifecycle: [process-cadence]
maturity: applied
artifact_class: framework
score: { originality: 4, specificity: 5, evidence: 4, transferability: 5, source: 5 }
tier: A
related: [ins_add-new-growth-model-every-18-months]
raw_ref: raw/podcasts/elena-verna--earned-channels-tactics-that-never-work--2026-04-28.md
---

# Don't test what won't reach sample size in a month; pre/post is fine

## Claim

A/B testing has a sample-size cost most teams ignore. If a change won't accrue enough sample in 30 days, don't test it: ship it and read it pre/post (24h, 7d, 28d, 1yr). Reserve scientific A/B testing for high-traffic real estate and strategic pivots. Treating every change as an experiment is a velocity killer dressed up as rigor. (A feasibility check for the 30-day rule is worked under Sketches at the end of this note.)

## Mechanism

Underpowered tests yield indeterminate results ("no significant difference"), which leadership reads as "the change didn't work." Teams then revert good changes because the test couldn't prove them, or freeze in indecision. Pre/post readouts give an honest directional signal at low cost, fast enough to preserve velocity. Statistical rigor matters when the stakes justify it; when they don't, it actively harms velocity.

## Conditions

Holds when:

- The team can read pre/post data honestly, without mistaking seasonal effects for impact.
- Leadership accepts directional reads on lower-stakes work.

Fails when:

- The change is high-stakes (pricing, core flows). Skipping rigor there compounds errors.
- Pre/post can't isolate the change from other variables (several changes shipped at once, a major external event).

## Evidence

> "If we cannot collect the sample size in a month, we shouldn't test it. Period." · Elena Verna on Lenny's Podcast, 2026-04-28

Her default is pre-vs-post readouts at 24h, 7d, 28d, and 1yr; A/B testing is reserved for high-traffic surfaces and strategic decisions. (A minimal readout at those horizons is also sketched below.)

## Signals

- The team's testing cadence accelerates while decision quality does not degrade.
- Indeterminate "we don't know if it worked" results disappear from reviews.
- Leadership treats pre/post outcomes as legitimate signal, not lazy work.

## Counter-evidence

A/B-testing purists argue that any pre/post comparison is confounded by external variables. They are technically right and operationally wrong: in low-traffic environments, pre/post is the only honest tool available. Apply full A/B rigor only where data volume genuinely supports it.

## Cross-references

- `ins_add-new-growth-model-every-18-months`: the seeding period explicitly allows pre/post over A/B.
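
## Sketches

A minimal feasibility check for the 30-day rule, using the standard two-proportion z-test sample-size approximation. The function names and every concrete number (baseline rate, relative lift, daily traffic) are hypothetical illustrations, not anything Verna specifies; the source supplies only the rule itself.

```python
"""Feasibility check for the 30-day rule: a sketch, not Verna's tooling.

Uses the standard two-proportion z-test sample-size approximation; all
concrete numbers (baseline, lift, traffic) are hypothetical.
"""
import math
from statistics import NormalDist


def required_n_per_arm(baseline: float, rel_lift: float,
                       alpha: float = 0.05, power: float = 0.80) -> int:
    """Users needed per arm to detect `rel_lift` on a `baseline` conversion rate."""
    p1 = baseline
    p2 = baseline * (1 + rel_lift)
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # two-sided significance
    z_power = NormalDist().inv_cdf(power)          # desired statistical power
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return math.ceil((z_alpha + z_power) ** 2 * variance / (p2 - p1) ** 2)


def testable_within_a_month(daily_eligible: float, baseline: float,
                            rel_lift: float) -> bool:
    """Verna's rule: if 30 days of traffic can't fill both arms, don't test."""
    return 30 * daily_eligible >= 2 * required_n_per_arm(baseline, rel_lift)


if __name__ == "__main__":
    # Hypothetical low-traffic surface: 400 eligible users/day, 4% baseline
    # conversion, hoping to detect a +10% relative lift.
    n = required_n_per_arm(baseline=0.04, rel_lift=0.10)
    print(f"~{n:,} users per arm needed")                            # ~39,473
    print("A/B test it?", testable_within_a_month(400, 0.04, 0.10))  # False
```

On plausible low-traffic numbers (400 eligible users/day, a 4% baseline, hunting a +10% relative lift) the answer is no: roughly 39,000 users per arm are needed against about 12,000 users of monthly traffic, which is exactly the situation where the rule says ship it and read pre/post.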
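
And a minimal pre/post readout at the four horizons named above (24h, 7d, 28d, 1yr), assuming a daily metric series keyed by date. The equal-length matched pre-window is my comparison choice, not prescribed by the source, and the sketch inherits every confound the Conditions section warns about.

```python
"""Pre/post readout at 24h / 7d / 28d / 1yr: a minimal sketch.

Assumes a daily metric keyed by date; the equal-length matched pre-window
is an assumption of this sketch, not prescribed by the source.
"""
from datetime import date, timedelta

HORIZONS = {"24h": 1, "7d": 7, "28d": 28, "1yr": 365}


def pre_post_readout(daily_metric: dict[date, float],
                     ship_date: date) -> dict[str, float]:
    """Relative change of each post-ship window vs an equal-length pre window.

    Directional only: it cannot separate the shipped change from seasonality
    or anything else that moved in the same window.
    """
    readout: dict[str, float] = {}
    for label, days in HORIZONS.items():
        post_dates = [ship_date + timedelta(i) for i in range(days)]
        pre_dates = [ship_date - timedelta(i + 1) for i in range(days)]
        post = [daily_metric[d] for d in post_dates if d in daily_metric]
        pre = [daily_metric[d] for d in pre_dates if d in daily_metric]
        if len(post) < days or len(pre) < days:
            continue  # incomplete window, e.g. the 1yr read right after ship
        pre_mean = sum(pre) / len(pre)
        readout[label] = (sum(post) / len(post) - pre_mean) / pre_mean
    return readout


if __name__ == "__main__":
    # Toy series: a flat 100/day before ship, 110/day after.
    ship = date(2026, 3, 1)
    series = {ship + timedelta(i): (100.0 if i < 0 else 110.0)
              for i in range(-60, 30)}
    print(pre_post_readout(series, ship))  # 24h/7d/28d all +0.10; 1yr skipped
```

Matching window lengths at least keeps the day-of-week mix symmetric for the 7d and 28d reads; it does nothing about seasonality or concurrent launches, which is why this is directional signal, not proof.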