
A/B Testing


Quick Definition

A/B testing compares two versions of a page or element (A vs B) to see which performs better: show version A to 50% of users and version B to the other 50%, then measure which converts more. It replaces guesswork with data-driven decisions and is essential for optimizing funnels.

Detailed Explanation

A/B testing (also called split testing) is a controlled experiment used to optimize conversion rates.

How it works:
  • Hypothesis: "Changing the CTA from 'Sign Up' to 'Start Free Trial' will increase conversions."
  • Split traffic: 50% of visitors see version A (the control), 50% see version B (the variant).
  • Measure results: track the conversion rate for each version.
  • Statistical significance: you need enough visitors (typically 1,000+ per variant) to trust the results.
  • Declare a winner: if version B converts 20% better with 95% confidence, implement it.

What to test:
  • Headlines (most impact)
  • CTAs (button text, color, placement)
  • Pricing (amounts, positioning, tiers)
  • Images (product screenshots vs lifestyle photos)
  • Form fields (number, order, required vs optional)
  • Page layout (long-form vs short)

Industry examples: Booking.com runs 1,000+ A/B tests simultaneously; every element is tested constantly. The Obama campaign A/B tested its donation pages and increased donations by 40%. Hotjar tested a video vs an image on its landing page, and the video increased conversions by 80%.

Testing tools: Google Optimize (free, basic; discontinued by Google in 2023), VWO (~$200/month, advanced), Optimizely (~$2K/month, enterprise).

Common mistakes: stopping a test too early (you need statistical significance), testing too many things at once (you can't tell what worked), not having a clear hypothesis (random testing wastes time), ignoring mobile (50%+ of traffic), and testing low-traffic pages (you need volume for significance).
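
To make the "split traffic" step concrete, here is a minimal sketch of hash-based 50/50 bucketing in Python. The experiment name and user IDs are made up for illustration; this is not the code any particular testing tool uses.

```python
import hashlib

def assign_variant(user_id: str, experiment: str = "cta-copy-test") -> str:
    """Deterministically bucket a user into variant A or B (50/50 split).

    Hashing (experiment + user_id) means the same visitor always sees the
    same variant, and each experiment splits traffic independently.
    """
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % 100           # a number from 0 to 99
    return "A" if bucket < 50 else "B"       # A = control, B = variant

# Illustrative check: a batch of visitors should split roughly evenly
counts = {"A": 0, "B": 0}
for i in range(10_000):
    counts[assign_variant(f"user-{i}")] += 1
print(counts)  # approximately {'A': 5000, 'B': 5000}
```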

Formula

Lift % = ((Variant Conversion Rate - Control Conversion Rate) ÷ Control Conversion Rate) × 100

Statistical significance: as a rule of thumb, you need roughly 1,000+ conversions per variant to call a winner at 95% confidence.
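
As a sketch, the lift formula and the 95% confidence check can be computed with a standard two-proportion z-test. The visitor and conversion counts below are invented to mirror the "20% better with 95% confidence" example above.

```python
from math import sqrt

def lift_pct(control_rate: float, variant_rate: float) -> float:
    """Lift % = ((variant rate - control rate) / control rate) x 100."""
    return (variant_rate - control_rate) / control_rate * 100

def z_score(conv_a: int, n_a: int, conv_b: int, n_b: int) -> float:
    """Two-proportion z-test (pooled). |z| > 1.96 is roughly 95% confidence."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# Invented example: 10,000 visitors per variant
conv_a, n_a = 230, 10_000   # control converts at 2.30%
conv_b, n_b = 276, 10_000   # variant converts at 2.76%

print(f"Lift: {lift_pct(conv_a / n_a, conv_b / n_b):.1f}%")   # Lift: 20.0%
z = z_score(conv_a, n_a, conv_b, n_b)
print(f"z = {z:.2f}, significant at 95%: {abs(z) > 1.96}")    # z = 2.07, True
```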

Real-World Examples

Dropbox

A/B tested homepage: "Your stuff, anywhere" (generic) vs "Your files, anywhere" (specific). "Files" version won—10% higher conversion. Small word change = millions more signups.

Groove

Added customer photo + testimonial to landing page. Conversion increased from 2.3% → 3.6% (57% lift). Social proof A/B test: ₹0 cost, huge impact.

Price testing

A SaaS company tested ₹999/mo vs ₹1,499/mo pricing. The ₹1,499 plan had 15% fewer signups but roughly 28% more revenue per visitor (1.5× the price at 0.85× the signups). The higher price won despite lower volume.
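
The trade-off in this example comes down to revenue per visitor (price multiplied by conversion rate). The 5% baseline conversion rate below is an assumption purely for illustration.

```python
# Revenue per visitor = plan price x signup (conversion) rate.
# The 5% baseline conversion rate is an assumed, illustrative figure.
baseline_rate = 0.05
plans = {
    "₹999/mo":   (999,  baseline_rate),           # control
    "₹1,499/mo": (1499, baseline_rate * 0.85),    # 15% fewer signups
}
for name, (price, rate) in plans.items():
    print(f"{name}: ₹{price * rate:.2f} revenue per visitor")
# ₹999/mo:   ₹49.95 revenue per visitor
# ₹1,499/mo: ₹63.71 revenue per visitor (about 28% higher)
```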

Why It Matters for Your Startup

A/B testing removes guesswork: opinions don't matter, data decides. Small improvements compound: a 10% lift at each of 5 funnel stages is a 61% overall improvement (1.10^5 ≈ 1.61). Netflix, Google, and Amazon run thousands of tests yearly; a data-driven culture is critical to their success. At scale, every percentage point of improvement is worth millions in revenue.

Common Mistakes

  • Testing without enough traffic (need 1,000+ conversions per variant for reliable results; see the sample-size sketch after this list)
  • Stopping test when you see "winner" after 2 days (false positives from small samples)
  • Testing too many variables at once (multivariate testing requires 10x more traffic)
  • Not testing mobile separately (mobile users behave differently)
  • Testing everything (focus on high-impact areas: headlines, CTAs, pricing)
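
For the traffic question specifically, a standard two-proportion sample-size formula shows why low-traffic pages rarely reach significance. The 3% baseline rate and 20% minimum detectable lift below are assumptions; plug in your own numbers.

```python
from math import sqrt, ceil

def visitors_per_variant(baseline: float, relative_mde: float,
                         z_alpha: float = 1.96, z_beta: float = 0.84) -> int:
    """Visitors needed per variant for a two-proportion test.

    baseline:     control conversion rate (e.g. 0.03 = 3%)
    relative_mde: smallest lift worth detecting, relative (0.20 = a 20% lift)
    Defaults z_alpha/z_beta correspond to 95% confidence and 80% power.
    """
    p1 = baseline
    p2 = baseline * (1 + relative_mde)
    p_bar = (p1 + p2) / 2
    numerator = (z_alpha * sqrt(2 * p_bar * (1 - p_bar)) +
                 z_beta * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return ceil(numerator / (p2 - p1) ** 2)

# Assumed example: 3% baseline conversion, looking for a 20% lift
print(visitors_per_variant(0.03, 0.20))   # roughly 13,900 visitors per variant
```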

Frequently Asked Questions

How long should I run an A/B test?

Until you reach statistical significance (95% confidence) with 1,000+ conversions per variant. Typically 2-4 weeks. Don't stop early—small samples give false results.
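
A quick way to sanity-check the "2-4 weeks" figure is to divide the sample you need by the traffic you get. Both numbers below are assumptions for illustration.

```python
from math import ceil

# Assumed inputs: replace with your own page traffic and the sample size
# you calculated for your baseline rate and minimum detectable effect.
visitors_needed_per_variant = 14_000   # e.g. from a sample-size calculation
daily_visitors_to_page = 1_500         # split 50/50 between A and B

days = ceil(visitors_needed_per_variant * 2 / daily_visitors_to_page)
print(f"Run the test for at least {days} days (~{days / 7:.1f} weeks)")
# Run the test for at least 19 days (~2.7 weeks)
```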

What should I test first?

High-impact areas: (1) Headlines (biggest impact), (2) CTAs (button text/color), (3) Pricing (amounts/positioning), (4) Form length, (5) Social proof. Avoid testing logo colors (low impact).

Can I test multiple things at once?

Not recommended (can't identify what caused improvement). Test one variable at a time. Exception: Multivariate testing if you have 10K+ daily visitors—requires much more traffic.
