
Experiment Designer

Design A/B tests with proper methodology, sample sizes, and success criteria.

Time Saved

2-3 hrs → 15 min

Compared to doing it manually

Slash Command

/experiment-designer

Type this in Claude to run the skill

The Problem

Shipping changes without testing is risky. Poorly designed tests produce invalid results.

What You Get

  • Complete experiment design doc
  • Sample size calculation
  • Metrics with targets and alerts
  • Success/failure criteria
  • Pre-launch checklist
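The sample size calculation listed above is typically a two-proportion power analysis: given a baseline conversion rate, a minimum detectable effect, a significance level, and desired statistical power, it returns the number of users needed per variant. A minimal sketch using only the Python standard library (the function name and defaults are illustrative, not part of the skill itself):

```python
import math
from statistics import NormalDist

def sample_size_per_variant(baseline, mde, alpha=0.05, power=0.8):
    """Approximate sample size per variant for a two-proportion z-test.

    baseline: current conversion rate (e.g. 0.10 for 10%)
    mde: minimum detectable effect, absolute (e.g. 0.02 for +2 points)
    alpha: significance level for a two-sided test
    power: probability of detecting the effect if it exists
    """
    p1, p2 = baseline, baseline + mde
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # critical value, two-sided
    z_beta = NormalDist().inv_cdf(power)           # critical value for power
    variance = p1 * (1 - p1) + p2 * (1 - p2)       # pooled per-variant variance
    n = (z_alpha + z_beta) ** 2 * variance / (p2 - p1) ** 2
    return math.ceil(n)

# Detecting a lift from 10% to 12% at alpha=0.05, power=0.8
# requires a few thousand users in each variant.
n = sample_size_per_variant(0.10, 0.02)
```

Running the test for longer than this minimum is fine; stopping it early because the numbers "look significant" invalidates the analysis.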

Want this automated?

This skill is part of a workflow that automates multiple steps together.

How to use this skill

  1. Download the skill file using the button on this page
  2. Add the file to your .claude/skills/ folder in your project
  3. Type /experiment-designer in Claude to run the skill

Best For

  • PMs running product experiments
  • Growth teams
  • Anyone making data-driven decisions

Frequently Asked Questions

What's the difference between an A/B test and an experiment?

A/B tests compare two versions of the same thing. Experiments are broader — they can test hypotheses, validate assumptions, or explore new directions. All A/B tests are experiments, but not all experiments are A/B tests.

How do I design a good experiment?

Start with a hypothesis ("We believe X will cause Y"). Define success metrics before you start. Minimize variables to isolate cause and effect. Set a timeline and commit to acting on results.
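The steps above (hypothesis, pre-defined metrics, minimized variables, fixed timeline, a commitment to act) can be captured as a lightweight plan record. A sketch; the class and field names are illustrative assumptions, not something the skill prescribes:

```python
from dataclasses import dataclass, field

@dataclass
class ExperimentPlan:
    hypothesis: str                  # "We believe X will cause Y"
    success_metrics: list[str]       # defined BEFORE launch, not after
    variants: list[str]              # keep to two to isolate cause and effect
    end_date: str                    # fixed timeline; analyze only at this date
    decision_rule: str = ""          # what you commit to doing with each outcome

plan = ExperimentPlan(
    hypothesis="We believe a shorter signup form will increase completions",
    success_metrics=["signup_completion_rate"],
    variants=["control: 5-field form", "treatment: 2-field form"],
    end_date="2025-07-01",
    decision_rule="Ship treatment if completion lifts >= 2 points; else keep control",
)
```

Writing the decision rule down before launch is what makes the final step ("commit to acting on results") enforceable.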

What if my experiment fails?

Failed experiments are successful learning. Document what you learned, update your assumptions, and decide: iterate, pivot, or move on. The only failed experiment is one you don't learn from.
