A/B Testing Agent Skill
Experiment setup, variant splitting, statistical significance, and feature flag integration.
---
description: Experiment setup, variant splitting, statistical significance, and feature flag integration.
homepage: https://yepapi.com/skills/ab-testing
metadata:
  tags: [testing, experiments, ab-testing, optimization]
---
# A/B Testing
## Rules
- Hash-based assignment: deterministic variant selection using `hash(userId + experimentId) % 100` — no randomness drift between sessions
- Variant config: store experiments in a config object `{ id, name, variants: [{ id, weight }], status }` — never hardcode variants
- Split traffic server-side: assign in middleware or API route, set cookie for consistency — client-side splits cause flicker
- Statistical significance: require 95% confidence (p < 0.05) before declaring a winner — use z-test for proportions
- Sample size: calculate minimum sample before starting — `n = (Z^2 * p * (1-p)) / E^2` where E is minimum detectable effect
- One metric per experiment: define a primary metric upfront (conversion rate, click-through, revenue) — secondary metrics are exploratory only
- Mutual exclusion: users in one experiment should not overlap with conflicting experiments — use experiment layers
- Feature flags: wrap experiment code in feature flag checks — clean up losing variants after experiment ends
- Duration: run for at least 1-2 full business cycles (7-14 days minimum) to account for day-of-week effects
- Logging: track `experiment_id`, `variant_id`, `user_id`, `timestamp` for every impression and conversion event
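The assignment and config rules above can be sketched in TypeScript. The `Experiment` shape and the FNV-1a hash are illustrative choices, not part of the skill itself; any stable string hash with a uniform spread works:

```typescript
// Illustrative experiment config shape, per the "variant config" rule.
interface Variant { id: string; weight: number }          // weights sum to 100
interface Experiment {
  id: string;
  name: string;
  variants: Variant[];
  status: "running" | "stopped";
}

// FNV-1a: a small, fast, stable 32-bit string hash.
// Stable output means no randomness drift between sessions.
function fnv1a(input: string): number {
  let hash = 0x811c9dc5;
  for (let i = 0; i < input.length; i++) {
    hash ^= input.charCodeAt(i);
    hash = Math.imul(hash, 0x01000193);
  }
  return hash >>> 0; // force unsigned 32-bit
}

// Deterministic bucket in [0, 100): the same user + experiment pair
// always lands in the same variant, no stored state required.
function assignVariant(userId: string, experiment: Experiment): Variant {
  const bucket = fnv1a(userId + experiment.id) % 100;
  let cumulative = 0;
  for (const variant of experiment.variants) {
    cumulative += variant.weight;
    if (bucket < cumulative) return variant;
  }
  return experiment.variants[experiment.variants.length - 1]; // rounding guard
}
```

Call `assignVariant` server-side (middleware or an API route) and persist the result in a cookie, as the rules above describe, so the client never re-rolls the assignment.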
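The significance and sample-size rules map to a standard two-proportion z-test. A minimal sketch, assuming conversion counts per variant; the normal-CDF polynomial is the Abramowitz-Stegun approximation, and the function names are illustrative:

```typescript
// Standard normal CDF via the Abramowitz-Stegun polynomial (error below ~7.5e-8).
function normCdf(z: number): number {
  const t = 1 / (1 + 0.2316419 * Math.abs(z));
  const density = Math.exp((-z * z) / 2) / Math.sqrt(2 * Math.PI);
  const poly =
    t * (0.319381530 + t * (-0.356563782 + t * (1.781477937 + t * (-1.821255978 + t * 1.330274429))));
  const upperTail = density * poly;
  return z >= 0 ? 1 - upperTail : upperTail;
}

// Two-sided z-test for two conversion proportions (e.g. control vs. treatment).
function twoProportionZTest(convA: number, nA: number, convB: number, nB: number) {
  const pA = convA / nA;
  const pB = convB / nB;
  const pooled = (convA + convB) / (nA + nB); // pooled proportion under H0
  const se = Math.sqrt(pooled * (1 - pooled) * (1 / nA + 1 / nB));
  const z = (pA - pB) / se;
  const pValue = 2 * (1 - normCdf(Math.abs(z))); // two-sided
  return { z, pValue, significant: pValue < 0.05 }; // 95% confidence rule
}

// Minimum per-variant sample from the rule's formula: n = Z^2 * p * (1-p) / E^2.
// Note: this quick estimate does not account for statistical power.
function minSampleSize(baselineRate: number, minDetectableEffect: number, z = 1.96): number {
  return Math.ceil((z * z * baselineRate * (1 - baselineRate)) / minDetectableEffect ** 2);
}
```

Run `minSampleSize` before launch and `twoProportionZTest` only once that sample is reached, per the peeking rule below.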
## Avoid
- Peeking at results before sample size is reached — inflates false positive rate
- Changing experiment parameters mid-run — invalidates statistical analysis
- Testing too many variants at once — each additional variant raises the required sample size and adds comparisons that inflate the false positive rate
- Client-side variant assignment without cookies — causes flickering and inconsistent experiences
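The experiment-layer rule (mutual exclusion) can be sketched the same way: each layer hashes users with its own salt, and experiments inside a layer split the layer's 100 buckets, so two experiments in the same layer can never claim the same user. The shapes and names here are illustrative:

```typescript
// FNV-1a string hash (stable across sessions, no stored state needed).
function layerHash(input: string): number {
  let hash = 0x811c9dc5;
  for (let i = 0; i < input.length; i++) {
    hash ^= input.charCodeAt(i);
    hash = Math.imul(hash, 0x01000193);
  }
  return hash >>> 0;
}

interface LayerExperiment {
  id: string;
  trafficPct: number; // this experiment's share of the layer's 100 buckets
}

// Returns the single experiment (if any) that owns this user within the layer.
// One bucket maps to at most one experiment, so conflicting experiments placed
// in the same layer are mutually exclusive by construction.
function experimentForUser(
  userId: string,
  layerId: string,
  experiments: LayerExperiment[],
): string | null {
  const bucket = layerHash(userId + layerId) % 100;
  let cumulative = 0;
  for (const exp of experiments) {
    cumulative += exp.trafficPct;
    if (bucket < cumulative) return exp.id;
  }
  return null; // unclaimed traffic sees the default experience
}
```

Experiments in different layers hash with different `layerId` salts, so their assignments are independent; that is how unrelated tests can run on the same users without interacting.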
## Why Use the A/B Testing Skill?
Without this skill, your AI guesses at A/B testing patterns. It might hallucinate deprecated APIs, use outdated conventions, or miss best practices entirely. With it, your AI follows a proven ruleset — every suggestion aligns with current standards.
Drop this skill into your project and your AI instantly knows the rules. Better code suggestions, fewer errors, faster shipping.
## Try These Prompts
These prompts work better with the A/B Testing skill installed. Your AI knows the context and writes code that fits.
"Set up an A/B testing framework with hash-based user assignment and statistical significance tracking"
"Create an experiment dashboard that shows variant performance with confidence intervals"
"Build a feature flag system that supports percentage-based rollouts and A/B experiment layers"
## A/B Testing skill — FAQ

**What does this skill do?**
It provides rules for experiment setup, hash-based variant assignment, statistical significance checks, and feature flag integration. Your AI writes correct A/B testing code from the start instead of guessing at implementation details.

**How do I install it?**
Run `npx skills add YepAPI/skills --skill ab-testing` in your project root. This copies the skill file into your repo where your AI coding tool can read it automatically.

**How do I calculate the minimum sample size?**
Use the formula `n = (Z^2 * p * (1-p)) / E^2`, where E is the minimum detectable effect. The skill includes this formula and enforces a 95% confidence threshold before declaring winners.
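As a worked example of that formula, with assumed numbers (5% baseline conversion, a 1 percentage-point minimum detectable effect, and Z = 1.96 for 95% confidence):

```typescript
// n = Z^2 * p * (1-p) / E^2 with Z = 1.96, p = 0.05, E = 0.01
const z = 1.96;
const p = 0.05;
const e = 0.01;
const n = Math.ceil((z * z * p * (1 - p)) / (e * e));
console.log(n); // 1825 users per variant
```

So roughly 1,825 users per variant before the z-test result means anything; halving the detectable effect to 0.5 points would quadruple that.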