Detect sample ratio mismatch in your A/B tests. Catch data quality issues before they invalidate your results.
A/B tests rely on random assignment to ensure that control and variant groups are statistically equivalent. When the observed traffic split deviates from the expected ratio, it signals that randomization may have been compromised. This is known as Sample Ratio Mismatch (SRM), and it's one of the most common and most overlooked threats to experiment validity.
The chi-square goodness-of-fit test compares your observed user counts against the expected counts under a fair split. A significant result (low p-value) means the observed imbalance is unlikely to have occurred by chance alone, pointing to a systematic issue in your experiment setup or data collection.
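The test above can be sketched in a few lines of Python. This is a minimal illustration for the common two-group case, using the fact that a chi-square statistic with one degree of freedom has survival function `erfc(sqrt(x / 2))`; the `alpha = 0.001` threshold is the strict cutoff conventionally used for SRM alerts, not a universal standard.

```python
import math

def srm_check(control, variant, expected_split=0.5, alpha=0.001):
    """Chi-square goodness-of-fit test for SRM with two groups.

    expected_split is the fraction of traffic the control group
    should receive under the intended allocation.
    """
    total = control + variant
    expected = (total * expected_split, total * (1 - expected_split))
    chi2 = sum((obs - exp) ** 2 / exp
               for obs, exp in zip((control, variant), expected))
    # With two groups the statistic has 1 degree of freedom, so the
    # p-value is the chi-square survival function erfc(sqrt(chi2 / 2)).
    p_value = math.erfc(math.sqrt(chi2 / 2))
    return chi2, p_value, p_value < alpha

# A 50/50 test that served 10,000 vs 10,800 users: the ~800-user gap
# is far too large to be chance, so SRM is flagged.
chi2, p, srm = srm_check(10000, 10800)
```

For experiments with more than two arms, the same statistic generalizes directly; you would just compute the p-value from a chi-square distribution with `k - 1` degrees of freedom (e.g. via `scipy.stats.chisquare`).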
SRM often stems from technical issues rather than statistical ones. Browser redirects, client-side rendering differences, ad blockers, and bot filtering can all create uneven group sizes. Even small implementation bugs, like a variant page loading slightly slower and losing impatient users, can trigger SRM.
When SRM is detected, start by checking your assignment logic and data pipeline. Look for events that fire in one group but not the other. Segment by browser, device, and geography to isolate the source. Leading experimentation platforms like Optimizely, Statsig, and Eppo run automated SRM checks and alert you before you draw conclusions from compromised data.
Explore more A/B testing and statistics tools
Analyze A/B test results with frequentist Z-tests and T-tests.
Check if your data follows a normal distribution.
Calculate standard deviation, variance, and coefficient of variation.