Publication | Closed Access

Peeking at A/B Tests

Citations: 128
References: 21
Year: 2017

TLDR

Users of A/B testing software routinely monitor p‑values and confidence intervals while an experiment is running, which invalidates traditional fixed‑horizon inference. The authors develop always‑valid p‑values and confidence intervals, deployed in the Optimizely platform, that remain provably correct under continuous monitoring. Simulations and studies on Optimizely's data show the method makes continuous monitoring safe while detecting true effects more efficiently than traditional approaches.
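The peeking problem described above can be made concrete with a small simulation (an illustrative sketch, not code from the paper): repeatedly applying a fixed-horizon z-test after every observation of a no-effect experiment drives the false-positive rate far above the nominal 5%.

```python
import random

def peeks_to_significance(rng, n_max=500, z=1.96):
    """Run one A/A experiment (no true effect) and peek after every
    observation; return True if any interim z-test looks 'significant'."""
    total = 0.0
    for n in range(1, n_max + 1):
        total += rng.gauss(0.0, 1.0)   # data generated under the null
        if abs(total / n ** 0.5) > z:  # two-sided test at nominal 5%
            return True                # stopped early on a false positive
    return False

rng = random.Random(0)
trials = 2000
false_pos = sum(peeks_to_significance(rng) for _ in range(trials)) / trials
print(false_pos)  # typically several times the nominal 0.05 level
```

With enough interim looks the probability of at least one spurious "significant" result keeps growing, which is why fixed-horizon p-values are unreliable under continuous monitoring.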

Abstract

This paper reports on novel statistical methodology, which has been deployed by the commercial A/B testing platform Optimizely to communicate experimental results to their customers. Our methodology addresses the issue that traditional p-values and confidence intervals give unreliable inference. This is because users of A/B testing software are known to continuously monitor these measures as the experiment is running. We provide always valid p-values and confidence intervals that are provably robust to this effect. Not only does this make it safe for a user to continuously monitor, but it empowers her to detect true effects more efficiently. This paper provides simulations and numerical studies on Optimizely's data, demonstrating an improvement in detection performance over traditional methods.
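One standard way to construct always-valid p-values of the kind the abstract describes is a mixture sequential probability ratio test (mSPRT). The sketch below is an illustration under simplifying assumptions, not the paper's exact deployed procedure: i.i.d. normal observations with known unit variance, null hypothesis of zero mean, and a normal mixing prior whose scale `tau2` is a tuning parameter chosen here arbitrarily.

```python
import math

def always_valid_pvalues(xs, tau2=1.0):
    """Always-valid p-value sequence from a mixture SPRT (mSPRT).

    Assumes i.i.d. N(theta, 1) observations and H0: theta = 0, with a
    N(0, tau2) mixing prior over the alternative; tau2 is illustrative.
    """
    s, p, out = 0.0, 1.0, []
    for n, x in enumerate(xs, start=1):
        s += x
        # Closed-form mixture likelihood ratio for normal data, sigma^2 = 1.
        lam = math.sqrt(1.0 / (1.0 + n * tau2)) * math.exp(
            tau2 * s * s / (2.0 * (1.0 + n * tau2)))
        # Running minimum keeps the p-value sequence monotone, so it is
        # valid no matter when the user chooses to stop and look.
        p = min(p, 1.0 / lam)
        out.append(p)
    return out

# Example: a persistent positive effect drives the p-value toward zero.
ps = always_valid_pvalues([1.0] * 200)
print(ps[0], ps[-1])
```

Because `P(exists n: p_n <= alpha) <= alpha` under the null, a user may check the sequence after every observation without inflating the false-positive rate, which is the safety property the abstract claims.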

