Aggregating for accuracy: a closer look at sampling and accuracy in class action litigation
Courts and authors have suggested that, under certain circumstances, claim aggregation—and statistical sampling procedures in particular—can increase not only efficiency, but accuracy as well. Such assertions have been used to rebut anti-aggregation arguments premised on the idea that accuracy cannot be sacrificed for the sake of efficiency. But assertions that sampling procedures can increase accuracy have been met with scepticism and a general unwillingness to rely on them in real-world contexts. The scepticism is arguably because legal scholarship has not yet examined, in any rigorous form, the practical effect of sampling procedures on accuracy under real-world conditions of both claim variability and judgment variability, and with realistic constraints imposed by the law. In this article, I introduce a framework for examining the conditions under which sampling can increase accuracy in the law. In particular, I develop a model for studying the effects of sampling on accuracy, and for deriving the optimal sample size, with respect to accuracy, under conditions of claim and judgment variability, and with constraints described by reductive sampling. I then discuss a number of important extensions, such as methods for estimating variability parameters and the use of sequential sampling and stratification.
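The intuition behind the abstract's central claim can be illustrated with a toy simulation. This sketch is not the article's actual model; the population sizes, variability parameters, and the simple "award every class member the sample mean" procedure are all illustrative assumptions. It compares the mean squared error of adjudicating every claim individually (each claim receives its own noisy judgment) against a sampling procedure (adjudicate a random sample and extend the average award to the whole class), under separate parameters for claim variability and judgment variability.

```python
import numpy as np

# Toy simulation (illustrative only, not the article's model): compare
# per-claim adjudication against a sample-mean award when both claims
# and judgments vary. All parameter values are assumed for illustration.
rng = np.random.default_rng(0)

N = 10_000             # class size
n = 50                 # number of claims sampled for adjudication
sigma_claim = 1.0      # claim variability: spread of true claim values
sigma_judgment = 10.0  # judgment variability: noise in one adjudication

# True (unobservable) value of each claim.
true_values = rng.normal(100.0, sigma_claim, size=N)

# Individual adjudication: every claim gets its own noisy judgment.
individual_awards = true_values + rng.normal(0.0, sigma_judgment, size=N)
mse_individual = np.mean((individual_awards - true_values) ** 2)

# Sampling: adjudicate n randomly chosen claims, then award the
# sample-mean judgment to every member of the class.
sample = rng.choice(true_values, size=n, replace=False)
sample_judgments = sample + rng.normal(0.0, sigma_judgment, size=n)
sampling_award = sample_judgments.mean()
mse_sampling = np.mean((sampling_award - true_values) ** 2)

print(f"MSE, individual adjudication: {mse_individual:.1f}")
print(f"MSE, sample-mean award:       {mse_sampling:.1f}")
```

In this regime, averaging over the sample washes out judgment noise (roughly as (sigma_claim**2 + sigma_judgment**2) / n) at the cost of ignoring claim-specific variation (a floor of about sigma_claim**2), so when judgment variability dominates claim variability the aggregated award can be the more accurate one; reversing the two sigmas reverses the comparison, which is why an optimal sample size depends on both parameters.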
Law, Probability and Risk
Hillel J. Bavli, Aggregating for accuracy: a closer look at sampling and accuracy in class action litigation, 14 L. Probability and Risk 67 (2015)