Bonus 48: Selection Bias in Supreme Court Analyses
Data-driven claims about the Supreme Court's (lack of) ideological homogeneity repeatedly ignore, or otherwise fail to account for, the justices' near-plenary control over the cases they decide
Welcome back to the weekly bonus content for “One First.” Although Monday’s regular newsletter will remain free for as long as I’m able to do this, much of Thursday’s content is behind a paywall to help incentivize those who are willing and able to support the work that goes into putting this newsletter together every week. I’m grateful to those of you who are already paid subscribers, and hope that those of you who aren’t will consider a paid subscription if your circumstances permit:
One of the central distinctions between the substance of Monday’s free issues and that of Thursday’s bonus content is the personalization of the latter. This week, my prompt was an op-ed published in the New York Times on Sunday by Fordham law professor (and my friend) Ethan Leib, and Nora Donnelly, a student at Fordham. The headline (which they likely didn’t write) claims that the Court is “Not as Politicized as You May Think,” and the piece, which builds from a forthcoming Southern California Law Review article, supports this argument through an analysis of the justices’ voting patterns in 87 statutory interpretation cases across the October 2020, 2021, and 2022 Terms. As they write, many of those 87 cases were decided unanimously, and even most of the divided ones did not split the Court into its “usual” camps: “There were actually only 10 cases over three years that generated the ideological division you might expect given the court’s configuration.” Thus, “ideology is not predetermining case outcomes 77 of 87 times in a large and important part of the docket that affects millions of people.”
I have no quibble with how Donnelly and Leib analyze the 87 cases they have chosen to analyze. Indeed, unlike far too many other attempts to quantify the justices’ voting patterns, they do an especially good job of accounting for cases with fractured rationales or those in which something other than statutory interpretation was doing the work.
But the attempt to generalize broader conclusions about the Court from that (large) subset suffers from a problem that has become virtually endemic in public discussions and academic analyses of the Court's work: the complete lack of attention paid to selection effects, and the very real bias baked into any such data by the fact that these are the cases (and questions) that the justices are choosing to decide. It's long past time to account for the high and low politics of certiorari when trying to assess the overall work (and polarization) of the Court, and it's disappointing that, in October 2023, this kind of academic and popular writing isn't even acknowledging the selection-effects problem, let alone accounting for it.
For those who are not paid subscribers, the next free installment of the newsletter will drop on Monday morning. For those who are, please read on.