Even though common law courts create and articulate the law within their decisions, surprisingly little is known about the quantitative readability levels of any single national apex court's decisions, and even less is known about how one apex court's readability levels compare to those of other similar apex courts. This Article offers new data and analysis that significantly reduce the blind spots in these areas by reporting the results of an original empirical study of the readability of judicial decisions released in 2020 by the apex courts of five English-speaking jurisdictions.
This Article draws on applied linguistics theory and Natural Language Processing techniques to provide both uni- and multi-dimensional readability scores for the 233 judicial decisions (comprising more than 3 million words of text) that form the corpus of this study. The results show that readability levels vary by approximately 50% between the most- and least-readable jurisdictions (the United States and Australia, respectively). This Article then analyzes the data comparatively to determine whether institution- or jurisdiction-specific factors can explain readability variances between the different courts. This Article concludes that certain comparative factors, such as the average panel size used by each court and the proportions of former law professors and of women sitting on panels in each jurisdiction, can explain 23.7% of the total variance in readability scores. These findings may help judicial branch and executive branch decision-makers better understand how their court's decisions "stack up" against other courts in terms of readability, and they offer insights into how readability levels could be enhanced.
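To illustrate the kind of unidimensional readability scoring the abstract describes, the sketch below computes the Flesch Reading Ease score, a standard single-dimension readability formula. This is not necessarily the metric the Article uses; the syllable counter here is a crude vowel-group heuristic offered purely for illustration, whereas published studies typically rely on validated tools.

```python
import re

def count_syllables(word: str) -> int:
    # Crude heuristic: count contiguous vowel groups.
    # Real readability studies use dictionary- or ML-based syllabifiers.
    groups = re.findall(r"[aeiouy]+", word.lower())
    return max(1, len(groups))

def flesch_reading_ease(text: str) -> float:
    """Flesch Reading Ease: higher scores indicate easier text.
    FRE = 206.835 - 1.015*(words/sentences) - 84.6*(syllables/words)
    """
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return (206.835
            - 1.015 * (len(words) / len(sentences))
            - 84.6 * (syllables / len(words)))

# Hypothetical one-sentence example of judicial prose:
sample = "The court held that the statute was unconstitutional."
print(round(flesch_reading_ease(sample), 2))
```

Scores near 100 correspond to very plain English, while dense legal prose often scores below 30; a multi-dimensional analysis would combine several such measures (e.g., sentence length, word frequency, cohesion) rather than rely on one formula.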
Author: Mike Madden
Volume 23, Issue 2