Monday, May 7, 2012
Tamara Piety (Tulsa) writes:
On the Roger Williams study, I don’t think it is “a good snapshot of more regional law schools with highly productive faculties,” for a number of reasons. My first objection is that the report does not really count “productivity.” Rather, it counts only productivity in certain selected journals. At the heart of my critique is a concern about whether what is included in and excluded from that selection can be justified as validly discriminating among journals on the basis of quality. The study is entitled “Per Capita Productivity of Articles in Top Journals, 1993-2011: Law Schools Outside the U.S. News Top 50.” The phrase “top journals” modifies “productivity” and clearly signals the limitations of the study. However, the report is commonly referred to simply as “the productivity study,” by both RW and others. I understand that calling it “the productivity study” is shorthand, but if faculty publish in journals not captured by the study, that productivity is not reflected, which makes it impossible to distinguish between those who are publishing below the somewhat arbitrary cut-off and those who are not publishing at all. That seems unfair, and I think all would agree that there is a big difference between these two conditions. There are quite a few respectable journals not captured by RW’s methodology.
As indicated on the RW website, journals selected for the list were the main journals of schools with a peer reputation rank of 2.8 or higher in the US News report of 2008, plus 13 “selected journals,” mostly specialty journals, from among the top 50 journals as reflected in Washington & Lee’s combined rankings from 2006 (the 2007 report). As far as it goes, I suppose this is a reasonable cut-off (although why 2.8 and not 2.7? Or why 2.8 and not 2.9 or 3.0?).
Yet however justified this method may have been, it is now several years old and there have been many changes to both peer reputation and to the W & L rankings. Using stale data tends to increase the likelihood of what you so justly criticize as the “echo chamber” effect of the US News Reports. One justification for drawing the line at the top 20 publications, as you do in your own study, is that the top-20 group remains relatively stable from year to year. Such a list would indeed be a snapshot, however rough, of productivity in top journals. (I still chafe at erasing productivity if it does not appear in a top journal, but that is another discussion).
Not so when you start moving farther down in the rankings; those rankings have changed quite a bit at the margins. And those changes may have a fairly dramatic impact on the “productivity” calculation for individual faculty, and a change in an individual’s score may change the institution’s score. For example, there are at least three schools which did not have a peer ranking of 2.8 or higher in 2007 but which do now, and so would presumably make a difference in this survey: Yeshiva (Cardozo), George Mason, and Utah. Because their peer reputation did not reach 2.8, their main journals weren’t included in the RW list. All three are listed among the US News top 50 institutions, and although the W & L journal rankings are 83 for Utah and 91 for George Mason, Cardozo’s is 27. Yet none of these journals makes RW’s list. There are other institutions included in the RW study which have experienced a drop in peer reputation rating from 2007 to 2011 that would put them on the wrong side of the original line -- Oregon (2.8 to 2.7) and Villanova (2.8 to 2.6). Yet publications in these two schools’ journals are included. When I raised this objection to Yelnosky last year, he told me that they assumed these shifts around the margins would be “a wash” -- that is, they would not make a significant difference to the overall calculation. That may be correct, although I am not sure what his evidence is for this assumption, but it is certainly not true as to some individuals.
And there is another problematic aspect of this study. In describing its methodology, RW says that in addition to using schools with a peer assessment score of 2.8 or above, it added to its list “an additional 13 journals that appeared in the top 50 of the Washington & Lee Law Journal Combined Rankings in June 2007.” It looks as though every specialty journal ranked 50 or better in W & L was included (although it is interesting that several of the main law journals which did make RW’s list, presumably on the basis of peer reputation, were ranked below the Berkeley Technology Law Journal, which comes in at 50 on the 2006 W & L ranking). Once again, the list of specialty journals ranked 50 or better in the 2011 W & L rankings contains changes from 2007. Far fewer specialty journals now make the W & L top-50 cut-off. For example, the Berkeley Technology Law Journal drops off the list (although barely, at 53). Also dropped from the top 50: the J. of Legal Studies (74), the Harvard Environmental Law Journal (53), the Harvard J. on Law & Public Policy (51), the Harvard Journal on Legislation (60), the University of Penn J. of Con. Law (62), and the Yale J. on Reg. (57). These are all terrific journals, and I do not endorse excluding them. But they no longer meet the criteria RW established (whatever the merits of that system may be), and continuing to adhere to a stale ranking means excluding several higher-ranked journals. This makes the RW study look less representative of quality or merit on its own terms, and instead rather arbitrary.
This last point highlights that there are some disparities -- in some cases very large ones -- between the W & L rankings of journals and the US News peer reputation score for the school in which the journal is housed. For example, Villanova’s main journal makes the RW list on the basis of its US News peer reputation score of 2.8 in 2007 (it is now 2.6); yet its journal comes in at 123 on W & L, far below several other respectable journals whose schools just miss the 2.8 peer reputation cut-off. Something similar is true for Pittsburgh, with a peer rating in US News of 2.8 and a journal ranking of 155; the University of Miami had (and has) a peer ranking of 2.8 but a journal ranking of 137; and Oregon’s peer ranking has, as noted, slipped to 2.7 while its journal is ranked at 131. In contrast, publications in, for instance, Houston did not count for the RW survey even though its review is ranked at 41 (up from 57 in 2006) in the W & L ranking and its peer reputation score, while not 2.8, is a not-very-distant 2.6 (the same as Villanova) as of 2011. Yet publications in Pittsburgh, Oregon, and Miami all counted, while those in Houston did not. There are a number of other respectable schools not counted by RW whose journals are ranked above (sometimes considerably above) those which RW does count. Here are just a few (in no particular order):
On RW List          W & L Rank     Not on RW List      W & L Rank
Oregon                  131        Lewis & Clark            52
Georgia                  75        Brooklyn                 55
BYU                      86        Utah                     83
Florida State            73        U. of Cincinnati         56
U. of Miami             131        Hofstra                  61
U. of Pittsburgh        155        Cardozo                  27
San Diego                86        Buffalo                  79
Villanova               123        Case Western            120
This is not a systematic comparison, but you get the idea. It seems to me that the W & L methodology, which counts citations, and the US News methodology, which counts -- who knows what? -- may be comparing apples and oranges. But it is clear that under RW’s methodology a substantial number of journals from law schools ranked between 50 and 100 in the US News overall scores are excluded from RW’s study. If we count just the ones with a peer reputation score of 2.6 (the same as Villanova) or higher, they are: Cardozo, Loyola LA, George Mason, Houston, Tennessee, Case Western, Chicago-Kent, Temple, Rutgers-NJ, Brooklyn, Lewis & Clark, and Kansas. (I may have missed some.) On the W & L list, two journals not counted by RW are in the top 50 -- Cardozo (27) and Houston (41). In short, the inclusions and omissions are hard to justify on the merits and thus raise questions about the validity of what the study purports to measure.
I realize you have to draw lines somewhere, and that a selection can seem arbitrary and still be generally valid. For the reasons I discuss, I don’t think this can be said of the RW study -- at least not anymore. I realize it would be a massive task to go back and recalibrate the results based on new rankings. And it is probably unrealistic to revisit the lists every year, since you would not only have to add new articles and faculty and delete those who left, but you would also have to reevaluate articles already counted for a particular faculty member. This last point illustrates another weakness in using this report as a snapshot of productivity: if an article a faculty member published drops off the list because the journal in which it appeared drops off the list, surely that says nothing about the faculty member’s productivity -- only something about the prestige of the journal.
For all these reasons, I think a real productivity report ought to include all law reviews, or all journals, period. Such a report would still be imperfect in many ways (by excluding books, for instance). But it would be a fairer view of what it purports to measure -- “productivity” -- because it would capture more journals (particularly peer-reviewed journals), unlike this study, which reinforces the already murky and incredibly sticky reputational scores from US News. While I don’t think peer reputation is completely invalid, it is something of a black box. It is hard to figure out what actually goes into calculating peer reputation other than the prior US News score itself. Hence the “echo chamber.” The RW study, which I assume was inspired in part by a desire to “unstick” some of these sticky numbers, tends instead, I fear, to make them stickier still, especially to the extent it relies on old data.
Since anything can get published somewhere, I'm skeptical about the value of counting everything. But Professor Piety raises some useful issues about how to do the selectivity cut-offs. Thoughts from readers? Signed comments only: full name and valid e-mail address.
UPDATE: Do see Professor Yelnosky's reply in the comments, below.