Brian Leiter's Law School Reports

Brian Leiter
University of Chicago Law School

A Member of the Law Professor Blogs Network

Monday, May 7, 2012

On "faculty productivity" studies

Tamara Piety (Tulsa) writes:

On the Roger Williams study, I don’t think it is “a good snapshot of more regional law schools with highly productive faculties” for a number of reasons. My first objection is that the report does not really count “productivity.” Rather, it counts only productivity in certain selected journals. At the heart of my critique is a concern about whether what is included in and what is left out of that selection can be justified as validly discriminating between journals on the basis of quality. The study is entitled “Per Capita Productivity of Articles in Top Journals, 1993-2011: Law Schools Outside the U.S. News Top 50.” The words “top journals” modify the word “productivity” and clearly signal the limitations of the study. However, the report is commonly referred to simply as “the productivity study,” by both RW and others. I understand that calling it “the productivity study” is shorthand, but if faculty are published in journals not captured by the study, that productivity is not reflected, which makes it impossible to distinguish between those who are publishing below the somewhat arbitrary cut-off and those who are not publishing at all. That seems unfair, and I think all would agree that there is a big difference between these two conditions. There are quite a few respectable journals not captured by RW’s methodology.

As indicated on the RW website, the journals selected for the list were the main journals of schools with a peer reputation score of 2.8 or higher in the 2008 US News report, plus 13 “selected journals,” mostly specialty journals, drawn from the top 50 of Washington & Lee’s combined rankings from 2006 (the 2007 report). So far as it goes, I suppose this is a reasonable cut-off (although why 2.8 and not 2.7? Or why not 2.9 or 3.0?).
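To make the cut-off concrete, the selection rule amounts to something like the following sketch. The schools, scores, and journal names here are invented placeholders; this is only an illustration of the stated rule, not RW’s actual data or code.

```python
# Illustrative sketch of the stated selection rule: a school's main journal
# counts if its 2008 peer assessment score is at least 2.8, and a hand-picked
# set of specialty journals is added on top. All names and scores are invented.

PEER_CUTOFF = 2.8

peer_scores_2008 = {        # hypothetical peer assessment scores
    "School A": 3.1,
    "School B": 2.8,
    "School C": 2.6,        # misses the cut-off, so its main journal is excluded
}

specialty_additions = {     # stand-in for the 13 W & L top-50 specialty journals
    "Specialty Journal X",
    "Specialty Journal Y",
}

def counted_journals(peer_scores, specialty):
    """Main journals of schools at or above the cut-off, plus the specialty list."""
    mains = {f"{school} Law Review"
             for school, score in peer_scores.items()
             if score >= PEER_CUTOFF}
    return mains | specialty

print(sorted(counted_journals(peer_scores_2008, specialty_additions)))
```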

Yet however justified this method may have been, it is now several years old, and there have been many changes to both peer reputation scores and the W & L rankings. Using stale data tends to increase the likelihood of what you so justly criticize as the “echo chamber” effect of the US News reports. One justification for drawing the line at the top 20 publications, as you do in your own study, is that the top-20 group remains relatively stable from year to year. Such a list would indeed be a snapshot, however rough, of productivity in top journals. (I still chafe at erasing productivity that does not appear in a top journal, but that is another discussion.)

Not so when you start moving farther down in the rankings; those rankings have changed quite a bit at the margins. And those changes may have a fairly dramatic impact on the “productivity” calculation for individual faculty, and a change in an individual’s score may change the institution’s score. For example, there are at least three schools which did not have a peer ranking of 2.8 or higher in 2007 but which do now and so would presumably make a difference in this survey: Yeshiva (Cardozo), George Mason, and Utah. Because their peer reputation did not reach 2.8, their main journals weren’t included in the RW list. All three are listed among the US News top 50 institutions, and although the W & L journal rankings are 83 for Utah and 91 for George Mason, Cardozo’s is 27. Yet none of these journals makes RW’s list. There are other institutions included in the RW study which have experienced a drop in peer reputation rating from 2007 to 2011 that would put them on the wrong side of the original line -- Oregon (2.8 to 2.7) and Villanova (2.8 to 2.6). Yet publications in these two schools’ journals are included. When I raised this objection to Yelnosky last year, he told me that they assumed these shifts around the margins would be “a wash” -- that is, they would not make a significant difference to the overall calculation. That may be correct, although I am not sure what his evidence is for this assumption, but it is certainly not true as to some individuals.
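If there is evidence for the “wash” assumption, it should be straightforward to produce: recompute each school’s count under the 2007 list and again under an updated list, and see how much the totals move. Schematically, with invented publication records used purely for illustration:

```python
# Hypothetical check of the "wash" claim: score each school under the old
# journal list and under an updated one, then report who moves. The records
# and journal names below are invented for illustration.

from collections import Counter

publications = [                      # (school of the author, placing journal)
    ("School A", "Journal Alpha"),
    ("School A", "Journal Beta"),
    ("School B", "Journal Beta"),
    ("School B", "Journal Gamma"),
]

list_2007 = {"Journal Alpha", "Journal Beta"}    # original cut
list_2011 = {"Journal Beta", "Journal Gamma"}    # marginal journals swapped in/out

def school_counts(pubs, counted):
    totals = Counter()
    for school, journal in pubs:
        if journal in counted:
            totals[school] += 1
    return totals

old = school_counts(publications, list_2007)
new = school_counts(publications, list_2011)
for school in sorted(set(old) | set(new)):
    if new[school] != old[school]:
        print(f"{school}: {old[school]} -> {new[school]}")
```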

And there is another problematic aspect of this study. In describing its methodology, RW says that in addition to using schools with a peer assessment score of 2.8 or above, it added to its list “an additional 13 journals that appeared in the top 50 of the Washington & Lee Law Journal Combined Rankings in June 2007.” It looks as though every specialty journal ranked 50 or better in W & L was included (although it is interesting that several of the main law journals which did make RW’s list, presumably on the basis of peer reputation, were listed below the Berkeley Technology Law Journal, which comes in at 50 on the 2006 W & L ranking). Once again, the list of specialty journals ranked 50 or better in the 2011 W & L rankings contains changes from 2007. There are now far fewer specialty journals that make the W & L top-50 cut-off. For example, the Berkeley Technology Law Journal drops off the list (although barely, at 53). Also dropped from the top 50: the Journal of Legal Studies (74), the Harvard Environmental Law Journal (53), the Harvard Journal on Law & Public Policy (51), the Harvard Journal on Legislation (60), the University of Pennsylvania Journal of Constitutional Law (62), and the Yale Journal on Regulation (57). These are all terrific journals, and I do not endorse excluding them. But they no longer meet the criteria RW established (whatever the merits of that system may be), and continuing to adhere to a stale ranking means excluding several higher-ranked journals. This makes the RW study look less representative of quality or merit on its own terms and instead rather arbitrary.

This last point highlights that there are some disparities, in some cases very large ones, between the W & L rankings of journals and the US News peer reputation score of the school in which the journal is housed. For example, Villanova’s main journal makes the RW list on the basis of its 2007 US News peer score of 2.8 (it is now 2.6), yet its journal comes in at 123 on W & L, far below several other respectable journals that just miss the 2.8 peer reputation cut-off. Something similar is true for Pittsburgh, with a peer rating of 2.8 in US News and a journal ranking of 155; for the University of Miami, which had (and has) a peer ranking of 2.8 but a journal ranking of 137; and for Oregon, whose peer ranking has, as noted, slipped to 2.7 and whose journal is ranked at 131. In contrast, publications in Houston’s journal did not count for the RW survey even though that review is ranked at 41 (up from 57 in 2006) in the W & L ranking and Houston’s peer score, while not 2.8, is a not-very-distant 2.6 (the same as Villanova’s) as of 2011. Yet publications in Pittsburgh, Oregon, and Miami all counted, while those in Houston did not. There are a number of other schools whose journals are not counted by RW but are ranked above (sometimes considerably above) journals which RW does count. Here are just a few (in no particular order):

 

On RW list          W & L Rank        Not on RW list      W & L Rank
Oregon              131               Lewis & Clark       52
Georgia             75                Brooklyn            55
BYU                 86                Utah                83
Florida State       73                U. of Cinn.         56
U. of Miami         131               Hofstra             61
U. Pitts.           155               Cardozo             27
San Diego           86                Buffalo             79
Villanova           123               Case Western        120

 

This is not a systematic comparison, but you get the idea. It seems to me that the W & L methodology, which counts citations, and the US News peer reputation score, which counts who knows what, may be comparing apples and oranges. But it is clear that under RW’s methodology a substantial number of journals from law schools ranked between 50 and 100 on overall score in US News are excluded from the study. Counting just the schools with a peer reputation score of 2.6 (the same as Villanova’s) or higher, they are: Cardozo, Loyola LA, George Mason, Houston, Tennessee, Case Western, Chicago-Kent, Temple, Rutgers-NJ, Brooklyn, Lewis & Clark, and Kansas. (I may have missed some.) And on the W & L list, two journals in the top 50 are not counted by RW: Cardozo (27) and Houston (41). In short, the inclusions and omissions are hard to justify on the merits and thus raise questions about the validity of what the study purports to measure.
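The mismatch can also be checked mechanically against the ranks in the table above: the worst-ranked journal that RW counts sits at 155, so every journal in the right-hand column outranks at least one journal that is counted. A few lines of code, using only the ranks listed in the table, make the comparison explicit:

```python
# Ranks taken from the table above; inclusion status follows RW's list as
# described in this post.

included = {"Oregon": 131, "Georgia": 75, "BYU": 86, "Florida State": 73,
            "U. of Miami": 131, "U. Pitts.": 155, "San Diego": 86, "Villanova": 123}
excluded = {"Lewis & Clark": 52, "Brooklyn": 55, "Utah": 83, "U. of Cinn.": 56,
            "Hofstra": 61, "Cardozo": 27, "Buffalo": 79, "Case Western": 120}

worst_included = max(included.values())   # 155 (U. Pitts.)
for name, rank in sorted(excluded.items(), key=lambda kv: kv[1]):
    if rank < worst_included:
        print(f"{name} (W & L rank {rank}) is excluded despite outranking an included journal")
```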

I realize you have to draw lines somewhere and that the selection can seem arbitrary and still be generally valid. For the reasons I discuss, I don’t think this can be said of the RW study -- at least not any more. I realize it would be a massive task to go back and recalibrate the results based on new rankings. And it is probably unrealistic to revisit the lists every year, since you would not only have to add new articles and faculty and delete those who left, but you would also have to reevaluate articles that had already been counted for a particular faculty member. This last point illustrates another weakness in using this report as a snapshot of productivity: if an article drops off the list because the journal in which it was published drops off the list, surely that says nothing about the faculty member’s productivity, only something about the prestige of the journal.

For all these reasons, I think a real productivity report ought to include all law reviews, or all journals, period. Such a report would still be imperfect in many ways (by excluding books, for instance). But because it would capture more journals (particularly peer-reviewed journals), it would be a fairer view of what it purports to measure -- “productivity” -- than this study, which reinforces the already murky and incredibly sticky reputational scores from US News. While I don’t think peer reputation is completely invalid, it is something of a black box; it is hard to figure out what actually goes into calculating peer reputation other than the prior US News score itself. Hence the “echo chamber.” The RW study, which I assume was inspired in part as a way to try to “unstick” some of these sticky numbers, tends instead, I fear, to make them stickier still, especially to the extent it relies on old data.

Since anything can get published somewhere, I'm skeptical about the value of counting everything. But Professor Piety raises some useful issues about how to do the selectivity cut-offs. Thoughts from readers? Signed comments only: full name and valid e-mail address.

UPDATE: Do see Professor Yelnosky's reply in the comments, below.

http://leiterlawschool.typepad.com/leiter/2012/05/on-faculty-productivity-studies.html


Comments

I recall saying something similar when the study was first announced in the comments here: http://prawfsblawg.blogs.com/prawfsblawg/2008/09/faculty-product.html

My comment then still seems apropos. I'll add that I've since published in George Mason's journal. I thought it was on the list, but am surprised to see it is not. I guess I'm one article less productive than I was yesterday.

As for the notion that anyone can publish anywhere, that's not exactly true. But even if it were, so what? This is a study of schools outside the top 50, whose professors have a harder time placing in "top" journals. It seems odd to explicitly look at lower-ranked schools and then ignore a key factor of lower-rankedness: the difficulty of placing in top journals.

My comment from 2008:
"I agree with anon(1) at least as to the selection of 'top' journals. I think it's fine to call this a study of faculty placement, but to call it faculty 'productivity' is a misnomer. There is plenty of fine scholarship that appears in the journals of tier II - IV schools, as well as specialty journals of all schools.

Case in point, I have one article [NB I have more now!] in a listed journal. It has a few cites, but no one has mentioned it to me. I have another article in a specialty journal of a school ranked ~100. It has been cited numerous times, has led to speaking engagements, book chapter invitations, and has more than twice the downloads on SSRN [now 3x]. Is the first article true "scholarship" but the second article not?

To arbitrarily limit the data set to a small subset of all journals that exist and then pronounce that the study can aid in the evaluation of 'serious scholarly culture' seems a bit of a reach. That said, if you are evaluating which schools have fared better in placement, this study seems perfectly fine.

On a side note, why not include all journals?"

Posted by: Michael Risch | May 7, 2012 4:37:45 AM

On a related note, since I'm at Villanova:
1. Our reputation took a hit this year for reasons I don't want to replay. Does that mean all of the placements from 1993 on are devalued and the faculty who published are no longer productive?

2. In the last couple of years, I've had several people ask if I would pass along a good word to our journal because they had trouble placing elsewhere. One went to our journal; most didn't, for a variety of reasons. The requesters ranged from faculty at top-25 schools to aspiring profs.

To say that someone was more productive than others based on these two random facts seems a stretch.

Posted by: Michael Risch | May 7, 2012 4:43:12 AM

Why not include books, book chapters, and papers in non-law-review publications, figuring out a way to measure those contributions? A chapter in an Oxford or Cambridge collection deserves consideration and is certainly an indicator of productivity. A piece in a top peer-reviewed economics or political science or philosophy journal is too.

I understand that this would require more work in compiling data, drawing lines, making assessments, and so on. But there are many extremely productive law professors who do not publish exclusively in the top law reviews (though they may publish there too). If the aspiration is to provide a relatively accurate picture of "productivity," then why shouldn't the increasingly prevalent practice of publishing outside the (top) law reviews make some sort of appearance in these studies -- at least counting for more than the goose egg that all of this work gets now?

Posted by: Marc DeGirolami | May 7, 2012 5:14:17 AM

I can't quarrel with Tamara's main point -- that I could have used a different list of journals for the study.

I tried to come up with a list that went beyond the admittedly more stable top 15 or 20, given the small number of pieces published in those journals by faculty at schools outside the U.S. News Top 50. After that, however, reasonable minds differ on which other journals to include. I remain reasonably confident that switching a few journals in or out would not make a significant difference across an entire faculty, but I recognize it could, as Tamara suggests, for a given individual. That is one reason I have never published results for any particular faculty member. (I once sent the dataset to Paul Caron, and he listed the top tax scholars, but I would not do that again. I am relieved that he only listed the top scorers and not those who did not fare well by my limited measure).

I have never purported to be generating a measure of general scholarly productivity, and I can’t control how people characterize the results of the study. I describe the methodology with the study results in as transparent a way as possible: “The dataset consists of an inventory of the scholarly output in top law journals of the faculties at ‘non-elite’ law schools. It thus provides some objective information to assess the relative strength of the ‘non-elite’ schools in one form of scholarly research.”

I think the study is capturing something of value. For example, Yale and Harvard scored the highest among the schools we studied. San Diego is well below them, but it ranked first among schools not in the U.S. News Top 50 and well above the schools that fell between 41 and 82 in our study. Those results comport with what I imagine most of us would expect, and they serve as at least a rough check on the validity of the measure. While the differences between schools with scores that are close are likely insignificant, the faculty at a school with a score of 10.00 likely produces very different scholarship than the faculty at a school with a score of 3.00.
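For readers unfamiliar with the arithmetic, a per capita figure of this kind is, in rough terms, the number of counted placements divided by faculty size. The sketch below illustrates only that scale; the figures are invented, and the actual study applies its own counting rules over the 1993-2011 window.

```python
# Schematic arithmetic only: invented figures, not the study's actual counting
# or weighting rules.

def per_capita_score(counted_articles: int, faculty_size: int) -> float:
    """Articles placed in the counted journals, divided by faculty headcount."""
    return round(counted_articles / faculty_size, 2)

print(per_capita_score(120, 40))   # 3.0  -- the scale of the lower score mentioned above
print(per_capita_score(350, 35))   # 10.0 -- the scale of the higher score
```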

However, as I have said since I started this enterprise five years ago, I encourage others to design alternative or adjunct measures of the scholarly culture at various law schools. I applaud Gregory Sisk and his team at St. Thomas for publishing the results of their citation study, and I understand an updated version is in the works. (I also note that the results of their study are correlated with our results, although not perfectly).

I may indeed make some changes in our study going forward, but by all means, don’t wait for me. Let a thousand flowers bloom.

Posted by: Michael J. Yelnosky | May 7, 2012 4:48:00 PM

A somewhat related query: Whatever happened with "The Deadwood Report," which was announced in 2008 in the Green Bag? I am spending the year in China and my Internet is heavily censored, so maybe that is the problem, but I can't find any subsequent editions.

Posted by: Ann Bartow | May 14, 2012 5:58:57 PM
