
September 14, 2012

Sisk on Citation Studies

Gregory Sisk (St. Thomas/Minnesota) writes with a useful set of reflections on the Phillips-Yoo citation study discussed last week:

Believing as I do both that U.S. News rankings are flawed (and thus should be supplemented by multiple other ranking approaches) and that scholarly impact or quality is multi-dimensional (and thus also benefits from a diversity of approaches), I too welcome any thoughtful new attempt to evaluate the meaning of citations to legal scholarship.  James Phillips and John Yoo have certainly added a thoughtful contribution to scholarly rankings.  At the same time, I think Brian Leiter’s conclusion is right – the Phillips-Yoo approach is not better than the Leiter Scholarly Impact Score method, but rather is different.

However, one aspect of the Phillips-Yoo method strikes me as mistaken or at least mistakenly characterized.  As a central critique of Scholarly Impact Scores, Phillips and Yoo complain that the method refined by Brian and that we at the University of St. Thomas applied this year is “bias[ed] against younger scholars.”  Isn’t it odd to describe as “bias” the natural tendency of older, more experienced, and well-published scholars to draw greater attention from other scholars?  If Scholarly Impact Scores were calculated over a lengthy time frame, then tired and semi-retired older scholars – what Brian has aptly described as “once-productive dinosaurs” – would gain misleadingly high scores.  But limiting citations to a five-year period – as we do with Scholarly Impact citation ranking – quite properly minimizes the impact of no-longer productive scholars, because citation levels naturally fall over time without anything new being contributed.  In general, the fact that an experienced and still active older scholar draws greater attention based on the larger portfolio of work available to be cited in the past five years is hardly a bad or irrelevant thing.

Along the same lines, I don’t know about the wisdom of taking the actual and objective data of current citations and then recalculating scores on the basis of longevity among a particular law faculty.  Unless one is careful to explain that this longevity depreciation factor is being used to separate out and identify up-and-coming young scholars (or to rank schools that have more promising younger scholars than other schools), one could characterize this method as genuinely biased in the opposite direction, that is, against older scholars.

But the better point is not that one or the other is biased so much as that they are doing different things – prediction of the future versus description of the present.  As I see it, Phillips and Yoo seek to devise a method of predicting the likely future scholarly impact of younger scholars (which is commendable and intriguing).  But their introduction of a longevity depreciation factor should not be understood as an improvement on our measurement of current scholarly impact (which it is not).

Let me explain what I mean by an example.  Suppose that Professor A, a recently tenured scholar, has published only 3 articles, each of which has been cited 100 times over the past five years, for a total of 300 citations.  Professor B has an additional ten years of experience as a tenured faculty member and has published 12 articles, each of which has been cited 50 times over the past five years, for a total of 600 citations.  If I understand them correctly, Phillips and Yoo apparently would conclude that the scholar with the greater impact is Professor A, because each article individually drew more citations and because the number of years in teaching is fewer.  But, if we are measuring which scholar today has a greater scholarly impact, doesn’t the reality remain that it is Professor B?  The authors of 600 articles saw Professor B’s body of work as worthy of citation, while the authors of half as many articles reached that conclusion with respect to Professor A’s work.
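
To make the contrast concrete, here is a minimal sketch of the two scoring approaches applied to this hypothetical.  The per-article and per-year adjustments, and the years-in-teaching figures, are my assumptions about how a longevity-style adjustment might work, not the actual Phillips-Yoo formula:

    # Hypothetical five-year citation records; years in teaching are assumed
    # (Professor A recently tenured, Professor B with ten more years).
    professors = {
        "A": {"articles": 3, "citations": 300, "years": 7},
        "B": {"articles": 12, "citations": 600, "years": 17},
    }

    for name, p in professors.items():
        total = p["citations"]                        # raw five-year total (Scholarly Impact style)
        per_article = p["citations"] / p["articles"]  # assumed per-article adjustment
        per_year = p["citations"] / p["years"]        # assumed longevity depreciation
        print(f"Professor {name}: total={total}, "
              f"per article={per_article:.0f}, per year={per_year:.1f}")

    # Raw totals favor B (600 vs. 300); both adjusted rates favor A
    # (100 vs. 50 per article; roughly 42.9 vs. 35.3 per year).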

Now Phillips and Yoo may be on to something important in predicting that Professor A is more likely to be the more prominent scholar in the future.  Their description of scholars like Professor A as more “relevant” may be shorthand for “making a prediction about the future.”  Of course, it is possible that Professor A will not live up to the prediction, because he fails to remain productive, because his three articles prove to have exhausted his abilities and nothing afterward has the same scholarly luster, or because his work remains of the same high quality but he has saturated the scholarly interest in his particular scholarly message and thus he experiences diminishing returns in citations to his future articles that are in the same vein.  In fairness to Phillips and Yoo, however, those disappointing possibilities simply reflect that, by focusing on longevity and citations per article, they are attempting to predict the future, and any prediction includes an element of uncertainty.

By contrast, the 600 citations garnered by hypothetical Professor B over the past five years are not a prediction but a present reality.  Whether by exceptionally prolific writing or diligence in promoting a point of view through a series of articles or something else, she has succeeded in drawing the attention of the authors of 600 articles.  To dismiss or dilute that accomplishment by constructing a depreciation formula that incorporates number of years in teaching is to ignore the reality of the current impact.  Again, if our purpose is predictive, we might prognosticate that Professor B’s influence will decline or at least be surpassed in the future by Professor A.  But as a description of the present scholarly impact, haven’t the authors of 600 articles in my hypothetical already reached a definite conclusion?

As a final note, the Phillips-Yoo study appears mostly to provide confirmation of the Scholarly Impact Score method, as the changes in rankings among the 16 schools studied are mostly modest.  Moreover, because I expect that, however defined, All-Stars and Super-Stars would make up progressively smaller percentages of law faculties as one moves down through the ranking, the Phillips-Yoo method is likely to have decreasing significance as one moves from the 16 schools they chose to study to the larger set of 96 law faculties we studied in 2012 (and the full 200 law faculties studied in 2010).

Posted by Brian Leiter on September 14, 2012 in Rankings | Permalink

September 13, 2012

Class Action Against DePaul Dismissed

Not surprising.  Except in states with very stringent consumer protection statutes (California being one), these lawsuits are mostly destined to fail, for reasons noted previously.

Posted by Brian Leiter on September 13, 2012 in Legal Profession, Of Academic Interest | Permalink

September 12, 2012

A new high (or low?) in law school marketing

A video from New York Law School.  Curious.

Posted by Brian Leiter on September 12, 2012 in Law in Cyberspace, Legal Profession, Of Academic Interest, Rankings | Permalink

September 11, 2012

Why Tolerate Religion?

It will be out in early October.

Posted by Brian Leiter on September 11, 2012 in Jurisprudence | Permalink

September 10, 2012

Posner v. Scalia on Textualism, Redux

In our update to the earlier post, we noted that Justice Scalia's co-author, famed legal prose stylist Bryan Garner, had responded to Judge Posner's scathing review.  Part of Mr. Garner's response was to link to a series of posts by a very conservative blogger at the National Review (one with a clear antipathy towards Judge Posner), Ed Whelan (whom we encountered once before on the issue of internet anonymity).  The rhetorical volume of Mr. Whelan's postings is often out of proportion to their analytical and argumentative content, but his September 7 blog item does seem to get to the crux of the dispute.  Mr. Whelan writes (with bits of irrelevant rhetoric removed):

Scalia and Garner don’t hide the ball. In the first paragraph of their preface, they state that they seek to show that the “established methods of judicial interpretation … are widely neglected,” that this neglect has had lots of bad consequences, and that it is “not too late to restore a strong sense of judicial fidelity to texts” (p. xxvii). In their third paragraph, they state that just as meaning generally is determined by convention, so in legal systems “there are linguistic usages and conventions” as well as “jurisprudential conventions” (p. xxvii). To that end, they set forth and explain 57 interpretive principles or canons and they expose thirteen widespread falsities....

Among the strangest of Posner’s sentences is this rhetorical question: “How many readers of Scalia and Garner’s massive tome will do what I have done—read the opinions cited in their footnotes and discover that in discussing the opinions they give distorted impressions of how judges actually interpret legal texts?” (Emphasis added.)

...[T]he last clause of Posner’s question indicates that he somehow thinks that Scalia and Garner are trying to describe “how judges actually interpret legal texts.” In fact, their “approach is unapologetically normative, prescribing what, in our view, courts ought to do with operative language” (p. 9 (emphasis added)). They are reacting against, and trying to remedy, the widespread judicial “neglect” of “established methods of judicial interpretation.”

Mr. Garner writes in a similar vein:

Most of Judge Posner’s criticisms of our research were founded on the assertion that the cases cited used, in their rationales, more than the single canon being illustrated. That would be a telling criticism if the purpose of the cases had been to show the authoritativeness of the canon. But that was not the purpose. In choosing cases, we wanted examples that (1) contained lively problems that could be readily explained without bogging down readers, and (2) involved discrete textual points. We were looking for interesting issues that would illustrate good textualism—through our explanations. All the canons discussed are well established and have been frequently applied; the examples are there merely to show how each particular canon works. That a given court considered other factors besides the canon is quite irrelevant to our purpose. Indeed, it would be very hard to find examples in which a single canon was the sole basis for the decision.

This would explain why both Mr. Whelan and Mr. Garner effectively concede (or so it seems to me) that with respect to several of the cases identified by Judge Posner, the presentation of those cases in the book was, indeed, incomplete in precisely the ways Judge Posner suggested.  The defense to that charge is: the cases weren't being presented as evidence of textualism correctly practiced, but as illustrating only one canon of interpretation.

This, however, does raise a puzzle about the book, one consistent with Judge Posner's worries (though one to which there may well be a good answer).  If the cases cited as evidence of correct canons of textual interpretation did not, in fact, really rely on that canon of textual interpretation in rendering the decision (as Judge Posner charged, and as Mr. Garner, I take it, concedes), then these cases are no better than made-up examples of the application of canons of textual interpretation.  Why cite cases at all?  One might have thought the cases were meant to illustrate good textualist practice, but, if I understand them correctly now, both Mr. Garner and Mr. Whelan deny that.  The book is, as Mr. Whelan puts it, "unapologetically normative."  That's, of course, fine and could be quite interesting:  but why cite actual cases at all except to criticize them by reference to the applicable normative standard?  So while the reader might be tempted to think that the case examples are there as instances of sound interpretive practice, they are not, since most of the cases at issue were not really decided on textualist grounds, despite the passing invocation of a canon of which Mr. Garner and Justice Scalia approve.

Most readers will recall Karl Llewellyn's 1950 article on the canons of statutory construction.  He identified dozens of canons of statutory construction, many of which were obviously inconsistent with each other.  Yet each canon had been endorsed by a court as a correct canon.  Llewellyn did not show, of course, that there were no principled grounds for discriminating between the appropriateness of particular canons for particular cases and problems.  But the key question for a textualist is, if there are really 57 canons of textual interpretation (that's more than Llewellyn found!), are there really sufficient meta-principles governing conflict among these canons to make textualism a constraining and reasonably determinate method of legal interpretation?  The fact that the cases cited as illustrating particular canons are decided on non-textualist grounds might make one skeptical that there really are "established methods" of interpretation, as Justice Scalia and Mr. Garner write in the portion of the preface that Mr. Whelan quotes.  If the canons of interpretation constituting sound textualist practice are not really decisive for the courts in rendering their decisions, in what sense are they "established"?  This now seems to be the key question raised by this exchange.

UPDATE:  Mr. Garner's rejoinder and Judge Posner's response to it are now on-line at The New Republic.

Posted by Brian Leiter on September 10, 2012 in Jurisprudence, Of Academic Interest | Permalink

Canadian Law School Rankings, 2012

So in Canada, a civilized country, they ask someone who actually knows something to design law school rankings:  no self-reported data, no room for fraud, and the emphasis is wholly on the quality of the faculty and the outcomes for graduates.  I would have liked them to do some things differently, but overall it's not a travesty.  It's also not hugely controversial, for obvious reasons.  Bob Morse, please take note!

Posted by Brian Leiter on September 10, 2012 in Rankings | Permalink

September 8, 2012

Striking Chart on Decline of "High LSAT" Applicants from 2011 to 2012

The chart is here (the article is less interesting).  In 2011, there were some 4,000 applicants with LSAT scores in the 170 to 180 range; that dropped to about 3,260 in 2012.  Harvard wants to take in about 550 new students, Yale and Chicago about 200, Stanford about 175, Columbia and NYU around 400 to 450.   And of course Penn, Berkeley, Michigan, UVA, Duke, and others will get some of these students.  Of course, the "prize" applicants have both the high LSAT and a high GPA (and in a serious major one hopes!), and of course some number of these high LSAT scorers are just good test-takers who have poor records otherwise.

Posted by Brian Leiter on September 8, 2012 in Of Academic Interest, Rankings | Permalink

September 7, 2012

More citation studies

This time from James Phillips, a PhD student in Berkeley's JSP program, and John Yoo (Berkeley).

The two most interesting things they do are consult citations in the "Web of Science" database (to pick up citations for interdisciplinary scholars--this database includes social science and humanities journals) and calculate a citations-per-year score for individual faculty.  A couple of caveats:  (1) they look at only the top 16 schools according to the U.S. News reputation data, so not all law schools, and not even a few dozen law schools; and (2) they make some contentious--bordering in some cases on absurd--choices about what "area" to count a faculty member for.  (This is a dilemma, of course, for those who work in multiple areas, but my solution in the past was to try to gauge whether three-quarters of the citations to the faculty member's work were in the primary area in question, and then to also include a list of highly cited scholars who did not work exclusively in that area.)  Many of those decisions affect the ranking of schools by "area."  The limitation to the top 16 schools by reputation in U.S. News also would affect almost all these lists.  See also the comments here.
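
For the mechanics, here is a minimal sketch of a citations-per-year score that pools counts from a law-oriented database and the Web of Science.  The numbers, field names, and naive pooling rule are illustrative assumptions, not the authors' actual procedure:

    # Illustrative per-faculty citation counts from two sources (all numbers assumed).
    faculty = [
        {"name": "Smith", "law_db": 410, "web_of_science": 90, "years_teaching": 20},
        {"name": "Jones", "law_db": 150, "web_of_science": 200, "years_teaching": 6},
    ]

    for f in faculty:
        pooled = f["law_db"] + f["web_of_science"]  # naive pooling; overlapping cites would need deduplication
        score = pooled / f["years_teaching"]        # per-year normalization
        print(f'{f["name"]}: {score:.1f} citations per year')

    # Output: Smith scores 25.0, Jones 58.3 -- the per-year normalization favors
    # the junior scholar even though the senior one has more total citations.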

I liked their discussion of "all stars" versus "super stars," but it was a clear error to treat the top fifty faculty by citations per year as "super stars"--some are, most aren't.  Citation measures are skewed, first off, toward certain areas, like constitutional law.  More importantly, "super stars" should be easily appointable at any top law school, and maybe a third of the folks on the top fifty list are.  Some aren't appointable at any peer school.  And the citations-per-year measure has the bizarre consequences that, e.g., a Business School professor at Duke comes in at #7 (Wesley Cohen, whom I suspect most law professors have never heard of), and very junior faculty who have co-authored with actual "super stars" show up in the top 50.

I was also puzzled about why the authors thought "explaining" the U.S. News peer reputation scores was relevant--the closer a measure correlates with that, the more dubious it would seem to be, I would have thought.  But that's minor.

Appendix 5, publications per year, left me utterly mystified as to how the results were arrived at!

That's enough commentary for now--there's lots of interesting data here, and perhaps this will inspire others to undertake additional work in this vein. 

UPDATE:  A couple of readers asked whether I thought, per the title of the Phillips & Yoo piece, that their citation study method was "better."  I guess I think it's neither better nor worse, just different, but having different metrics is good, as long as they're basically sensible, and this one certainly is.  On the plus side, it's interesting to see how adding the Web of Science database affects things, and also how citations per year affects results.  On the negative side, a lot of "impact" that will be picked up in the Web of Science database may be of dubious relevance to the impact on law and legal scholarship.  And the citations-per-year measure has the odd result of vaulting very junior faculty with just a year or two in teaching into elevated positions just because they may have co-authored a piece with a senior scholar which then got a few dozen citations.  No metric is perfect (what would that even mean?), but this one certainly adds interesting information to the mix.  It's particularly notable how the results are basically the same at the high end (Yale, Harvard, Chicago, Stanford, Columbia, NYU), but with some interesting movements up and down thereafter.

Of course, the biggest drawback of their approach is not the approach itself but that they only examined 16 law schools.  But someone else could rectify that.

Posted by Brian Leiter on September 7, 2012 in Rankings | Permalink

September 6, 2012

Health law

The blog.

Posted by Brian Leiter on September 6, 2012 in Of Academic Interest | Permalink

September 5, 2012

Tax scholarship matters...

...and could cost some hedge fund managers a lot!

Posted by Brian Leiter on September 5, 2012 in Of Academic Interest | Permalink