Tuesday, November 27, 2007
Once more into the citation rankings fray...
Brian Tamanaha (St. John's) raises some issues that deserve comment. He writes:
My objection to use of citations as a proxy for "impact" is not the claim that articles and books may have an influence without being cited in law review articles, although this is clearly the case. [I have read and learned from Isaiah Berlin, for example, but have never cited him].
This would, indeed, be a quite feeble objection, for reasons that are no doubt obvious to Brian and everyone else. (I can think of no example of someone with a substantial impact on legal scholarship in his or her field who is not cited at least sometimes!)
Rather, the problem has to do with the bizarre citation practices that have developed in U.S. law reviews. Law reviews typically require that almost every assertion be backed up by a reference; articles often have in excess of 400 footnotes, nearly one for every sentence [invited and symposium pieces escape these constraints].
As a result, law professors are required to produce reams of citations, even for commonplace assertions, a task they sometimes push off on research assistants. Over time, stock or standard citations develop, which are cited again and again. An easy way to come up with a citation is to plumb (or loot) the footnotes of earlier articles on the subject. A lot of parasitic opportunism of this kind takes place because it is an efficient way to come up with the required footnote....Owing to this practice (common?), the fact that a book or article is cited does not necessarily indicate that it was read by the law professor who cited it. Even if the professor actually reads it, moreover, the citation does not mean the article or book cited had any impact on the professor, particularly when the citation is produced after the passage was written. Again, many sources are cited solely because a citation is required by law reviews.
I have no quarrel with the facts described here by Brian; the real issue is their import. Any one citation might, indeed, have the flaws noted (it might have been added by an RA, be standard boilerplate, etc.--I've noted this issue myself). We would need real evidence, however, that large numbers of citations to a scholar did not, in fact, indicate scholarly impact: and there is no evidence (literally none) that, with respect to scholars whose work is cited 500 or 1,000 times in a seven-year period, this can all be "explained away" by the citation practices Brian describes accurately enough.
A more refined measure of impact or influence would count only the times when a source is actually discussed in the article in some fashion, even minimally....
I agree this would be a better measure; it would also be logistically impossible to carry out for several hundred scholars with tens of thousands of citations.
Even if this problem is corrected, there are other serious problems with Leiter's citation study as a measure of impact on legal scholarship.
Consider, for example, Leiter's ranking of Critical Theorists. Roberto Unger is ranked 20th, with 480 citations. Setting aside what one might think of the merits of critical theory, it is absurd to suggest that the "true measure" of Unger's impact in this field places him behind all the others cited. His Knowledge and Politics and Law in Modern Society influenced a generation of critical theorists (and others), although these works might not be cited very often today. This example alone demonstrates that the citation study is deeply flawed as a measure of impact.
It actually shows nothing of the kind, partly because Brian has misstated what the result means. It means that during the recent period studied, 2000-2007, Unger's impact on legal scholarship was not as great as, say, Catharine MacKinnon's or Richard Delgado's, which strikes me as quite plausible. Law in Modern Society never had much impact, and Knowledge and Politics faded with the demise of Critical Legal Studies more than a decade ago. Will Unger have a longer-term impact than some of those whose work is being cited more often in recent years? Quite possibly. But this was, quite explicitly, not a study of "all-time" scholarly impact and importance; I don't think such a study could be meaningfully done with respect to our contemporaries.
Brian also misinterprets the meaning of the ordinal listing. As I stated at the beginning of the study: "The particular ordinal rank within the top ten or twenty means very little, but the lists do tend to be fairly representative of the major scholars in the field...." In other words, I explicitly caution against reading the data as meaning that #15 has more impact than #20 (though at the extremes [e.g., #5 vs. #20] that is probably a safe conclusion to draw in many cases).
Take a look at the "Law & Philosophy" ranking. A case can be made that Duncan Kennedy (1290 citations) and Roberto Unger, both relegated (or banished?) by Leiter to the Critical Theorists list, should also have been included on this list (both placing in the top ten, with Kennedy second). Leiter will no doubt assert that they do not engage in "legal philosophy" proper, which is a plausible claim, though by no means uncontroversial (Nussbaum and Waldron, on the list, also do much work that does not fit within a narrow definition of "legal philosophy"). Even conceding this, one might ask why such a narrowly defined category was utilized that excludes such important contemporary legal theorists.
Brian, who did his graduate work at Harvard Law School, is admirably loyal to his former teachers! But he makes a number of misleading claims in this short paragraph. No one, anywhere, is listed in more than one "top ten" or "top twenty" specialty listing; to the extent reasonable, scholars are placed in the broad area where most of their work falls. It is obviously uncontroversial to put Kennedy and Unger in "Critical Theories." The only question that might be raised is whether they should also appear in the unranked list of "highly cited scholars" who don't work exclusively in a particular field, in this case "Law and Philosophy." The objection to including them here is not, contrary to Brian, that they "do not engage in 'legal philosophy' proper" (since, as Brian notices, there are others on the list who don't work in legal philosophy proper); it is that there is nothing "philosophical" about their work, on any conception of philosophical work. (The category is "law and philosophy," not legal philosophy, which would be quite narrow: it is meant to capture a rich array of philosophically informed work about law, from general jurisprudence to criminal law theory and much else.) That is obvious in the case of Kennedy, whose treatment of philosophical matters is superficial; it might be more arguable in the case of Unger, since the philosophical content of Knowledge and Politics so clearly tracks the Left Hegelian style of argument in Lukács's History and Class Consciousness, but as Brian probably knows, Unger's scholarly impact outside the legal academy is largely with social theorists on some Sociology and Politics faculties, and not with philosophers. (Let me add, to preempt a standard refrain from those who aren't very philosophically competent, that this has nothing to do with Anglophone versus Continental traditions in philosophy; Unger is not a meaningful contributor to the latter traditions, as I am in a reasonably good position to know. There is a lot of sophomoric work of a purportedly philosophical nature that tries to insulate itself from criticism by claiming to be "Continental" not "analytic." But this is sheer nonsense: "Continental" does not mean philosophically incompetent and superficial, and it insults the brilliant figures in the post-Kantian traditions to invoke their important work as justifying the silliness and incompetence of some law professors [here I am not thinking of Unger, just to be clear].)
Another general problem with the ranking is that many people are cited for work in other fields: Raz for moral theory; Waldron for political theory; Leiter for his rankings; and so forth. This is true for many professors, not just those in legal philosophy. Leiter does not correct for this, which undermines the accuracy of the rankings (relative position and who makes the cut).
Work in moral and political theory, just like my work on the epistemology of evidence law, clearly falls within the scope of the category "Law and Philosophy." But Brian is correct that for almost everyone on the list there are "noise" citations: e.g., to my ranking site (which accounts for about 1-2% of my citations), or to completely unphilosophical work, or a mere acknowledgment. Faculty were put on the lists when about 75% of their citations were to work in the specialty. If we were able to correct for all the "noise," this might affect relative positions, but as noted already, relative positions are not very meaningful.
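(For concreteness, here is a minimal sketch in Python of the inclusion rule just described. The numbers and the function are hypothetical illustrations, not the study's actual procedure or data.)

```python
# A minimal sketch (hypothetical numbers and names, not the study's actual
# code) of the inclusion rule described above: a scholar appears on a
# specialty list only when roughly 75% of his or her citations are to
# work in that specialty.

SPECIALTY_SHARE = 0.75  # assumed threshold, per the "about 75%" rule above

def qualifies(specialty_cites, total_cites):
    """True when citations to work in the specialty clear the ~75% threshold."""
    return total_cites > 0 and specialty_cites / total_cites >= SPECIALTY_SHARE

print(qualifies(800, 1000))  # True: 80% of citations are to specialty work
print(qualifies(600, 1000))  # False: too many "noise" citations outside it
```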
Our culture suffers from a ferocious ranking fetish. Leiter's citation study feeds the beast, when we should instead be starving it.
This seems a pleasantly high-minded sentiment, but in fact I think it is both silly and pernicious. First, there happens to be this news magazine, U.S. News & World Report, that produces famously unreliable rankings of law schools based on a host of largely non-academic criteria or unreliable data. We can put our heads in the sand, pretending it doesn't exist and that students have no reasonable interest in comparative metrics of university quality, or we can do something. I prefer doing something, like producing defensible comparative metrics that pertain to actual aspects of academic and professional excellence. Second, rankings, when done well, provide useful information; Richard Posner puts the point aptly:
There is a tradeoff in communications between information content and what I'll call absorption cost. Ranking does very well on the latter score--a ranking conveys an evaluation with great economy to the recipient; it gives the recipient an evaluation of multiple alternatives (in this case, alternative schools) at a glance.
As Posner goes on to note:
But a ranking's information content often is small, because a ranking does not reveal the size of the value differences between the ranks....The quality difference between number 1 and number 2, or between the top 10 and the bottom 10, may be very great, but the quality difference between number 100 and number 200 may be small, at least relative to the appearance created by such a large rank-order difference.
The information content of college rankings, as in the case of U.S. News & World Report's rankings, is particularly low because these are composite rankings. That is, different attributes are ranked, and the ranks then combined (often with weighting) to produce a final ranking. Ordinarily the weighting (even if every subordinate ranking is given the same weight) is arbitrary, which makes the final rank arbitrary. U.S. News & World Report ranks 15 separate indicators of quality to create its composite ranking of colleges.
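(Posner's two points are easy to see in miniature. The following Python sketch, with made-up schools and scores rather than U.S. News's actual data or method, shows how a composite ranking's final order flips under different, equally arbitrary weightings, while the ordinal listing conceals how close or far apart the underlying scores are.)

```python
# A toy illustration (hypothetical schools and scores): a composite ranking
# combines attribute scores with weights, and changing the arbitrary weights
# changes the final order.

def composite_rank(scores, weights):
    """Order schools by the weighted sum of their attribute scores."""
    total = lambda school: sum(w * s for w, s in zip(weights, scores[school]))
    return sorted(scores, key=total, reverse=True)

# Two attributes per school, e.g. (reputation, selectivity), each on a 0-10 scale.
scores = {"School A": (9.0, 6.0), "School B": (7.0, 9.0)}

print(composite_rank(scores, weights=(0.75, 0.25)))  # ['School A', 'School B']
print(composite_rank(scores, weights=(0.25, 0.75)))  # ['School B', 'School A']
# Either weighting yields a clean 1-2 ordinal list, concealing whether the
# underlying weighted totals (8.25 vs. 7.5, or 6.75 vs. 8.5) are close or far apart.
```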
The rankings I have produced, including this one, avoid these defects.
Let me conclude by quoting a commenter from another blog, who well expressed my view about most of the criticisms of the ranking data I compile:
The sorts of criticisms noted...can be taken in two ways--one reasonable and the other stupid. The reasonable way is as either noting things that might be improved on in the future or else as noting things that should lead smart consumers of the rankings to ask more questions or otherwise serve as caveats on using the rankings. I'm sure that Leiter has no objections to such remarks. The stupid way to mean them would be to believe that there could be a perfect ranking system, one that combines all desirable elements and has no undesirable ones. It's hard to tell here which way the remarks are meant so I'll assume it's the good way. Many critics of rankings, however, seem pretty clearly to mean the stupid thing....
Finally, the idea that rankings of schools in general is bad seems silly to me. Unless one thinks there is no significant difference between the schools (an unlikely proposition) then rankings can be useful to students. They are not perfect but of course anyone who uses them in a stupid way probably ought not go to law school....
https://leiterlawschool.typepad.com/leiter/2007/11/once-more-int-1.html