Someone named Richard Neumann, who teaches civil procedure and legal writing at Hofstra, thought to cc: me on an e-mail he sent to a listserv for, I believe, clinical law professors. He was "reacting" to this. He wrote:
The methodology makes this study worthless. Not only does it exclude about 70% of law school faculties, but at the 30% that are included, it excludes some of those most brilliant people I know. For example, look at Leiter's list of faculty at NYU and several other schools, purged of clinical and legal writing faculty. Leiter's study deserves ridicule, not only for its values, but also for its sloppiness. In a rigorous social science department, it would be laughed at as amateurish because it is unlikely to produce an accurate picture of what it purports to study and because it reflects the researcher's assumptions rather than an open-ended inquiry.
Since the original posting had not described the methodology (though it will be similar to what Eisenberg & Wells and I have used in the past), this irrational outburst was a bit, shall we say, surprising. In an effort to rank the top 35-40 faculties (which I explicitly noted was the goal) by scholarly impact, there is no reason to study all law faculties, unless one really had no idea which faculties stood a chance of being in the top 35-40. But since, as noted, we've done these studies before (and with many different mixes of faculties), we have a fairly good, if rough, idea of which 50 faculties or so have a good chance of ending up in the top 35-40, though we don't know in what order, of course. (A colleague at DePaul made the good suggestion via e-mail yesterday that I post the methodology, so that schools might undertake self-studies, which would serve as a helpful corrective to omissions; I will do so tomorrow. And, as I also noted yesterday, we may yet add faculties to the list of 49.)
In any case, I don't think Professor Neumann's little outburst has much to do with his concern with social science methodology, as opposed to my failure to appreciate "some of those most brilliant people" he knows, namely, "clinical and legal writing faculty," who are excluded from the faculty lists. Of course, the proposed study was not aimed at evaluating the "brilliance" of anyone; it was aimed at evaluating, in the first instance, scholarly impact as measured by citations in the legal academic literature. Since clinical and legal writing faculty do not, typically, have the same obligations to produce scholarship that tenure-stream academic faculty have, it seemed unfair in a study of per capita impact to include those faculty and then "evaluate" them by reference to a criterion that isn't always apt for what they do. (I think, for example, that this achievement by our capital punishment clinic at Texas is hugely impressive, but I have no reason to think a per capita citation study would do anything to reflect the excellence of our clinical faculty in this area.) Of course, there are clinical and legal writing faculty who produce scholarship; that is, quite obviously I would have thought, not the point. The point is that, at most schools, it is neither their primary duty nor the primary measure of their professional excellence. A study of the quality and impact of clinical and legal writing faculty would no doubt be a separate and worthwhile undertaking; perhaps Professor Neumann will undertake to enlighten us in this regard with his distinguished command of social scientific methods.