Saturday, September 15, 2012
A colleague elsewhere writes:
The results looked odd to me, so I checked a few of their reported figures; the work appears to be very sloppy.
You mention on your webpage that when you generated your citation statistics, you searched the JLR database using the string “first /2 last” and then audited a subsample for false positives. I believe Yoo and Phillips failed to perform this audit. Their appendix lists Kathryn Judge as having 122 citations in her first year. If you search for “Kathryn /2 Judge” you get 124 hits in the JLR database, but only about 35 are true citations. Their results for Michelle Wilde Anderson and Michael Gilbert appear to have come from searching for “Michelle /2 Anderson” and “Michael /2 Gilbert,” which generate mostly false positives in both cases. Oskar Liivak is #3 on the list because Web of Science lists over 400 citations from physics articles he wrote before he went to law school. Even if one thinks that physics citations might be relevant for assessing the quality of a law professor, it certainly doesn’t make sense to divide his total citation count by the number of years he has been a *law* professor.
This also explains why Katherine Strandburg is #10 on the list of most cited professors. She has a total of 389 hits in JLR (not all of which are citations) and almost 1700 citations from physics publications she wrote before she went to law school. This total is once again divided by the number of years she has been a law professor.
Obviously, I don’t mean to disparage these particular professors. The fact that Yoo and Phillips inflated their citation measures doesn’t say anything about the actual quality of their work. But these errors are enough to convince me that Yoo and Phillips aren’t even measuring citations correctly, let alone quality.
We had noted earlier the risk that Web of Science counts would not necessarily reflect impact on legal scholarship, but these cases are even more extreme than I had imagined. The use of Web of Science also explains how economist Wesley Cohen at Duke (who isn't even a member of the core law faculty there!) fares so well in the Phillips & Yoo study, even though, I imagine, most law faculty have never heard of him. If the authors really didn't correct for false positives, that is also a rather serious error. Hopefully they will correct these and other mistakes before long. I still think this approach has virtues, but it needs to be carried out correctly!
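The two measurement problems described above reduce to simple arithmetic. A minimal Python sketch (the hit counts are the ones quoted above; the year count is a hypothetical assumption for illustration only):

```python
# Problem 1: raw search hits overstate citations unless audited for
# false positives. Per the post, "Kathryn /2 Judge" returns 124 JLR
# hits, of which only about 35 are true citations.
hits = 124
true_positives = 35  # found by manually auditing the hits
estimated_true = hits * true_positives / hits
print(round(estimated_true))  # ~35, far below the reported 122

# Problem 2: dividing a combined count (law cites + pre-law-school
# physics cites) by years as a law professor inflates the per-year
# rate. The 389 JLR hits and ~1700 physics citations are from the
# post; the year count below is a hypothetical assumption.
law_hits = 389
physics_cites = 1700
years_as_law_prof = 10  # hypothetical
inflated_rate = (law_hits + physics_cites) / years_as_law_prof
law_only_rate = law_hits / years_as_law_prof
print(inflated_rate, law_only_rate)  # 208.9 vs. 38.9 per year
```

The point of the sketch is only that the denominator and the audit each change the ranking-relevant number by a factor of several, so neither step can be skipped.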
UPDATE: Katherine Strandburg (NYU) writes:
I've been traveling without consistent Internet access, and the Phillips-Yoo citation paper just came to my attention when a colleague pointed it out to me. I just sent the authors an email pointing out that, based on a quick look at the paper, I believe their methodology is fishy. As I told them, "the problem is that when you count all publications, in my case that includes my physics publications. Cites to those are probably not too relevant to my impact as a legal scholar. I don't know how many such cites there are, but those papers have been around for a while. I'm also not sure how you figure "per year". In fact, I can't actually think of any sensible way to do it in my case. It wouldn't make sense to count only my years as a law professor, since my physics papers have been collecting citations (presumably -- I don't really know whether anyone still cites them) since long before then. But it also doesn't seem to make much sense to count all the years since my first physics publication, since there were about ten years, while I was going to law school and practicing law, when I didn't do any research at all. All in all, unless I am misunderstanding something, the method doesn't seem to make much sense for someone in my situation (which, admittedly, is a rather weird situation)."
I now see on your blog that someone else has made a similar critique. Just wanted to say that I agree (though it's nice to see how many cites my physics papers have received).