September 28, 2012
"The Economics of Law School"
Ohio State's Davidoff in The New York Times; the key paragraph:
The problem of law school is one that is ubiquitous to higher education — the current model is inherently expensive but even today, lower-priced alternatives don’t seem to meet the standards or be desired by many students.
September 27, 2012
NYU in Crisis?
When faculty groups effectively sue the school, there's trouble. And most departments apparently oppose President Sexton's development plans. (The Law School and Philosophy Department, two big beneficiaries of the Sexton era, are not signatories.)
(Thanks to Vicky Brandt for the pointer.)
Nice to know there are countries where Google has to abide by the ordinary laws...without a special cyber-exception (e.g., CDA 230).
September 26, 2012
Summer Associate Hires Up Over 15% Since 2011...and the permanent offer rate remains over 96%.
September 24, 2012
Still More Thoughts on the Phillips & Yoo Citation Study
Mr. Phillips writes, in reply to the criticisms noted last week:
Prof. Leiter’s colleague’s concerns are about false positives, which would inflate scores. We find this the lesser evil, and thus diverged from the Leiter method by not using his sampling technique (which we failed to make clear in our methods section, and have since corrected) because we find the technique problematic from a sampling methodology and measurement theory perspective. Leiter looks at the first and last ten citations, counts up the number of “legitimate” ones, and multiplies that percentage by the total number of cites to get his initial raw value. Thus, someone with 1000 “cites” in Westlaw’s JLR, who had 16 legitimate cites of the first and last 10, would have a raw value of 800. This has three major problems. First, Leiter is using a non-random sample to represent the underlying population. That is a statistical no-no unless there is some kind of sophisticated statistical “correction.” Second, even if the sample were randomly drawn, it is too small to make useful inferences. The hypothetical professor we listed above (the average number of cites a professor had in our study was 976), with a random sample of 20 (with 16 legitimate), and 1000 total cites, would have a 95% confidence interval of 626-974, meaning the “true” number of legitimate cites is most likely somewhere in that range—which is not very useful. Finally, the Leiter method makes it more difficult to compare scholars since some professors’ scores will be biased high and some biased low due to the non-random nature of the sampling, negating the value of the Leiter scores as a comparative metric, which is the only real value such scores have.
Our methodology just counts everything in the JLR database, biasing the scores higher than the “truth”, but treating everyone the same—equality of inflation—so that comparisons can be more easily made. Our method is also very easily reproduced, as Prof. Leiter’s colleague demonstrated. And we are not claiming our method (or any citation-based measure) is a measure of quality, but of relevance (and given that many citations are put in by student editors, citation studies are a long way from perfect). As to Prof. Strandburg, her situation is so rare—having highly cited works in an unrelated field, then completely shifting career trajectory and turning to the law—that the one or two people that are like her can be easily corrected when brought to our attention (as we did with her score). That is a lesser evil than completely excluding relevant work in peer-reviewed journals, in our opinion. And as for Prof. Cohen, while we have received much feedback wondering who he is and why he is included, we have also received feedback that “he should be included [because] he is well known in the IP field by those who read economics as well as law journals…[and] has done path breaking empirical research in IP for many years.” We appreciate the numerous feedback we have been receiving as we seek to refine our measure and paper.
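For readers who want to check the arithmetic in Mr. Phillips's hypothetical, here is a minimal sketch (mine, in Python; not anything the authors used) of the Leiter-style raw value and a standard normal-approximation confidence interval for the sampled legitimacy rate. The authors do not say which interval formula produced their 626-974 range, so the sketch only approximately reproduces it:

```python
import math

# Hypothetical professor from the quoted example: 1000 raw Westlaw "cites",
# a sample of 20 citations inspected, 16 of them legitimate.
total_cites = 1000
sample_size = 20
legit_in_sample = 16

# Leiter-style raw value: sampled legitimacy rate scaled to the total count.
p_hat = legit_in_sample / sample_size   # 0.8
raw_value = p_hat * total_cites         # 800

# 95% normal-approximation (Wald) interval for the legitimacy rate, scaled
# back to a count of legitimate cites. This lands near the quoted 626-974;
# a Wilson or exact binomial interval would shift the endpoints slightly.
z = 1.96
se = math.sqrt(p_hat * (1 - p_hat) / sample_size)
lower = (p_hat - z * se) * total_cites  # ~625
upper = (p_hat + z * se) * total_cites  # ~975

print(raw_value, round(lower), round(upper))
```

The width of that interval (roughly 350 cites on a total of 1000) is the substance of Phillips's objection: a sample of 20 pins down the legitimate-citation count only very loosely.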
I have to say this strikes me as unpersuasive. A few quick points: (1) ideally, random sampling for false positives would have been best, but in all the years of doing it non-randomly, no one has ever come forward with a single case where this method distorted the results; (2) by contrast, it is both a "statistical" and intellectual "no-no" to fail to correct for huge rates of false positives, since such rates are not evenly distributed across all names for the obvious reasons (e.g., someone with the last name "Judge"), and several cases of large false positives have now been identified; (3) in any case, it's an empirical, not statistical, question which method yields the most reliable outcomes, but I'm betting on the approach that I and now Sisk have used for quite some time; (4) using Web of Science was a good addition to the mix, but there clearly needs to be some sensible protocols in place to screen out citations utterly irrelevant to legal scholarship and also more sensible protocols about who counts as a member of a law faculty (tenure stream status in law was our criterion, which would eliminate a lot of the strange inclusions in the Phillips & Yoo lists). James Heckman and Gary Becker are now cross-appointed to the law faculty at Chicago, and they crush Cohen (and almost everyone else!) on Web of Science, but it would be bizarre to think that should be decisive in a ranking of law faculties!
Thoughts from readers about all this? Full name and valid e-mail address required.
September 21, 2012
Judge Posner v. Justice Scalia, Once Again
Judge Posner's decisive response to Justice Scalia's intemperate and inaccurate charge that Posner "lied" is here. This is really a side-issue, though, to the main subject.
September 19, 2012
A logic lesson for an anonymous fool
A philosopher who blogs as "Spiros" (and who really hates the music of Billy Joel) takes down an anonymous commenter who was angry at him. It's very funny.
On "Joint" Appointments Prior to Tenure
The proliferation of JD/PhDs over the past generation has resulted in many junior faculty candidates facing the question: should I seek a "joint" appointment between the Law School and the cognate PhD discipline?
"Joint" appointments come in various forms, of which the two main ones are: (1) tenure-track status in two units, with two separate tenure reviews, and two separate tenure decisions; and (2) a "courtesy" or "secondary" appointment in the cognate department, with the tenure home residing in the Law School. The former is, I suppose, a fully "joint" appointment, but it is also to be avoided (perhaps even after tenure, since it is likely to increase your administrative burdens [committee work, faculty meetings etc.]). Although it's still easier, alas, to get tenure in a law school than in most academic departments, the bottom line is having two different tenure masters is a bad position to be in. (There are cases of faculty who didn't get tenure in the non-law department, but did get it in the law school, and those situations are unhappy ones all around.) On the other hand, (2) can have benefits for the faculty member (perhaps teaching in the cognate department, involvement with PhD students and the like) without any of the costs.
But a JD/PhD on the rookie law market should be careful about raising the question of courtesy appointments. Law schools understand full well that they offer better terms of employment (in teaching load, salary, and research support) than almost every academic department in the humanities and social sciences, and so a key question for them in hiring JD/PhDs is: why do you want to be in the Law School rather than in the cognate field? The answer had better turn on intellectual and pedagogical considerations. After a JD/PhD has an offer, you can raise the question of courtesy appointments (assuming they exist; not all schools have them), but if you're hired by a Law School, do understand that your primary obligations reside there.
September 18, 2012
A list of Fellowships for Aspiring Law Professors
Updated.
September 15, 2012
Phillips & Yoo Citation Study Has Some Serious Problems
A colleague elsewhere writes:
The results looked odd to me, and I checked a few of their reported results, which appear to be very sloppy.
You mention on your webpage that when you generated your citation statistics, you searched the JLR database using the string “first /2 last” and then audited a subsample for false positives. I believe Yoo and Phillips failed to perform this audit. Their appendix lists Kathryn Judge as having 122 citations in her first year. If you search for “Kathryn /2 Judge” you get 124 hits in the JLR database, but only about 35 are true citations. Their results for Michelle Wilde Anderson and Michael Gilbert appear to have come from searching for “Michelle /2 Anderson” and “Michael /2 Gilbert,” which generate mostly false positives in both cases. Oskar Liivak is #3 on the list because Web of Science lists over 400 citations from physics articles he wrote before he went to law school. Even if one thinks that physics citations might be relevant for assessing the quality of a law professor, it certainly doesn’t make sense to divide his total citation count by the number of years he has been a *law* professor.
This also explains why Katherine Strandburg is #10 on the list of most cited professors. She has a total of 389 hits in JLR (not all of which are citations) and almost 1700 citations from physics publications she wrote before she went to law school. This total is once again divided by the number of years she has been a law professor.
Obviously, I don’t mean to disparage these particular professors. The fact that Yoo and Phillips inflated their citation measures doesn’t say anything about the actual quality of their work. But these errors are enough to convince me that Yoo and Phillips aren’t even measuring citations correctly, let alone quality.
We had noted earlier the risk that Web of Science cites would not necessarily pick up citations that reflect impact on legal scholarship, but these are even more extreme cases than I had imagined. The use of Web of Science also explains how economist Wesley Cohen at Duke (who isn't even a member of the core law faculty there!) fares so well in the Phillips & Yoo study, even though, I imagine, most law faculty have never heard of him. If they really didn't correct for false positives, that is also a rather serious error. Hopefully they will correct for these and other mistakes before long. I still think there are virtues to this approach, but it does need to be carried out correctly!
UPDATE: Katherine Strandburg (NYU) writes:
I've been traveling without consistent Internet access and the Phillips-Yoo citation paper just came to my attention because it was pointed out to me by a colleague. I just sent the authors an email pointing out that, based on a quick look at the paper, I believe their methodology is fishy. As I told them, "the problem is that when you count all publications, in my case that includes my physics publications. Cites to those are probably not too relevant to my relevance as a legal scholar. I don't know how many such cites there are, but those papers have been around for awhile. I'm also not sure how you figure "per year". In fact, I can't actually think of any sensible way to do it in my case. It wouldn't make sense to count only my years as a law professor, since my physics papers have been collecting citations (presumably -- I don't really know whether anyone still cites them) since long before then. But it also doesn't seem to make much sense to count all the years since my first physics publication, since there were about ten years while I was going to law school and practicing law when I didn't do any research at all. All in all, unless I am misunderstanding something, the method doesn't seem to make much sense for someone in my situation (which, admittedly, is a rather weird situation)."
I now see on your blog that someone else has made a similar critique. Just wanted to say that I agree (though it's nice to see how many cites my physics papers have received).