Thursday, October 4, 2012
Wednesday, October 3, 2012
Alas, yesterday's poll got linked from a blog with a less academic audience, so I've decided to close it. In any case, here are the results, with over 300 votes:
How many currently accredited ABA law schools (there are about
200) do you think will close over the next ten years? (Assume that there are no
changes to federally guaranteed student loans and that there is a modest
improvement in the job market for lawyers.)
A few observations of my own, and then I'll open comments. That 15% think no law schools at all will close may be wishful thinking, but perhaps there is a sound explanation for thinking it correct. My own opinion was that we'll see several law schools close during the next decade, but probably not more than ten--and that was the majority view among readers by a wide margin. Most vulnerable are going to be free-standing law schools that are relatively young. Relatively young law schools that are part of universities in vulnerable financial shape are also likely candidates. 25% of respondents expected more serious carnage, on the order of 5% or much more of current law schools closing. I would agree with the prophets of doom at least to this extent: I expect more than 5% of current law schools, and probably more like 30-50%, will contract in various ways over the next decade: they will admit fewer students (we're already seeing that), and will shrink their faculties. That, of course, would be a sensible response to the economic climate generally, and to the market for lawyers in particular.
Of course, all of this is predicated on two assumptions: a modest economic rebound over the next decade and, even more importantly, continued federal guarantees of student loans, which supply the operating budget of the majority of law schools. A change in student loans--for example, shifting lending back to the private sector--would have far more disruptive effects, since private lenders are, one suspects, more likely to do the due diligence on probable employment and salary outcomes that some students are not presently doing (and they are also likely to set the bar higher for what would make the loans risk-worthy). In a world with only private lending for higher education, I would expect the number of law schools that close over the next decade to be considerably higher. Another economic collapse, or an extended period of economic stagnation, would also push the number of closures higher.
Or so, at any rate, it seems to me. Thoughts from readers? Signed comments only: full name in the signature line, plus valid e-mail address.
Tuesday, October 2, 2012
There are about 200 of them, and the belief certainly seems widespread in the bowels of cyber-space that half of them are destined to disappear, or something like that, due to the cost of legal education relative to the actual professional outcomes in the current market (indeed, in some cases, even before the current economic crisis). Of course, we've already seen some law schools reduce enrollments, and others withdraw from the market for new faculty--so 'shrinkage' is already happening. But will accredited law schools actually close? Assume that there are no changes to the current student loan structure (i.e., the federal government still backs them), and assume that there is some improvement in the legal market in the years ahead. How many of the 200 accredited law schools do you think will close their doors over the next decade? UPDATE: So with 140 votes cast, here's the breakdown: 12% think no law schools will close in the next decade; 68% expect 1-10 to close; 14% expect 11-25 to close; 4% think 26-50 will close; and about 1% think more than 50 will close. So an overwhelming majority of respondents so far, 88%, expect at least one law school to close, and nearly one in five expect a non-trivial number to close, i.e., 5% or more.
ANOTHER UPDATE: Several hours later, with 209 votes cast, here's the breakdown:
POLL IS CLOSED and the results and discussion are here.
Friday, September 28, 2012
Ohio State's Davidoff in The New York Times; the key paragraph:
The problem of law school is one that is ubiquitous to higher education — the current model is inherently expensive but even today, lower-priced alternatives don’t seem to meet the standards or be desired by many students.
Thursday, September 27, 2012
When faculty groups effectively sue the school, there's trouble. And most departments apparently oppose President Sexton's development plans. (The Law School and Philosophy Department, two big beneficiaries of the Sexton era, are not signatories.)
(Thanks to Vicky Brandt for the pointer.)
Wednesday, September 26, 2012
Monday, September 24, 2012
Mr. Phillips writes, in reply to the criticisms noted last week:
Prof. Leiter’s colleague’s concerns are about false positives, which would inflate scores. We find this the lesser evil, and thus diverged from the Leiter method by not using his sampling technique (which we failed to make clear in our methods section, and have since corrected) because we find the technique problematic from a sampling methodology and measurement theory perspective. Leiter looks at the first and last ten citations, counts up the number of “legitimate” ones, and multiplies that percentage by the total number of cites to get his initial raw value. Thus, someone with 1000 “cites” in Westlaw’s JLR, who had 16 legitimate cites of the first and last 10, would have a raw value of 800. This has three major problems. First, Leiter is using a non-random sample to represent the underlying population. That is a statistical no-no unless there is some kind of sophisticated statistical “correction.” Second, even if the sample were randomly drawn, it is too small to make useful inferences. The hypothetical professor we listed above (the average number of cites a professor had in our study was 976), with a random sample of 20 (with 16 legitimate), and 1000 total cites, would have a 95% confidence interval of 626-974, meaning the “true” number of legitimate cites is most likely somewhere in that range—which is not very useful. Finally, the Leiter method makes it more difficult to compare scholars since some professors’ scores will be biased high and some biased low due to the non-random nature of the sampling, negating the value of the Leiter scores as a comparative metric, which is the only real value such scores have.
Our methodology just counts everything in the JLR database, biasing the scores higher than the “truth”, but treating everyone the same—equality of inflation—so that comparisons can be more easily made. Our method is also very easily reproduced, as Prof. Leiter’s colleague demonstrated. And we are not claiming our method (or any citation-based measure) is a measure of quality, but of relevance (and given that many citations are put in by student editors, citation studies are a long way from perfect). As to Prof. Strandburg, her situation is so rare—having highly cited works in an unrelated field, then completely shifting career trajectory and turning to the law—that the one or two people that are like her can be easily corrected when brought to our attention (as we did with her score). That is a lesser evil than completely excluding relevant work in peer-reviewed journals, in our opinion. And as for Prof. Cohen, while we have received much feedback wondering who he is and why he is included, we have also received feedback that “he should be included [because] he is well known in the IP field by those who read economics as well as law journals…[and] has done path breaking empirical research in IP for many years.” We appreciate the numerous feedback we have been receiving as we seek to refine our measure and paper.
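For readers who want to see where Mr. Phillips's interval comes from, it can be reproduced with the standard normal approximation for a binomial proportion. The sketch below is mine (the function name is made up), not code from either study:

```python
import math

def legit_cite_interval(total_cites, sample_n, sample_legit, z=1.96):
    """Approximate 95% confidence interval for the number of legitimate
    citations, scaled up from a checked sample of sample_n hits."""
    p = sample_legit / sample_n                     # observed legitimate fraction
    se = math.sqrt(p * (1 - p) / sample_n)          # standard error of the proportion
    lo = max(0.0, p - z * se) * total_cites
    hi = min(1.0, p + z * se) * total_cites
    return round(lo), round(hi)

# The hypothetical from the reply: 1,000 raw JLR hits,
# 16 of the 20 checked hits are legitimate.
print(legit_cite_interval(1000, 20, 16))  # (625, 975)
```

The normal approximation gives 625-975, within rounding of the 626-974 range quoted above; a different interval method (e.g., exact binomial) would shift the endpoints slightly.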
I have to say this strikes me as unpersuasive. A few quick points: (1) ideally, random sampling for false positives would have been best, but in all the years of doing it non-randomly, no one has ever come forward with a single case where this method distorted the results; (2) by contrast, it is both a "statistical" and intellectual "no-no" to fail to correct for huge rates of false positives, since such rates are not evenly distributed across all names for the obvious reasons (e.g., someone with the last name "Judge"), and several cases of large false positives have now been identified; (3) in any case, it's an empirical, not statistical, question which method yields the most reliable outcomes, but I'm betting on the approach that I and now Sisk have used for quite some time; (4) using Web of Science was a good addition to the mix, but there clearly need to be some sensible protocols in place to screen out citations utterly irrelevant to legal scholarship and also more sensible protocols about who counts as a member of a law faculty (tenure stream status in law was our criterion, which would eliminate a lot of the strange inclusions in the Phillips & Yoo lists). James Heckman and Gary Becker are now cross-appointed to the law faculty at Chicago, and they crush Cohen (and almost everyone else!) on Web of Science, but it would be bizarre to think that should be decisive in a ranking of law faculties!
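The stakes of point (2) are concrete: for a scholar whose surname doubles as a common legal term, skipping the false-positive correction inflates the raw count enormously. A minimal sketch of the first-and-last-ten style correction, with a hypothetical scholar and made-up numbers:

```python
def corrected_cites(total_hits, checked_hits):
    """Scale the raw database hit count by the fraction of the
    manually inspected hits that are genuine citations."""
    frac = sum(checked_hits) / len(checked_hits)
    return round(total_hits * frac)

# Hypothetical scholar surnamed "Judge": 1,000 raw JLR hits, but only
# 3 of the 20 inspected hits (first ten and last ten) actually cite
# her scholarship; the rest are uses of the word "judge."
noisy_sample = [True] * 3 + [False] * 17
print(corrected_cites(1000, noisy_sample))  # 150
```

An uncorrected count would report all 1,000 hits, overstating this scholar's citations more than sixfold, which is why uneven false-positive rates undermine cross-scholar comparisons.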
Thoughts from readers about all this? Full name and valid e-mail address required.
Friday, September 21, 2012
Wednesday, September 19, 2012