May 17, 2016
News release. I hope this works out (being a big SSRN user myself). Elsevier, alas, has a terrible reputation in various academic communities.
UPDATE: For some concerns, see this post. I'm opening this for comments from readers, in law or other fields.
May 11, 2016
Sarah Lawsky's entry-level hiring report for 2015-16--plus the percentage of successful job seekers from each school
Professor Lawsky (currently UC Irvine, moving this fall to Northwestern) has produced her annual, informative report on rookie hiring this year. As she notes, it reflects only those who accepted tenure-track jobs, not tenure-track offers. (This matters for Chicago this year, since two alumni turned down tenure-track offers for personal reasons; as I noted earlier, 75% of our JD and LLM candidates on the market received tenure-track offers.)
Here are the statistics, based on the percentage of JD, LLM, and SJD (or Law PhD) job seekers from each school who accepted a tenure-track position this year. I excluded clinical and LRW jobs, since that market operates differently from the market for "doctrinal" faculty. There were 80 of the latter positions, as I had estimated--a 20% uptick from recent years, but still only about half the pre-recession numbers. Only schools that placed at least two candidates and had at least nine job seekers* are listed:
1. University of Chicago (58%: 7 of 12)
2. Yale University (50%: 21 of 42)
3. Stanford University (42%: 8 of 19)
4. Columbia University (29%: 6 of 21)
5. Harvard University (27%: 12 of 45)
6. New York University (24%: 7 of 29)
7. University of Michigan, Ann Arbor (22%: 2 of 9)
8. University of California, Berkeley (19%: 3 of 16)
9. University of Virginia (17%: 2 of 12)
UCLA had just five job seekers, but two (40%) got tenure-track jobs.
*I used 9 rather than 10 as the cut-off, since Michigan was just under ten but still had enough candidates to make the figure somewhat meaningful.
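For readers who want to check the arithmetic, the ranking above is simply accepted positions divided by job seekers, rounded to the nearest percent. A minimal sketch using only the figures quoted above (school names abbreviated for convenience):

```python
# Placement figures from the post: (tenure-track acceptances, job seekers).
schools = {
    "Chicago": (7, 12),
    "Yale": (21, 42),
    "Stanford": (8, 19),
    "Columbia": (6, 21),
    "Harvard": (12, 45),
    "NYU": (7, 29),
    "Michigan": (2, 9),
    "Berkeley": (3, 16),
    "Virginia": (2, 12),
}

# Placement rate per school, rounded to the nearest whole percent.
rates = {s: round(100 * placed / seekers) for s, (placed, seekers) in schools.items()}

# Rank schools from highest to lowest placement rate.
ranking = sorted(rates, key=rates.get, reverse=True)

print(ranking[0], rates[ranking[0]])  # → Chicago 58
```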
May 09, 2016
Briefly: the University of Arizona decided to admit some students using their GRE scores, rather than the LSAT; LSAC, protecting its LSAT monopoly, threatened to oust Arizona from the LSAC system (which includes the application system); nearly 150 law deans protested (rightly); LSAC is backing down, at least for now. (Blog Emperor Caron, to whom I link, has links at the end of his post to other items about LSAC's bad behavior.)
May 08, 2016
At SSRN; the abstract:
This essay discusses a lengthy review by Professor Michael McConnell of Stanford Law School, in the Yale Law Journal, of my 2013 book WHY TOLERATE RELIGION? (Princeton University Press). I identify two important objections that Prof. McConnell raises, but also identify eight different mistakes or misunderstandings that mar other parts of the review. I conclude by taking Prof. McConnell to task for several rhetorical cheap shots that, together with the other errors, suggest that his essay was more a partisan brief than a scholarly evaluation of the arguments. Most surprisingly, the fact that Professor McConnell, in his lengthy review, never actually responds to my book's central thesis--namely, that the inequality between religious and non-religious claims of conscience is not morally defensible--suggests that there may really be no serious argument on the other side.
May 05, 2016
Lawyers traditionally bill a specified hourly rate for the time they spend working on a case. This ideally incentivizes lawyers to work hard and improve outcomes for their clients, and it provides clients transparency with respect to lawyer effort.
However, an hourly rate can reduce the predictability of costs for clients. Some clients worry that hourly rates might encourage inefficient over-work. As a result, some have shifted toward fixed-fee arrangements for their legal services, in which lawyers are paid a flat fee for completion of a task, regardless of how much time it takes to complete.
Preliminary results from empirical research that will be presented at this year's American Law & Economics Association Conference suggest that a fixed-fee approach to compensating lawyers reduces lawyers' efforts to assist clients and leads to worse outcomes for clients.
Two separate studies by two groups of researchers using similar research designs with different data sets both come to substantially the same conclusions. (Benjamin Schwall, High-Powered Attorney Incentives: A Look at the New Indigent Defense System in South Carolina and Amanda Y. Agan, Matthew Freedman & Emily Owens, Counsel Quality and Client Match Effects).
One potential obstacle in assessing the effects of different billing practices is reverse causation. Better lawyers may normally be able to bill by the hour because they are better and have more power to negotiate, not because billing by the hour makes them better.
The studies control for differences in lawyer quality by looking at the same lawyers (lawyer fixed effects) sometimes as court-appointed attorneys paid a flat fee and sometimes as attorneys billing by the hour. Schwall's paper exploits changes in how South Carolina compensates its public defenders, while Agan, Freedman & Owens focus on random assignment of criminal defense counsel in Texas. The studies also attempt to control for differences in the type of case and defendant characteristics. The research designs for causal inference appear to be rigorous, and the results seem intuitive and plausible.
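The fixed-effects logic can be illustrated with a toy example (the numbers below are invented for illustration, not drawn from either paper): because each lawyer is observed under both fee regimes, differencing outcomes within lawyer removes time-invariant lawyer quality, leaving only the effect of the billing regime.

```python
from collections import defaultdict

# Invented data: (lawyer, billed hourly?, client outcome on an arbitrary scale).
# Each lawyer's baseline quality differs, but each appears under both regimes.
cases = [
    ("A", 1, 10.0), ("A", 0, 8.0),
    ("B", 1, 6.0),  ("B", 0, 4.0),
    ("C", 1, 12.0), ("C", 0, 10.0),
]

# Group observations by lawyer.
by_lawyer = defaultdict(list)
for lawyer, hourly, outcome in cases:
    by_lawyer[lawyer].append((hourly, outcome))

# Within each lawyer, compare average outcomes under hourly vs. flat-fee billing;
# the lawyer's own quality cancels out of this difference.
diffs = []
for obs in by_lawyer.values():
    hourly_out = [o for h, o in obs if h == 1]
    flat_out = [o for h, o in obs if h == 0]
    diffs.append(sum(hourly_out) / len(hourly_out) - sum(flat_out) / len(flat_out))

effect = sum(diffs) / len(diffs)
print(effect)  # → 2.0: the within-lawyer hourly-billing effect in this toy data
```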
While the context of these studies is the criminal justice system, it would be surprising if the conclusions did not also hold true in civil litigation or in transactional practice. A lawyer on a fixed fee is likely to be more willing to concede important points to bring a case or transaction to a speedy conclusion than one who can bill by the hour and be compensated for his or her extra efforts. Sophisticated clients may be better able to monitor their attorneys than indigent defendants and criminal courts are, but clients probably cannot eliminate agency costs (if they could, an hourly rate would make at least as much sense as a fixed fee).
Assuming the preliminary results of these studies hold, the incentive problems created by fixed-fee arrangements may be an opportunity for shrewd businesspeople or plaintiffs' lawyers to target counterparties or defendants. A businessperson who pays his own lawyers by the hour to negotiate against lawyers on a fixed fee could be rewarded with contracts containing lopsided terms in his favor. Plaintiffs' lawyers may similarly expect civil defense lawyers on fixed-fee arrangements to advocate a swift settlement on terms relatively favorable to plaintiffs.
Lawyers are likely to know which clients use fixed fee arrangements because such clients often have an RFP process in which law firms bid for their work.
May 04, 2016
Interesting chart, though note two things about the data: first, it includes both district and circuit court placement, without distinguishing between them; and second, it includes only clerkships secured before graduation from law school. With the current turmoil in the clerkship market, securing federal clerkships after graduation is increasingly common. One thing that leaps out is that being a regional powerhouse is advantageous for clerkship placement.
April 30, 2016
New research from Dan Schwarcz and Dion Farganis at Minnesota argues that providing students with practice problems and exercises that are similar to final exams and giving individual feedback prior to the final examination can help improve grades for first year law students.
Schwarcz and Farganis tracked the performance of first year students who were randomly assigned to sections, and as a result took courses with professors who either provided exercises and individual feedback prior to the final examination, or who did not provide feedback.
When the students who studied under feedback professors and the students who studied under no-feedback professors took a separate required class together, the feedback students received higher grades after controlling for several factors that predict grades, such as LSAT scores, undergraduate GPA, gender, race, and country of birth. The increase in grades appears to be larger for students toward the bottom half of the distribution. The paper also attempts to control for variation in instructor ability using student evaluations of teacher clarity.
It’s an interesting paper, and part of a welcome trend toward assessing proposed pedagogical reform through quasi-experimental methods.
The interpretation of these results raises a number of questions which I hope the authors will address more thoroughly as they revise the paper and in future research.
For example, are the differences due to instructor effects rather than feedback effects? Students are randomly assigned to instructors who happen to voluntarily give pre-final exam feedback. These might be instructors who are more conscientious, dedicated, or skilled and who also happen to give pre-exam feedback. Requiring other instructors to give pre-exam feedback—or having the same instructors provide no pre-exam feedback—might not affect student performance.
Controlling for instructor ability based on teaching evaluations is not entirely convincing, even if students are ostensibly evaluating teacher clarity. There is not very strong evidence that teaching evaluations reflect how much students learn. An easier instructor who covers less substance might receive higher teaching evaluations across the board than a rigorous instructor who does more to prepare students for practice. Teaching evaluations might reflect friendliness, liveliness, attractiveness, or other factors that do not actually affect student learning outcomes but that have consumption value for students. Indeed, high-feedback professors might receive lower teaching evaluations for the same quality of teaching, because they might make students work harder and because they might provide negative feedback to some students, leading those students to retaliate on teaching evaluations.
These issues could be addressed in future research by asking the same instructor to teach two sections of the same class in different ways and measuring both long term student outcomes and teaching evaluations.
Another question is: are students simply learning how to take law school exams? Or are they actually learning the material better in a way that will provide long-term benefits, either in bar passage rates or in job performance? At the moment, the data is not sufficient to know one way or the other.
A final question is how much providing individualized feedback will cost in faculty time, and whether the putative benefits justify the costs.
It’s a great start, and I look forward to more work from these authors, and from others, using quasi-experimental designs to investigate pedagogical variations.