May 05, 2016
Lawyers traditionally bill a specified hourly rate for the time they spend working on a case. This ideally incentivizes lawyers to work hard and improve outcomes for their clients, and it provides clients transparency with respect to lawyer effort.
However, an hourly rate can reduce the predictability of costs for clients. Some clients worry that hourly rates might encourage inefficient over-work. As a result, some have shifted toward fixed-fee arrangements for their legal services, in which lawyers are paid a flat fee for completion of a task, regardless of how much time it takes to complete.
Preliminary results from empirical research that will be presented at this year's American Law & Economics Association Conference suggest that a fixed-fee approach to compensating lawyers reduces lawyers' efforts to assist clients and leads to worse outcomes for clients.
Two separate studies by two groups of researchers using similar research designs with different data sets both come to substantially the same conclusions. (Benjamin Schwall, High-Powered Attorney Incentives: A Look at the New Indigent Defense System in South Carolina and Amanda Y. Agan, Matthew Freedman & Emily Owens, Counsel Quality and Client Match Effects).
One potential obstacle in assessing the effects of different billing practices is reverse causation. Better lawyers may normally be able to bill by the hour because they are better and have more power to negotiate, not because billing by the hour makes them better.
The studies control for differences in lawyer quality by observing the same lawyers (lawyer fixed effects) sometimes as court-appointed attorneys paid a flat fee and sometimes as attorneys billing by the hour. Schwall's paper exploits changes in how South Carolina compensates its public defenders, while Agan, Freedman & Owens focus on random assignment of criminal defense counsel in Texas. The studies also attempt to control for differences in the type of case and defendant characteristics. The research designs for causal inference appear to be rigorous, and the results seem intuitive and plausible.
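To see why lawyer fixed effects address the reverse-causation concern, consider a minimal simulation (entirely hypothetical numbers, not data from either study): better lawyers are made more likely to bill hourly, so a naive regression overstates the hourly-billing effect, while demeaning within each lawyer recovers it.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative setup: each lawyer has an unobserved "quality," and the same
# lawyer handles some matters billed hourly and some on a fixed fee.
n_lawyers, cases_per_lawyer = 200, 20
true_hourly_effect = 0.5  # assumed boost to case outcome when billing hourly

quality = rng.normal(0, 2, n_lawyers)              # unobserved lawyer quality
lawyer = np.repeat(np.arange(n_lawyers), cases_per_lawyer)

# Reverse-causation worry: better lawyers are more likely to bill hourly.
p_hourly = 1 / (1 + np.exp(-quality[lawyer]))
hourly = (rng.random(lawyer.size) < p_hourly).astype(float)

outcome = quality[lawyer] + true_hourly_effect * hourly \
    + rng.normal(0, 1, lawyer.size)

# Naive OLS conflates lawyer quality with the billing arrangement.
naive = np.polyfit(hourly, outcome, 1)[0]

# Fixed effects: demean outcome and treatment within each lawyer, so only
# within-lawyer variation in billing method identifies the effect.
def demean(x, groups):
    means = np.bincount(groups, weights=x) / np.bincount(groups)
    return x - means[groups]

y_t = demean(outcome, lawyer)
h_t = demean(hourly, lawyer)
fe = (h_t @ y_t) / (h_t @ h_t)

print(f"naive OLS slope: {naive:.2f}; fixed-effects estimate: {fe:.2f} "
      f"(true effect: {true_hourly_effect})")
```

In this sketch the naive estimate is badly inflated by lawyer quality, while the within-lawyer estimate lands near the true effect, which is the logic the studies exploit.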
While the context of these studies is the criminal justice system, it would be surprising if the conclusions did not also hold true in civil litigation or in transactional practice. A lawyer on a fixed fee is likely to be more willing to concede important points to bring a case or transaction to a speedy conclusion than one who can bill by the hour and be compensated for his or her extra efforts. Sophisticated clients may be better able to monitor their attorneys than indigent defendants and criminal courts, but clients probably cannot eliminate agency costs (if they could, an hourly rate would make at least as much sense as a fixed fee).
Assuming the preliminary results of these studies hold, the incentive problems created by fixed-fee arrangements may be an opportunity for shrewd businesspeople or plaintiffs' lawyers to target counterparties or defendants. If a businessperson pays his own lawyers by the hour to negotiate opposite lawyers on a fixed fee, the reward could be contracts with lopsided terms in his favor. Plaintiffs' lawyers may similarly expect civil defense lawyers on fixed-fee arrangements to advocate a swift settlement on terms relatively favorable to plaintiffs.
Lawyers are likely to know which clients use fixed fee arrangements because such clients often have an RFP process in which law firms bid for their work.
April 30, 2016
New research from Dan Schwarcz and Dion Farganis at Minnesota argues that providing students with practice problems and exercises that are similar to final exams and giving individual feedback prior to the final examination can help improve grades for first year law students.
Schwarcz and Farganis tracked the performance of first-year students who were randomly assigned to sections, and as a result took courses with professors who either provided exercises and individual feedback prior to the final examination or did not provide feedback.
When the students who studied under feedback professors and the students who studied under no-feedback professors took a separate required class together, the feedback students received higher grades after controlling for several factors that predict grades, such as LSAT scores, undergraduate GPA, gender, race, and country of birth. The increase in grades appears to be larger for students toward the bottom half of the distribution. The paper also attempts to control for variation in instructor ability using student evaluations of teacher clarity.
It’s an interesting paper, and part of a welcome trend toward assessing proposed pedagogical reform through quasi-experimental methods.
The interpretation of these results raises a number of questions which I hope the authors will address more thoroughly as they revise the paper and in future research.
For example, are the differences due to instructor effects rather than feedback effects? Students are randomly assigned to instructors who happen to voluntarily give pre-final exam feedback. These might be instructors who are more conscientious, dedicated, or skilled and who also happen to give pre-exam feedback. Requiring other instructors to give pre-exam feedback—or having the same instructors provide no pre-exam feedback—might not affect student performance.
Controlling for instructor ability based on teaching evaluations is not entirely convincing, even if students are ostensibly evaluating teacher clarity. There is not very strong evidence that teaching evaluations reflect how much students learn. An easier instructor who covers less substance might receive higher teaching evaluations across the board than a rigorous instructor who does more to prepare students for practice. Teaching evaluations might reflect friendliness or liveliness or attractiveness or factors that do not actually affect student learning outcomes but that have consumption value for students. Indeed, high feedback professors might receive lower teaching evaluations for the same quality of teaching because they might make students work harder and because they might provide negative feedback to some students, leading students to retaliate on teaching evaluations.
These issues could be addressed in future research by asking the same instructor to teach two sections of the same class in different ways and measuring both long term student outcomes and teaching evaluations.
Another question is: are students simply learning how to take law school exams? Or are they actually learning the material better in a way that will provide long-term benefits, either in bar passage rates or in job performance? At the moment, the data is not sufficient to know one way or the other.
A final question is how much providing individualized feedback will cost in faculty time, and whether the putative benefits justify the costs.
It’s a great start, and I look forward to more work from these authors, and from others, using quasi-experimental designs to investigate pedagogical variations.
April 22, 2016
Professor Paula Franzese of Seton Hall Law School is something of a patron saint of law students. Widely known for her upbeat energy, kindness, and tendency to break into song for the sake of helping students remember a particularly challenging point of law, Paula has literally helped hundreds of thousands of lawyers pass the bar exam through her videotaped Property lectures for BarBri.
Paula is such a gifted teacher that she won teacher of the year almost every year until Seton Hall implemented a rule to give others a chance: no professor can win teacher of the year more than two years in a row. Since the rule was implemented, Paula wins every other year. She's also incredibly generous, leading seminars and workshops to help her colleagues improve their teaching.
Paula recently wrote a book encouraging law students to have a productive, upbeat, happy, and grateful outlook on life (A Short & Happy Guide to Being a Law School Student).
Paula’s well-intentioned book has rather bizarrely been attacked by scambloggers as “dehumanizing”, “vain”, “untrustworthy” and “insidious.” The scambloggers are not happy people, and reacted as if burned by Paula’s sunshine. They worry that Paula’s thesis implies that “their failure must be due to their unwillingness to think happy and thankful thoughts.”
Happiness and success tend to go together. Some people assume that success leads to happiness. But an increasing number of psychological studies suggest that happiness causes success. (here and here) Happiness often precedes and predicts success, and happiness appears to be strongly influenced by genetic factors.
Leaving aside the question of how much people can change their baseline level of happiness, being happier—or at least outwardly appearing to be happier—probably does contribute to success, and being unhappy probably is a professional and personal liability.
People like working with happy people. They don’t like working with people who are unhappy or unpleasant. This does not mean that people who are unhappy are to blame for their unhappiness, any more than people who are born with disabilities are to blame for being deaf or blind.
But it does raise serious questions about whether studies of law graduates’ levels of happiness are measuring causation or selection. We would not assume that differences between the height of law graduates and the rest of the population were caused by law school attendance, and we probably should not assume that law school affects happiness very much either.
March 16, 2016
Statistician and data visualization expert Hans Rosling recently took the media to task for misleading readers and viewers using unrepresentative anecdotes and ignoring contradictory data.
Rosling says "You can't trust the news outlets if you want to understand the world. You have to be educated and then research basic facts."
While journalists often depict the developing world as full of "wars, conflicts, chaos," Rosling says "That is wrong. [The press] is completely wrong. . . . You can choose to only show my shoe, which is very ugly, but that is only a small part of me. . . . News outlets only care about a small part but [they] call it the world."
Rosling complains that the slow but steady march of progress is not considered news.
Rosling is famous for his data visualizations, especially this video briefly illustrating 200 years of global progress toward health and prosperity. It's optimism for the data-driven set (and is a big hit in my business law classes).
March 05, 2016
That’s the question Frank McIntyre and I try to answer in The Value of a Law Degree by College Major. Economics seems to be the “best” major for aspiring law students, with both high base earnings with a bachelor’s degree and a large boost to earnings with a law degree. History and philosophy/religion get a similarly large boost from a law degree but start at a lower undergraduate base and, among those with law degrees, typically end up earning substantially less than economics majors.
The abstract and a figure are below:
We estimate the increase in earnings from a law degree relative to a bachelor’s degree for graduates who majored in different fields in college. Students with humanities and social sciences majors comprise approximately 47 percent of law degree holders compared to 23 percent of terminal bachelor’s. Law degree earnings premiums are highest for humanities and social sciences majors and lowest for STEM majors. On the other hand, among those with law degrees, overall earnings are highest for STEM and Business Majors. This effect is fairly small at the low end of the earnings distribution, but quite large at the top end. The median annual law degree earnings premium ranges from approximately $29,000 for STEM majors to $45,000 for humanities majors.
These results raise an intriguing question: should law schools offer larger scholarships to those whose majors suggest they will likely benefit less from their law degrees? Conversely, should law schools charge more to those who will likely benefit the most?
Figure 3: ACS Mean Earnings for Professional Degree Holders (Narrow) by Selected Field of Study* (2014 USD Thousands)
* Includes degree fields with more than 700 professional degree holders in sample.
COMMENT FROM BRIAN LEITER: The lumping of philosophy majors together with religion invariably pulls down the performance of philosophy majors!
February 09, 2016
The latest unscientific fad among law school watchers is comparing job openings projections for lawyers from the Bureau of Labor Statistics* with the number of students expected to graduate from law school. Frank McIntyre and I tested this method of predicting earnings premiums--the financial benefits of a law degree--using all of the available historical projections from the BLS going back decades. This method of prediction does not perform any better than random chance.** Labor economists--including those working at the BLS--have explicitly stated that BLS projections should not be used to try to value particular courses of study. Instead, higher education should be valued based on earnings premiums.
Bloggers who report changes in BLS projections and compare projected job openings to the number of students entering law school might as well advise prospective law students to make important life decisions by flipping a coin.
Many law graduates won't practice law. Many engineering graduates won't become engineers. Many students in every field end up working jobs that are not directly related to what they studied. They still typically benefit financially from their degrees by using them in other occupations where additional education boosts earnings and likelihood of employment.
And if one's goal really is to practice law even if practicing law is not more lucrative than other opportunities opened by a law degree, then studying law may not be a guarantee, but it still dramatically improves the odds.
* BLS job opening projections--which are essentially worthless as predictors for higher education--should not be confused with BLS occupational employment statistics, which provide useful data about earnings and employment in many occupations, including for lawyers.
** There isn’t even strong evidence that changes in the ratio between BLS projected lawyer job openings and law class size predict changes in the percent of law graduates who will practice law, although the estimates are too noisy to be definitive. Historically, the ratio of BLS projected openings to law graduates (or first-year enrollments 3 years prior) has systematically under-predicted by a wide margin the proportion of law graduates practicing law shortly after graduation, even though a large minority of law graduates clearly do not practice law.
February 02, 2016
Smaller or Larger Law Class Sizes Don’t Predict Changes in Financial Benefits of Law School (Michael Simkovic)
One of the most surprising and controversial findings from Timing Law School was that changes in law school graduating class size do not predict changes in the boost to earnings from a law degree.* Many law professors, administrators, and critics believe that shrinking the supply of law graduates must surely improve their outcomes, because if supply goes down, then price—that is, earnings of law graduates—should go up.
In a new version of Timing Law School, Frank McIntyre and I explore our counterintuitive results more thoroughly. (The new analysis and discussion appear primarily in Part III.C, “Interpreting zero correlation for cohort size and earnings premium,” on pages 18-22 of the Feb. 1, 2016 draft and in Table 10 on the final page.)
Our finding of no relationship between class size and earnings premiums was robust to many alternative definitions of cohort size that incorporated changes in the number of law graduates over several years. This raises questions about whether our findings are merely predictive, or should be given a causal interpretation.
We considered several interpretations that could reconcile our results with a supply and demand model and with the data. The most plausible interpretation seemed to be that when law class sizes change, law graduates switch between practicing law and other employment opportunities that are equally financially rewarding. While changes in the number of law graduates might have an impact on the market for entry-level lawyers, such changes are much less likely to have a discernible impact on the much larger market for highly educated labor.
Although law graduates who practice law on average earn more than those who do not, at the margin, those who switch between practicing law and other options seem to have law and non-law opportunities that are similarly lucrative. We found that the proportion of recent law graduates who practice law does decline as class size increases, but earnings premiums remain stable. This is consistent with the broader literature on underemployment, and supports the view of law school as a flexible degree that provides benefits that extend to many occupations. (See here and here).
A related explanation is that relatively recent law school graduates may be reasonably good substitutes for each other for several years, blunting the impact of changes in class size on earnings.
Interpretations that depend on law students and law schools perfectly adjusting class size in anticipation of demand for law graduates or future unemployment seem implausible given the unrealistic degree of foresight this would require. Offsetting changes in the quality of students entering law school—an explanation we proposed in an earlier version of the paper—seems able to explain at most a very small supply effect. Although credentials of entering classes appear to decline with class sizes, these changes in credentials are relatively small even amid dramatic changes in class size, and probably do not predict very large changes in earnings. Imprecision in our estimates is another possibility, although for graduates with more than a few years of experience, our estimates are fairly precise.
Even if there are effects of law class size on law earnings premiums, they are probably not very large and not very long lasting.
* The finding is consistent with mixed results for cohort size in other econometric studies of cohort effects, but nevertheless was contrary to many readers’ intuitions.
December 02, 2015
Developer of Law School Admission Test (LSAT) Disputes Advocacy Group's Bar Exam Claims (Michael Simkovic)
The Law School Admission Council (LSAC)--the non-profit organization which develops and administers the Law School Admission Test (LSAT)--recently issued a press release disputing claims by the advocacy group "Law School Transparency" about the relationship between LSAT scores and bar passage rates. "Law School Transparency," headed by Kyle McEntee, prominently cited the LSAC National Longitudinal Bar Passage Study (1998) as a key source for "Law School Transparency's" claims that many law schools are admitting students who, based largely on their LSAT scores, are unlikely to pass the bar exam. McEntee's group's claims of bar passage risk were widely disseminated by the national press.
However, according to LSAC, the Longitudinal Bar Passage Study does not provide much support for "Law School Transparency's" claims. Moreover, "Law School Transparency's" focus on first time bar passage rates is potentially misleading:
"The LSAC [National Longitudinal Bar Passage] study did state that 'from the perspective of entry to the profession, the eventual pass rate is a far more important outcome than first-time pass rate.' This statement is as true today as it was 25 years ago. As noted by LST, the LSAC study participants who scored below the (then) average LSAT score had an eventual bar passage rate of over 90 percent.
Kyle McEntee and David Frakt responded to some of LSAC's critiques--partly on substance by pointing out disclaimers in the full version of "Law School Transparency's" claims, partly by smearing the technical experts at LSAC as shills for law school--but notably did not explain why "Law School Transparency" chose to focus on first time bar passage rates rather than seemingly more important--and much higher--eventual bar passage rates.
Eventual bar passage rates were the focus of the National Longitudinal Bar Passage Study. The LSAC study's executive summary highlights eventual bar passage rates, and detailed data is presented on pages 32 and 33. Even among graduates of the lowest "cluster" of law schools, around 80 percent eventually passed the bar exam.
According to LSAC:
"The LSAC National Longitudinal Bar Passage Study was undertaken primarily in response to rumors and anecdotal reports suggesting that bar passage rates were so low among examinees of color that potential applicants were questioning the wisdom of investing the time and resources necessary to obtain a legal education."
"Law School Transparency" has revived similar concerns, but without a specific focus on racial minorities.*
There may be legitimate concerns about long term eventual bar passage rates for some law students. However, "Law School Transparency's" back-of-the-envelope effort, focused on short term outcomes, does not provide much insight into long-term questions. The most rigorous study of this issue to date--the LSAC National Longitudinal Bar Passage Study--concluded that "A demographic profile that could distinguish first-time passing examinees from eventual-passing or never-passing examinees did not emerge from these data. . . . Although students of color entered law school with academic credentials, as measured by UGPA and LSAT scores, that were significantly lower than those of white students, their eventual bar passage rates justified admission practices that look beyond those measures."

Unfortunately, some newspapers reported "Law School Transparency's" bar passage risk claims in ways that suggested the claims were blessed by LSAC, or even originated from LSAC. For example, one prominent newspaper's editorial board wrote that "In 2013, the median LSAT score of students admitted to [one law school] was in the bottom quarter of all test-takers nationwide. According to the test’s administrators, students with scores this low are unlikely to ever pass the bar exam."