May 12, 2015
Law students are more likely than college students to retain competitive scholarships (Michael Simkovic)
Critics of competitive scholarships tied to GPA or class rank claim that these scholarships are especially troubling when used by law schools, because the mandatory grading curve means that more law students are likely to lose their scholarships than undergraduates. However, as I noted in my last post, the data actually shows that law students are more likely to retain their competitive scholarships than are undergraduates.
The remaining critiques of competitive scholarships are not strong. According to one critique, the fact that competitive scholarships are disproportionately used by law schools that admit students with low LSAT scores and GPAs, and are not used by the elite law schools, suggests something suspicious about these scholarships. But lower-ranked law schools serve different student populations with spottier academic preparation who are at greater risk of failing the bar exam and may have worse study habits. Some policies and practices that help motivate this population and encourage greater study effort may not be necessary at higher-ranked law schools, whose students are already highly motivated and can pass the bar exam and learn challenging material without much effort.
Another argument is that after law school critics and The New York Times attacked law school competitive scholarships, and the ABA responded by requiring disclosure of this practice, the number of law schools using competitive scholarships declined. Critics claim that the disclosure caused law schools to stop using competitive scholarships, thereby proving the scholarships were unethical all along.
But perhaps law schools were simply attempting to avoid criticism, whether merited or not. In other words, perhaps the criticism caused both the mandatory disclosure and the reduction in the use of competitive scholarships. If The New York Times quoted an impressive-sounding source claiming that those who typically tie their left shoe before their right were liars and thieves, and the Justice Department disclosed an annual list of everyone who tied their left shoe first, we might find that the percentage of people who tie their left shoe first would drop, notwithstanding the fact that which shoe you tie first has absolutely nothing to do with ethics. Or, as Matt Bruckner suggests, perhaps some other factor, such as changes in relative market power or in law school budgets, helps explain the shift in financial aid policy, and neither the criticism nor the disclosure had much to do with it. Without more sophisticated methods of causal inference, it's premature to make strong causal claims.
May 10, 2015
Competitive Scholarships, Mandatory Courses, and the Costs and Benefits of Disclosure (Michael Simkovic)
There is a wide range of views about the benefits, costs, and appropriate use of conditional merit scholarships—scholarships that, under their terms, are retained after the first year of law school only if students maintain a minimum GPA or minimum class rank (if there is a mandatory grading curve, a minimum GPA effectively is a class rank requirement). These questions implicate both broad value judgments and very specific empirical questions to which we may not have clear answers.
1) Is competition for grades a help or a hindrance to learning?
2) Is competition, with greater rewards for winners than for losers, inherently moral or immoral?
- Does the answer depend on whether the outcome of the competition is driven by luck, skill, or effort?
- Does the answer depend on how large the differences in rewards are between winners and losers?
3) Does disclosure alter student decision-making?
- If so, how?
- Is this a good thing or a bad thing?
- If it is a good thing, do the benefits of disclosure outweigh the costs of providing disclosure?
- Are some ways of providing disclosure clearer and more meaningful than others? Could too much disclosure be overwhelming?
Disclosures are sometimes very effective at improving market efficiency. Sometimes disclosures appear to have no effect. Sometimes they have the opposite of the intended or expected effect. For example, disclosure of the compensation of high-level corporate executives of publicly traded companies may have contributed to an increase in executive pay (see also here).
In the case of conditional merit scholarships, the direct administrative costs of providing disclosure appear minimal. The effects of such disclosure, if any, remain unknown. I support access to greater information about conditional scholarship retention rates, not only for law schools but also for all educational institutions.
Scholarship retention rates at many undergraduate institutions under government-backed programs appear to be lower than scholarship retention rates at most law schools. Around half of Georgia HOPE Scholarship recipients lost their scholarship after the first year. Around 25 to 30 percent of Georgia HOPE Scholarship recipients retained their scholarships for all four years of college. Nevertheless, conditional merit scholarships can have positive effects on undergraduate enrollment and academic performance. A fascinating randomized experiment by Angrist, Lang, and Oreopoulos found that financial incentives improved grades for women but not for men. A recent experiment also found evidence that merit scholarships tied to grades can increase student effort and academic performance at community colleges.
Unfortunately, there is some evidence that the use of merit scholarships tied to GPA by undergraduate institutions—where grade distributions and course workloads vary widely by major—can reduce the likelihood that students complete their studies in science, technology, engineering, and math (STEM) fields. Students who major in STEM fields have a higher chance of losing their scholarships.
In other words, if students can shop for “easy As” rather than study harder to improve their performance, they can reduce their own future earning prospects. The approach law schools take—merit scholarships tied to mandatory grading curves and a required curriculum—may be better for students in the long run. Indeed, law students might benefit financially if additional courses, such as instruction in financial literacy, were mandatory.*
Greater disclosure of grading distributions may exacerbate grade shopping and grade inflation, which can undermine student effort and learning. Some models suggest that grade inflation is contagious across institutions (see also here). (It should be possible to disclose scholarship retention rates without disclosing grade distributions).
In some contexts, such as securities regulation or pharmaceuticals, disclosure requirements tend to be high. In other areas, such as employment contracts, disclosure tends to be more limited. We may not always get the balance right. These questions have led to a rich research literature in law, economics, and psychology (see Bainbridge, Lang, Mathios, Coffee, Kaplow, Easterbrook and Fischel, Romano, and Schwartz). In all cases, whether and how disclosures alter behavior is an empirical question. How the benefits compare to the costs is an empirical question mixed with subjective value judgments.
Given the current limited state of knowledge, and understandable good-faith disagreements over subjective values, strident views on one side or the other, and moral condemnation of those entertaining different viewpoints, are not appropriate.
Law professors have an obligation to teach students to think like lawyers, weigh evidence, and consider different arguments and different perspectives. We should not shut down discussion with swaggering declarations of the moral superiority of our own views or ad-hominem attacks against those with whom we disagree.
A recent post (in the comments) by Brian Tamanaha (or someone posting under his name and with a similar rhetorical style**) highlights the unfortunate tendency by some toward moral posturing. Tamanaha writes:
“[Those who condemn conditional scholarships are] speaking up for the integrity of legal academia. It is embarrassing that law professors would now rise up to defend employment reporting standards … criticized by outsiders (see New York Times "Bait and Switch" piece), practices which have since been repudiated and reformed by new ABA standards. I do not understand why Simkovic is re-raising these resolved issues, but it does not help us regain our collective credibility.
After reading these posts, I have begun to wonder whether a sense of professional responsibility is what separates the two sides in this discussion. It is not a coincidence that John Steele, [Bernard Burk], and others who strongly condemn these practices have taught legal ethics.”
In other words, if you question Brian Tamanaha’s reasoning and conclusions—as I have—then you have no integrity and dubious ethics, are irresponsible and unprofessional, and are an embarrassment to the legal academy.
Bernard Burk, though declaring his disdain for ad-hominem attacks, accuses those with whom he disagrees of being “partisan.” He compares competition for grades and scholarships to physically beating students. Burk compares law schools to gangsters and evil witches. He claims that the positive effects of conditional scholarships on student motivation and learning “smells of post-hoc rationalization.” (Most of the labor economics studies demonstrating positive effects of financial incentives on student performance were available before The New York Times and the law school critics targeted law school conditional scholarships; the critics overlooked the peer-reviewed literature).
Deborah Merritt, though generally providing an intelligent discussion of conditional scholarship issues, compares conditional scholarships in which adults who lose the competition for grades receive a free year of law school to the fictional “Hunger Games” in which children who lose a physical struggle are murdered. (Paul Caron repeats this unfortunate comparison when summarizing the debate; so does Bernard Burk).
Paul Campos compares those who disagree with him about data disclosure standards to “Holocaust deniers.”
Law school critics have not persisted through the force of argument or evidence, but rather through their ability to make an honest discussion of the issues so unpleasant that very few who disagree with them wish to engage. We should thank Professor Telman for his courage and for elevating the conversation from polemics to evidence-based inquiry. As more professors and journalists raise substantive questions about law school critics’ narrative, it will become increasingly difficult for the critics to foreclose factual and ethical inquiry through ad-hominem attacks and hyperbole.
* A recent survey by John Coates, Jesse Fried, and Kathryn Spier at Harvard suggests that large law firm employers believe instruction in certain technically challenging business electives, especially accounting, corporate finance, and corporations, is particularly valuable on the job. Data does not exist to evaluate whether enrollment in such courses actually boosts earnings or employment, or is even correlated with greater earnings or employment. However, one working hypothesis is that such courses might be the law school equivalent of undergraduate STEM or economics majors. A study of high school financial literacy mandates suggests positive long-term effects on enrollees’ financial well-being.
** The first and only time I met Brian Tamanaha in person was at the 2013 Law & Society meeting in Boston where he spoke on a panel. Professor Tamanaha shut down questions from the audience about whether his presentation of law school data was misleading by insisting that in our hearts surely we all knew he was right and that any question about whether he was wrong on the facts, and any attempt to rely on data rather than emotionally charged anecdotes, was a sign of flawed moral character.
May 10, 2015 in Guest Blogger: Michael Simkovic, Law in Cyberspace, Legal Profession, Ludicrous Hyperbole Watch, Of Academic Interest, Professional Advice, Science, Student Advice, Web/Tech, Weblogs | Permalink
May 05, 2015
A better grading system
Professor Merritt argues that mandatory grading curves can be unfair when one class has stronger students than another. I agree.
Statistician Valen Johnson—whom I cite in my last post as an authority on grade inflation— has developed a clever solution to this problem which involves adjusting grading curves within each class based on the ability levels of the students. A Johnson-inspired proposal was nearly adopted at Duke University in the late 1990s, but was blocked by departments that offered higher grades and attracted weaker students.
Most law schools try to balance their sections in terms of student ability levels and overall quality of faculty. Nevertheless, anomalies like a “smart section” (as Professor Merritt calls it) may occasionally occur. Johnson’s proposal would be an excellent solution to this problem.
Professor Merritt asserts that there is some sort of problem with the market for lawyers and law graduates that makes competition and inequality uniquely bad in the context of law. These assertions are implausible given the low barriers to entry for both law schools and lawyers, aggressive competition between law schools for students and between lawyers for clients, and widespread inequality outside of law school and legal practice. Some form of regulation is the norm in many areas of employment and in many industries, and a licensing regime for lawyers and an accreditation system for law schools do not in any way make these occupations and institutions unique or unusual. According to a recent study, nearly a third of U.S. workers are licensed, licensing is more common as education and skill levels increase, and licensing does not affect inequality among the licensed.
As a general matter, deregulated market competition and greater inequality are a package deal. Inequality can be reduced through regulation, taxation, and politicization of compensation through unionization or growth of public sector employment.
Professor Merritt’s critiques follow the standard playbook of law school critics—take something about law schools that is widespread and common out of context, claim that it is somehow unique to law schools when it is neither unique nor unusual, and then demonize it.
Jeremy Telman responds.
May 04, 2015
Many critics have attacked law schools for offering merit scholarships that can only be retained if students meet minimum GPA requirements. Jeremy Telman has a fascinating new post analyzing these scholarships in light of common practices in higher education and the peer-reviewed social science literature. It’s a powerful counterpoint to a previously unanswered critique of law school ethics.
Professor Telman notes that similar conditional scholarships are widely used by undergraduate institutions, and even some state government programs. Undergraduates behave as if they understand how conditional scholarships work, which suggests that most law students, who are older, wiser, and more sophisticated, probably understand the terms of these agreements as well.
Moreover, minimum GPA requirements can motivate students to study harder, pay closer attention, and learn more. This seems particularly likely in the context of the first year of law school where mandatory grading curves and required curriculums remove the opportunity to shop for “easy A’s”. (Professor Telman does, however, express concern about inadequate performance feedback to law students until the final exams at the end of their first semester).
Professor Telman notes that law schools may struggle to predict at the time of admission which students will be the most successful. Conditional scholarships help institutions gather additional information about students’ abilities and work ethic and ensure that limited merit scholarship resources go to the students who are most deserving. Students who are deemed undeserving and lose their scholarships retain the option of transferring to another institution for their remaining years of law school.
Professor Telman doesn't object to additional disclosure about the percent of students retaining their scholarships, but he doubts it would have made much of a difference in prospective law students' matriculation decisions.
April 27, 2015
New York Times relies on unrepresentative anecdotes and flawed study to provide slanted coverage of legal education (Michael Simkovic)
Just when you thought The New York Times was rounding the corner and starting to report responsibly about legal education based on hard data and serious labor economics studies, their reporting reverts to the unfortunate form it has taken for much of the last 5 years*—relying on unrepresentative anecdotes and citing fundamentally flawed working papers to paint legal education in a negative light.
Responsible press coverage would have put law graduate outcomes in context by noting that:
(1) law graduates continue to do better in terms of employment (both overall and full time) and earnings than similar bachelor’s degree holders, even in an economy that has generally been challenging for young workers
(2) law students, even from some of the lowest ranked and most widely criticized law schools, continue to have much lower student loan default rates than the national average across institutions according to standardized measurements reported by the Department of Education
(3) law graduate earnings and employment rates typically increase as they gain experience
(4) Data from After the JD shows that law graduates continue to pay down their student loans and approximately half of graduates from the class of 2001 paid them off completely within 12 years of graduation
Instead, The New York Times compares law graduate outcomes today to law graduate outcomes when the economy was booming. But not all law graduates. The Times focuses on law graduates who have been unusually unsuccessful in the job market or have unusually large amounts of debt. For example, The New York Times focused on a Columbia law school graduate working as an LSAT tutor** as if that were a typical outcome for graduates of elite law schools. But according to the National Law Journal, two-thirds of recent Columbia graduates were employed at NLJ 250 law firms (very high paying, very attractive jobs),*** and the overwhelming majority of recent Columbia graduates appear to work in attractive positions. (Columbia outcomes are much better than most, but the negative outcomes discussed in The New York Times are substantially below average for law graduates as a whole).
In Timing Law School, Frank McIntyre and I analyze long-term outcomes for those who graduated into previous recessions, using nationally representative data and well-established econometric methods. Our results suggest that law graduates continue to derive substantial benefits from their law degrees even when graduating into a recession. The recent recession does not appear to be an exception. (See also here and here). This analysis is not mentioned in the recent The New York Times article, even though it was cited in The New York Times less than a month ago (and alluded to in The Washington Post even more recently).
The implication of The New York Times’ story “Burdened With Debt, Law School Graduates Struggle in Job Market” is that there is some law specific problem, when the reality is that the recession continues to negatively affect all young and inexperienced workers and law graduates continue to do better than most. Law school improves young workers’ chances of finding attractive employment opportunities and reduces the risk of defaulting on debt. The benefits of law school exceed the costs for the overwhelming majority of law school graduates.
The New York Times relies heavily on a deeply flawed working paper by Professor Deborah Merritt of Ohio State. Problems with this study were already explained by Professor Brian Galle:
“My problem is that instead DJM wants to offer us a dynamic analysis, comparing 2014 to 2011, and arguing that the resulting differential tells us that there has been a "structural shift" in the market for lawyers. It might be that the data exist somewhere to conduct that kind of analysis, but if so they aren't in the paper. Nearly all the analysis in the paper is built on the trend line between DJM's 2014 Ohio results and national-average survey results from NALP.

Let me say that again. Almost everything DJM says is built on a mathematical comparison between two different pools whose data were constructed using different methods. I would not blame you if you now stopped reading.”
In other words, it is difficult to tell whether any differences identified by Professor Merritt are:
(1) Due to differences between Ohio and the U.S. as a whole
(2) Due to differences in methodology between Merritt, NALP, and After the JD
(3) Actually due to differences between 2011 and 2014 for the same group
After Professor Galle’s devastating critique, journalists should have been extremely skeptical of Merritt’s methodology and her conclusions. Professor Merritt’s response to Galle’s critique, in the comments below his post, is not reassuring:
“Bottom line for me is that the comparison in law firm employment (62.1% for the Class of 2000 three years after graduation, 40.5% for the lawyers in my population) seems too stark to stem solely from different populations or different methods—particularly because other data show a more modest decline in law firm employment over time. But this is definitely an area in which we need much, much more research.”
Judging from this response and the quotes in The New York Times, Merritt appears to be doubling down on her inapposite comparisons rather than checking how much of her conclusions are due to potentially fatal methodological problems. What Professor Merritt should have done is replicate her 2014 Ohio-only methodology in 2000/2001 or 2010/2011, compared the results for Ohio only at different points in time, and limited her claims to an analysis of the Ohio legal employment market.
There are additional problems with Professor Merritt’s study (or at least the March 11 version that I reviewed).****
- Ohio is not a representative legal employment market, but rather a relatively low paying one where lawyers comprise a relatively small proportion of the workforce.
- A disproportionate share of the 8 or 9 law schools in Ohio (9 if you include Northern Kentucky) are low ranked or unranked, and this presumably is reflected in their employment outcomes.
- Merritt’s sample is subject to selection bias because of movement of the most capable law graduates out of Ohio and into higher paying legal markets. Ohio law graduates who do not take the Ohio bar after obtaining jobs in Chicago, New York, Washington D.C., or other leading markets will not show up in Merritt’s sample.
- Whereas Merritt concludes that law graduate outcomes have not improved, the data may simply reflect the fact that Ohio is a less robust employment market than the U.S. as a whole.
- Merritt’s analysis of employment categories does not take into account increases in earnings within employment categories. After the JD and its follow-ups suggest that these within-category gains are substantial, as do overall increases in earnings from Census data.
- Merritt makes a biased assumption that anyone she could not reach is unemployed instead of gathering additional information about non-respondents and weighting the results to take into account response bias. Law schools may have been more aggressive in tracking down non-respondents than Professor Merritt was.
For the benefit of those who are curious, I am making my full 8-page critique of Professor Merritt's working paper available here, but please keep in mind that it was written in mid March and Professor Merritt may have addressed some of these issues in more recent versions of her paper. If that is the case, I trust that she’ll highlight any changes or improvements in a blog post response.
* A few weeks ago I asked a research assistant (a third year law student) to search for stories in The New York Times and Wall Street Journal about law school. Depending on whether the story would have made my research assistant more likely or less likely to want to go to law school when he was considering it or would have had no effect, he coded the stories as positive, negative, or neutral. According to my research assistant, The New York Times reported 7 negative stories to 1 positive story in 2011 and 5 negative stories to 1 positive story in 2012. In 2013, 2014, and 2015, The New York Times coverage was relatively balanced. In aggregate over the five-year period The New York Times reported about 2 negative stories for every 1 positive story. The Wall Street Journal’s coverage was even more slanted—about 3.75 negative stories for every positive story—and remained heavily biased toward negative stories throughout the five-year period.
** Professor Stephen Diamond notes the LSAT tutor’s relatively high hourly wage, more lucrative opportunities the tutor claims he turned down, and how the tutor describes his own work ethic.
*** For the class of 2010, the figure at Columbia was roughly 52 percent 9 months after graduation, but activity in the lateral recruitment market suggests things may be looking up.
**** The comments that follow summarize a lengthy (8-page) critique I sent to Professor Merritt privately in mid March after reviewing the March 11 draft of her paper. I have not had a chance to review Professor Merritt’s latest draft, and Professor Merritt may have responded to some of these issues in a revision.
April 10, 2015
Did law schools behave unethically by providing employment and earnings information without simultaneously reporting survey response rates? Or is this standard practice?
The answer is that not reporting response rates is standard practice in communication with most audiences. For most users of employment and earnings data, response rates are a technical detail that is neither relevant nor interesting. The U.S. Government and other data providers routinely report earnings and employment figures separately from survey response rates.*
Sometimes, too much information can be distracting.** It’s often best to keep communication simple and focus only on the most important details.
Nonresponse is not the same thing as nonresponse bias. Law school critics do not seem to understand this distinction. A problem only arises if the individuals who respond are systematically different from those who do not respond along the dimensions being measured. Weighting and imputation can often alleviate these problems. The critics’ claims about the existence, direction, and magnitude of biases in the survey data are unsubstantiated.
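The distinction can be illustrated with a small simulation (all incomes, categories, and response rates below are invented purely for illustration): heavy nonresponse that is unrelated to income leaves the estimated mean essentially unbiased, nonresponse that is correlated with income biases it, and weighting respondents by the inverse of their category's response rate largely removes that bias.

```python
import random

random.seed(0)

def mean(rows):
    return sum(income for _, income in rows) / len(rows)

# Hypothetical population of graduate incomes by job category
# (all numbers invented purely for illustration).
population = (
    [("big_firm", random.gauss(160_000, 20_000)) for _ in range(5_000)]
    + [("small_firm", random.gauss(70_000, 15_000)) for _ in range(5_000)]
)
true_mean = mean(population)

# Case 1: 70% nonresponse, unrelated to income.
# High nonresponse, but no nonresponse bias: the estimate stays close.
random_respondents = [row for row in population if random.random() < 0.3]
naive_mean = mean(random_respondents)

# Case 2: nonresponse correlated with income (big-firm lawyers respond
# less often), so the unweighted mean is biased downward.
rates = {"big_firm": 0.2, "small_firm": 0.6}
skewed_respondents = [(c, inc) for c, inc in population
                      if random.random() < rates[c]]
skewed_mean = mean(skewed_respondents)

# Weighting each respondent by the inverse of its category's response
# rate largely removes the bias.
weighted_mean = (sum(inc / rates[c] for c, inc in skewed_respondents)
                 / sum(1 / rates[c] for c, _ in skewed_respondents))

print(f"true mean:       {true_mean:,.0f}")
print(f"random nonresp.: {naive_mean:,.0f}")
print(f"skewed nonresp.: {skewed_mean:,.0f}")
print(f"reweighted:      {weighted_mean:,.0f}")
```

The point of the sketch is that the response *rate* alone tells you little; what matters is whether responding is correlated with the quantity being measured, and whether auxiliary information (here, job category) is available to reweight.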
High non-response rates to questions about income are not a sign of something amiss, but rather are normal and expected. The U.S. Census Bureau routinely finds that questions about income have lower response rates (higher allocation rates) than other questions.
Law school critics claim that law school graduates who do not respond to questions about income are likely to have lower incomes than those who do respond. This claim is not consistent with the evidence. To the contrary, high-income individuals often value privacy and are reluctant to share details about their finances.***
Another potential problem is “response bias,” in which individuals respond to survey questions in a way that is systematically different from the underlying value being measured. For example, some individuals may under-report or over-report their incomes.
The best way to determine whether or not we have nonresponse bias or response bias problems is to gather additional information about non-responders and responders.
Researchers have compared income reported to Census surveys with administrative earnings data from the Social Security Administration and the Internal Revenue Service. They find that highly educated, high-income individuals systematically under-report their incomes, while less educated, lower-income individuals over-report (assuming the administrative data are more accurate than the survey data).
Part of the problem seems to be that bonuses are underreported, and bonuses can be substantial. Another problem seems to be that high-income workers sometimes report their take-home pay (after tax withholding and deductions for benefits) rather than their gross pay.
Other studies have also found that response bias and nonresponse bias lead to underestimation of earnings and employment figures.
In other words, there may indeed be biases in law school earnings data, but if so, they are likely in the opposite direction of the one the law school critics have claimed.
Of course, the presence of such biases in law school data would not necessarily be a problem if the same biases exist in data on employment and earnings for alternatives to law school. After all, earnings and employment data is only useful when compared to a likely alternative to law school.
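One way to see why a shared bias can be harmless for comparisons: if both groups' reported earnings are shifted by the same multiplicative factor, that factor cancels out of the earnings premium. A toy calculation (with invented numbers) makes this concrete:

```python
# Hypothetical true annual earnings (numbers invented for illustration).
law_true, ba_true = 90_000, 60_000
true_premium = law_true / ba_true  # 1.5

# Suppose both groups under-report by the same factor -- say, by
# reporting take-home pay rather than gross pay.
bias_factor = 0.85
law_reported = law_true * bias_factor
ba_reported = ba_true * bias_factor

# The shared factor cancels in the ratio: the reported premium
# equals the true premium even though both levels are understated.
reported_premium = law_reported / ba_reported
print(true_premium, reported_premium)
```

The cancellation holds only for a bias common to both groups; a bias that differs between law graduates and bachelor's degree holders would still distort the comparison.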
As with gross employment data, the critics are yet again claiming that an uncontroversial and nearly universal data reporting practice, regularly used by the United States Government, is somehow scandalous when done by law schools.
The only thing the law school critics have demonstrated is their unfamiliarity with basic statistical concepts that are central to their views.
* Reporting earnings and employment estimates without response rates in communication intended for a general audience—and even some fairly technically sophisticated audiences—is standard practice for U.S. government agencies such as the U.S. Census Bureau and the U.S. Department of Labor, Bureau of Labor Statistics. A few examples below:
- Earnings and unemployment by education level
- Unemployment rates
- Employment population ratio
- Tabular summaries from
** Information on response rates is available for researchers working with microdata to develop their own estimates, and for those who want to scour the technical and methodological documentation. But response rates aren’t of much interest to most audiences.
*** After the JD researchers noted that young law graduates working in large urban markets—presumably a relatively high-income group—were particularly reluctant to respond to the survey. From After the JD III:
“Responses . . . varied by urban and rural or regional status, law school rank, and practice setting. By Wave 2, in the adjusted sample, the significant difference between respondents and nonrespondents continued to be by geographic areas, meaning those from larger legal markets (i.e. New York City) were less likely to respond to the survey. By Wave 3, now over 12 years out into practice, nonrespondents and respondents did not seem to differ significantly in these selected characteristics.”
In the first wave of the study, non-respondents were also more likely to be male and black. All in all, it may be hard to say what the overall direction of any nonresponse bias might be with respect to incomes. A fairly reasonable assumption might be that the responders and non-responders are reasonably close with respect to income, at least within job categories.