May 04, 2015

Are Conditional Scholarships Good for Law Students? (Michael Simkovic)

Many critics have attacked law schools for offering merit scholarships that can only be retained if students meet minimum GPA requirements.  Jeremy Telman has a fascinating new post analyzing these scholarships in light of common practices in higher education and the peer-reviewed social science literature.  It’s a powerful counterpoint to a previously unanswered critique of law school ethics.

Professor Telman notes that similar conditional scholarships are widely used by undergraduate institutions, and even some state government programs.  Undergraduates behave as if they understand how conditional scholarships work, which suggests that most law students, who are older, wiser, and more sophisticated, probably understand the terms of these agreements as well.

Moreover, minimum GPA requirements can motivate students to study harder, pay closer attention, and learn more.  This seems particularly likely in the context of the first year of law school, where mandatory grading curves and required curricula remove the opportunity to shop for “easy A’s”.  (Professor Telman does, however, express concern that law students receive little performance feedback until the final exams at the end of their first semester).

Professor Telman notes that law schools may struggle to predict at the time of admission which students will be the most successful.  Conditional scholarships help institutions gather additional information about students’ abilities and work ethic and ensure that limited merit scholarship resources go to the students who are most deserving.  Students who are deemed undeserving and lose their scholarships retain the option of transferring to another institution for their remaining years of law school.

Professor Telman doesn't object to additional disclosure about the percent of students retaining their scholarships, but he doubts it would have made much of a difference in prospective law students' matriculation decisions.

It’s a powerful argument.  Are conditional scholarships yet another example of critics applying a double standard to paint law schools in the worst possible light?

May 4, 2015 in Guest Blogger: Michael Simkovic, Law in Cyberspace, Legal Profession, Of Academic Interest, Professional Advice, Science, Student Advice, Web/Tech, Weblogs | Permalink

April 30, 2015

Fortune's Best Graduate Degrees Based on Earnings, Job Satisfaction, and "Stress"

An interesting, and not implausible, list.  The JD comes in 6th, though most of the other options are unlikely to be pursued by an undergraduate humanities major--one reason, among others, why we have probably hit bottom in terms of the applicant pool and will probably see a slight uptick in the next couple of years.

April 30, 2015 in Legal Profession, Of Academic Interest, Student Advice | Permalink

April 29, 2015

Geography Matters (Michael Simkovic)

A number of critics have argued against extrapolation from Professor Merritt’s study of the Ohio legal market to the national legal market.  In her response, Professor Merritt makes some good points, and also several key points with which I disagree. 

First, Professor Merritt suggests that an important contribution of her study is providing up-to-date information about national legal employment through the prism of Ohio.  However, there is no shortage of up-to-date data that can provide a more accurate picture of national trends than a study specifically focused on Ohio.*  The primary value of Professor Merritt’s study is as an isolated snapshot of a single cohort in Ohio at a particular point in time.  Without additional information, it is hard to know how much, if at all, Professor Merritt’s findings should be generalized to other legal markets or other time periods.

There is no reason to believe that the single Ohio cohort tracked by Professor Merritt will better predict outcomes for those currently enrolling in law school than a national cohort.  The single Ohio cohort will likely be less predictive than a long-term national average across multiple cohorts.  Indeed, as Professor Merritt acknowledges, her study is not a study of going to law school in Ohio because of selection issues from law graduates leaving for larger markets, coming to Ohio from other markets, and from non-bar passage.**

Year-to-year changes in employment, earnings, and economic growth can vary widely from state to state.  Absent evidence of a history of correlated economic activity, a single state should not be used as a proxy for the U.S. as a whole or for other states.

There is no reason to believe that the trajectory of Ohio’s legal market from year to year will closely track national trends, particularly when the national legal market is heavily concentrated elsewhere.  Washington D.C. and the top 5 states by size of legal market*** collectively account for more than half of the national legal market.

If Professor Merritt wishes to use Ohio as a proxy for the rest of the U.S., then she should supply evidence that Ohio tracks national trends, and she should compare Ohio to Ohio at different points in time and Ohio to the U.S. at the same point in time.
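
One way to make that check concrete: compare year-over-year changes in a state's legal employment with the national series and compute their correlation.  A minimal sketch in Python, using invented placeholder figures rather than actual BLS or ACS data:

```python
# Sketch: does a single state's legal employment track the national trend?
# All figures below are invented placeholders, not actual BLS/ACS data.
from statistics import mean

ohio_legal_emp = [38.0, 37.1, 35.8, 35.2, 35.0, 34.6, 34.9, 35.1]  # thousands (hypothetical)
us_legal_emp = [1120, 1105, 1060, 1040, 1045, 1052, 1060, 1068]    # thousands (hypothetical)

def pct_changes(series):
    """Year-over-year percent changes."""
    return [(b - a) / a for a, b in zip(series, series[1:])]

def correlation(xs, ys):
    """Pearson correlation coefficient."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var_x = sum((x - mx) ** 2 for x in xs)
    var_y = sum((y - my) ** 2 for y in ys)
    return cov / (var_x * var_y) ** 0.5

print(f"Correlation of year-over-year changes: "
      f"{correlation(pct_changes(ohio_legal_emp), pct_changes(us_legal_emp)):.2f}")
```

A high correlation with real data would support using the state as a proxy; a low one would not.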

Second, Professor Merritt suggests that focusing on Ohio is just as reasonable as focusing on New York or California.  New York and California collectively constitute 28 percent of the national legal market.***  Ohio constitutes 2.5 percent of the national legal market.  Moreover, the New York legal market is unusually large relative to the New York economy, while Ohio has a legal market that is small relative to its economy. 

Third, Professor Merritt suggests that Ohio can be made nationally representative by deflating salaries elsewhere by cost of living differences.  Cost of living differences are not the reason corporations—who can send legal work anywhere— pay a premium for lawyers in the major legal markets such as New York, D.C., Los Angeles, Boston and Houston.  Rather, corporate clients believe that differences in quality of work justify higher billing rates for important matters.  New York, D.C. and other high-paying markets are importers of top legal talent from across the country. 

Differences in costs of living are not random, but rather reflect real differences in quality.  Cost of living indexes often focus on quantitative rather than qualitative factors.  For example, a restaurant meal in Manhattan may cost more than a restaurant meal in Buffalo, but the quality of the experience in the restaurant in Manhattan will on average be higher because the high prices restaurants in Manhattan can charge will attract the most talented restaurateurs.  Similarly, there may be differences in the quality of healthcare, legal services, education, policing, parks and recreation, environmental safety, transit, housing and other factors.  Money attracts talent.  Some amenities or opportunities may only be available in particular locations, and people are willing to pay for proximity to consumption, employment, and social opportunities. 

Many costs are not local, but rather national.  These include automobiles, items ordered online, higher education at major universities, and investments (stocks, bonds, etc.).  For law school graduates—who will typically be able to earn far more than they consume in a given year—it is financially better to work where both income and costs are proportionately higher because this will maximize the dollar value of savings.  Law graduates can always retire to a lower-cost location later in life if they wish.
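
A stylized illustration of the savings point, with hypothetical salaries and living costs chosen only for illustration:

```python
# Hypothetical figures: income and living costs both proportionately higher
# in a major market.  Dollar savings, not percentages, are what accumulate.
major_market = {"income": 180_000, "living_costs": 120_000}   # assumed
low_cost_market = {"income": 90_000, "living_costs": 60_000}  # assumed

for name, m in [("major market", major_market), ("low-cost market", low_cost_market)]:
    print(f"{name}: annual savings = ${m['income'] - m['living_costs']:,}")
# Costs are twice as high in the major market, but so is income, so the
# dollar value of annual savings is twice as large.
```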

One quantitative measure for differences in quality of life is differences in life expectancy.****   High cost, high income, high infrastructure states like New York, Connecticut, and Massachusetts generally rank well on this measure, while lower cost, lower income states rank less well.  This pattern can also be seen internationally and individually—higher income and higher life expectancy are correlated.*****  

There will indeed be some lucky individuals who find low-cost locales both more attractive and less expensive, and some unlucky individuals who find high cost locales unworthy of the price.  Costs of living reflect the aggregation by the market of many individual preferences, not any particular person’s idiosyncratic views.  Nevertheless, local prices can contain important information about quality of life that we should not assume away.

* There are numerous sources of up-to-date (2013 or even 2014) national information, including data from NALP, the ABA, SIPP, the ACS, the CPS, and the U.S. Department of Education.

NALP and ABA data are for the most recent graduating class shortly after graduation.  SIPP earnings data includes earnings as recently as 2013, but only through the class of 2008.  ACS and CPS have young lawyers and young professional degree holders, but cannot specifically identify young law degree holders.  The Department of Education also has information on student loan default rates for recent cohorts.  Default rates remain much lower for former law students than for most other borrowers.

Another valuable source of information is After the JD III.  Professor Merritt notes that response rates for higher income individuals may be higher in After the JD, but the After the JD researchers, like the U.S. Census, weight their sample to take into account differences in response rates.

**The selection bias issues may be more severe than Merritt has acknowledged.  Ohio State’s 509 report for 2011 shows that 24 students took the NY bar vs. 136 who took the Ohio bar—a substantial percentage of the class taking a bar in a non-adjacent state.  The New York bar takers had much higher bar passage rates (11% above the state average for N.Y. vs. 1.3% above the state average for Ohio), which is consistent with positive selection out of state.  In any given year, roughly 25 to 50 percent of Ohio State law school graduates who are employed 9 or 10 months after graduation are employed outside of Ohio.  For Case Western graduates, employment appears to be even less Ohio-centered than it is for Ohio State graduates.

*** Size of the legal market calculated using ACS data, multiplying number of lawyers by average total personal income per lawyer to get aggregate pay to all lawyers.  In other words, the measure is a dollar count, not a body count.
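
A minimal sketch of that calculation, with invented lawyer counts and average incomes standing in for the actual ACS estimates:

```python
# Legal market size by state = number of lawyers x average total personal
# income per lawyer (a dollar count, not a body count).
# All figures below are invented placeholders, not ACS estimates.
states = {
    "New York": {"lawyers": 97_000, "avg_income": 190_000},
    "California": {"lawyers": 90_000, "avg_income": 175_000},
    "Ohio": {"lawyers": 27_000, "avg_income": 110_000},
}

market_size = {s: v["lawyers"] * v["avg_income"] for s, v in states.items()}
total = sum(market_size.values())
for state, size in market_size.items():
    print(f"{state}: ${size / 1e9:.1f}B ({size / total:.0%} of the three-state total)")
```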

**** It is probably preferable to consider life expectancy within race (life expectancy varies by race, and racial demographics vary by geography).

***** After controlling for GDP per capita, societies with less income dispersion tend to have higher life expectancy.  Another issue is selection effects vs. causation.  For example, those with higher life expectancy to begin with may choose to pursue additional education and therefore have the opportunity to live in high cost, high income states.

April 29, 2015 in Guest Blogger: Michael Simkovic, Legal Profession, Of Academic Interest, Professional Advice, Rankings, Science, Student Advice, Travel, Web/Tech, Weblogs | Permalink

April 27, 2015

New York Times relies on unrepresentative anecdotes and flawed study to provide slanted coverage of legal education (Michael Simkovic)

Just when you thought The New York Times was rounding the corner and starting to report responsibly about legal education based on hard data and serious labor economics studies, its reporting reverts to the unfortunate form it has taken for much of the last five years*—relying on unrepresentative anecdotes and citing fundamentally flawed working papers to paint legal education in a negative light.

Responsible press coverage would have put law graduate outcomes in context by noting that:

(1) law graduates continue to do better in terms of employment (both overall and full time) and earnings than similar bachelor’s degree holders, even in an economy that has generally been challenging for young workers

(2) law students, even from some of the lowest ranked and most widely criticized law schools, continue to have much lower student loan default rates than the national average across institutions according to standardized measurements reported by the Department of Education

(3) law graduate earnings and employment rates typically increase as they gain experience

(4) Data from After the JD shows that law graduates continue to pay down their student loans and approximately half of graduates from the class of 2001 paid them off completely within 12 years of graduation

Instead, The New York Times compares law graduate outcomes today to law graduate outcomes when the economy was booming.  But not all law graduates.  The Times focuses on law graduates who have been unusually unsuccessful in the job market or have unusually large amounts of debt.  For example, The New York Times focused on a Columbia law school graduate working as an LSAT tutor** as if that were a typical outcome for graduates of elite law schools.  But according to the National Law Journal, two-thirds of recent Columbia graduates were employed at NLJ 250 law firms (very high paying, very attractive jobs),*** and the overwhelming majority of recent Columbia graduates appear to work in attractive positions.  (Columbia outcomes are much better than most, but the negative outcomes discussed in The New York Times are substantially below average for law graduates as a whole).

In Timing Law School, Frank McIntyre and I analyze long term outcomes for those who graduated into previous recessions, using nationally representative data and well-established econometric methods.  Our results suggest that law graduates continue to derive substantial benefits from their law degrees even when graduating into a recession.  The recent recession does not appear to be an exception.  (See also here and here).  This analysis is not mentioned in the recent New York Times article, even though it was cited in The New York Times less than a month ago (and alluded to in The Washington Post even more recently).

The implication of The New York Times’ story “Burdened With Debt, Law School Graduates Struggle in Job Market” is that there is some law-specific problem, when the reality is that the recession continues to negatively affect all young and inexperienced workers, and law graduates continue to do better than most.  Law school improves young workers’ chances of finding attractive employment opportunities and reduces the risk of defaulting on debt.  The benefits of law school exceed the costs for the overwhelming majority of law school graduates.

The New York Times relies heavily on a deeply flawed working paper by Professor Deborah Merritt of Ohio State.  Problems with this study were already explained by Professor Brian Galle:

“My problem is that instead DJM wants to offer us a dynamic analysis, comparing 2014 to 2011, and arguing that the resulting differential tells us that there has been a "structural shift" in the market for lawyers.  It might be that the data exist somewhere to conduct that kind of analysis, but if so they aren't in the paper.  Nearly all the analysis in the paper is built on the trend line between DJM's 2014 Ohio results and national-average survey results from NALP.

Let me say that again.  Almost everything DJM says is built on a mathematical comparison between two different pools whose data were constructed using different methods.  I would not blame you if you now stopped reading.”

In other words, it is difficult to tell whether any differences identified by Professor Merritt are:

(1) Due to differences between Ohio and the U.S. as a whole

(2) Due to differences in methodology between Merritt, NALP, and After the JD

(3) Actually due to differences between 2011 and 2014 for the same group

After Professor Galle’s devastating critique, journalists should have been extremely skeptical of Merritt’s methodology and her conclusions.  Professor Merritt’s response to Galle’s critique, in the comments below his post, is not reassuring:

“Bottom line for me is that the comparison in law firm employment (62.1% for the Class of 2000 three years after graduation, 40.5% for the lawyers in my population) seems too stark to stem solely from different populations or different methods—particularly because other data show a more modest decline in law firm employment over time. But this is definitely an area in which we need much, much more research.”

Judging from this response and the quotes in The New York Times, Merritt appears to be doubling down on her inapposite comparisons rather than checking how much of her conclusions are due to potentially fatal methodological problems.  What Professor Merritt should have done is replicate her 2014 Ohio-only methodology in 2000/2001 or 2010/2011, compare the results for Ohio at different points in time, and limit her claims to an analysis of the Ohio legal employment market.

There are additional problems with Professor Merritt’s study (or at least the March 11 version that I reviewed).**** 

  • Ohio is not a representative legal employment market, but rather a relatively low paying one where lawyers comprise a relatively small proportion of the workforce.  
  • A disproportionate share of the 8 or 9 law schools in Ohio (9 if you include Northern Kentucky) are low ranked or unranked, and this presumably is reflected in their employment outcomes. 
  • Merritt’s sample is subject to selection bias because of movement of the most capable law graduates out of Ohio and into higher paying legal markets.  Ohio law graduates who do not take the Ohio bar after obtaining jobs in Chicago, New York, Washington D.C., or other leading markets will not show up in Merritt’s sample.  
  • Whereas Merritt concludes that law graduate outcomes have not improved, the data may simply reflect the fact that Ohio is a less robust employment market than the U.S. as a whole. 
  • Merritt’s analysis of employment categories does not take into account increases in earnings within employment categories.  After the JD and its follow-up waves suggest that these within-category gains are substantial, as do overall increases in earnings in Census data.
  • Merritt makes a biased assumption that anyone she could not reach is unemployed, instead of gathering additional information about non-respondents and weighting the results to take into account nonresponse bias.  Law schools may have been more aggressive in tracking down non-respondents than Professor Merritt was.

For the benefit of those who are curious, I am making my full 8 page critique of Professor Merritt's working paper available here, but please keep in mind that it was written in mid March and Professor Merritt may have addressed some of these issues in more recent versions of her paper.  If that is the case, I trust that she’ll highlight any changes or improvements in a blog post response.


*    A few weeks ago I asked a research assistant (a third year law student) to search for stories in The New York Times and Wall Street Journal about law school.  Depending on whether the story would have made my research assistant more likely or less likely to want to go to law school when he was considering it or would have had no effect, he coded the stories as positive, negative, or neutral.  According to my research assistant, The New York Times reported 7 negative stories to 1 positive story in 2011 and 5 negative stories to 1 positive story in 2012.  In 2013, 2014, and 2015, The New York Times coverage was relatively balanced.  In aggregate over the five-year period The New York Times reported about 2 negative stories for every 1 positive story.  The Wall Street Journal’s coverage was even more slanted—about 3.75 negative stories for every positive story—and remained heavily biased toward negative stories throughout the five-year period.

**   Professor Stephen Diamond notes the LSAT tutor’s relatively high hourly wage, more lucrative opportunities the tutor claims he turned down, and how the tutor describes his own work ethic.

***  For the class of 2010, the figure at Columbia was roughly 52 percent 9 months after graduation, but activity in the lateral recruitment market suggests things may be looking up.

**** The comments that follow summarize a lengthy (8 page) critique I sent to Professor Merritt privately in mid March after reviewing the March 11 draft of her paper.  I have not had a chance to review Professor Merritt’s latest draft, and Professor Merritt may have responded to some of these issues in a revision.  


UPDATE:  Additional responses from Professors Galle and Merritt.

April 27, 2015 in Advice for Academic Job Seekers, Guest Blogger: Michael Simkovic, Law in Cyberspace, Legal Profession, Of Academic Interest, Professional Advice, Science, Student Advice, Web/Tech, Weblogs | Permalink

April 20, 2015

Latest NALP salary data

The percentage of firms paying $160,000 to start is up quite a bit since last year, but not yet back to 2009 levels, among other tidbits.

April 20, 2015 in Legal Profession, Of Academic Interest, Professional Advice, Student Advice | Permalink

April 11, 2015

Offsetting Biases (Michael Simkovic)

Deborah Merritt and Kyle McEntee conflated “response rates” with nonresponse bias and response bias.  After I brought this error to light, Professor Merritt explained that she and Mr. McEntee were not confused about basic statistical terminology, but rather were being intentionally vague in their critique to be more polite* to the law schools.

Professor Merritt also changed the topic of conversation from Georgetown’s employment statistics—which had been mentioned in The New York Times and discussed by me, Professor Merritt, and Kyle McEntee—to the employment statistics of the institution where I teach.**  

What Professor Merritt meant to say is that law schools have not been properly weighting their data to take into account nonresponse bias.  This is an interesting critique.  However, proper weights and adjustments to data should take into account all forms of nonresponse bias and response bias, not just the issue of over-representation of large law firms in NALP salary data raised by Professor Merritt.

While such over-representation would have an effect on the mean, it is unclear how much impact, if any, it would have on reported medians—the measure of central tendency used by The New York Times and critiqued by Mr. McEntee.
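
A small numerical sketch of the point, with invented salaries: suppose $160,000 is the modal big-firm salary and big-firm graduates are more likely to respond.

```python
from statistics import mean, median

# Invented private-sector salaries for illustration only.
full_class = [70_000, 80_000, 95_000, 160_000, 160_000, 160_000, 160_000]
respondents_only = [80_000, 160_000, 160_000, 160_000, 160_000]  # big firms over-represented

for label, sample in [("full class", full_class), ("respondents only", respondents_only)]:
    print(f"{label}: mean = {mean(sample):,.0f}, median = {median(sample):,.0f}")
# The mean rises with over-representation, but the median stays at $160,000
# because the modal big-firm salary already sits at the middle of the
# distribution.  Whether a median moves depends on where the over-represented
# cluster falls relative to the midpoint.
```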

Other biases such as systematic under-reporting of incomes by highly educated individuals,*** under-reporting of bonuses and outside income, and the like should be taken into account.****   To the extent that these biases cut in opposite directions, they can offset each other.  It’s possible that in aggregate the data are unbiased, or that the bias is much smaller than examination of a single bias would suggest.  

Moreover, focusing on first year salaries as indicative of the value of a lifetime investment is itself a bias.  As The Economic Value of a Law Degree showed, incomes tend to rise rapidly among law graduates.  They do not appreciably decrease, either, until the fourth decade of employment.



If Professor Merritt’s view is that differences between NALP, ABA, and U.S. Census Bureau data collection and reporting conventions make law school-collected data more difficult to compare to other data sources and make law school data less useful, then I am glad to see Professor Merritt coming around to a point I have made repeatedly.

I have gone further and suggested that perhaps the Census Bureau and other government agencies should be collecting all data for graduate degree programs to ensure the accuracy and comparability of data across programs and avoid wasting resources on duplicative data collection efforts.

This could also help avoid an undue focus on short-term outcomes, which can be misleading in light of the rapid growth of law graduate earnings as they gain experience, particularly if students are not aware of that growth trajectory and how it compares to the likely trajectory of earnings without a law degree.

*    Readers of Professor Merritt’s blog posts will be familiar with Professor Merritt’s general level of politeness.   In her latest, Professor Merritt describes me as “clueless.”

**   This tactic, bringing up the employment statistics of the institution where those with whom she disagrees teach, is something of a habit for Professor Merritt.  (See her response to Anders Walker at St. Louis).

***  Law graduates outside of the big firms are still highly educated, high-income individuals compared to most other individuals in the United States.  That is the benchmark researchers used when they identified the reporting biases in census data that lead to under-reporting of incomes.

 **** The risk of under-reporting income in law may be particularly high because of opportunities for tax evasion for those who run small businesses or have income outside of their salary.


UPDATE (4/14/2015):  I just confirmed with NALP that their starting salary data does not include end of year bonuses.

April 11, 2015 in Guest Blogger: Michael Simkovic, Legal Profession, Of Academic Interest, Professional Advice, Science, Student Advice, Weblogs | Permalink

April 10, 2015

Information overload and response rates (Michael Simkovic)

Did law schools behave unethically by providing employment and earnings information without simultaneously reporting survey response rates?  Or is this standard practice?   

The answer is that not reporting response rates is standard practice in communication with most audiences.  For most users of employment and earnings data, response rates are a technical detail that is not relevant or interesting.  The U.S. Government and other data providers routinely report earnings and employment figures separately from survey response rates.*

Sometimes, too much information can be distracting.**  It’s often best to keep communication simple and focus only on the most important details.

Nonresponse is not the same thing as nonresponse bias.  Law school critics do not seem to understand this distinction.  A problem only arises if the individuals who respond are systematically different from those who do not respond along the dimensions being measured.  Weighting and imputation can often alleviate these problems.  The critics’ claims about the existence, direction, and magnitude of biases in the survey data are unsubstantiated.
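
For readers unfamiliar with the mechanics, here is a minimal sketch of one common weighting approach: reweight respondents so that each known group counts in proportion to its share of the full graduating class rather than its share of respondents.  The group shares, response rates, and salaries below are hypothetical.

```python
# Sketch of post-stratification (inverse-probability-of-response) weighting.
# sector: (share of full class, response rate, mean reported salary); all hypothetical
groups = {
    "large firm": (0.30, 0.90, 160_000),
    "small firm": (0.40, 0.60, 70_000),
    "government": (0.30, 0.50, 55_000),
}

# Unweighted respondent mean: groups with higher response rates are over-represented.
numerator = sum(share * rate * salary for share, rate, salary in groups.values())
respondent_share = sum(share * rate for share, rate, _ in groups.values())
unweighted_mean = numerator / respondent_share

# Weighted mean: each group's respondents are scaled back to the group's
# share of the full class, which removes the over-representation.
weighted_mean = sum(share * salary for share, _, salary in groups.values())

print(f"unweighted respondent mean: ${unweighted_mean:,.0f}")
print(f"nonresponse-adjusted (weighted) mean: ${weighted_mean:,.0f}")
```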

High non-response rates to questions about income are not a sign of something amiss, but rather are normal and expected.  The U.S. Census Bureau routinely finds that questions about income have lower response rates (higher allocation rates) than other questions.

Law school critics claim that law school graduates who do not respond to questions about income are likely to have lower incomes than those who do respond.  This claim is not consistent with the evidence.  To the contrary, high-income individuals often value privacy and are reluctant to share details about their finances.*** 

Another potential problem is “response bias”, in which individuals respond to survey questions in a way that is systematically different from the underlying value being measured.  For example, some individuals may under-report or over-report their incomes.

The best way to determine whether or not we have nonresponse bias or response bias problems is to gather additional information about non-responders and responders.

Researchers have compared income reported to Census surveys with administrative earnings data from the Social Security Administration and Internal Revenue Service.  They find that highly educated, high-income individuals systematically under-report their incomes, while less educated, lower-income individuals over-report (assuming the administrative data are more accurate than the survey data).

Part of the problem seems to be that bonuses are underreported, and bonuses can be substantial.  Another problem seems to be that high-income workers sometimes report their take-home pay (after tax withholding and deductions for benefits) rather than their gross pay.
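
The basic design of these validation studies can be sketched simply: match each survey respondent to his or her administrative earnings record and look at how the reporting gap varies with income.  The matched records below are invented for illustration.

```python
# Sketch of a survey-vs-administrative earnings comparison.
# Each tuple: (survey-reported income, administrative income).  Invented data.
matched_records = [
    (28_000, 25_000),    # lower-income respondent over-reports
    (45_000, 43_000),
    (90_000, 98_000),    # higher-income respondent under-reports
    (150_000, 175_000),  # e.g., bonus omitted or take-home pay reported
]

for survey, admin in matched_records:
    gap = (survey - admin) / admin
    print(f"admin ${admin:>7,}: survey reports {gap:+.0%} relative to the administrative record")
```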

Other studies have also found that response bias and nonresponse bias lead to underestimation of earnings and employment figures.

In other words, there may indeed be biases in law school earnings data, but if there is, it is likely in the opposite direction of the one the law school critics have claimed.

Of course, the presence of such biases in law school data would not necessarily be a problem if the same biases exist in data on employment and earnings for alternatives to law school.  After all, earnings and employment data is only useful when compared to a likely alternative to law school.

As with gross employment data, the critics are yet again claiming that an uncontroversial and nearly universal data reporting practice, regularly used by the United States Government, is somehow scandalous when done by law schools. 

The only thing the law school critics have demonstrated is their unfamiliarity with basic statistical concepts that are central to their views.



*    Reporting earnings and employment estimates without response rates in communication intended for a general audience—and even some fairly technically sophisticated audiences—is standard practice for U.S. government agencies such as the U.S. Census Bureau and the U.S. Department of Labor, Bureau of Labor Statistics.

 **  Information on response rates is available for researchers working with microdata to develop their own estimates, and for those who want to scour the technical and methodological documentation.  But response rates aren’t of much interest to most audiences. 

*** After the JD researchers noted that young law graduates working in large urban markets—presumably a relatively high-income group—were particularly reluctant to respond to the survey. From After the JD III:

“Responses . . . varied by urban and rural or regional status, law school rank, and practice setting.  By Wave 2, in the adjusted sample, the significant difference between respondents and nonrespondents continued to be by geographic areas, meaning those from larger legal markets (i.e. New York City) were less likely to respond to the survey.  By Wave 3, now over 12 years out into practice, nonrespondents and respondents did not seem to differ significantly in these selected characteristics.”

In the first wave of the study, non-respondents were also more likely to be male and black.  All in all, it may be hard to say what the overall direction of any nonresponse bias might be with respect to incomes.  A fairly reasonable assumption might be that the responders and non-responders are reasonably close with respect to income, at least within job categories.

April 10, 2015 in Guest Blogger: Michael Simkovic, Law in Cyberspace, Legal Profession, Of Academic Interest, Professional Advice, Science, Student Advice, Weblogs | Permalink

April 08, 2015

Opportunities, College Majors, and Occupations (Compared to what? continued) (Michael Simkovic)

Opportunity costs and tradeoffs are foundational principles of micro-economics. Comparison between earnings with a law degree and earnings with likely alternatives to law school is the core of The Economic Value of a Law Degree.

In her recent post, Professor Merritt raises interesting questions about whether some students who now go to law school could have had more success elsewhere if they had majored in a STEM (Science Technology Engineering & Math) field rather than humanities or social sciences. 

These questions, however, don’t invalidate our analysis.  A percentage of those who major in STEM fields of course go on to law school, and our data suggest that they also receive a large boost to their earnings compared to a bachelor’s degree.  Some studies suggest that among those who go to law school, the STEM and economics majors earn more than the rest. 

Research on college major selection reveals that many more individuals intend to major in STEM fields than ultimately complete those majors.  STEM/Econ majors who persist have higher standardized test scores than humanities/social science majors at the same institution and also higher scores than those who switch from STEM/Econ to humanities or social science.  Those who switch out of STEM received lower grades in their STEM classes than those who persist.  Compared to Humanities and Social Science majors, the STEM majors spend more time studying, receive lower grades, and take longer to complete their majors. 

In other words, many of the individuals who end up majoring in the humanities and social sciences may have attempted, unsuccessfully, to major in STEM fields. (For a review of the literature, see Risk Based Student Loans and The Knowledge Tax).

In The Economic Value of a Law Degree, Frank McIntyre and I investigated whether the subset of humanities majors who go to law school had unusually high earning potential and found no evidence suggesting this.  The humanities majors who attend law school are about as much above the average humanities major in terms of earning potential as the STEM majors who attend law school are above the average STEM major.

In her recent post, Professor Merritt does not suggest alternatives to law school.  Instead she selectively discusses occupations other than being a lawyer.  These are generally very highly paid and desirable occupations, such as senior managerial roles, and many individuals who pursue such jobs will be unable to obtain them.  In other words, these high paid jobs cited by Professor Merritt are not the likely alternative outcome for most of those who now go to law school if they chose another path.  (Indeed, given the high earnings premium to law school including the 40 percent of graduates who do not practice law, a law degree probably increases the likelihood of obtaining highly paid jobs other than practicing law).

Occupations are outcomes.  Education is a treatment.  Students choose education programs (subject to restrictive admissions policies and challenges of completing different programs), but have more limited control over their ultimate occupation.  Comparing occupations as if they were purely choices would be an error.  Not every MBA who sets out to be a Human Resources Manager will land that job, just as not every law school graduate will become a lawyer at a big firm.  Analysis of nationally representative data from the U.S. Census Bureau using standard statistical techniques from labor economics to consider realistic earnings opportunities--rather than selective focus on the very highest paid occupations tracked by the BLS--suggests that most of the folks who go to law school would be in much less attractive positions if they had stuck with a bachelor’s degree.

Frank McIntyre and I have previously noted the importance of additional research into how the value of a law degree varies by college major, and how the causal effect of different kinds of graduate degrees varies for different sorts of people.

We appreciate Professor Merritt’s interest in these issues and look forward to discussing them in the future when more methodologically rigorous research becomes available.  Professor Merritt raises some interesting ancillary issues about response rates, but discussion of those issues will have to wait for a future post. 


April 8, 2015 in Guest Blogger: Michael Simkovic, Legal Profession, Of Academic Interest, Professional Advice, Science, Student Advice, Weblogs | Permalink

April 07, 2015

Kyle McEntee Attacks The New York Times for Using Data Honestly (Michael Simkovic)

Recently, The New York Times reported on law school and the legal profession based on hard data and peer reviewed research rather than anecdote and innuendo. The New York Times came to the conclusion that anyone looking honestly at the data would naturally come to—law school seems to be a pretty good investment, at least compared to a terminal bachelor’s degree. 

This contradicts the story that Mr. McEntee has been telling for the last few years. Mr. McEntee is not happy with The New York Times.

Mr. McEntee suggests incorrectly that The New York Times reported Georgetown’s median private sector salary without providing information on what percentage of the class or of those employed were working in the private sector.   (Mr. McEntee also seems to be confused about the difference between response rates—the percentage of those surveyed who respond to the survey or to a particular question—and nonresponse bias—whether those who respond to a survey are systematically different along the measured variable from those who do not).

The New York Times wrote: 

Last year, 93.2 percent of the 645 students of the Georgetown Law class of 2013 were employed. Sixty percent of the 2013 graduates were in the private sector with a median starting salary of $160,000.

Deborah Merritt disputes the accuracy of these numbers, suggesting that the 60 percent figure applies to the 93.2 percent of the graduating class who were employed, not to the class as a whole.  That would come to about 56 percent of the class employed in the private sector (0.60 × 93.2 percent ≈ 56 percent), a small enough difference that The New York Times may have simply rounded up.

In any case, it is clear that The New York Times provided information about the percent of graduates working in the private sector.

Mr. McEntee also repeats the odd claim that by reporting employment numbers that appear to be close to consistent with the standard definition of “employment” established by the U.S. Census Bureau and promulgated internationally by the International Labor Organization, The New York Times is somehow misleading its readers.  

To the contrary, it is Mr. McEntee’s non-standard definitions of employment, taken out of context, that are likely to mislead those attempting to compare law school statistics to the employment statistics of the next best alternative.  Mr. McEntee discusses full-time employment statistics for law schools without noting that the full-time employment rate for law graduates is higher than the full-time employment rate for bachelor’s degree holders with similar levels of experience and backgrounds under consistent definitions and survey methods.  And he overlooks the evidence that those who do not practice law still benefit from their law degrees.

Mr. McEntee also inaccurately describes my research with Frank McIntyre, claiming incorrectly that we do not take into account those who graduated after 2008.  Timing Law School was specifically designed to address this limitation of our earlier research.

Timing Law School includes an analysis of two proxies for law school graduates from the American Community Survey: (1) young professional degree holders excluding those working in medical professions, and (2) young lawyers.  This analysis includes individuals who graduated as recently as 2013, and finds no evidence of a decline in recent law graduates’ outcomes relative to those of similar bachelor’s degree holders.  (See also here for a discussion of recent data for the subset of law graduates who work as lawyers).  

[Chart: ACS professional degree holders, excluding medical professions]

[Chart: ACS lawyers]

Timing Law School also simulates the long term effects on the earnings premium of graduating into a recession based on the experiences of those who have graduated into previous recessions.  The differences between graduating into a recession and graduating into an average economy are not very large (there is a large boost for those graduating into a boom, but booms and recessions are not predictable at the time of law school matriculation).

Moreover, in Timing Law School we find that fluctuations in short-term outcomes for recent graduates are not good predictors of outcomes for those who are currently deciding whether or not to enter law school; long term historical data is a better predictor.

The Economic Value of a Law Degree did not include data on those who graduated after 2008 because such data was not available in the Survey of Income and Program Participation.  However, it did include earnings data through 2013, and found no evidence of the earnings premium for law graduates declining in recent years to below its historical average.


Frank and I have noted repeatedly that our analysis compares a law degree to a terminal bachelor’s degree and that we think an important area for future research is careful comparative analysis of alternate graduate degrees, being mindful of selection effects (read The Economic Value of a Law Degree or for the most recent example, see our post from two days ago).  While a casual (i.e., not causal) examination of raw data suggests that a law degree likely compares reasonably well to most alternatives other than a medical degree, we’ve noted that it’s possible that more rigorous analysis will reveal that another graduate degree is a better option for some prospective law students, especially when subjective preferences are taken into account along with financial considerations. 



Mr. McEntee claims incorrectly that when it comes to other graduate degrees, “McIntyre and Simkovic don’t know and don’t care; they’re convinced that the value of a law degree is as immutable as the laws of nature.” 

Mr. McEntee insists that law graduates, even at the higher ranked schools, will find it challenging to repay their student loans.  However, data from After the JD shows that law school graduates from the class of 2000/2001 have been paying down their loans rapidly. 


What about those who entered repayment more recently, when tuition was higher and job prospects less plentiful? 

Data from the U.S. Department of Education shows that law students, even at low ranked law schools, remain much less likely to default than most student borrowers.  This is true even though law students typically graduate with higher debt levels.



Indeed, The Economic Value of a Law Degree suggests that law graduates generally have higher incomes after taxes and after paying additional debt service than they likely would have had with a terminal bachelor’s degree, even before taking into account debt forgiveness available under Income Based Repayment plans. 

Based in part on our research, private student lenders have noticed how rarely law graduates fail to repay their loans.  These lenders offer refinancing at substantially lower rates than those charged by the federal government, further reducing the costs of legal education for many graduates (while earning a profit in the process). 

Mr. McEntee’s problem is not that The New York Times got the facts wrong.  His problem is that The New York Times got too many of the facts right.  Mr. McEntee simply dislikes the facts. 

No matter what new information becomes available, Mr. McEntee insists that law school is financially disastrous.  This is curious for a public figure who claims that his goal is providing prospective law students more accurate information about law school.  

April 7, 2015 in Guest Blogger: Michael Simkovic, Legal Profession, Of Academic Interest, Professional Advice, Science, Student Advice, Weblogs | Permalink

April 01, 2015

What “Employment” and “Unemployment” Mean (Michael Simkovic)

Recently, two criticisms have been leveled against law schools.  The first is an economic critique—law school is not worth it financially compared to a terminal bachelor’s degree.  This critique is incorrect for the overwhelming majority of law school graduates.

The second is a moral critique—that law schools behaved unethically or even committed fraud (see here, here, and here) by presenting their employment statistics in a misleading way.  (While at least one of the 200+ American Bar Association (ABA) approved law schools misreported LSAT scores and GPAs of incoming students, and a former career services employee at another alleges specific misreporting of unemployment data at that law school, I am focusing here not on the outliers, but on the critique against all law schools generally).

The moral critique against law schools comes down to this:  The law schools used the same standard method of reporting data as the U.S. Government. 

According to the critics’ line of reasoning, “employment” means only full-time permanent work as a lawyer.  Anything else should count as either “unemployment” or some special category of pseudo-unemployment (i.e., underemployment).  (This is apparently based on an incorrect belief that law school only benefits the subset of graduates who practice law).  

Employment and unemployment statistics are not meaningful in a vacuum.  They only become useful when they can be compared across time, for different groups, or for a different set of choices.  For example, prospective law students might want to know that law school graduates are generally less likely to be unemployed or disabled than similar bachelor’s degree holders.  (Frank McIntyre and I combine the unemployment and disability rates whenever possible because of research showing that disability is often a mask for unemployment, although we’d generally get similar results for relative rates if we just used unemployment).

To avoid confusion and ensure that data are comparable, the standard definitions used by the U.S. Government should be used when reporting employment statistics, unless there is an indication that non-standard definitions are being used.  

The standard government definitions of “employment” and “unemployment” are the way we all use these words in ordinary speech when we say things like “the unemployment rate went down this year.”  These are not obscure definitions.  Googling “unemployment definition” and checking the first few results—Investopedia, Wikipedia, and the U.S. Bureau of Labor Statistics (BLS) website—will get you to the right answer.

So how does the United States government define “employment”?

The most commonly reported and cited official government employment statistics include individuals as “employed” whether such individuals are employed full-time or part-time, whether in permanent or nonpermanent positions, whether in jobs that do or do not require the level of education they have obtained.*

In other words, the U.S. Government counts individuals as employed even if they are employed in part-time, temporary jobs that do not require their level of education.  Indeed, individuals count as employed even if they are self-employed or worked without pay in a family-owned business.

When the government reports education-level-specific employment statistics,** it uses the same definitions and does not restrict employment to those who are employed in jobs that require their education level.  Employment includes any employment, whether full-time or part-time, whether temporary or permanent, whether in a job that requires a given level of education or not. 

What about the standard definition of “unemployment”? 

Unemployment is not the absence of employment.  Instead, there are three categories—employed, not-in-labor-force, and unemployed.  An individual only counts as “unemployed” if he or she “had no employment during the reference week”, was “available for work, except for temporary illness” and recently “made specific efforts to find employment.”

Those who are not working and are not actively seeking work for whatever reason—for example, caring for dependents, disability, pursuing additional education—are not counted as part of the labor force.  Unemployed persons as defined by CPS are used to calculate the widely cited “unemployment rate.” The unemployment rate is defined as unemployed persons as a percent of the labor force--in other words, excluding those who are neither working nor seeking work.***
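
Translating those definitions into a short sketch makes the arithmetic explicit: each person is classified as employed, unemployed, or not in the labor force; the employment-population ratio divides by everyone, while the unemployment rate divides only by the labor force.  The individuals below are hypothetical.

```python
# Sketch of the CPS-style classification and the two headline ratios.
# Each record: (description, worked at all or was temporarily absent from a job,
#               actively sought work and was available).  Hypothetical people.
people = [
    ("full-time associate", True, False),
    ("part-time tutor", True, False),          # part-time work still counts as employed
    ("unpaid family business", True, False),   # 15+ unpaid hours in a family business counts
    ("active job seeker", False, True),        # unemployed
    ("full-time LLM student", False, False),   # not in labor force
    ("caring for dependents", False, False),   # not in labor force
]

employed = sum(1 for _, working, _ in people if working)
unemployed = sum(1 for _, working, seeking in people if not working and seeking)
labor_force = employed + unemployed

print(f"employment-population ratio: {employed / len(people):.0%}")  # akin to schools' 'percent employed'
print(f"unemployment rate: {unemployed / labor_force:.0%}")          # unemployed / labor force only
```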

Some law school critics have claimed that anyone who fails to respond to a survey about their employment status should be assumed to be unemployed.  The Census and BLS disagree, and instead weight the data to account for non-respondents. 

In addition to top-level information about employment status, some data sources such as the CPS may also include fields with more detailed information about full- or part-time work-status, industry or sector, and occupation.  Law schools have also historically provided a detailed breakdown of employment categories shortly after graduation in the ABA-LSAC Official Guide To ABA-Approved Law Schools.  In the last few years, law schools have provided even more detail in ABA-required disclosures. (We’ve previously noted some of the problems with focusing on employment outcomes shortly after graduation rather than long-term value added; The ABA's new employment data protocols have additional problems with their definition of "unemployed" discussed below ****).  The National Association for Law Placement (NALP) also provides high level data and a more detailed breakdown. 

The inclusion or non-inclusion of more detailed information does not alter the meaning of top-level information about employment status: the meaning of “employed” is established and well understood by users of employment data, and commonly used and cited employment statistics have been reported by the BLS under that definition from 1948 through the present.

Indeed, the BLS has noted for decades in its Occupational Outlook Handbook that many law school graduates do not work as lawyers.  Law schools and bar examiners publish bar passage rate statistics which clearly show that many recent law school graduates cannot legally be working as lawyers (unless everyone who failed a bar exam in one state passed a bar exam in another).

Comparing apples to apples using standard definitions reveals that law school graduates are doing relatively well compared to similar bachelor’s degree holders.   By contrast, critics of law schools and plaintiffs’ lawyers have used non-standard definitions and compared apples to oranges.  

It is not surprising that the courts have dismissed the lawsuits against law schools.  If only the New York Times and the Wall Street Journal were as fair and judicious.


*     The primary source of labor force statistics for the population of the United States is the Current Population Survey (CPS), sponsored jointly by the United States Department of Labor, Bureau of Labor Statistics and the United States Census Bureau (Census).  CPS is the source of numerous high-profile economic statistics, including the national unemployment rate. CPS defines "Employed persons"* to broadly include anyone who has done any paid work during the week when it is measured, who worked for themselves or a family member, or who was temporarily absent from work. 

“Employed persons”* as defined by CPS are used to calculate the “Employment-population ratio”.  The Employment Population Ratio resembles the “Percent Employed” statistics reported by law schools.  

“Employed Persons” includes:

  • Persons 16 years and over in the civilian noninstitutional population who, during the reference week,
    • (a) did any work at all (at least 1 hour) as paid employees; worked in their own business, profession, or on their own farm; or worked 15 hours or more as unpaid workers in an enterprise operated by a member of the family;
    • (b) all those who were not working but who had jobs or businesses from which they were temporarily absent because of vacation, illness, bad weather, childcare problems, maternity or paternity leave, labor-management dispute, job training, or other family or personal reasons, whether or not they were paid for the time off or were seeking other jobs. . . . 

(emphasis added)


**    The BLS also reports Employment Population Ratios for specific education levels and age groups, such as bachelor’s degree holders and above, ages 25 to 34.  These statistics are also reported by the United States Department of Education, National Center for Education Statistics.  (To the extent economists have tried to define and measure “underemployment” (see here and here), it appears to be as common or more common among bachelor’s degree holders as among similar law degree holders).


***   The “labor force” as defined by CPS consists only of persons who are either “employed” or “unemployed” under CPS definitions.


****  The ABA’s new data protocol counts individuals as “Unemployed” who would instead be considered “Not-in-labor-force” by the U.S. government.  The ABA subcategory, “Unemployed—Seeking” is probably the closest to the standard definition of unemployment.  This misalignment between ABA definitions and standard government definitions of unemployment could lead individuals comparing ABA data to standard and widely used government employment data to erroneously conclude that unemployment for law school graduates is higher relative to other groups than it really is.
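
A small sketch of how that mismatch can inflate an apparent unemployment rate.  The category labels and counts below are hypothetical and are meant only to illustrate the arithmetic, not to reproduce the ABA protocol’s actual fields.

```python
# Hypothetical graduate counts illustrating the definitional mismatch.
counts = {
    "employed": 400,
    "unemployed_seeking": 40,       # closest to the government definition of "unemployed"
    "not_working_not_seeking": 30,  # e.g., pursuing another degree, caring for dependents
}

# ABA-style rate: everyone who is not employed is treated as unemployed,
# and the denominator is all graduates.
aba_style = (counts["unemployed_seeking"] + counts["not_working_not_seeking"]) / sum(counts.values())

# CPS-style rate: only active job seekers count as unemployed, and the
# denominator is the labor force (employed + unemployed), not everyone.
cps_style = counts["unemployed_seeking"] / (counts["employed"] + counts["unemployed_seeking"])

print(f"ABA-style 'unemployment': {aba_style:.1%}")     # 14.9% in this example
print(f"CPS-style unemployment rate: {cps_style:.1%}")  # 9.1% in this example
```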

April 1, 2015 in Guest Blogger: Michael Simkovic, Legal Profession, Of Academic Interest, Professional Advice, Science, Student Advice, Weblogs | Permalink