October 30, 2013
Present Value and Cash Flows
Several critics of The Economic Value of a Law Degree have made mathematical errors or misunderstood the contents of the study. One example relates to a fundamental financial concept: net present value. Net present value is the value today of cash flows or payments that will be given or received in the future.
The psychological and financial costs to the recipient of delay in payment are already incorporated into present value—present value is the equivalent of an immediate lump sum payment with no delay.
The difference between present value and nominal future value can be large. For example, the value of a single $1,000,000 payment forty years from now is just over $97,200 today (assuming a 6 percent nominal discount rate). In other words, receiving $1,000,000 in forty years is financially and psychologically the same as receiving $97,200 today.
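The discounting arithmetic is easy to verify. A minimal Python sketch of the standard present value formula (the function name is mine, for illustration):

```python
# Present value of a single future payment: PV = FV / (1 + r)**n
def present_value(future_value, rate, years):
    """Discount a future payment back to today's dollars."""
    return future_value / (1 + rate) ** years

# $1,000,000 received 40 years from now, at a 6% nominal discount rate
pv = present_value(1_000_000, 0.06, 40)
print(round(pv))  # 97222 -- "just over $97,200" today
```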
In The Economic Value of a Law Degree, Frank McIntyre and I describe the law degree earnings premium—the difference in earnings between law degree holders and similar bachelor’s degree holders—on both an annual basis and, for the lifetime value, in present value terms. In other words, we show what a lifetime of higher earnings is worth immediately, as of the start of law school, not spread out over the course of a lifetime.
The pre-tax, pre-tuition present value of a lifetime of higher earnings is approximately $1,000,000 at the mean and $600,000 at the median. This includes the opportunity costs of lower earnings while in school, and the cost of interest payments on student loans.
The law graduate will not get to keep the full present value. Approximately 30 percent will go to the government as income and payroll tax revenue, and some of the remainder will go to the law school to pay for the cost of the legal education.
One critic, Steven Harper, took an estimate of the after-tax, after-tuition net present value at the median ($330,000) and erroneously claimed that this amount of money would be spread out over a 40-year career. Dividing by 40 years and again by 12 months, Harper claimed that the law graduate would receive “at most a lifetime average of $687 a month” (or about $8,250 per year).
In other words, Harper conflated present value with future value and miscalculated the private return on a legal education. If cash flows were level during the 40 years after law school, it would take more than $25,000 per year in after-tax, after-debt-service nominal dollars to equal a present value of $330,000 as of the start of law school. In 2012 inflation-adjusted dollars, it would require about $16,000 per year. Harper is off by a factor of about two or three.
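The annuity arithmetic can be reproduced with the standard level-payment formula. The 7 percent rate below is an illustrative assumption, not the paper's exact figure (the paper discounts at the weighted-average interest rate on law school debt, so the precise payment depends on that rate):

```python
# Level annual payment whose present value equals a target lump sum:
# PMT = PV * r / (1 - (1 + r)**-n)   (standard annuity formula)
def level_payment(pv, rate, years):
    return pv * rate / (1 - (1 + rate) ** -years)

# Spread a $330,000 present value as level nominal payments over 40 years.
# The 7% discount rate is an assumed stand-in for the paper's loan rate.
pmt = level_payment(330_000, 0.07, 40)
print(round(pmt))  # on the order of $25,000 per year -- roughly 3x Harper's figure
```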
In practice, cash flows will not be level: they will be lower in the initial years and rise through middle age. The present value calculation already incorporates the cost of these lower early cash flows. Students concerned about cash flow in the initial years may use debt repayment options with lower initial payments; the costs of these programs are also incorporated into our present value calculations.
October 29, 2013
The Economic Value of a Law Degree: Means, Medians, Modes (Michael Simkovic)
The three averages—means, medians, and modes—are basic mathematical concepts. Nevertheless, they seem to have generated an inordinate amount of confusion among some critics of the Economic Value of a Law Degree (see here, here, and here).
Some of the critics have emphasized modes and medians while downplaying the importance of means. Steven Harper, for example, has claimed that the mean is a “meaningless” statistic and we should instead focus on the medians and modes while ignoring the mean.
To understand his error, imagine two lecture halls, each with 100 seats. Underneath each of those seats is a suitcase full of cash. The individuals sitting in those seats will get to keep whatever cash they find when they open the suitcase.
In Lecture Hall A, every suitcase contains $600,000. $600,000 is the mean, median, and mode value.
In Lecture Hall B, 60 of the suitcases contain $600,000, but the remaining 40 suitcases each contain $1,600,000. The median and mode are exactly the same as in Lecture Hall A: $600,000. But the mean is much higher in Lecture Hall B, at $1,000,000 instead of $600,000.
If you didn’t know how much money would be in your suitcase, but you could choose between sitting in Lecture Hall A and Lecture Hall B, which room would you choose?
You would be wise to choose Lecture Hall B. But the only reason to choose Lecture Hall B is that the mean (average) is higher there. The median and mode in both lecture halls are identical.
Now imagine a slightly different fact pattern. In Lecture Hall A there are three suitcases each containing $1.6 million, while the remaining 97 suitcases contain amounts that are close to $600,000 (i.e., a range from $599,950 to $600,050), with none of these 97 suitcases containing the exact same dollar value. The mode value in Lecture Hall A is $1.6 million, while the median is $600,000 and the mean is $630,000.
Lecture Hall B is the same as in the previous fact pattern—60 suitcases contain $600,000 while 40 suitcases contain $1.6 million. The mode value in Lecture Hall B is $600,000, while the mean is $1,000,000.
In other words, the mode is higher in Lecture Hall A, but the mean is higher in Lecture Hall B. The medians are identical.
Which room would you choose to sit in?
Once again, you would be wise to choose Lecture Hall B. This suggests that you believe that means (averages) are more important than modes.
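The suitcase comparison is easy to reproduce with Python's standard statistics module:

```python
from statistics import mean, median, mode

hall_a = [600_000] * 100                          # every suitcase: $600,000
hall_b = [600_000] * 60 + [1_600_000] * 40        # 40 suitcases hold $1.6M

# Hall A: mean, median, and mode are all $600,000
# Hall B: median and mode are still $600,000, but the mean is $1,000,000
print(mean(hall_a), median(hall_a), mode(hall_a))
print(mean(hall_b), median(hall_b), mode(hall_b))
```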
The money at the top (or the bottom) matters. Means provide useful information that is not available from medians alone, and that is not reflected in modes. That’s why we provide both means and medians, as well as 75th percentile and 25th percentile values in Economic Value of a Law Degree.
(Posted by Michael Simkovic)
August 05, 2013
Sample size, standard errors, and confidence intervals
At Law School Café (reposted on TaxProf Blog), Deborah Merritt asks several questions about The Economic Value of a Law Degree related to sample size and uncertainty. We thank Professor Merritt for her comments and hope they helped clarify the annual results for those who were having trouble interpreting Figures 5 and 6. In the paper we are careful to display the large confidence intervals for Figure 6, which looks at young law graduates over time, and we avoid drawing any strong conclusions from them. Also, as we'll discuss below, one can readily reject the hypothesis that Figure 5's ups and downs are just noise.
This post includes brief discussions of some of the interesting points raised.
The estimates in the paper don't depend on cyclical law school premia
We want to be clear that our underlying results do not rely on cyclicality. SIPP annual estimates do not show a recent post-recession decline in the overall law graduate earnings premium that needs to be explained. The recent decline in earnings for law graduates in our sample is matched by a decline in earnings for bachelor’s degree holders, and the law graduates retained their relative advantage. But as one can see in Figure 6, the small sample for young lawyers makes it hard to be sure about recent outcomes for that group in isolation. Whether the premium cycles up and down or stays flat, every law graduate will see many such transitions over a lifetime, and they will average out over time.
Is our overall sample size big enough?
Yes, our sample size is more than sufficient to support our conclusions on lifetime earnings. The standard errors in Tables 1 to 4 reflect the degree of uncertainty about our estimates, which pool data over many years to increase precision. The standard errors are very small relative to our law degree earnings premium coefficient estimates, and our results are statistically significant well beyond conventional levels of statistical significance. Deborah Merritt's discussion is focused specifically on what we can say about how the premium has changed over time (Figures 5 and 6). As one can see in Figure 5, any changes in that premium have been fairly small relative to its size.
How strong is the specific evidence from SIPP for cyclicality of earnings premiums?
Consistent with cyclicality, there is evidence of fluctuations of the earnings premium (measured on a percentage basis) in the 1996-2011 period. Prompted by Deborah Merritt's concerns, we went ahead and added the joint test statistics to the figures in question. We can reject the hypothesis that the law degree earnings premium was the same in all years from 1996-2011 (p<0.001). In other words, fluctuations in the point estimate in Figure 5 are not all simply random noise. Further, we don’t see evidence of a notable long term upward or downward trend. Indeed, despite the occasional fluctuations we think the most noticeable feature of the law school premium recently is its stability.
Several previous studies have found evidence of fluctuations in law degree holder earnings premiums and starting salaries. We cite many of these studies in the paper. It would be a bad idea to extrapolate gloom or boom from a downward or upward trend in earnings using the last few years of data. Trends, even when present, can stop or reverse themselves through dynamic labor market responses or exogenous shocks. A sustained 85 percent decline in the lifetime earnings premium would be required for our main result--that a law degree is a value-creating investment for most law graduates--to no longer hold true. Such a steep decline seems unlikely.
Though not crucial to our inquiry into lifetime earnings, it would be interesting to know if the premium rises and falls with the business cycle. Prompted by the interest in this question, we did some exploratory analysis of data from the much larger, but less precise, American Community Survey, which also seems to be consistent with fairly stable earnings premiums for recent cohorts of law graduates. More research on the question will be useful, especially as the passage of time provides more data.
How should we understand confidence intervals and point estimates?
Professor Merritt’s description of confidence intervals may seem to suggest that the true population parameter is as likely to fall close to the point estimate as near the outer edges (the top or bottom) of the confidence interval.
This interpretation would be incorrect. The probability density is highest at the center of the confidence interval, near the point estimate, and lowest at the outer edges of the confidence interval. The point estimate is the best estimate of the population parameter.
Professor Merritt’s description also doesn't discuss the relationship between different point estimates, looking instead only at the confidence interval for each point estimate individually. In a nutshell, two estimates may have overlapping confidence intervals and still be statistically separable.
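A small numeric illustration (the numbers are invented): two estimates whose 95 percent confidence intervals overlap can still differ significantly, because the standard error of the difference is smaller than the sum of the individual interval half-widths:

```python
import math

# Two hypothetical point estimates with standard errors
x1, se1 = 10.0, 1.5
x2, se2 = 14.5, 1.5

# 95% confidence intervals
ci1 = (x1 - 1.96 * se1, x1 + 1.96 * se1)  # (7.06, 12.94)
ci2 = (x2 - 1.96 * se2, x2 + 1.96 * se2)  # (11.56, 17.44)
overlap = ci1[1] > ci2[0]                 # True: the intervals overlap

# The difference has standard error sqrt(se1**2 + se2**2), not se1 + se2,
# so the z-statistic for the difference can still exceed 1.96.
z = (x2 - x1) / math.sqrt(se1 ** 2 + se2 ** 2)
print(overlap, round(z, 2))  # True 2.12 -- overlapping CIs, yet a significant difference
```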
How strong is the evidence for a bimodal distribution of earnings?
We don’t think the evidence for a bimodal distribution of lifetime earnings for law graduates is very compelling. Recent full time starting salaries from NALP are not the same thing as lifetime earnings because:
- Full-time salaries exclude those who are working less than full time
- Salaries exclude bonuses, which may be more variable than base salaries
- Starting salaries tend to be fairly lockstep compared to later earnings
- After the JD II suggests faster percentage growth of earnings for graduates of lower-ranked schools, who have lower average initial earnings, implying convergence of earnings over time
Because earnings across people are approximately log-normally distributed, it is typical to see a few people making far more than most.
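For a log-normal distribution the gap between mean and median follows directly from the standard formulas; a quick check (the $60,000 median and the sigma of 0.8 are arbitrary illustrative numbers):

```python
import math

# Log-normal with log-scale parameters mu and sigma:
#   median = exp(mu)        mean = exp(mu + sigma**2 / 2)
mu, sigma = math.log(60_000), 0.8   # illustrative: median earnings of $60,000
median_earnings = math.exp(mu)
mean_earnings = math.exp(mu + sigma ** 2 / 2)

# The thick right tail pulls the mean well above the median
print(round(median_earnings), round(mean_earnings))  # 60000 vs. roughly 82600
```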
Would bimodality cast doubt on the results of our analysis?
Bimodality, even if present, does not really call for a change to our approach. As the sample gets larger, the sampling distribution is asymptotically normal, so standard errors on our key results should be consistent. Regression techniques are consistent regardless of the underlying distribution, but for those concerned about a thick right tail, we'd suggest concentrating on the results in Tables 1 and 2 that use a log transformation, which reduces such concerns. Bimodality in the earnings distribution would also not change how we did our quantile regressions: quantile regressions estimate the earnings premium at different points in the distribution independent of the shape of the overall distribution.
August 01, 2013
The Economic Value of a Law Degree: Correcting Misconceptions
- Ability sorting and selection
- Occupation and the versatile law degree
- Long term versus short term
- The broader labor market
- Present value and opportunity costs
In The Economic Value of a Law Degree, Frank McIntyre and I estimate the increase in annual and lifetime earnings that is attributable to a law degree. To do so, we compare those with law degrees to similar individuals with less education.
Because those who matriculate at law schools may be different from the average bachelor’s degree holder, we compare law degree holders to a group of similar bachelor’s degree holders.
There is a misperception—apparently started by Brian Tamanaha (here and here) and repeated by others—that we simply compare law degree holders to all bachelor’s degree holders, or that we compare the 25th percentile of law degree holders to the 25th percentile of all bachelor’s degree holders. This is not true.
At a high level, we essentially created two comparison groups: all bachelor’s degree holders, and a subset of bachelor’s degree holders who look like the law degree holders with respect to many observable characteristics that predict earnings—demographics, academic achievement, parental socio-economic status, and measures of motivation and values. It is this second group of bachelor’s degree holders that we compare to the law degree holders.
To check for ability sorting and selection, we use statistical techniques including:
- Ordinary Least Squares (OLS) regression (at the mean)
- Quantile regression at the 25th, 50th, and 75th percentiles
- Propensity score matching (for our lifetime earnings premium estimates)
- Heckman Selection (in an appendix)
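To give a flavor of why matching matters, here is a deliberately tiny, hypothetical sketch of nearest-neighbor matching on a single covariate. This is not the paper's specification (which uses propensity scores over many covariates); it only illustrates how comparing to similar bachelor's degree holders removes an ability confound:

```python
# Hypothetical data: earnings = 50 + 10*ability, plus a true 30-point
# effect of the degree. Degree holders have higher ability on average.
controls = [(a, 50 + 10 * a) for a in [1, 2, 3, 4, 5, 6]]       # (ability, earnings)
treated  = [(a, 50 + 10 * a + 30) for a in [4, 5, 6, 7]]

# Naive comparison: difference in group means (confounded by ability)
naive = (sum(e for _, e in treated) / len(treated)
         - sum(e for _, e in controls) / len(controls))

# Matched comparison: pair each degree holder with the most similar control
def matched_effect(treated, controls):
    gaps = []
    for a_t, e_t in treated:
        a_c, e_c = min(controls, key=lambda c: abs(c[0] - a_t))  # nearest neighbor
        gaps.append(e_t - e_c)
    return sum(gaps) / len(gaps)

print(naive, matched_effect(treated, controls))  # 50.0 vs. 32.5 (true effect: 30)
```

The naive comparison badly overstates the degree's effect because degree holders had higher ability to begin with; matching on the pretreatment covariate recovers something close to the true effect.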
The observable characteristics (pretreatment covariates) that we focus on as controls in the Survey of Income and Program Participation include:
- Number of years of high school coursework in foreign language
- Type of high school (private vs. public)
- College preparatory classes in high school
- College major (divided into five categories based on the International Standard Classification of Education)
These controls bring down our earnings premium estimates by around 10 percent at the mean and around 8 percent at the 25th percentile.
In other words, the data and statistical techniques that we use suggest that the kinds of people who go to law school would probably earn about 10 percent more than the average bachelor’s degree holder even if they hadn’t gone to law school. But the law school earnings premium is much greater than that, and the earnings premiums we report are after controls for ability sorting.
We do an additional check for ability sorting using another data set called the National Education Longitudinal Study (NELS). NELS follows a cohort from 8th grade through their late 20s, and includes additional pretreatment control variables that are not available in SIPP.
Controls that are available in NELS include:
- college quality
- standardized test scores
- college GPA and major
- motivation and interest in careers
- subjective expectations about future income
- parental socioeconomic status
The results of the analysis using NELS are very similar to the results of the analysis in SIPP. The bachelor’s degree holders who go on to law school would probably earn about 10 percent more than the average bachelor’s degree holder, even if they had not gone to law school.
Because this level of ability sorting was already taken into account in our SIPP analysis, we do not believe that any further adjustment to our SIPP results would be justified based on the analysis in NELS. Because different measures of ability that predict earnings are often correlated with each other, adding more and more control variables that measure essentially the same thing often won’t substantially change the estimate of the earnings premium.
Thus we found very little to suggest that law graduates’ above-average undergraduate academic performance translates into higher earnings beyond what we had already accounted for. This may be surprising for two reasons. First, law degree holders’ undergraduate academic performance is better, but not dramatically better, than that of the typical bachelor’s degree holder. Second, that above-average performance does not actually translate into much of a boost to earnings. Higher undergraduate grades, for example, do not show a strong correlation with later earnings. We find that this is especially true in the majors preferred by law students: the humanities and social sciences.
Eric Rasmusen has an interesting blog post qualitatively describing the "typical" law student.
There are several other issues related to selection on unobservables and offsetting biases that are worth mentioning.
Annual vs. Lifetime and regression to the median:
Annual earnings tend to be much more variable than lifetime earnings. For example, a job loss or transition can cause a sharp drop in earnings in one year, but tends to be resolved by the next year. People going through such temporary rough spots show up low in the earnings distribution. So the 25th percentile of single-year earnings is much lower than the 25th percentile of average lifetime earnings.
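A small simulation (all parameters invented) shows why the 25th percentile of annual earnings sits below the 25th percentile of lifetime-average earnings: transitory shocks widen the annual distribution but average out over a career:

```python
import random
from statistics import mean, quantiles

random.seed(42)

# 1,000 hypothetical careers: a permanent earnings level plus noisy
# year-to-year (transitory) shocks, such as a bad year after a job loss.
careers = []
for _ in range(1000):
    permanent = random.uniform(60_000, 140_000)
    careers.append([permanent + random.gauss(0, 25_000) for _ in range(30)])

annual_obs   = [year for career in careers for year in career]
lifetime_avg = [mean(career) for career in careers]

q25_annual   = quantiles(annual_obs, n=4)[0]
q25_lifetime = quantiles(lifetime_avg, n=4)[0]
print(round(q25_annual), round(q25_lifetime))  # the annual 25th percentile is lower
```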
When reporting earnings, people tend not to report periods of unemployment and the like. The SIPP returns to interview people every four months, so this is not as much of a problem as it could be, but it means that low-income people tend to over-report their income relative to those higher up. This will typically bias down estimates of how much more one group earns than another.
People tend to pick the career they will succeed at. Thus those who are bad at some jobs but good at jobs available to law degree holders will gravitate toward law, and had they not gone into law they might have done very badly. This has several effects: we will tend to underestimate the value of law school for those who choose law because it is their particular advantage, while at the same time we may overestimate it for those who do not choose law. It is hard to know whether this effect is large; it is very difficult to pin down statistically.
The 25th Percentile:
When we look at the 25th-percentile-earnings lawyer, we use quantile regression to make these ability adjustments to the data before comparing them to the 25th-percentile-earnings bachelor’s degree holder; thus we are correcting for ability as much as possible. Though not reported in the paper, we find the ability gap (which we adjust for in our lifetime value estimates) between bachelor’s and law graduates is about eight percentage points at the 25th percentile. This is completely in line with what we found at the mean, both in the SIPP and in our more refined estimates from the NELS survey. It is possible that the gap is larger (or smaller) at the bottom than our data show, so that would be a great place for future research, but we think this is the best currently available estimate, especially given issues (1) and (2) biasing the premium down.
Occupation and the versatile law degree
A very large fraction of law degree holders do not end up practicing law. For some, this is a disappointment and for others it is a preferred outcome. We include all these people in our estimates of the value of a law degree. That is because the question we are interested in answering is the value of the law degree, not the earnings of the subset of individuals who practice law. Controlling for occupation would have been methodologically improper because occupation is an outcome variable, not a pretreatment covariate.
As MIT labor economist Joshua Angrist and LSE labor economist Jörn-Steffen Pischke explain in Mostly Harmless Econometrics:
Some variables are bad controls and should not be included in a regression model even when their inclusion might be expected to change the short regression coefficients. Bad controls are variables that are themselves outcome variables . . . That is, bad controls might just as well be dependent variables too. The essence of the bad control problem is a version of selection bias . . .
To illustrate, suppose we are interested in the effects of a college degree on earnings and that people can work in one of two occupations, white collar and blue collar. A college degree clearly opens the door to higher-paying white collar jobs. Should occupation therefore be seen as an omitted variable in a regression of wages on schooling? After all, occupation is highly correlated with both education and pay. Perhaps it’s best to look at the effect of college on wages for those within an occupation, say white collar only.
The problem with this argument is that once we acknowledge the fact that college affects occupation, comparisons of wages by college degree status within an occupation are no longer apples-to-apples, even if college degree completion is randomly assigned . . . [because of selection bias].
We would do better to control only for variables that are not themselves caused by education.
In a recent article, David Neumark and co-authors also include a helpful explanation of the problems with controlling for occupation and “underemployment”, or relying on BLS occupational earnings projections when trying to measure education earnings premiums:
“For nearly every occupational grouping, wage returns are higher for more highly-educated workers even if the BLS says such high levels of education are not necessary. For example . . . for management occupations, the estimated coefficients for Master’s, professional, and doctoral degrees are all above the estimated coefficient for a Bachelor’s degree, which is the BLS required level. . . .
If the BLS numbers are correct, we might expect to see higher unemployment and greater underemployment of more highly-educated workers in the United States. As noted earlier, we do not find evidence of this kind of underemployment based on earnings data. Similarly, labor force participation rates are higher and unemployment rates are lower for more highly educated workers.”
Even economists at the BLS emphasize that educational earnings premiums, and not BLS employment projections, are the key measure of the value of education:
The general problem with addressing the question whether the U.S. labor market will have a shortage of workers in specific occupations over the next 10 years is the difficulty of projecting, for each detailed occupation, the dynamic labor market responses to shortage conditions. . . .
Since the late 1970s, average premiums paid by the labor markets to those with higher levels of education have increased.
It is the growing distance, on average, between those with more education, compared with those with less, that speaks to a general preference on the part of employers to hire those with skills associated with higher levels of education.
Long term versus short term
We value a law degree based on the present value of a lifetime of increased earnings. The valuation literature is unambiguous about the correct time period to value the cash flows generated by an asset: the entire life of the asset. The delay and higher risks of cash flows in the distant future are already taken into account through the application of a discount rate and the present value formula.
Our approach, using the typical span of a working life and discounting back to present value, is the correct one for the majority of potential law students who obtain their degrees relatively early, in their 20s or 30s. A much shorter time period would only be appropriate for individuals who complete their law degrees later in life, closer to retirement, or who anticipated working only a few years during their lifetimes.
In a recent post, Brian Tamanaha suggests that the difference between his approach and ours is that he focused on the short-term value of a law degree while we focused on the long-term value of a law degree.
Michael Froomkin wonders if law degree holders will experience a cash crunch early in their careers when their incomes are lower and debt levels are higher.
It is unlikely that a debt financed law degree would create a cash crunch. Young bachelor’s degree holders also have lower incomes early in their careers. The earnings premium associated with the law degree will typically exceed required debt service payments on law school debt, particularly in light of the availability of extended repayment, deferment, forbearance, and income based repayment plans. Graduate degrees can readily be financed entirely with federal student loans.
The costs of delayed repayment (i.e., higher interest) are already taken into account in our present value calculation, because we discount back at the weighted average interest rate on law school debt. We’re pretty conservative in this respect: we ignore the (likely) possibility that students will prepay their highest interest rate debts first. Indeed, After the JD II found evidence of rapid pre-payment of law school debt.
Our results suggest that most young law degree holders, most of the time, likely have more positive cash flow—even after debt service payments—than they would have had with only a bachelor’s degree.
Because the economic value of a given level of education can generally be maximized by completing that level of education early—and thereby maximizing the number of years of subsequent work with the benefit of higher wages from the education earnings premium—delaying graduate school to try to time the market is a high-cost strategy. And timing the market three or four years in advance is difficult.
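The cost of delay can be illustrated with hypothetical numbers (the $30,000 annual premium and 6 percent discount rate are assumptions, not figures from the paper), holding the retirement date fixed:

```python
# Present value of an assumed $30,000 annual earnings premium received
# from year `first` through year `last`, discounted at an assumed 6%.
def premium_pv(first, last, premium=30_000, r=0.06):
    return sum(premium / (1 + r) ** t for t in range(first, last + 1))

pv_now     = premium_pv(1, 40)  # finish the degree now: 40 years of premium
pv_delayed = premium_pv(4, 40)  # wait three years: only 37 years of premium
print(round(pv_now - pv_delayed))  # roughly $80,000 lost, in today's dollars
```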
We recommend long-term historical data on lifetime earnings premiums as a guide rather than short-term fluctuations in starting salaries. Indeed, starting salaries tell us very little—earnings premiums are what matters, and there is no evidence that premiums have compressed, even for the young.
In a supplemental exploratory analysis using ACS data, we find some evidence that post-2008 cohorts of individuals who are probably young law degree holders (professional degree holders excluding those in medical practice) continue to have the same earnings advantage over bachelor’s degree holders as they had prior to 2008.
Ben Barros has done some interesting work comparing outcomes 9 months after graduation to subsequent outcomes for recent graduates of Widener Law School.
The broader labor market
Tamanaha argues that law continues to be depressed while the rest of the labor market has recovered. The data does not support this view. As can be seen from the chart below, the broader employment population ratio remains below 2007 levels across levels of education, and the more educated continue to be more likely to work than those with less education.
Present value and opportunity costs
Many of our critics have made mistakes relating to net present value, opportunity costs, and direct costs of a law degree. Some general guidelines are provided below.
- Everything must be discounted back to the start of law school
- A cost can't be something already taken into account through the opportunity cost of lower in-school earnings
- A cost must be a necessary expense of attending law school, not an expense that a working bachelor's degree holder would incur anyway
- A cost can't provide consumption benefits that justify the greater expense
- A cost must be what the student actually spends, not what a student hypothetically might have spent at full sticker price
For example, since living expenses would be paid out of higher earnings if law students were working, we have already taken cost of living into account.
Since many students receive scholarships and grants, full sticker tuition should not be used as a base case.
Our estimates of in-school earnings are based on data from the SIPP and other Census Bureau Surveys. As we note in footnote 101:
Footnote 101: “We assume that law students earn $5,000 in their first year, $7,000 in their second year and $12,000 in their third year with part time and summer work, for a total of $24,000 during law school. SIPP data suggests typical three-year in-school earnings between $21,800 (median) and $48,000 (mean) for fulltime graduate and professional school students. Census data suggests substantial work hours among fulltime graduate and professional students. See Jessica Davis, U.S. CENSUS BUREAU, SCHOOL ENROLLMENT AND WORK STATUS: 2011 (Oct. 2012).”
Thanks and Goodbye
It’s been a fun couple of weeks. We’d like to thank Brian Leiter, Brian Tamanaha, and others for the wonderful opportunity they’ve given us to explain our research to a wider audience. And I’d like to thank Frank McIntyre for his contributions to this post and previous posts. This will hopefully be our last post about The Economic Value of a Law Degree, at least for a little while.
July 29, 2013
Brian Tamanaha’s Straw Men (Part 4): We would have to be off by 85 percent for our basic conclusion to be incorrect
“I believe the doubts I raised about the study in my previous three posts have not been answered satisfactorily.”
We therefore continue our response to Tamanaha’s first three posts before addressing Tamanaha’s fourth post.
BT Claim 4: Historical economic data tells us nothing about the future
"It is exceedingly rare to find reliably predictive 'historical norms' in the social sciences because social life is too complex and circumstances are constantly changing . . . S&M have produced a narrow, partial, time-bound study that has zero predictive relevance for anyone thinking about attending law school today." A proper study "may require data over several centuries."
Response: We would have to be off by 85 percent for our basic conclusion to be incorrect
In finance, valuation entails using historical data to establish a baseline scenario. This baseline is generally viewed as the center of a distribution of possible future outcomes. The baseline can be modified to construct upside and downside scenarios to get a sense of what could happen if the future is better or worse than the past. Scenario analysis can help us understand how robust the findings are; that is, how much the future would need to deviate from the past to change the basic directional conclusion of the valuation analysis. For the extreme downside, this is sometimes called "break-even analysis."
For general background focused on the corporate context, I recommend Tim Koller, Marc Goedhart & David Wessels, McKinsey, Valuation: Measuring and Managing the Value of Companies (4th Edition), and Brealey, Myers & Allen, Principles of Corporate Finance.
We estimate the present value of a law degree at the median as $610,000 as of the start of law school. This figure is pre-tax and pre-tuition, but includes opportunity costs and financing costs.
In other words, some combination of the student and the federal government could pay up to $610,000 for the law degree and break even. The government might contribute to the cost through debt forgiveness through Income Based Repayment, or through some other method.
As we note in the paper, ABA data suggest that the typical tuition cost for law school, less scholarships and grants, is roughly $30,000 per year. Spread over 3 years, and assuming tuition rises 6 percent per year in nominal terms (i.e., at our discount rate), this comes to $90,000 in present value terms as of the start of law school.
For law school to cease to be a value-creating investment for the majority of law students, the present value of the lifetime earnings premium would have to fall to below $90,000—a drop of 85 percent.
At the “25th percentile” (more like the 15th because of regression to the median), toward the bottom of the distribution, the law school earnings premium is $350,000. Assuming tuition (less scholarships and grants) remains at $30,000, the 25th percentile premium would need to fall by 74 percent for a law degree to no longer be a value-creating proposition toward the bottom of the distribution. At the mean, we’d have to be off by 91 percent.
These would be extreme deviations from the pattern seen in 1996-2011.
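The break-even arithmetic above is simple enough to check directly. The following sketch uses the dollar figures published in the study, but the code itself is an illustration of standard break-even analysis, not the authors' own calculations:

```python
# Break-even analysis: how far could the lifetime earnings premium fall
# before a law degree stops being value-creating at a given tuition cost?
# Dollar figures come from the study; the code is an illustrative sketch.

# Present value of tuition: 3 years at $30,000, with tuition growing at
# the 6 percent discount rate, so each year's PV equals the nominal cost.
tuition_pv = sum(30_000 * 1.06**t / 1.06**t for t in range(3))  # = $90,000

def break_even_drop(premium_pv, tuition_pv):
    """Fractional fall in the earnings premium at which NPV reaches zero."""
    return (premium_pv - tuition_pv) / premium_pv

for label, premium in [("median", 610_000),
                       ("25th percentile", 350_000),
                       ("mean", 1_000_000)]:
    drop = break_even_drop(premium, tuition_pv)
    print(f"{label}: premium could fall {drop:.0%} before value is destroyed")
# median: 85%, 25th percentile: 74%, mean: 91%
```

Because tuition is assumed to grow at the same rate used for discounting, the growth and discount factors cancel, which is why three years of $30,000 tuition is worth exactly $90,000 in present value terms.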
July 28, 2013
Repetitive (and avoidable) mistakes
At the American Lawyer, Matt Leichter repeats many misrepresentations of our research that originally appeared in the tabloid Above the Law, even after Above the Law posted corrections and after we refuted many of these misrepresentations. He also refers to anonymous comments attacking our research from people who did not read it.
- He erroneously claims that we "assume law school pays off equally for non-lawyers" when we in fact measure the earnings premium regardless of occupation. We do not assume anything.
- Leichter's description of our take on BLS projections is lifted out of context, since we note that even BLS economists are skeptical of these sorts of projections.
- Leichter ignores our careful controls for ability sorting and selection.
- He ignores the fact that we show not only current student loan default data, but also data that predates IBR. Law school student loan default rates were low even before IBR was available.
Steve Harper makes many of the same mistakes, and throws in a few disparaging remarks to boot.
- Harper repeats Tamanaha's claim--which the Washington Post reported as false--that we only look at means and do not consider different points in the distribution. And he throws in a red herring about a bi-modal distribution.
- Harper gets confused about present value and about the difference between medians and means, much like Campos and Tamanaha. Harper incorrectly reports that "a [law] degree returns at most a lifetime average of $687 a month" spread "over a 40-year career."
- The average (mean) is in fact around $53,000 per year before taxes and the median is around $32,000 per year in real terms (after taking inflation into account). After taxes, the annual average benefit is greater than $37,000 per year.
- Harper gets confused about causal inference and controls for ability sorting and selection, and repeats erroneous claims from Paul Campos that the United States Census Bureau's Survey of Income and Program Participation does not constitute a representative sample. Harper throws in some new errors about the relationship between sample size and statistical significance.
- Harper incorrectly claims that our findings of a premium depend on certain assumptions when--as we explicitly note in the paper--our findings are robust and do not depend on those assumptions. And he overlooks the data on which those assumptions are based.
- Harper incorrectly claims that half of law graduates will remain "below the median income" even after they graduate. In fact, the median income for law graduates is 60 percent higher than the median income for similar bachelor's degree holders.
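Harper's $687-a-month figure can be reproduced, and corrected, in a few lines. This sketch uses the study's $330,000 median after-tax, after-tuition net present value; the 6 percent nominal discount rate and 40-year horizon are assumptions for illustration, and the annuity conversion is a standard finance identity rather than a calculation from the paper:

```python
pv = 330_000         # after-tax, after-tuition NPV at the median (from the study)
r, years = 0.06, 40  # assumed nominal discount rate and career length

# Harper's calculation: divide the present value by 480 months, as if it
# were an undiscounted lifetime total to be doled out evenly.
naive_monthly = pv / (years * 12)
print(f"naive: ${naive_monthly:,.2f}/month")  # $687.50 -- Harper's figure

# But a present value is already the equivalent of a lump sum today.
# The level annual payment with the same present value (an ordinary
# annuity) is much larger, because future payments are discounted:
annual = pv * r / (1 - (1 + r) ** -years)
print(f"annuity-equivalent: ${annual:,.0f}/year")  # roughly $22,000/year
```

The point is not the particular annuity figure, which depends on the assumed rate and horizon, but that dividing a present value by 480 months ignores discounting entirely and so understates the benefit.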
We've already responded to many of these same misrepresentations of our research from Above the Law, Brian Tamanaha, and Paul Campos. Simple fact-checking, either by reading the article or by checking our blog posts, could have prevented these errors.
Hopefully the editors at the American Lawyer will promptly post corrections and have a serious discussion with Mr. Leichter and Mr. Harper about the differences between critiquing research on the merits and misrepresenting the contents of that research--and impugning the integrity of its authors--in a nationally distributed publication.
July 26, 2013
Brian Tamanaha’s Straw Men (Part 3): We use better (and more) data than studies Tamanaha praised in his book
BT Claim 3: 16 years of data is not enough
“S&M’s bold assertion that their 16-year study establishes valid ‘historical norms’ on law degree earnings would be scoffed at by social scientists who take the notion of ‘historical norms’ seriously. That is more than enough time to confirm norms governing the mating behavior of fruit flies, but 16 years is laughably inadequate for predicting something as complex and subject to change as the lifetime earnings of future law grads.”
Response Part 1: A fine idea for historical research
We will be delighted to read the results of similar work on earnings premia carried back into the distant past. We certainly are not claiming to have uncovered hundreds of years of data on law school earnings premia. But, ultimately, we are not sure how valuable such a retrospective would be for today's graduate.
Response Part 2: Professor Tamanaha and other critics of law school relied on—and praised—studies that use far less than 16 years of data
The literature has numerous studies using smaller data sets than ours (citations available), including several studies using only 3 years of data that were cited by Professor Tamanaha in Chapter 11 (starting on page 137-38) of his recent book, Failing Law Schools. Professor Tamanaha cited these studies without comment or criticism regarding the number of years of data used (although he did criticize them on other grounds), so we find it odd that he views our study as somehow deficient on this ground.
Professor Tamanaha and other law school critics have cited and praised studies that were much less rigorous and used much less data, including Herwig Schlunk’s “Mamas Don’t Let Your Babies Grow Up to Be Lawyers.” Schlunk used a couple of years of data from Payscale.com, law.com, AbovetheLaw.com, and other websites.
The Economic Value of a Law Degree uses 16 years of data regarding earnings across age groups from the United States Census Bureau.
On page 217, note 18 of his book, Tamanaha called Schlunk’s study “An excellent example of how [to determine whether a law degree is a good investment] in economic terms.” Tamanaha also praised an article by Jim Chen that used only starting salaries.
While we're happy to admit that no study is perfect, if these studies have enough years of data for Professor Tamanaha to cite in his book, then we struggle to understand his objection to 16 years of data across age groups.
And we're now going to take this opportunity to cite a personal favorite line from Professor Tamanaha's post:
“Let me also confirm that [Simkovic & McIntyre’s] study is far more sophisticated than my admittedly crude efforts.”
We'll take what we can get.
For a brief critique of Professor Schlunk’s work, see the discount rate appendix of The Economic Value of a Law Degree. A more thorough critique of Professor Schlunk’s work—and Professor Tamanaha’s reliance on it—is contained in our book review of Failing Law Schools, which will be posted on SSRN soon.
July 25, 2013
Brian Tamanaha’s Straw Men (Part 2): Who's Cherry Picking?
BT Claim 2: Using more years of data would reduce the earnings premium
Response: Using more years of historical data is as likely to increase the earnings premium as to reduce it
We have doubts about the effect of more data, even if Professor Tamanaha does not.
Without seeing data that would enable us to calculate earnings premiums, we can’t know for sure if introducing more years of comparable data would increase our estimates of the earnings premium or reduce it.
The issue is not simply the state of the legal market or entry level legal hiring—we must also consider how our control group of bachelor’s degree holders (who appear to be similar to the law degree holders but for the law degree) were doing. To measure the value of a law degree, we must measure earnings premiums, not absolute earnings levels.
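A toy numerical example (the figures are hypothetical, not from the study) shows why premiums and absolute earnings can move in opposite directions:

```python
# Hypothetical earnings: in the bust, lawyers' absolute earnings fall,
# but the bachelor's-only control group falls further, so the measured
# earnings premium actually rises.
boom = {"jd": 110_000, "ba": 70_000}
bust = {"jd": 100_000, "ba": 55_000}

premium_boom = boom["jd"] - boom["ba"]  # $40,000
premium_bust = bust["jd"] - bust["ba"]  # $45,000
assert bust["jd"] < boom["jd"] and premium_bust > premium_boom
```

This is why entry-level legal hiring alone, without the control group, cannot tell us what happened to the premium in any given period.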
As a commenter on Tamanaha’s blog helpfully points out:
“I think you make far too much of the exclusion of the period from 1992-1995. Entry-level employment was similar to 1995-98 (as indicated by table 2 on page 9).
But this does not necessarily mean that the earnings premium was the same or lower. One cannot form conclusions about all JD holders based solely on entry-level employment numbers. As S&M's data suggests, the earnings premium tends to be larger during recessions and their immediate aftermath and the U.S. economy only began an economic recovery in late 1992.
Lastly, even if you are right about the earnings premium from 1992-1995, what about 1987-91 when the legal economy appeared to be quite strong (as illustrated by the same chart referenced above)? Your suggestion to look at a twenty year period excludes this time frame even though it might offset the diminution in the earnings premium that would allegedly occur if S&M considered 1992-95.”
There is nothing magical about 1992. If good quality data were available, why not go back to the 1980s or beyond? Stephen Diamond and others make this point.
The 1980s are generally believed to be a boom time in the legal market. Assuming for the sake of argument that law degree earnings premiums are pro-cyclical (we are not sure if they are), inclusion of more historical data going back past 1992 is just as likely to increase our earnings premium as to reduce it. Older data might suggest an upward trend in education earnings premiums, which could mean that our assumption of flat earnings premiums may be too conservative. Leaving aside the data quality and continuity issues we discussed before (which led us to pick 1996 as our start year), there is no objective reason to stop in the early 1990s instead of going back further to the 1980s.
Our sample from 1996 to 2011 includes both good times and bad for law graduates and for the overall economy, and in every part of the cycle, law graduates appear to earn substantially more than similar individuals with only bachelor’s degrees.
This might be as good a place as any to affirm that we certainly did not pick 1996 for any nefarious purpose. Having worked with the SIPP before and being aware of the change in design, we chose 1996 purely because of the benefits we described here. Once again, should Professor Tamanaha or any other group wish to use the publicly available SIPP data to extend the series farther back, we'll be interested to see the results.
July 24, 2013
Brian Tamanaha’s Straw Men (Part 1): Why we used SIPP data from 1996 to 2011
BT Claim: We could have used more historical data without introducing continuity and other methodological problems
BT quote: “Although SIPP was redesigned in 1996, there are surveys for 1993 and 1992, which allow continuity . . .”
Response: Using more historical data from SIPP would likely have introduced continuity and other methodological problems
SIPP does indeed go back farther than 1996. We chose that date because it was the beginning of an updated and revitalized SIPP that continues to this day. SIPP was substantially redesigned in 1996 to increase sample size and improve data quality. Combining different versions of SIPP could have introduced methodological problems. That doesn't mean one could not do it in the future, but it might raise as many questions as it would answer.
Had we used earlier data, it could be difficult to know to what extent changes to our earnings premium estimates were caused by changes in the real world, and to what extent they were artifacts caused by changes to the SIPP methodology.
Because SIPP has developed and improved over time, the more recent data is more reliable than older historical data. All else being equal, a larger sample size and more years of data are preferable. However, data quality issues suggest focusing on more recent data.
If older data were included, it probably would have been appropriate to weight more recent and higher quality data more heavily than older and lower quality data. We would likely also have had to make adjustments for differences that might have been caused by changes in survey methodology. Such adjustments would inevitably have been controversial.
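One conventional way to pool estimates of differing quality is inverse-variance weighting, which the hypothetical sketch below illustrates. The panels, estimates, and standard errors are invented for illustration; the paper does not actually pool panels this way:

```python
# Hypothetical pooling of earnings-premium estimates from survey panels of
# varying quality, weighting each by the inverse of its variance so that
# noisier (older, smaller-sample) panels count for less.
estimates = [
    # (panel start, premium estimate, standard error) -- invented numbers
    (1992, 0.45, 0.10),
    (1996, 0.55, 0.05),
    (2004, 0.60, 0.04),
]
weights = [1 / se**2 for _, _, se in estimates]
pooled = sum(w * est for (_, est, _), w in zip(estimates, weights)) / sum(weights)
print(f"pooled estimate: {pooled:.3f}")  # ≈ 0.569
```

Even in this mechanical scheme, the choice of weights would be contestable, which is the point made above: any adjustment for survey-methodology changes would inevitably be controversial.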
Because the sample size increased dramatically after 1996, including a few years of pre-1996 data would not provide as much new data or have the potential to change our estimates by nearly as much as Professor Tamanaha believes. There are also gaps in SIPP data from the 1980s because of insufficient funding.
These issues and the 1996 changes are explained at length in the Survey of Income and Program Participation User’s Guide.
Changes to the new 1996 version of SIPP include:
- Roughly doubling the sample size
- This improves the precision of estimates and shrinks standard errors
- Lengthening the panels from 3 years to 4 years
- This reduces the severity of the regression to the median problem
- Introducing computer assisted interviewing to improve data collection and reduce errors or the need to impute for missing data
- Introducing oversampling of low income neighborhoods
- This mitigates response bias issues we previously discussed, which are most likely to affect the bottom of the distribution
- New income topcoding procedures were instituted with the 1996 Panel
- This will affect both means and various points in the distribution
- Topcoding is done on a monthly or quarterly basis, and can therefore undercount end-of-year bonuses, even for those who are not extremely high income year-round
Most government surveys topcode income data—that is, there is a maximum income that they will report. This is done to protect the privacy of high-income individuals who could more easily be identified from ostensibly confidential survey data if their incomes were revealed.
Because law graduates tend to have higher incomes than bachelor's degree holders, topcoding introduces downward bias to earnings premium estimates. Midstream changes to topcoding procedures can change this bias and create problems with respect to consistency and continuity.
Without going into more detail, the topcoding procedure that began in 1996 appears to be an improvement over the earlier topcoding procedure.
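The downward bias from topcoding is easy to see with made-up numbers: the group with more mass above the cap loses more of its mean, so the measured gap between the groups shrinks.

```python
# Made-up incomes illustrating topcoding bias. The JD group is topcoded
# more often, so capping incomes shrinks the measured earnings premium.
TOPCODE = 150_000
jd = [80_000, 120_000, 200_000, 400_000]
ba = [50_000, 70_000, 90_000, 160_000]

def mean(xs):
    xs = list(xs)
    return sum(xs) / len(xs)

true_premium = mean(jd) - mean(ba)                      # $107,500
capped_premium = (mean(min(x, TOPCODE) for x in jd)
                  - mean(min(x, TOPCODE) for x in ba))  # $35,000
assert capped_premium < true_premium
```

The magnitude of the bias in this toy example is deliberately exaggerated, but the direction, understating the premium, is the one that matters for interpreting our estimates.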
These are only a subset of the problems extending the SIPP data back past 1996 would have introduced. For us, the costs of backfilling data appear to outweigh the benefits. If other parties wish to pursue that course, we'll be interested in what they find, just as we hope others were interested in our findings.
Brian Tamanaha’s Straw Men (Overview)
Brian Tamanaha previously told Inside Higher Education that our research only looked at average earnings premiums and did not consider the low end of the distribution. Dylan Matthews at the Washington Post reported that Professor Tamanaha’s description of our research was “false”.
In his latest post, Professor Tamanaha combines interesting critiques with some not very interesting errors and claims that are not supported by data. Responding to his blog post is a little tricky as his ongoing edits rendered it something of a moving target. While we're happy with improvements, a PDF of the version to which we are responding is available here just so we all know what page we're on.
Some of Tamanaha’s new errors are surprising, because they come after an email exchange with him in which we addressed them. For example, Tamanaha’s description of our approach to ability sorting constitutes a gross misreading of our research. Tamanaha also references the wrong chart for earnings premium trends and misinterprets confidence intervals. And his description of our present value calculations is way off the mark.
Here are some quick bullet point responses, with details below in subsequent posts:
- Using more historical data from SIPP would likely have introduced continuity and other methodological problems
- Using more years of data is as likely to increase the historical earnings premium as to reduce it
- If pre-1996 historical data finds lower earnings premiums, that may suggest a long term upward trend and could mean that our estimates of flat future earnings premiums are too conservative and the premium estimates should be higher
- The earnings premium in the future is just as likely to be higher as it is to be lower than it was in 1996-2011
- In the future, the earnings premium would have to be lower by 85 percent for an investment in law school to destroy economic value at the median
- 16 years of data is more than is used in similar studies to establish a baseline. This includes studies Tamanaha cited and praised in his book.
- Our data includes both peaks and troughs in the cycle. Across the cycle, law graduates earn substantially more than bachelor's degree holders.
Errors and misreadings:
- We control for ability sorting and selection using extensive controls for socio-economic, academic, and demographic characteristics
- This substantially reduces our earnings premium estimates
- Any lingering ability sorting and selection is likely offset by response bias in SIPP, topcoding, and other problems that cut in the opposite direction
- Tamanaha references the wrong chart for earnings premium trends and misinterprets confidence intervals
- Tamanaha is confused about present value, opportunity cost, and discounting
- Our in-school earnings are based on data, but, in any event, “correcting” to zero would not meaningfully change our conclusions
- “Let me also confirm that [Simkovic & McIntyre’s] study is far more sophisticated than my admittedly crude efforts.”