August 19, 2016

Sandy Baum challenges media sensationalism and political hype about student loans


"There's a new book out about the student loan crisis [Student Debt: Rhetoric and Realities of Higher Education], or what author Sandy Baum suggests is a "bogus crisis." Baum, a financial aid expert and senior fellow at the Urban Institute, claims it has been [sensationalized and exaggerated] by the media in search of a spicy story and fueled by politicians pushing "debt free college" proposals. . . . "

Sandy Baum:

"People who earn bachelor's degrees, by and large, do fine.

The problem is that we have a lot of people actually borrowing small amounts of money, going to college, not completing [a degree] or completing credentials that don't have labor market value. They tend to be older. They tend to come from disadvantaged or middle-income families and they're struggling. [But] not because they owe a lot of money. . . .

It's not realistic to say we're going to pay people to go to college [for free]. Someone has to pay. We can have everyone pay much higher taxes. But short of that, it's not clear how we would pay. . . .

There are some people who borrowed under fraudulent, deceptive situations and their debt should be forgiven. There are people for whom education did not work out through no fault of their own and their debt should be forgiven.  . . . We don't give people very much advice and guidance about where [and] when to go to college, how to pay for it, what to study. . . .

[There are facts that] get little or no attention because they don't fit the "crisis" narrative:

  • A third of college students who earn a four-year degree graduate with no debt at all. Zero.
  • A fourth graduate with debt of no more than $20,000.
  • Low-income households hold only 11 percent of all outstanding [student] debt.
  • Almost half of the $1.3 trillion in student loan debt is held by 25 percent of graduates who are actually making a pretty high income.]

This is an investment that pays off really well. The median earnings for young bachelor's degree recipients is about $20,000 a year higher than the median earnings for high school graduates.

Student debt is really creating a lot of opportunities for people. People wouldn't be able to go to college otherwise."

Baum notes that many graduates with high debt levels (>$100,000) have advanced degrees, high expected incomes, and low default rates.

"The highest debt levels are for those earning professional degrees . . .  Despite high debt levels, default rates among graduate borrowers are very low."  However, Baum expresses some concern about those pursuing expensive master's degrees in fields "that rarely lead to the kind of earnings that doctors, lawyers, and MBAs can expect."

Baum's findings are broadly consistent with recent research by Beth Akers and Matthew Chingos, reviewed by David Leonhardt for the New York Times.  Akers and Chingos have a new book coming out this fall.

August 19, 2016 in Guest Blogger: Michael Simkovic, Of Academic Interest, Science, Weblogs | Permalink

August 15, 2016

“Glass Half Full” author concedes problems with estimates of solo practitioner incomes and headcounts (updated 8/18)

Professor Benjamin H. Barton recently responded to critiques of his estimates of solo practitioner incomes. Barton does not answer the specific questions that I posed about his use of IRS data, but he generally concedes that the IRS data is problematic. 

  1. Barton wrote:

“Is it possible that the IRS data undersells the earnings of solo practitioners?  Yes, for the reasons I state above and for some of the reasons that you and Professor Diamond point out.”  

I applaud Professor Barton’s honesty.  I encourage him to acknowledge the problems with the IRS data in future editions of “Glass Half Full” and to correct his CNN and Business Insider posts.

  2. Barton wrote:

“Do I think that the IRS data are off by a factor of 3.5 or even 2?  No.”  

I encourage Professor Barton to present a revised estimate that he thinks is more accurate. Several studies that he cites for support suggest that his solo income estimates are off by a factor of approximately 2 to 3 (see below for details).

  3. Barton defends his use of IRS data on three grounds, each of which is problematic:

a. “The IRS data on lawyer earnings is the longest running data I could find and thus the best dataset for a discussion of long term trends.”

Professor Barton overlooked the U.S. Census Bureau’s Decennial Census, which has data on lawyers’ incomes since 1950 (reporting 1949 incomes).[i]  The IRS data presented by Barton starts 18 years later, in 1967.

When considering long-term trends in occupational incomes, it’s important to consider changes in the race and sex of members of the occupation.  Across occupations, women and minorities generally earn less than white men.  Race and sex variables are available in Census household data, but not in public-use IRS data.

b. The IRS data “separates lawyer earnings into solo practitioners and law firm partners”

Professor Barton acknowledges that his data misses incorporated self-employed lawyers, and that this group likely has higher incomes than those that he captures.[ii]

This means that Professor Barton’s IRS data is much less useful for identifying solo and small-firm practitioners in 2013 than it was in 1970, because the proportion of such lawyers who have incorporated has likely increased dramatically.  In 1970, 5 percent of full-time self-employed lawyers were incorporated.  By 2014, the share had increased to more than 50 percent.[iii]  Barton is therefore missing many solo and small-firm practitioners, and if trends toward incorporation continue, his data will become less useful with every passing year.  The IRS data has different biases at different points in time, making trends potentially unreliable.
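To make the selection problem concrete, here is a minimal simulation sketch. All numbers and the income distribution are hypothetical (they are not Barton's, IRS, or Census figures), and it assumes, consistent with the point above, that the higher earners are the ones who incorporate. It simply illustrates how data limited to unincorporated lawyers will show a lower and increasingly understated average as the incorporated share grows, even if no one's income changes.

```python
import numpy as np

# Entirely hypothetical figures for illustration only -- not Barton's, IRS, or Census data.
# Suppose self-employed lawyers' true incomes do not change over time, but the higher
# earners increasingly incorporate and so drop out of the unincorporated
# sole-proprietor (Schedule C) data.

rng = np.random.default_rng(0)
true_incomes = rng.lognormal(mean=11.4, sigma=0.7, size=100_000)  # arbitrary right-skewed distribution

for year, incorporated_share in [(1970, 0.05), (2014, 0.50)]:
    # assume the top earners are the ones most likely to incorporate
    cutoff = np.quantile(true_incomes, 1 - incorporated_share)
    schedule_c_pool = true_incomes[true_incomes < cutoff]
    print(year,
          f"true mean: ${true_incomes.mean():,.0f}",
          f"observed Schedule C mean: ${schedule_c_pool.mean():,.0f}")
```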

Continue reading

August 15, 2016 in Guest Blogger: Michael Simkovic, Legal Profession, Of Academic Interest, Science, Weblogs | Permalink

July 28, 2016

Some questions for Professor Benjamin H. Barton about his use of IRS data to estimate solo practitioner incomes (Michael Simkovic)

After Tuesday's post explaining why IRS schedule C data dramatically underestimates incomes for solo practitioners and other sole proprietors, Professor Benjamin H. Barton emailed to indicate that his views remained unchanged and he did not intend to respond beyond his previous comments on Professor Stephen Diamond's blog.  Barton's comments did not address many of the issues I raised. 

On Wednesday, I asked Professor Barton to consider the following questions:

1) Do you think that 20 million or so U.S. small business owners are living below the poverty threshold for a 2 person household?

2) Do you think the IRS is wrong about its own data and schedule C does not in fact understate net income?  Why do you think that you understand IRS data, IRS enforcement capabilities, and the level of tax evasion better than the IRS?

3) Do you think that everyone who files schedule C has no other sources of income?

4) Do you think that Treasury and JCT estimates of tax expenditures are way off and exclusions and deductions from tax concepts of income are negligible?

5) If apples to apples comparisons using schedule C data show that legal services sole proprietorships are more profitable than 97 percent of sole proprietorships, is that something you should mention?  Would you at least agree that using schedule C data for legal services and census data for everyone else is a methodological error?

Professor Barton has not yet responded.


Aug. 11, 2016. Professor Barton responded without specifically answering the questions above, but generally conceded that IRS data is problematic.

Aug. 15, 2016.  I replied to Barton.

July 28, 2016 in Guest Blogger: Michael Simkovic, Law in Cyberspace, Legal Profession, Of Academic Interest, Science, Weblogs | Permalink

July 26, 2016

How much do lawyers working in solo practice actually earn? (Michael Simkovic)

In 2015, Professor Benjamin Barton of the University of Tennessee estimated for CNN and Business Insider that attorneys working in solo practice earn an average of slightly less than $50,000 per year.  Barton made similar estimates in his book, “Glass Half Full.”  Professor Stephen Diamond of Santa Clara argues that solo incomes are quite a bit higher. (Barton responded in the comments section).

There is little doubt that solo practitioners typically earn substantially less than lawyers working in large Wall Street law firms.  However, a closer reading of the Internal Revenue Service data on which Barton relies, along with Census data, suggests that solo practitioners’ average (mean) annual earnings are likely closer to $100,000.


    I. Average (Mean) Incomes of Lawyers: Census Income Data vs. IRS Schedule C Net Income Data

According to the U.S. Census Bureau’s American Community Survey, average (mean) total personal income for lawyers who are “self employed, not incorporated” (a proxy for those in small legal practice) was around $140,000 in 2012 and 2013.  For those who were “self-employed, incorporated” (a proxy for owners of larger legal practices), average total personal income was around $180,000 to $190,000.  These average figures include those working part time.  Restricting the sample to those working full-time increases average earnings for “self employed, not incorporated” lawyers to around $160,000 to $165,000 and for “self-employed, incorporated” lawyers to $185,000 to $200,000.

Barton based his earnings estimates on average “net income” data from the Internal Revenue Service’s Statistics of Income for Non-farm Sole Proprietorships for “Legal Services (NAICS Code 5411).”  This data is based on Schedule C of Form 1040, which is used to calculate one of several sources of income on an individual tax return (“Business Income or Loss”).

Looking at the same IRS Schedule C net-income data for all non-farm sole proprietorships and applying Barton’s reasoning suggests that in 2013, 24 million American small business owners earned an average (mean) income of $12,500.  This is barely above the poverty threshold for a one-person household, and considerably lower than average (mean) earned income figures for all Americans reported by the U.S. Census Bureau’s American Community Survey (around $47,000 including only those who are employed in some capacity, and $22,000 averaging in everyone—children, the retired, and those not in the work force).
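The arithmetic behind that comparison is straightforward; the sketch below simply back-solves the implied aggregate from the approximate figures quoted in this paragraph (all inputs are rounded approximations taken from the text, not sourced IRS or Census totals).

```python
# Rough arithmetic behind the comparison above, using the approximate figures in the text.
num_sole_proprietor_returns = 24_000_000   # ~24 million non-farm sole proprietorships (approx.)
mean_schedule_c_net_income = 12_500        # implied mean "net income" per return (approx.)

implied_aggregate = num_sole_proprietor_returns * mean_schedule_c_net_income
print(f"Implied aggregate Schedule C net income: ${implied_aggregate / 1e9:.0f} billion")  # ~$300 billion

census_mean_earnings_employed = 47_000     # approximate ACS mean for employed persons, per the text
print(f"Census mean is ~{census_mean_earnings_employed / mean_schedule_c_net_income:.1f}x "
      f"the Schedule C figure")            # ~3.8x
```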


    A. IRS Schedule C Data Is Biased Downward:

What explains the large discrepancy between low IRS sole proprietor net income data and higher Census earnings data—for lawyers and for everyone else?  There are several problems with IRS sole proprietor data that are likely to lead to dramatic underestimation of individual earnings. 

Continue reading

July 26, 2016 in Guest Blogger: Michael Simkovic, Legal Profession, Science | Permalink

June 24, 2016

Why The New York Times Should Correct Remaining Factual Errors in Its Law School Coverage

Last week I wrote an open letter to New York Times reporter Noam Scheiber discussing problems with his law school coverage and his reliance on low-quality sources, such as internet blogs and "experts" who lack relevant expertise, rather than peer-reviewed labor economics research.  By email, Scheiber insisted that there was nothing wrong with his coverage, but said he'd be happy to hear of any specific factual problems I could identify.

I identified 6 clear factual errors and multiple misleading statements.  I also reinterviewed his lead source, John Acosta, and found important discrepancies: Scheiber depicted Acosta as someone who was suckered into unrepayable debt, while Acosta describes his own situation as hopeful and law school as a worthwhile and carefully researched investment.  New York Times Dealbook reporter and U.C. Berkeley Professor Steven Davidoff Solomon weighed in, citing my research and supporting my points.

Scheiber posted a response to his Facebook page, after running it by his editors at the New York Times.  The New York Times agreed to correct the most minor of the six errors I identified. They also "tweaked" two sentences so that the language was less definitive.

Scheiber's response includes some good points (many students from Valparaiso might be below the 25th percentile of law school graduates) as well as strained interpretations of the language of his original article: "fewer" did not actually mean "fewer"; "Harvardesque" did not actually mean "similar to Harvard."  Scheiber describes my presentation of data that contradicts his factual claims as "strange", "bizarre", "odd", "overly-literal" and (on Twitter) "gripes."   Interestingly, Scheiber thinks that "most law school graduates who pass the bar are going to have at least a few hundred thousand dollars in assets like 401k and home equity by the time they work for 20 years."  This level of savings would make them far more financially secure than the vast majority of the U.S. population.

My response to Scheiber is below.  I explain why The New York Times has an obligation to its readers to correct the remaining uncorrected factual errors in Scheiber's story.

Scheiber embedded his response in my explanation of the 6 clear factual errors in his story, and I in turn embedded my response within his response.  To ease readability, I have color coded Scheiber's response in orange, and my new response in blue.  Scheiber's response is indented once, and my new response is indented twice.  The least indented black text at the beginning of each thread is from the list of 6 clear factual errors, and can be skipped (scroll down until you see orange or blue text) by those who have followed the discussion thus far.

UPDATE: June 25, 2016:  Yesterday, The New York Times posted an additional minor correction to its discussion of taxation of debt forgiveness, stating that debt forgiveness would "probably" be treated as taxable income.  This is an improvement over the original, but could still mislead or confuse readers.  It also leaves many of the most important errors uncorrected.  

Scheiber tells me that the "tweaks" to the language which he communicated to me in his Facebook post from Tuesday 6/21 actually happened on Friday evening 6/17.  This would make them coincide with the timing of my open letter, though they would predate my more detailed explanation of the 6 clear factual errors. Scheiber tells me that these "tweaks" were not made in response to my letter, although he has not specified when on Friday evening the changes were made. They appear to have been made after I sent him the letter.


Continue reading

June 24, 2016 in Guest Blogger: Michael Simkovic, Law in Cyberspace, Legal Profession, Of Academic Interest, Science, Weblogs | Permalink

June 20, 2016

How to Count: Choosing the Right Data Source (Michael Simkovic)

In response to my last post, a reader asked why different data sources give different counts for the total number of lawyers in a given year.

Continue reading

June 20, 2016 in Guest Blogger: Michael Simkovic, Legal Profession, Science | Permalink

June 18, 2016

6 factual errors and several misleading statements in recent New York Times story by Noam Scheiber

New York Times reporter Noam Scheiber was kind enough to respond to my open letter and ask if I could point to anything specifically factually wrong with his story.  My response is below.



Thanks so much for responding. Yes, there are at least 6 factual errors in the article, and several misleading statements.

I’ll start with my interview with Acosta from earlier today, and then we can discuss empirics. Here’s what Acosta said:

"There’s no way I could pay back my student loans under a 10-year standard payment plan. With my current income, I can support myself and my family, but I need to keep my loan payments low for now. I’ve been practicing law since May, and I’m on track to make $40,000 this year. I think my income will go up over time, but I don’t know if it will be enough for me to pay back my loans without debt forgiveness after 20 years. What happens is up in the air.   I’m optimistic that I can make this work and pay my student loans. I view the glass now as half full.


Valparaiso did not mislead me about employment prospects. I had done my research. I knew the job market was competitive going in. I knew what debt I was walking into. I think very few Americans don’t have debt, but for me it was an investment. I saw the debt as an investment in my career, my future, and my family.


Valparaiso gave a guy like me, a non-traditional student, a shot at becoming a lawyer. Most law schools say they take a holistic approach, but they don’t really do it. I had to work hard to overcome adversity, and they gave me a shot to go to law school and to succeed. They gave me a shot at something that I wanted to do where most law schools wouldn’t.


My situation might be different from other law students who start law school right out of college. I was older and I have a family to support."

On to empirics.

The story states that:

“While demand for other white-collar jobs has rebounded since the recession, law firms and corporations are finding that they can make do with far fewer full-time lawyers than before.”

This is incorrect.

First, the number of jobs for lawyers has increased beyond pre-recession levels (2007 or earlier), both in absolute terms and relative to growth in overall employment. (error #1)

Focusing only on lawyers working full-time in law firms or for businesses (I’m not sure why you exclude those working in government), there are more full-time corporate and law firm lawyers in 2014 according to the  U.S. Census Bureau’s Current Population Survey (CPS)—870,000—than in 2007—786,000. There have been more full-time corporate and law firm lawyers in every year from 2009 on than there were in 2007 and earlier.

You were looking at NALP or ABA data, which is measured at a single point in time—9 or 10 months after graduation—and is therefore much less representative of outcomes for law graduates—even recent law graduates—than Census data. Indeed, many law graduates who will eventually gain admission to a state bar will not have done so as of the date when NALP collects data. NALP and the ABA also use different definitions from the Census, so you cannot readily use their data to compare law graduates to others.

The trend of growth in lawyer jobs holds true for other cuts of the data (all lawyers; all full time lawyers) using other data sources—U.S. Census or Department of Labor (BLS OES) data.[i]

This is in spite of large declines in law school enrollments, which would be expected to reduce the number of working lawyers.

Second, employment has not rebounded to pre-recession (2007 or earlier) levels outside of law. (error #2)

Continue reading

June 18, 2016 in Guest Blogger: Michael Simkovic, Legal Profession, Professional Advice, Science, Student Advice, Weblogs | Permalink

June 09, 2016

Journalism researcher: To correct misinformation, essential to monitor and respond immediately (Michael Simkovic)

Scholars Strategy Network's No Jargon: 13: The Misinformation Age

Professor Brian Southwell explains why people tend to believe false information and discusses strategies for correcting misinformation in public perception. Southwell is a professor of Mass Communication at the University of North Carolina at Chapel Hill.

June 9, 2016 in Guest Blogger: Michael Simkovic, Law in Cyberspace, Legal Profession, Of Academic Interest, Professional Advice, Science, Weblogs | Permalink

May 05, 2016

Do Clients Lose When Lawyers Work for a Fixed-Fee? (Michael Simkovic)

Lawyers traditionally bill a specified hourly rate for the time they spend working on a case. This ideally incentivizes lawyers to work hard and improve outcomes for their clients, and it provides clients transparency with respect to lawyer effort.

However, an hourly rate can reduce the predictability of costs for clients. Some clients worry that hourly rates might encourage inefficient over-work. As a result, some have shifted toward fixed-fee arrangements for their legal services, in which lawyers are paid a flat fee for completion of a task, regardless of how much time it takes to complete.

Preliminary results from empirical research that will be presented at this year’s American Law & Economics Association Conference suggest that a fixed-fee approach to compensating lawyers reduces lawyers’ efforts on behalf of clients and leads to worse client outcomes.

Two separate studies by two groups of researchers using similar research designs with different data sets both come to substantially the same conclusions. (Benjamin Schwall, High-Powered Attorney Incentives: A Look at the New Indigent Defense System in South Carolina and Amanda Y. Agan, Matthew Freedman & Emily Owens, Counsel Quality and Client Match Effects).

One potential obstacle in assessing the effects of different billing practices is reverse causation. Better lawyers may normally be able to bill by the hour because they are better and have more power to negotiate, not because billing by the hour makes them better.

The studies control for differences in lawyer quality by looking at the same lawyers (lawyer fixed effects) sometimes as court-appointed attorneys paid a flat fee and sometimes as attorneys billing by the hour. Schwall’s paper exploits changes in how South Carolina compensates its public defenders, while Agan, Freedman & Owens focus on random assignment of criminal defense counsel in Texas. The studies also attempt to control for differences in the type of case and defendant characteristics. The research designs for causal inference appear to be rigorous, and the results seem intuitive and plausible.
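Schematically, the within-lawyer design described above amounts to regressing case outcomes on a flat-fee indicator while including lawyer fixed effects. The sketch below is only an illustration with hypothetical variable names and data; it is not the specification used in either paper.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical case-level data: one row per case, with the lawyer's ID, an indicator
# for whether that case was compensated with a flat fee, the case type, and some
# measure of the client's outcome.
cases = pd.read_csv("cases.csv")  # columns: lawyer_id, flat_fee, case_type, outcome

# Lawyer fixed effects (C(lawyer_id)) compare each lawyer with him- or herself across
# fee arrangements, so stable differences in lawyer quality cannot drive the
# estimated flat-fee coefficient.
model = smf.ols(
    "outcome ~ flat_fee + C(case_type) + C(lawyer_id)",
    data=cases,
).fit(cov_type="cluster", cov_kwds={"groups": cases["lawyer_id"]})

print(model.params["flat_fee"])  # estimated effect of flat-fee compensation on outcomes
```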

While the context of these studies is the criminal justice system, it would be surprising if the conclusions did not also hold true in civil litigation or in transactional practice. A lawyer on a fixed-fee is likely to be more willing to concede important points to bring a case or transaction to a speedy conclusion than one who can bill by the hour and be compensated for his or her extra efforts. Sophisticated clients may be better able to monitor their attorneys than indigent defendants and criminal courts, but clients probably cannot eliminate agency costs (If they could, an hourly rate would make at least as much sense as a fixed-fee).

Assuming the preliminary results of these studies hold, the incentive problems created by fixed-fee arrangements may be an opportunity for shrewd businesspeople or plaintiffs’ lawyers to target counterparties or defendants. If a businessperson pays his own lawyers by the hour to negotiate opposite lawyers on a fixed fee, the reward could be contracts with lopsided terms in his favor. Plaintiffs’ lawyers may similarly expect civil defense lawyers on fixed-fee arrangements to advocate a swift settlement on terms relatively favorable to plaintiffs.

Lawyers are likely to know which clients use fixed fee arrangements because such clients often have an RFP process in which law firms bid for their work.

May 5, 2016 in Guest Blogger: Michael Simkovic, Legal Profession, Of Academic Interest, Professional Advice, Science | Permalink

April 30, 2016

Should professors give more feedback before the final exam? (Michael Simkovic)

New research from Dan Schwarcz and Dion Farganis at Minnesota argues that providing students with practice problems and exercises similar to the final exam, and giving individual feedback prior to the final examination, can help improve grades for first-year law students.

Schwarcz and Farganis tracked the performance of first-year students who were randomly assigned to sections and, as a result, took courses with professors who either provided exercises and individual feedback prior to the final examination or did not.

When the students who studied under feedback professors and the students who studied under no-feedback professors took a separate required class together, the feedback students received higher grades after controlling for several factors that predict grades, such as LSAT scores, undergraduate GPA, gender, race, and country of birth. The increase in grades appears to be larger for students toward the bottom half of the distribution. The paper also attempts to control for variation in instructor ability using student evaluations of teacher clarity.

It’s an interesting paper, and part of a welcome trend toward assessing proposed pedagogical reform through quasi-experimental methods.

The interpretation of these results raises a number of questions which I hope the authors will address more thoroughly as they revise the paper and in future research.

For example, are the differences due to instructor effects rather than feedback effects? Students are randomly assigned to instructors who happen to voluntarily give pre-final exam feedback. These might be instructors who are more conscientious, dedicated, or skilled and who also happen to give pre-exam feedback. Requiring other instructors to give pre-exam feedback—or having the same instructors provide no pre-exam feedback—might not affect student performance.

Controlling for instructor ability based on teaching evaluations is not entirely convincing, even if students are ostensibly evaluating teacher clarity. There is not very strong evidence that teaching evaluations reflect how much students learn. An easier instructor who covers less substance might receive higher teaching evaluations across the board than a rigorous instructor who does more to prepare students for practice. Teaching evaluations might reflect friendliness or liveliness or attractiveness or factors that do not actually affect student learning outcomes but that have consumption value for students.  Indeed, high feedback professors might receive lower teaching evaluations for the same quality of teaching because they might make students work harder and because they might provide negative feedback to some students, leading students to retaliate on teaching evaluations.

These issues could be addressed in future research by asking the same instructor to teach two sections of the same class in different ways and measuring both long term student outcomes and teaching evaluations.

Another question is: are students simply learning how to take law school exams? Or are they actually learning the material better in a way that will provide long-term benefits, either in bar passage rates or in job performance? At the moment, the data is not sufficient to know one way or the other.

A final question is how much providing individualized feedback will cost in faculty time, and whether the putative benefits justify the costs.

It’s a great start, and I look forward to more work from these authors, and from others, using quasi-experimental designs to investigate pedagogical variations.

April 30, 2016 in Guest Blogger: Michael Simkovic, Legal Profession, Of Academic Interest, Professional Advice, Science | Permalink