Wednesday, April 8, 2015

Empower the Census Bureau to Collect Graduate Employment Data (Michael Simkovic)

After my first post on employment definitions, a law school dean emailed me to suggest that perhaps the ABA felt it needed to be extra tough because it was worried it couldn’t trust some of the law schools to make close judgment calls in categorizing employment data. 

The Census Bureau does a wonderful job collecting and reporting earnings and employment data using standard methods and definitions.  Why not empower the Census Bureau to collect the relevant data about law schools and all programs of higher education?

There are two potential uses of employment outcome data of law school graduates. 

(1) Comparing law school to alternatives to law school

(2) Comparing law schools to each other

Census Bureau data is very well suited to the first use, and could also be useful for high-level information about geography or institutional rank even if not for comparisons of individual institutions.  If the Current Population Survey and the American Community Survey—which have larger sample sizes and release data more regularly than the Survey of Income and Program Participation—were expanded to include questions on graduate education field (i.e., law, medicine, business) as well as level (B.A., PhD, Master’s, or Professional degree), and specific information about the institution attended or its caliber or geography, that would go a long way toward making law school data redundant.  Census surveys will not have data on every law graduate, but as long as the sample is representative, that is not much of a problem.

The Census Bureau data would likely be superior to law school data in the most important respects because it would be comparable to data for those with other educational backgrounds.  And because Census Bureau data covers a representative sample of the population at all career stages, not just recent graduates, it would not encourage an unhealthy and misleading fixation on short-term outcomes.

As far as comparing individual law schools to each other, student loan default data from the Department of Education might serve this function at least as well as ABA data.  To the extent we are concerned about poor outcomes at any particular law school, such poor outcomes will show up in higher student loan default rates. 

Default rates will reflect outcomes not only for graduates, but also for those who fail to complete the program.  This data would also not be sensitive to response bias on the low end—individuals who do not respond to their student loan bills will be counted as defaulters.  Another advantage of this data is that it can be compared with other educational programs.  Of course, we would still need to be mindful of the issue of selection versus causation.  (Although we could quibble about how the Department of Education calculates its default rates—it publishes more than one—the specifics of the definition are far less important than the fact that it is applied consistently across institutions, is used for comparative purposes, and is correlated with other validated measures.)

If the Department of Education required colleges and universities to release separate default rate data for every field of graduate study (and perhaps for every college major), that would go a long way to helping inform students and increasing comparability of information about risk levels across programs.  (I’ve discussed the merits of this kind of granular disclosure before). 

The data won’t capture differences in the boost to earnings across law schools for students in the middle or high end of the distribution, since relatively few students default on their loans.  It also won’t tell us anything about the students who don’t need to borrow.  Nor will it tell us which schools have the strongest alumni networks in specific geographies or industries.  That purpose might be better served by expanding longitudinal studies like After the JD, Baccalaureate and Beyond, the National Longitudinal Survey of Youth, and the National Survey of College Graduates to include larger samples, better information about pre-law school differences in student characteristics, and more long-term information on post-graduate earnings and employment.

The Census Bureau’s ethics and incentives are unimpeachable.  Putting data collection in its capable hands and into the hands of similar agencies charged with broad-based data collection would enable these agencies to do more of what they do best and free law schools from the burdens of a task they may not be well equipped to handle. 

Resources that are now wasted collecting very precise but not very useful data about initial outcomes for law graduates could instead be redeployed to analyzing the higher quality data.  (Or if we still think short term ABA and NALP data provide incremental value that exceeds the costs of collecting, reporting, and interpreting the data—and the costs of predictable misinterpretation and misuse—we could have that much more data to work with). 

Food for thought.
