Eleven Canadian universities advised Maclean's magazine on Monday that
they will not participate in this year's survey that assigns rankings
to each institution because of concerns about the methodology and the
validity of some of the measures.
In a letter to Tony Keller, the magazine's managing editor of special
projects, the universities said they have expressed their "considerable
reservations" to Maclean's for some years, but to little avail.
"So far, these serious concerns have gone largely unaddressed, and there is
still no evidence that Maclean's intends to respond to them," they said.
The universities said they already publish a lot of data online about
themselves and intend to add more to allow people to make valid
comparisons.
"However, it is truly hard for us to justify the
investment of public funds required to generate customized data for
your survey when those data are compiled in ways that we regard as
oversimplified and arbitrary," they said.
The letter was signed by the presidents of:
University of Toronto
University of Ottawa
University of British Columbia
Simon Fraser University
University of Alberta
University of Calgary
University of Lethbridge
University of Manitoba
Université de Montréal
The universities said they found it inappropriate that the survey collects
data on a wide range of things — such as class size, faculty, finances,
library and reputation — and then arbitrarily assigns weightings to
generate a single ranking number....
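The universities' complaint — that disparate measures get collapsed into a single number by weightings chosen arbitrarily — can be made concrete with a toy calculation. Everything below (school names, scores, weights) is invented for illustration; it is not Maclean's actual methodology.

```python
# Two invented schools scored 0-100 on three invented measures.
scores = {
    "School A": {"class_size": 90, "library": 60, "reputation": 70},
    "School B": {"class_size": 60, "library": 90, "reputation": 80},
}

def overall(school, weights):
    """Weighted sum of a school's measures -- the 'single ranking number'."""
    return sum(weights[k] * v for k, v in scores[school].items())

# Two equally defensible-looking weighting schemes.
w1 = {"class_size": 0.50, "library": 0.25, "reputation": 0.25}
w2 = {"class_size": 0.25, "library": 0.50, "reputation": 0.25}

# The same underlying data yield opposite rankings under the two schemes.
for w in (w1, w2):
    ranked = sorted(scores, key=lambda s: overall(s, w), reverse=True)
    print(ranked)
```

The point of the sketch: nothing in the data dictates the weights, yet the weights alone decide which school comes out "first."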
It will be interesting to see what effect this has on the credibility of the Maclean's rankings. (Canadian readers: do they have any credibility? Comments are open for non-anonymous postings.) Certainly the fact that the two preeminent research universities in Canada--Toronto and British Columbia--are participating in the boycott should help. One wonders, though, why McGill was not a signatory (do they fare especially well in Maclean's?).
Of course, the other interesting question is why leading American research universities haven't followed suit. I suppose the worry is that it would be very hard to get schools to stick to an agreement not to participate. Schools like Harvard and MIT and Stanford can weather whatever abuse U.S. News would dole out to them if they didn't complete the surveys, but other schools could ill afford it, especially those (like Duke or Penn, among many others) that tend to be systematically overrated in U.S. News relative to other measures of academic quality.
It will be interesting to see what happens in Canada.
UPDATE: My colleague Les Green's observations, from the comments section, deserve to be read:
The systematically overrated Canadian universities are indeed over-represented among the non-signatories.
But the Maclean's rankings have nothing like the influence in Canada
that US News rankings have in the US. There are a number of reasons for
this. One is that the quality gap between the top Canadian universities
and the bottom ones is nothing like the gap between the top and bottom
American universities, so the value of rankings to the prospective
student is much less. This is in turn partly explained by the fact that
there is no equivalent to the spectacularly rich private US schools in
Canada; but neither is there any equivalent to the starved public
degree-mills or the bizarre little religious and ideological
enterprises that pretend to teach at a university level. Regulation
puts a quality floor under Canadian tertiary education (and perhaps,
controversially, also a ceiling above it). Then there is a much more
pronounced regional culture up here. UBC is indeed a very good research
university; but it isn't drawing many top students from Ontario or
Quebec (the largest provinces). Most Ontario students just aren't all
that interested in the differences, if any, between UBC and Simon
Fraser. Finally, remember that the US-Canada border is, for the monied
classes, porous. Occasionally one or the other of the better Canadian
universities pretends to be the "Harvard of the North." But well-to-do
parents aren't fooled. The Harvard of the North is *Harvard*. Finally,
it seems to me that, bad as it is, US News is actually *better* at
ranking than is Maclean's, whose staff obviously lack the competence and
contacts to do even a mediocre job of assessing the quality of Canadian
universities.
ANOTHER UPDATE: Canadian philosopher Thomas Hurka at the University of Toronto writes:
Interesting that you posted about the Canadian universities and the Maclean's ranking. One issue that was brought up years ago by the Alberta universities but got no response from the magazine is the following. Perhaps the most important factor in the overall Maclean's ranking is average grades of incoming students. But, as Les Green noted on your site, Canadian universities much more than US ones draw the bulk of their students locally, from their own provinces. And high school grading practices in different provinces are very different, e.g. Alberta has province-wide exams, which lowers grades (because you're not being graded by your own teacher, who has a stake in your success), while Ontario has a cash scholarship for averages above 80%, which
inflates grades (since teachers want their students to get the cash). This systematically favours Ontario over Alberta universities, and in particular favours Queen's U, which has the highest incoming average in the country. It wouldn't be too hard to adjust for these differences -- just control by the percentage of a university's incoming class that's out-of-province and normalize by average grade 12 results per province. But Maclean's refused to do anything like that. No doubt that partly
explains why all three Alberta universities are among those pulling out of the exercise, while Queen's is not.
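The adjustment Hurka describes is straightforward arithmetic: re-centre each incoming student's grade-12 average on their home province's mean, so a province with tougher grading isn't penalized. A minimal sketch, with all numbers invented for illustration (these are not real provincial averages):

```python
# Assumed (invented) provincial grade-12 mean averages.
provincial_mean = {"Alberta": 78.0, "Ontario": 84.0}

def normalized_average(students):
    """Incoming-class average after subtracting each student's
    provincial mean, as Hurka's proposal suggests.
    `students` is a list of (grade, province) pairs."""
    adjusted = [grade - provincial_mean[prov] for grade, prov in students]
    return sum(adjusted) / len(adjusted)

# Two invented incoming classes that stand equally far above their own
# province's mean, though their raw averages differ by six points.
alberta_class = [(80.0, "Alberta"), (82.0, "Alberta")]
ontario_class = [(86.0, "Ontario"), (88.0, "Ontario")]

print(normalized_average(alberta_class))  # 3.0
print(normalized_average(ontario_class))  # 3.0
```

On raw averages the Ontario class looks stronger; normalized by province, the two classes are indistinguishable — which is exactly the distortion Hurka says Maclean's declined to correct.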
As many readers know, I'm going to be at the University of Chicago in the fall. Driving from Austin is more hassle than it's worth, so I am wondering whether any readers in the Chicago area who might be on leave themselves in the fall, or who have an extra car, would be willing to "rent" me a vehicle for local use. (Because I'll be commuting back and forth at various intervals between Chicago and Austin, I won't even be using a car continuously while there.) If so, please e-mail me. Many thanks.
UPDATE: Someone suggested shipping the car, or getting it driven (try www.autodriveaway.com, which does both), which is what I may end up doing, though for a variety of reasons not worth boring folks with, leaving the car in Austin is preferable.
Daniel Solove (GW) has initiated a discussion of this topic here. Personally, I give most weight to the references (if they are from reliable people) and the quality of any writing the candidate has done. Later on, the job talk can be useful for assessing likely classroom performance, and sometimes for assessing intellectual ability (though written work is a better guide on that score). Excellent grades in law school and clerkships turn out to be quite weak predictors of scholarly ability and intellectual depth. (There are exceptions: successful clerkships with academically-minded judges like Posner or Calabresi can provide valuable information.)
Earlier this summer, I began a series of posts about the U.S. News & World Report's law school rankings. (Please see below for links to each post in the series.) My research uncovered many interesting and troubling things about the rankings. I discovered errors in the data that USN&WR used for the most recent rankings and, consequently, errors in the way that it ranked several law schools. More distressingly, I discovered that almost no safeguards exist to correct or prevent such errors. I think it fair to say that, but for my peculiar obsession with the USN&WR rankings, nobody would have noticed the errors I've documented. That won't do. We cannot rely on one nutty professor to keep the rankings honest. I thus here wrap up my series about the most recent USN&WR law school rankings by describing several reforms designed to make law school rankings more accurate and open. Although I suggest all of them, implementing any one of these reforms would make errors in the rankings less likely, and surviving errors more likely to get corrected.
Professor Bell has several sound suggestions. They do not, of course, address the more general worry, namely, that the US News ranking methodology, with its stew of a dozen different factors, simply makes no sense, and can be neither rationally defended nor even explained. But Professor Bell's core recommendations are certainly good ones; perhaps Bob Morse at US News will take note.
I am the new chairman of my school's hiring committee and would find benchmarking on several issues helpful.
Here are my questions:
1. Is the selection ever delegated to a committee? If so, why?
2. Which categories of faculty do not vote on hiring decisions? LR&W? Clinical?
3. With respect to faculty hiring, do all law school faculties use the majority vote approach or do some require a supermajority? If a supermajority approach is used, what are the reasons for this?
4. Assuming there are multiple candidates for a position, does the faculty rank them in order of preference? Is the dean given authority to hire a lower-ranked candidate if the preferred candidate turns the school down?
I assume there are a variety of institutional practices out there. At Texas the answers are:
1. No, though the Appointments Committee has enormous influence, and its strong recommendations have never been turned down in my recollection.
2. LR&W and Clinical faculty do not vote; only tenure-stream academic faculty vote.
3. We require a kind of supermajority vote for appointments with tenure, though our voting rules are so complicated, I can't explain them!
4. We do not vote an offer unless we have a position for the candidate, so we don't ordinarily get into this situation. The Dean does not have the authority to make offers, only the faculty does.
Non-anonymous responses will be preferred, though I may post anonymous replies if I have independent reasons for thinking they are factually accurate.
Richard Lempert (Michigan) has apt comments here; an excerpt:
I have concerns about the quality of some of the empirical work
being done and how empirical work, even good scholarship, is used. To
put it bluntly, too much empirical scholarship is being deployed
normatively, downplaying caveats that should be attached to findings,
and some research, including work by outstanding scholars, seems
strongly agenda driven (sometimes to the point of being financed by
parties building records for litigation). I also see in some work a
divorce of empirical analysis from theory and context, which not only
can diminish the utility of empirical studies in building more general
understandings, but can also lead to poor research designs and
misunderstandings of data. Moreover, even when research is of high
quality and done with great care, its results can be hijacked by groups
that oversimplify what was found in order to "sell" positions they
hold and would hold even if the empirical work had come out otherwise.
Too often researchers encourage misuses of their results in
conclusions that push the practical implications of their research,
even when the more detailed analysis emphasizes proper cautions. While
this occurs with the empirical study of law in liberal arts schools,
by political scientists, sociologists, economists and psychologists
among others, the problem tends to be more severe in the empirical work
of law professors, perhaps because most see their business not as
building social or behavioral theory but as criticizing laws and legal
institutions and recommending reform.