Saturday, April 30, 2016

Should professors give more feedback before the final exam? (Michael Simkovic)

New research from Dan Schwarcz and Dion Farganis at Minnesota argues that giving first-year law students practice problems and exercises similar to the final exam, together with individualized feedback before the exam, can improve their grades.

Schwarcz and Farganis tracked the performance of first-year students who were randomly assigned to sections and, as a result, took courses with professors who either did or did not provide exercises and individual feedback before the final examination.

When students who had studied under feedback professors and students who had studied under no-feedback professors later took a required class together, the feedback students received higher grades after controlling for several factors that predict grades, such as LSAT scores, undergraduate GPA, gender, race, and country of birth. The increase in grades appears to be larger for students in the bottom half of the grade distribution. The paper also attempts to control for variation in instructor ability using student evaluations of teaching clarity.
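To make concrete what "controlling for" means here, below is a minimal sketch, in Python, of the kind of regression such a design implies. The variable names, the simulated data, and the quantile-regression check are my own illustration, not the authors' actual specification (which includes additional controls, such as race).

```python
# Hypothetical sketch of the kind of regression implied by the study design:
# regress grades in a common later course on a feedback indicator plus
# pre-existing predictors of performance. All names and data are illustrative.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 400

df = pd.DataFrame({
    "feedback": rng.integers(0, 2, n),   # 1 = section taught by a feedback professor
    "lsat": rng.normal(160, 6, n),
    "ugpa": rng.normal(3.4, 0.3, n),
    "female": rng.integers(0, 2, n),
    "foreign_born": rng.integers(0, 2, n),
})
# Simulated outcome: grades in the common course, with a small feedback effect
df["grade"] = (0.15 * df["feedback"] + 0.05 * (df["lsat"] - 160)
               + 0.5 * (df["ugpa"] - 3.4) + rng.normal(0, 1, n))

# OLS with controls: the coefficient on `feedback` estimates the effect of
# having had a feedback professor, holding the other predictors fixed.
model = smf.ols("grade ~ feedback + lsat + ugpa + female + foreign_born",
                data=df).fit()
print(model.summary().tables[1])

# Quantile regression at the 25th percentile probes the claim that the
# effect is larger toward the bottom of the grade distribution.
q25 = smf.quantreg("grade ~ feedback + lsat + ugpa + female + foreign_born",
                   data=df).fit(q=0.25)
print(q25.params["feedback"])
```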

It’s an interesting paper, and part of a welcome trend toward assessing proposed pedagogical reform through quasi-experimental methods.

The interpretation of these results raises a number of questions that I hope the authors will address more thoroughly as they revise the paper and in future research.

For example, are the differences due to instructor effects rather than feedback effects? Students are randomly assigned to instructors, but the instructors who give pre-exam feedback do so voluntarily. They might be more conscientious, dedicated, or skilled instructors who also happen to give pre-exam feedback. If so, requiring other instructors to give pre-exam feedback, or having the same instructors provide none, might not affect student performance.

Controlling for instructor ability based on teaching evaluations is not entirely convincing, even if students are ostensibly evaluating teaching clarity. The evidence that teaching evaluations reflect how much students learn is weak. An easier instructor who covers less substance might receive higher evaluations across the board than a rigorous instructor who does more to prepare students for practice. Teaching evaluations might reflect friendliness, liveliness, attractiveness, or other qualities that have consumption value for students but do not actually affect learning outcomes. Indeed, high-feedback professors might receive lower evaluations for the same quality of teaching, because they make students work harder and because they give negative feedback to some students, who may retaliate on the evaluations.

These issues could be addressed in future research by asking the same instructor to teach two sections of the same class in different ways and measuring both long-term student outcomes and teaching evaluations.

Another question: are students simply learning how to take law school exams, or are they actually learning the material better, in a way that will provide long-term benefits in bar passage rates or job performance? At the moment, the data are not sufficient to tell one way or the other.

A final question is how much individualized feedback costs in faculty time, and whether the putative benefits justify those costs.

It’s a great start, and I look forward to more work from these authors, and from others, using quasi-experimental designs to investigate pedagogical variations.

https://leiterlawschool.typepad.com/leiter/2016/04/should-professors-give-more-feedback-before-the-final-exam-michael-simkovic.html
