Steven D Levitt, John A List, Susanne Neckermann, Sally Sadoff
Cited by*: 0 Downloads*: 161

Research in behavioral economics has established the importance of factors such as reference-dependent preferences, hyperbolic preferences, and the value placed on non-financial rewards. To date, these insights have had little impact on the way the educational system operates. Through a series of field experiments involving thousands of primary and secondary school students, we demonstrate the power of behavioral economics to influence educational performance. Several insights emerge. First, we find that incentives framed as losses have more robust effects than comparable incentives framed as gains. Second, we find that non-financial incentives are considerably more cost-effective than financial incentives for younger students, but are not effective with older students. Finally, and perhaps most importantly, consistent with hyperbolic discounting, all motivating power of the incentives vanishes when rewards are handed out with a delay. Since the rewards to educational investment virtually always come with a delay, our results suggest that the current set of incentives may lead to under-investment. For policymakers, our findings imply that in the absence of immediate incentives, many students put forth low effort on standardized tests, which may create biases in measures of student ability, teacher value added, school quality, and achievement gaps.
John A List, Jeffrey A Livingston, Susanne Neckermann
Cited by*: None Downloads*: None

In the face of worryingly low performance on standardized tests, offering students financial incentives linked to academic performance has been proposed as a potentially cost-effective way to support improvement. However, a large literature across disciplines finds that extrinsic incentives, once removed, may crowd out intrinsic motivation on subsequent, similar tasks. We conduct a field experiment in which students, parents, and tutors are offered incentives designed to encourage student preparation for a high-stakes state test. The incentives reward performance on a separate low-stakes assessment designed to measure the same skills as the high-stakes test. Performance on the high-stakes test, however, is not incentivized. We find substantial treatment effects on the incented tests but no effect on the non-incented test; if anything, the incentives result in worse performance on the non-incented test. We also find evidence supporting the conclusion that the incentives crowd out intrinsic motivation to perform well on the non-incented test, but this effect is only temporary. One year later, students who had been in the incentives treatments perform better than those in the control on the same non-incented test.