I'm an assistant professor of economics at the University of Arkansas specializing in behavioral and experimental economics. I use laboratory, field, and online experiments to study questions about education and public policy.
I'm originally from Kansas and received my bachelor's degree from Kansas State University. Before joining the faculty at the University of Arkansas, I studied economics at UC San Diego.
Employer beliefs, employee training, and labor market outcomes: A field experiment in Uganda ($452,000)
Employer beliefs, employee training, and labor market outcomes: A field experiment in Uganda ($16,660)
Examining the impact of waiting periods on improving the use of food subsidies for healthier consumption while maintaining choice ($198,940)
Improving Community College Outcomes through Performance Incentives ($312,000)
Improving Community College Outcomes through Performance Incentives ($25,000)
On the Elicitation of Willingness to Pay for Stigmatized Goods ($4,700)
In a field experiment, we examine the impact of performance-based incentives for community college instructors. Instructor incentives improve student exam scores, course grades, and credit accumulation while reducing course dropout. Effects are largest among part-time adjunct instructors. During the program, instructor incentives have large positive spillovers, increasing completion rates and grades in students’ courses outside our study. One year after the program, instructor incentives increase transfer rates to 4-year colleges with no impact on 2-year college degrees. We find no evidence of complementarities between instructor incentives and student incentives. Finally, while instructors initially prefer gain-framed contracts over our loss-framed ones, preferences for loss-framed contracts significantly increase after experience with them. [PDF]
Disentangling effort and luck is critical when judging performance. In a principal-agent experiment, we demonstrate that principals' judgments of agent effort are biased by luck, even though principals observe the agent's effort perfectly. We find that two potential solutions to this "outcome bias" (the opportunity to avoid irrelevant information about luck, and outsourcing judgment to independent third parties) are ineffective. When we give control over information about luck to principals and agents in separate treatments, we find asymmetric sophistication: agents strategically manipulate principals' outcome bias, but principals fail to recognize their own bias. Independent third parties are just as biased as principals. These findings indicate that the scope of outcome bias may be larger than previously understood and that outcome bias cannot be driven solely by emotional responses or distributional preferences. Instead, we hypothesize that luck directly affects beliefs, and we test this hypothesis by eliciting the beliefs of third parties and principals. Lucky agents are believed to exert more effort than identical, unlucky agents. We propose a model of biased belief updating that explains these results.
Social scientists have observed that socially desirable responding (SDR) often biases unincentivized surveys. Nonetheless, media, campaigns, and markets all employ unincentivized polls to make predictions about electoral outcomes. During the 2016 presidential campaign, we conducted three list experiments to test the effect of SDR on polls of agreement with presidential candidates. We elicit a subject's agreement with either Hillary Clinton or Donald Trump using explicit questioning or an implicit elicitation that allows subjects to conceal their individual responses. We find evidence that explicit polling overstates agreement with Clinton relative to Trump. Subgroup analysis by party identification shows that SDR significantly diminishes explicit statements of agreement with the opposing party's candidate, driven largely by Democrats, who are significantly less likely to explicitly state agreement with Trump. We measure economic policy preferences and find no evidence that ideological agreement drives SDR. We find suggestive evidence that local voting patterns predict SDR. [PDF]
Grading on the curve is a form of relative evaluation similar to an all-pay auction or rank-order tournament. The distribution of students drawn into the class from the population is predictably linked to the size of the class: increasing the class size draws students' percentile ranks closer to their population percentiles. Since grades are awarded based on percentile ranks in the class, this reallocates incentives for effort between students with different abilities. Predicted aggregate effort and predicted effort from high-ability students increase, while predicted effort from low-ability students decreases. Andreoni and Brownback (2017) find that the size of a contest has a causal impact on the aggregate effort from participants and the distribution of effort among heterogeneous agents. In this paper, I randomly assign "class sizes" to quizzes in an economics course to test these predictions in a real-stakes environment. My within-subjects design controls for student, classroom, and time confounds and finds that the lower variance of larger classes elicits greater effort from all but the lowest-ability students, significantly increasing aggregate effort.
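The sampling mechanism behind this prediction is easy to see in a short simulation. The sketch below is mine, not the paper's: the uniform ability distribution, the focal percentile of 0.7, and the function name are illustrative assumptions. It shows how a student's within-class percentile rank concentrates around their population percentile as class size grows:

```python
import numpy as np

rng = np.random.default_rng(0)

def class_percentile_sd(pop_percentile, class_size, n_sims=100_000):
    """Spread of a student's within-class percentile rank.

    The focal student sits at pop_percentile in the ability
    population; classmates are i.i.d. uniform draws on [0, 1].
    Returns the standard deviation of the student's class rank
    (the share of classmates below them) across simulated classes.
    """
    classmates = rng.uniform(size=(n_sims, class_size - 1))
    class_rank = (classmates < pop_percentile).mean(axis=1)
    return class_rank.std()

# Larger classes pull class ranks toward the population percentile,
# shrinking the noise roughly like 1 / sqrt(class size).
for n in (10, 40, 160):
    sd = class_percentile_sd(pop_percentile=0.7, class_size=n)
    print(f"class size {n:4d}: sd of class percentile = {sd:.3f}")
```

Under these assumptions, the standard deviation falls from about 0.15 at a class size of 10 to about 0.04 at 160; this is the "lower variance of larger classes" that drives the reallocation of effort incentives.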
We model contests with a fixed proportion of prizes, such as a grading curve, as all-pay auctions where higher effort weakly increases the likelihood of a prize. We derive theoretical predictions for the heterogeneous effects of auction size on effort from high- and low-types. We test our predictions in a laboratory experiment that compares behavior in two-bidder, one-prize auctions with behavior in 20-bidder, 10-prize auctions. We find a statistically significant 11.8% increase in aggregate bidding when moving from the small to the large auction. The impact is heterogeneous: as the auction size increases, low-types decrease effort but high-types increase effort. Additionally, the larger auction provides a stronger rank-correlation between effort and ability, awarding more prizes to the higher-skilled and improving the efficiency of prize allocation. [NBER link]
We conduct a field experiment with low-income shoppers to study how behavioral interventions can improve the effectiveness of healthy food subsidies. Our unique design enables us to elicit choices and deliver subsidies both before and at the point of purchase. We examine the effects of two non-restrictive changes to the choice environment: giving shoppers agency over what subsidy they receive and introducing a waiting period before the shopping trip to prompt deliberation about their purchases. Combined, our interventions enhance the subsidies, increasing healthy purchases by 61% relative to a choice-less healthy subsidy and by 199% relative to a control group. [SSRN Link]
We experimentally examine whether a policy targeting college summer school enrollment can accelerate degree progress and completion. We randomly assign summer scholarships to community college students and find a large impact on degree acceleration, increasing graduation within one year of the intervention by 32% and transfers to four-year colleges by 58%. We elicit preferences for the scholarships and find that treatment effects are concentrated among students with a preference against summer school. Our results suggest that educational impacts do not drive enrollment preferences and that many more students could benefit from summer school than the minority who currently enroll. [SSRN Link]
Poll respondents often attempt to present a positive image by overstating virtuous behaviors. We examine whether people account for this "socially desirable responding" (SDR) when drawing inferences from poll data. In an experiment, we incentivize "predictors" to guess others' choice behaviors across eight actions with varying social desirability. To aid guessing, predictors observe random subsamples of (i) incentivized choices or (ii) hypothetical claims from polls. Predictors show reasonable skepticism towards hypothetical claims, which exhibit predictable SDR. However, their skepticism is not tailored to the direction or magnitude of SDR. This under-correction occurs even though subjects' explicit responses can predict SDR. [SSRN Link]