
Empirical Frontiers in Discrimination Research

Paper Session

Saturday, Jan. 3, 2026 8:00 AM - 10:00 AM (EST)

Philadelphia Marriott Downtown
Hosted By: Econometric Society
  • Chair: Evan Rose, University of Chicago

What Do Blind Evaluations Reveal? How Discrimination Shapes Representation and Quality

Haruka Uchida, University of Chicago

Abstract

Concealing candidate identities during evaluations ("blinding") is often proposed to combat discrimination, yet its effects on the composition and quality of selected candidates, as well as its underlying mechanisms, remain unclear. I conduct a field experiment at an international academic conference, randomly assigning all 657 submitted papers to two blind and two non-blind reviewers (245 total) and collecting paper quality measures—citations and publication statuses five years later. I find that blinding significantly shrinks gaps in reviewer scores and acceptances by student status and institution rank, with no significant effects by gender. These increases in representation are not at the expense of quality: papers selected under blind review are of comparable quality to those selected non-blind. To understand mechanisms, I run a second field experiment that again implements blind and non-blind review, and elicits reviewer predictions of future submission outcomes. I combine my experiments to estimate a model of reviewer scores that uses blind scores to decompose non-blind disparities into distinct forms of discrimination. I find that the nature of discrimination differs by trait: student score gaps are explained by inaccurate beliefs about paper quality (inaccurate statistical discrimination) and alternative objectives (such as favoring authors whose acceptance benefits others), while institution gaps are attributable to residual drivers such as animus.

Discrimination Preferences

Nickolas Gagnon, Aarhus University
Daniele Nosenzo, Aarhus University

Abstract

We reconsider discrimination preferences through a moral lens and conduct experiments to systematically investigate these preferences using quota-based representative UK samples. Moving beyond the aggregate, we evaluate the frequency of individual preferences for and against taste-based and statistical discrimination across three domains: ethnicity, gender, and LGBTQ+ status. Using over 60,000 anonymous decisions affecting how workers are paid, made by more than 3,500 individuals, we document that most individuals prefer to engage in at least one type of discrimination, that there is substantial heterogeneity in preferences, and that the existence of multiple preferences changes our understanding of why individuals do or do not engage in discrimination. Among other things, we examine how preferences are linked to socio-demographic characteristics, politics, policy support, and gender wage gaps; evaluate how preferences correlate across domains; study underlying redistributive principles and the effects of wage transparency; and complement our findings with a survey about workplaces.

Discriminatory Discretion: Theory and Evidence From Use of Pretrial Algorithms

Diag Davenport, Princeton University

Abstract

This article examines the biased usage of an algorithm, an understudied topic relative to the massive body of research that examines how algorithms may be biased. Using highly detailed administrative data, I study a large sample of high-stakes decision makers—New Jersey police and judicial officers—who are armed with a freely available algorithm. When officers consider requesting a warrant for a defendant’s detention, they have complete discretion over whether to consult an algorithmic risk score that predicts a defendant’s likelihood of failing to appear in court as well as the defendant’s likelihood of being rearrested if released. I find that officers frequently choose not to look at information that is free, simple, and non-binding. Moreover, the choice of whether to view the algorithm is far from random. Controlling for underlying risk, officers are less likely to consult the risk score for black defendants (relative to white defendants) accused of lesser crimes, but the relationship is reversed for severe crimes. Then, once the risk scores are seen, officers are more likely to issue warrants for black defendants, again controlling for risk. The black-white warrant gap is smallest for the most and least risky defendants, and grows for more moderate-risk defendants. I organize these empirical facts in a novel taste-based discrimination framework in which agents are averse to certain groups, but also averse to appearing prejudiced. The key prediction of this avoidant animus is that agents will discriminate more in situations that are more ambiguous in an effort to curate their preferred image. I conclude by discussing policy implications for prejudice reduction, automation, and the discretionary use of decision aids.

Selecting for Diverse Talent: Theory and Evidence

Kadeem Noray, Massachusetts Institute of Technology

Abstract

We hypothesize that the complexity of selecting personnel in a way that jointly optimizes for talent and diversity impedes organizations from meeting their diversity goals. To formalize this, we prove that maximizing cohort diversity is computationally complex (i.e., NP-hard) and incorporate this complexity into a selection model by adding computational costs. To test the model’s predictions, we construct an algorithm to estimate the diversity-talent frontier, which we apply to data from a scholarship and talent investment program. We find that shortlisted cohorts could have been 13% more diverse without reducing talent, or 19.6% more talented without reducing diversity. We also show that the program selected a significantly more diverse and talented cohort after we provided it with a frontier estimate. We conclude by using program data to demonstrate how the frontier estimation procedure can be used to evaluate the efficacy of alternative screening approaches. This reveals that if the program had screened on IQ, it would have significantly reduced diversity and overlooked many of the most talented applicants.
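The diversity-talent trade-off the abstract describes can be illustrated with a minimal brute-force sketch. This is not the paper's algorithm: the applicant data, scores, and the choice of "distinct groups represented" as the diversity measure are all invented for illustration. Exhaustively enumerating every size-k cohort to find the Pareto frontier is exponential in the pool size, which is the combinatorial burden the NP-hardness result formalizes.

```python
from itertools import combinations

# Hypothetical applicant pool: (name, talent score, demographic group).
applicants = [
    ("A", 9.1, "g1"), ("B", 8.7, "g1"), ("C", 8.5, "g2"),
    ("D", 8.0, "g3"), ("E", 7.6, "g2"), ("F", 7.2, "g4"),
]

def cohort_stats(cohort):
    """Talent = sum of scores; diversity = number of distinct groups."""
    talent = sum(score for _, score, _ in cohort)
    diversity = len({group for _, _, group in cohort})
    return talent, diversity

def pareto_frontier(pool, k):
    """Enumerate all size-k cohorts and keep the Pareto-optimal ones.
    Exhaustive search over C(n, k) cohorts is infeasible at scale."""
    scored = [(cohort_stats(c), c) for c in combinations(pool, k)]
    frontier = []
    for (t, d), c in scored:
        dominated = any(t2 >= t and d2 >= d and (t2, d2) != (t, d)
                        for (t2, d2), _ in scored)
        if not dominated:
            frontier.append(((t, d), [name for name, _, _ in c]))
    return frontier

for (talent, diversity), names in pareto_frontier(applicants, 3):
    print(f"talent={talent:.1f} diversity={diversity} cohort={names}")
```

In this toy pool the frontier has two points: the talent-maximizing cohort spans only two groups, while a slightly less talented cohort covers three, mirroring the kind of diversity-vs-talent slack the frontier estimate makes visible.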
JEL Classifications
  • J2 - Demand and Supply of Labor
  • J7 - Labor Discrimination