Working Paper: NBER ID: w25548
Authors: Jon Kleinberg; Jens Ludwig; Sendhil Mullainathan; Cass R. Sunstein
Abstract: The law forbids discrimination. But the ambiguity of human decision-making often makes it extraordinarily hard for the legal system to know whether anyone has actually discriminated. To understand how algorithms affect discrimination, we must therefore also understand how they affect the problem of detecting discrimination. By one measure, algorithms are fundamentally opaque, not just cognitively but even mathematically. Yet for the task of proving discrimination, processes involving algorithms can provide crucial forms of transparency that are otherwise unavailable. These benefits do not happen automatically. But with appropriate requirements in place, the use of algorithms will make it possible to more easily examine and interrogate the entire decision process, thereby making it far easier to know whether discrimination has occurred. By forcing a new level of specificity, the use of algorithms also highlights, and makes transparent, central tradeoffs among competing values. Algorithms are not only a threat to be regulated; with the right safeguards in place, they have the potential to be a positive force for equity.
Keywords: Discrimination; Algorithms; Fairness; Regulation
JEL Codes: H0; I0; K0
The table below lists cause-effect edges. In the original graph rendering, edges evidenced by causal inference methods are shown in orange; the rest are in light blue.
| Cause | Effect |
|---|---|
| Algorithms (C89) | Easier identification of discriminatory practices (J71) |
| Regulatory measures (G18) | Transparency in algorithmic decision-making (D79) |
| Transparency in algorithmic decision-making (D79) | Easier identification of discriminatory practices (J71) |
| Opacity of algorithms (C60) | Exacerbation of discrimination (J15) |
| Bias in training data (C83) | Perpetuation of biases in algorithms (J71) |