An Economic Approach to Regulating Algorithms

Working Paper: NBER No. w27111

Authors: Ashesh Rambachan; Jon Kleinberg; Sendhil Mullainathan; Jens Ludwig

Abstract: There is growing concern about "algorithmic bias" - that predictive algorithms used in decision-making might bake in or exacerbate discrimination in society. We argue that such concerns are naturally addressed using the tools of welfare economics. This approach overturns prevailing wisdom about the remedies for algorithmic bias. First, when a social planner builds the algorithm herself, her equity preference has no effect on the training procedure. So long as the data, however biased, contain signal, they will be used and the learning algorithm will be the same. Equity preferences alone provide no reason to alter how information is extracted from data - only how that information enters decision-making. Second, when private (possibly discriminatory) actors are the ones building algorithms, optimal regulation involves algorithmic disclosure but otherwise no restriction on training procedures. Under such disclosure, the use of algorithms strictly reduces the extent of discrimination relative to a world in which humans make all the decisions.

Keywords: algorithmic bias; regulation; welfare economics; discrimination; social planner

JEL Codes: C54; D6; J7; K00


Causal Claims Network Graph

Edges evidenced by causal inference methods are shown in orange; the rest are in light blue.


Causal Claims

Cause → Effect
social planner's equity preferences (D63) → predictive algorithm's construction (C45)
algorithmic audits (M42) → discrimination (J71)
use of algorithms (C89) → discrimination (J71)
regulation (L51) → discrimination (J71)
flexibility in regulation (G18) → accurate predictions (C53)
flexibility in regulation (G18) → discriminatory practices (J71)
