Working Paper: NBER ID: w26673
Authors: Susan C. Athey; Kevin A. Bryan; Joshua S. Gans
Abstract: The allocation of decision authority by a principal to either a human agent or an artificial intelligence (AI) is examined. The principal trades off the AI's more aligned choices against the need to motivate the human agent to expend effort in learning choice payoffs. When agent effort is desired, it is shown that the principal is more likely to give that agent decision authority, to reduce investment in AI reliability, and to adopt an AI that may be biased. Organizational design considerations are therefore likely to affect how AIs are trained.
Keywords: Artificial Intelligence; Decision Authority; Organizational Design
JEL Codes: C7; M54; O32; O33
The cause-effect relationships identified in the paper are listed below (JEL codes in parentheses); in the original graph visualization, edges evidenced by causal inference methods were distinguished from the rest.
| Cause | Effect |
|---|---|
| AI reliability (C52) | Human agent's effort (J20) |
| Allocation of authority (H77) | Human agent's effort (J20) |
| AI bias (C45) | Human agent's preference for decision authority (D91) |