Modeling Machine Learning

Working Paper: NBER ID: w30600

Authors: Andrew Caplin; Daniel J. Martin; Philip Marx

Abstract: We apply methodological innovations from cognitive economics that were designed to study human cognition to instead better understand machine learning. We first show that the folk theory of machine learning – that an algorithm learns optimally to minimize the loss function used in training – rests on a shaky foundation. We then identify a path forward by translating ideas from the costly learning branch of cognitive economics. We find that changes in the loss function impact learning just as they might if the algorithm were a rational human who found learning costly according to a revealed pseudo-cost function that may or may not correspond to actual resource costs. Our approach can be leveraged to determine more effective loss functions given a third party’s objective, whether that party is a firm or a policy maker.
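The abstract's central observation – that changing the loss function changes what the algorithm learns – can be illustrated with a minimal sketch. This is not the paper's method, just a hypothetical demonstration: a logistic regression trained by gradient descent on a class-weighted loss, where up-weighting the positive class (the `w_pos` parameter, an illustrative name) shifts the model's predicted probabilities upward.

```python
import numpy as np

def train_weighted_logreg(X, y, w_pos=1.0, lr=0.1, steps=2000):
    """Minimize a class-weighted logistic loss by plain gradient descent.

    Positive examples (y == 1) receive weight w_pos in the loss; all
    others receive weight 1. Returns fitted weights and intercept.
    """
    w = np.zeros(X.shape[1])
    b = 0.0
    sample_w = np.where(y == 1, w_pos, 1.0)
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))   # predicted probabilities
        grad = sample_w * (p - y)                 # weighted loss gradient
        w -= lr * (X.T @ grad) / len(y)
        b -= lr * grad.mean()
    return w, b

# Synthetic data: label depends noisily on the first feature.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = (X[:, 0] + 0.5 * rng.normal(size=200) > 0).astype(float)

# Same data, two loss functions: unweighted vs. positives up-weighted 5x.
w1, b1 = train_weighted_logreg(X, y, w_pos=1.0)
w5, b5 = train_weighted_logreg(X, y, w_pos=5.0)

p1 = 1.0 / (1.0 + np.exp(-(X @ w1 + b1)))
p5 = 1.0 / (1.0 + np.exp(-(X @ w5 + b5)))
# Up-weighting positives in the loss raises average predicted probability.
print(p1.mean(), p5.mean())
```

The data and training procedure are unchanged between the two runs; only the loss function differs, yet the predictions differ – the kind of loss-function sensitivity the paper studies.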

Keywords: Machine Learning; Cognition; Economics

JEL Codes: C0; D80


Causal Claims Network Graph

Edges that are evidenced by causal inference methods are in orange, and the rest are in light blue.


Causal Claims

Cause → Effect
feasibility-based learning (O22) → observed behavior of the algorithm (C92)
intrinsic difficulty of learning (D83) → model's predictions (C52)
class weights (C46) → predictions (F17)
higher learning costs (I23) → more accurate predictions (C53)