The Risks of Risk-Based AI Regulation: Taking Liability Seriously

Working Paper: CEPR Discussion Paper DP18517

Authors: Martin Kretschmer; Tobias Kretschmer; Alexander Peukert; Christian Peukert

Abstract: The development and regulation of multi-purpose, large “foundation models” of AI seem to have reached a critical stage, with major investments and new applications announced every other day. Some experts are calling for a moratorium on the training of AI systems more powerful than GPT-4. Legislators globally compete to set the blueprint for a new regulatory regime. This paper analyses the most advanced legal proposal, the European Union’s AI Act, currently in the final stage of “trilogue” negotiations between the EU institutions. This legislation will likely have extra-territorial implications, sometimes called “the Brussels effect”. It also constitutes a radical departure from conventional information and communications technology policy by regulating AI ex ante through a risk-based approach that seeks to prevent certain harmful outcomes based on product safety principles. We offer a review and critique, specifically discussing the AI Act’s problematic obligations regarding data quality and human oversight. Our proposal is to take liability seriously as the key regulatory mechanism. This signals to industry that, if a breach of law occurs, firms are required to know in particular what their inputs were and how to retrain the system to remedy the breach. Moreover, we suggest differentiating between endogenous and exogenous sources of potential harm, which can be mitigated by carefully allocating liability between developers and deployers of AI technology.

Keywords: product liability

JEL Codes: L86


Causal Claims Network Graph

Edges evidenced by causal inference methods are shown in orange; all other edges are shown in light blue.


Causal Claims

Cause → Effect

EU's risk-based approach to AI regulation (L50) → complexities that do not adequately address the unique nature of AI systems (D82)
Taking liability seriously (K13) → improved compliance and accountability in AI technologies (O35)
Data management (C80) → safety of AI outputs (C45)
Differentiation between endogenous and exogenous sources of harm (D62) → effective allocation of liability (K13)
