Algorithmic Collusion with Imperfect Monitoring

Working Paper: CEPR DP15738

Authors: Giacomo Calzolari; Emilio Calvano; Vincenzo Denicolo; Sergio Pastorello

Abstract: We show that, if allowed enough time to learn, Q-learning algorithms can learn to collude in an environment with imperfect monitoring adapted from Green and Porter (1984), without having been instructed to do so and without communicating with one another. Collusion is sustained by punishments that take the form of "price wars" triggered by the observation of low prices. These punishments have a finite duration: they are harsher initially and then gradually fade away. They are triggered both by deviations and by adverse demand shocks.
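The mechanism described in the abstract can be illustrated with a minimal sketch of tabular Q-learning in a Green-Porter-style duopoly. This is not the authors' actual simulation: the payoff numbers, the binary high/low price grid, the single-signal state, and the parameter values are all hypothetical choices made for illustration. The key feature retained is imperfect monitoring: firms observe only a public price signal, and an adverse demand shock can produce a low signal even when both firms priced high.

```python
import random

HIGH, LOW = "high", "low"
ACTIONS = [HIGH, LOW]


def make_q():
    # States are the last public signal: "good" (high market price observed)
    # or "bad" (low price observed, caused by a rival's cut OR a demand shock).
    return {s: {a: 0.0 for a in ACTIONS} for s in ("good", "bad")}


def signal(a1, a2, shock_prob, rng):
    # Imperfect monitoring: even if both firms price high, an adverse
    # demand shock produces a low public price with probability shock_prob.
    if a1 == LOW or a2 == LOW or rng.random() < shock_prob:
        return "bad"
    return "good"


def profit(own, rival):
    # Stylized stage payoffs (hypothetical numbers, for illustration only).
    if own == HIGH and rival == HIGH:
        return 10.0   # collusive profit
    if own == LOW and rival == HIGH:
        return 15.0   # short-run gain from undercutting
    if own == HIGH and rival == LOW:
        return 2.0    # being undercut
    return 5.0        # both price low (competitive outcome)


def q_learning_duopoly(episodes=20000, alpha=0.1, gamma=0.95,
                       eps=0.1, shock_prob=0.1, seed=0):
    rng = random.Random(seed)
    Q1, Q2 = make_q(), make_q()
    state = "good"
    for _ in range(episodes):
        def choose(Q):
            # Epsilon-greedy action selection on the current public signal.
            if rng.random() < eps:
                return rng.choice(ACTIONS)
            return max(Q[state], key=Q[state].get)
        a1, a2 = choose(Q1), choose(Q2)
        next_state = signal(a1, a2, shock_prob, rng)
        for Q, own, rival in ((Q1, a1, a2), (Q2, a2, a1)):
            # Standard tabular Q-update:
            # Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))
            r = profit(own, rival)
            best_next = max(Q[next_state].values())
            Q[state][own] += alpha * (r + gamma * best_next - Q[state][own])
        state = next_state
    return Q1, Q2
```

Because the state is the public signal rather than the rivals' actual prices, any punishment scheme the algorithms learn is necessarily triggered by low observed prices, whether caused by a deviation or by a demand shock, mirroring the mechanism in the abstract.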

Keywords: Artificial Intelligence; Q-learning; Imperfect Monitoring; Collusion

JEL Codes: L41; L13; D43; D83


Causal Claims Network Graph

Edges evidenced by causal inference methods are shown in orange; the rest are in light blue.


Causal Claims

Cause → Effect
- Q-learning algorithms (C73) → Collusion in imperfect monitoring (D82)
- Price wars triggered by low prices (L11) → Sustainability of collusion (D74)
- Imperfect monitoring (D82) → Effectiveness of collusion (L12)