Working Paper: Can Socially Minded Governance Control the AGI Beast? (NBER ID: w31924)
Authors: Joshua S. Gans
Abstract: This paper robustly concludes that it cannot. A model is constructed under idealised conditions that presume the risks associated with artificial general intelligence (AGI) are real, that safe AGI products are possible, and that there exist socially-minded funders who are interested in funding safe AGI even if this does not maximise profits. It is demonstrated that a socially-minded entity formed by such funders would not be able to minimise harm from AGI that might be created by unrestricted products released by for-profit firms. The reason is that a socially-minded entity has neither the incentive nor the ability to minimise the use of unrestricted AGI products in ex post competition with for-profit firms, and it cannot preempt the AGI developed by for-profit firms ex ante.
Keywords: No keywords provided
JEL Codes: L20; O33; O36
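To make the abstract's ex post competition argument concrete, here is a minimal numerical sketch. It is an illustration under assumed functional forms, not the paper's formal model: users with taste theta value an unrestricted product at theta and a safe, restricted product at ALPHA * theta with ALPHA < 1, while the for-profit firm prices to maximise profit. The parameter ALPHA, the uniform taste distribution, and the grid search are all assumptions made purely for illustration.

```python
import numpy as np

# Illustrative sketch, not the paper's model: tastes theta ~ Uniform[0, 1];
# the unrestricted product is worth theta, the safe product ALPHA * theta.
ALPHA = 0.6                              # assumed value discount from safety restrictions
THETAS = np.linspace(0.0, 1.0, 10_001)   # approximate a continuum of users

def unrestricted_share(p_u, p_s=None):
    """Share of users who buy the unrestricted product at prices (p_u, p_s)."""
    surplus_u = THETAS - p_u
    surplus_s = ALPHA * THETAS - p_s if p_s is not None else np.full_like(THETAS, -np.inf)
    return float(np.mean((surplus_u > 0) & (surplus_u >= surplus_s)))

def best_price(p_s=None):
    """For-profit firm's profit-maximising price against a safe rival at p_s (grid search)."""
    grid = np.linspace(0.01, 1.0, 500)
    return max(grid, key=lambda p: p * unrestricted_share(p, p_s))

p_mono = best_price()            # for-profit monopoly: no safe product on the market
p_duo = best_price(p_s=0.0)      # socially minded entrant gives the safe product away

print(f"monopoly: p_u = {p_mono:.2f}, unrestricted usage = {unrestricted_share(p_mono):.0%}")
print(f"duopoly:  p_u = {p_duo:.2f}, unrestricted usage = {unrestricted_share(p_duo, 0.0):.0%}")
```

With these linear valuations, entry by the safe product (even at a price of zero) only pushes the unrestricted price down from about 0.5 to about 0.2; the share of users on the unrestricted product stays at roughly 50%, echoing the abstract's claim that the socially minded entity cannot minimise unrestricted usage through ex post competition.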
The table below lists the cause and effect edges extracted from the paper. In the original graph visualisation, edges evidenced by causal inference methods are drawn in orange and the rest in light blue.
| Cause | Effect |
|---|---|
| Socially minded governance structure (G38) | Safe AGI release (L17) |
| Competition with for-profit entities (L39) | Safe AGI release (L17) |
| For-profit development (O29) | Socially minded governance structure (G38) |