In this episode, we tackle one of the most pressing questions of our technological age: how much risk of human extinction should we accept in exchange for unprecedented economic growth from AI?
The podcast explores research by Stanford economist Chad Jones, who models scenarios in which AI delivers a staggering 10% annual GDP growth rate but carries a small probability of triggering an existential catastrophe. We dissect how our risk tolerance depends on fundamental assumptions about utility functions, time horizons, and what actually constitutes an "existential risk."
We discuss how Jones’ model presents some stark calculations: with certain plausible assumptions, society might rationally accept up to a 33% cumulative chance of extinction for decades of AI-powered prosperity. Yet slight changes to risk assumptions or utility functions can flip the calculation entirely, suggesting we should halt AI development altogether.
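For listeners who want to poke at the intuition, here is a toy expected-utility calculation in Python. It is not Jones' actual model: the curvature parameter gamma, the value-of-life constant b, and the tenfold consumption gain are all made-up illustration values. It simply shows how the acceptable extinction risk shrinks as the utility function becomes more curved (closer to bounded).

```python
import numpy as np

def lifetime_value(c, gamma, b=10.0):
    """Value of being alive at consumption c: CRRA utility plus a constant b
    capturing the intrinsic value of life (keeps the value positive)."""
    if gamma == 1.0:
        return np.log(c) + b
    return c ** (1 - gamma) / (1 - gamma) + b

def max_acceptable_risk(c_safe, c_ai, gamma):
    """Extinction probability at which society is indifferent:
    (1 - delta) * V(c_ai) = V(c_safe)  =>  delta* = 1 - V(c_safe) / V(c_ai)."""
    return max(0.0, 1.0 - lifetime_value(c_safe, gamma) / lifetime_value(c_ai, gamma))

# Example: the AI path multiplies consumption tenfold relative to the safe path.
for gamma in [1.0, 2.0, 3.0]:
    delta_star = max_acceptable_risk(c_safe=1.0, c_ai=10.0, gamma=gamma)
    print(f"gamma = {gamma}: accept extinction risk up to {delta_star:.1%}")
```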
We also discuss how much of global GDP (potentially trillions of dollars) should be invested in AI safety research. Jones' models suggest that anywhere from 1.8% to as much as 15.8% of world GDP might be the optimal investment level to mitigate existential risk, numbers that dwarf current spending.
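And a similarly toy sketch of the spending question: choose the share of output devoted to safety to maximize expected value, where spending lowers the extinction hazard but crowds out consumption. The hazard function and every parameter below are hypothetical, chosen only to illustrate why the optimal share can come out far above today's spending.

```python
import numpy as np

def expected_value(s, c=10.0, gamma=2.0, b=10.0, delta0=0.05, k=50.0):
    """Expected value of spending share s of output on safety:
    (1 - delta(s)) * V((1 - s) * c), with hazard delta(s) = delta0 * exp(-k * s)."""
    delta = delta0 * np.exp(-k * s)                            # extinction hazard after safety spending
    utility = ((1 - s) * c) ** (1 - gamma) / (1 - gamma) + b   # CRRA utility of remaining consumption
    return (1 - delta) * utility

shares = np.linspace(0.0, 0.3, 3001)                           # candidate safety shares: 0% to 30% of output
best = shares[np.argmax([expected_value(s) for s in shares])]
print(f"Optimal safety share under these toy assumptions: {best:.1%}")
```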
Beyond the mathematics, we discuss philosophical tensions: Should a world government be more or less risk-averse than individuals? Do we value additional years of life more than additional consumption? And how do we navigate a world where experts might exploit "Pascal's Mugger" scenarios to demand funding?
"If we delay AI," Seth concludes, "it will require killing something of what is essential to us. The unbounded optimism about the power of thought and freedom, or as the way Emerson would've put it, the true romance."
Justified Posteriors is sponsored by the Digital Business Institute at Boston University’s Questrom School of Business. Big thanks to Ching-Ting “Karina” Yang for her help editing the episode.
—
🔗Links to the papers for this episode’s discussion:
(FULL PAPER) The AI Dilemma: Growth versus Existential Risk by Charles I. Jones
(FULL PAPER) How Much Should We Spend to Reduce A.I.’s Existential Risk? by Charles I. Jones
🔗Related papers
Robust Technology Regulation by Andrew Koh and Sivakorn Sanguanmoo
Existential Risk and Growth by Leopold Aschenbrenner and Philip Trammell
🗞️Subscribe for upcoming episodes, post-podcast notes, and Andrey’s posts:
💻 Follow us on Twitter:
@AndreyFradkin https://x.com/andreyfradkin?lang=en
@SBenzell https://x.com/sbenzell?lang=en