Future of Life Institute Podcast

Future of Life Institute
The Future of Life Institute (FLI) is a nonprofit working to reduce global catastrophic and existential risk from powerful technologies. In particular, FLI focuses on risks from artificial intelligence (AI), biotechnology, nuclear weapons, and climate change. The Institute's work is made up of three main strands: grantmaking for risk reduction, educational outreach, and advocacy within the United Nations, US government, and European Union institutions. FLI has become one of the world's leading voices on the governance of AI, having created one of the earliest and most influential sets of governance principles: the Asilomar AI Principles.
Samuel Hammond on why AI Progress is Accelerating - and how Governments Should Respond
Samuel Hammond joins the podcast to discuss whether AI progress is slowing down or speeding up, AI agents and reasoning, why superintelligence is an ideological goal, open source AI, how technical change leads to regime change, the economics of advanced AI, and much more.

Our conversation often references this essay by Samuel: https://www.secondbest.ca/p/ninety-five-theses-on-ai

Timestamps:
00:00 Is AI plateauing or accelerating?
06:55 How do we get AI agents?
16:12 Do agency and reasoning emerge?
23:57 Compute thresholds in regulation
28:59 Superintelligence as an ideological goal
37:09 General progress vs superintelligence
44:22 Meta and open source AI
49:09 Technological change and regime change
01:03:06 How will governments react to AI?
01:07:50 Will the US nationalize AGI corporations?
01:17:05 Economics of an intelligence explosion
01:31:38 AI cognition vs human cognition
01:48:03 AI and future religions
01:56:40 Is consciousness functional?
02:05:30 AI and children
Aug 22
2 hr 16 min
Video
Anousheh Ansari on Innovation Prizes for Space, AI, Quantum Computing, and Carbon Removal
Anousheh Ansari joins the podcast to discuss how innovation prizes can incentivize technical innovation in space, AI, quantum computing, and carbon removal. We discuss the pros and cons of such prizes, where they work best, and how far they can scale. Learn more about Anousheh's work here: https://www.xprize.org/home

Timestamps:
00:00 Innovation prizes at XPRIZE
08:25 Deciding which prizes to create
19:00 Creating new markets
29:51 How far can prizes scale?
35:25 When are prizes successful?
46:06 100M dollar carbon removal prize
54:40 Upcoming prizes
59:52 Anousheh's time in space
Aug 9
1 hr 3 min
Video
Mary Robinson (Former President of Ireland) on Long-View Leadership
Mary Robinson joins the podcast to discuss long-view leadership, risks from AI and nuclear weapons, prioritizing global problems, how to overcome barriers to international cooperation, and advice to future leaders. Learn more about Robinson's work as Chair of The Elders at https://theelders.org

Timestamps:
00:00 Mary's journey to presidency
05:11 Long-view leadership
06:55 Prioritizing global problems
08:38 Risks from artificial intelligence
11:55 Climate change
15:18 Barriers to global gender equality
16:28 Risk of nuclear war
20:51 Advice to future leaders
22:53 Humor in politics
24:21 Barriers to international cooperation
27:10 Institutions and technological change
Jul 25
30 min
Video
Emilia Javorsky on how AI Concentrates Power
Emilia Javorsky joins the podcast to discuss AI-driven power concentration and how we might mitigate it. We also discuss optimism, utopia, and cultural experimentation. Apply for our RFP here: https://futureoflife.org/grant-program/mitigate-ai-driven-power-concentration/

Timestamps:
00:00 Power concentration
07:43 RFP: Mitigating AI-driven power concentration
14:15 Open source AI
26:50 Institutions and incentives
35:20 Techno-optimism
43:44 Global monoculture
53:55 Imagining utopia
Jul 11
1 hr 3 min
Video
Anton Korinek on Automating Work and the Economics of an Intelligence Explosion
Anton Korinek joins the podcast to discuss the effects of automation on wages and labor, how we measure the complexity of tasks, the economics of an intelligence explosion, and the market structure of the AI industry. Learn more about Anton's work at https://www.korinek.com

Timestamps:
00:00 Automation and wages
14:32 Complexity for people and machines
20:31 Moravec's paradox
26:15 Can people switch careers?
30:57 Intelligence explosion economics
44:08 The lump of labor fallacy
51:40 An industry for nostalgia?
57:16 Universal basic income
01:09:28 Market structure in AI
Jun 21
1 hr 32 min
Video
Christian Ruhl on Preventing World War III, US-China Hotlines, and Ultraviolet Germicidal Light
Christian Ruhl joins the podcast to discuss US-China competition and the risk of war, official versus unofficial diplomacy, hotlines between countries, catastrophic biological risks, ultraviolet germicidal light, and ancient civilizational collapse. Find out more about Christian's work at https://www.founderspledge.com

Timestamps:
00:00 US-China competition and risk
18:01 The security dilemma
30:21 Official and unofficial diplomacy
39:53 Hotlines between countries
01:01:54 Preventing escalation after war
01:09:58 Catastrophic biological risks
01:20:42 Ultraviolet germicidal light
01:25:54 Ancient civilizational collapse
Jun 7
1 hr 36 min
Video
Christian Nunes on Deepfakes (with Max Tegmark)
Christian Nunes joins the podcast to discuss deepfakes, how they impact women in particular, how we can protect ordinary victims of deepfakes, and the current landscape of deepfake legislation. You can learn more about Christian's work at https://now.org and about the Ban Deepfakes campaign at https://bandeepfakes.org

Timestamps:
00:00 The National Organization for Women (NOW)
05:37 Deepfakes and women
10:12 Protecting ordinary victims of deepfakes
16:06 Deepfake legislation
23:38 Current harm from deepfakes
30:20 Bodily autonomy as a right
34:44 NOW's work on AI

Here are FLI's recommended amendments to legislative proposals on deepfakes: https://futureoflife.org/document/recommended-amendments-to-legislative-proposals-on-deepfakes/
May 24
37 min
Video
Dan Faggella on the Race to AGI
Dan Faggella joins the podcast to discuss whether humanity should eventually create AGI, how AI will change power dynamics between institutions, what drives AI progress, and which industries are implementing AI successfully. Find out more about Dan at https://danfaggella.com

Timestamps:
00:00 Value differences in AI
12:07 Should we eventually create AGI?
28:22 What is a worthy successor?
43:19 AI changing power dynamics
59:00 Open source AI
01:05:07 What drives AI progress?
01:16:36 What limits AI progress?
01:26:31 Which industries are using AI?
May 3
1 hr 45 min
Video
Liron Shapira on Superintelligence Goals
Liron Shapira joins the podcast to discuss superintelligence goals, what makes AI different from other technologies, risks from centralizing power, and whether AI can defend us from AI.

Timestamps:
00:00 Intelligence as optimization-power
05:18 Will LLMs imitate human values?
07:15 Why would AI develop dangerous goals?
09:55 Goal-completeness
12:53 Alignment to which values?
22:12 Is AI just another technology?
31:20 What is FOOM?
38:59 Risks from centralized power
49:18 Can AI defend us against AI?
56:28 An Apollo program for AI safety
01:04:49 Do we only have one chance?
01:07:34 Are we living in a crucial time?
01:16:52 Would superintelligence be fragile?
01:21:42 Would human-inspired AI be safe?
Apr 19
1 hr 26 min
Video
Annie Jacobsen on Nuclear War - a Second by Second Timeline
Annie Jacobsen joins the podcast to lay out a second-by-second timeline for how nuclear war could happen. We also discuss time pressure, submarines, interceptor missiles, cyberattacks, and concentration of power. You can find more on Annie's work at https://anniejacobsen.com

Timestamps:
00:00 A scenario of nuclear war
06:56 Who would launch an attack?
13:50 Detecting nuclear attacks
19:37 The first critical seconds
29:42 Decisions under time pressure
34:27 Lessons from insiders
44:18 Submarines
51:06 How did we end up like this?
59:40 Interceptor missiles
01:11:25 Nuclear weapons and cyberattacks
01:17:35 Concentration of power
Apr 5
1 hr 26 min
Video