Future of Life Institute Podcast

Liron Shapira on Superintelligence Goals

1 hour 26 minutes. Posted Apr 19, 2024 at 2:29 pm.
Intelligence as optimization-power
Will LLMs imitate human values?
Why would AI develop dangerous goals?
Goal-completeness
Alignment to which values?
Is AI just another technology?
What is FOOM?
Risks from centralized power
Can AI defend us against AI?
An Apollo program for AI safety
Do we only have one chance?
Are we living in a crucial time?
Would superintelligence be fragile?
Would human-inspired AI be safe?
Show notes
Liron Shapira joins the podcast to discuss superintelligence goals, what makes AI different from other technologies, risks from centralizing power, and whether AI can defend us from AI.
Timestamps: