Support this podcast by signing up with this sponsor:
– Cash App – download the app and use code “LexPodcast”:
– Cash App (App Store): https://apple.co/2sPrUHe
– Cash App (Google Play): https://bit.ly/2MlvP5w
EPISODE LINKS:
Ilya’s Twitter: https://twitter.com/ilyasut
Ilya’s Website: https://www.cs.toronto.edu/~ilya/
PODCAST INFO:
Podcast website:
https://lexfridman.com/podcast
Apple Podcasts:
https://apple.co/2lwqZIr
Spotify:
https://spoti.fi/2nEwCF8
RSS:
https://lexfridman.com/feed/podcast/
Full episodes playlist:
https://www.youtube.com/playlist?list=PLrAXtmErZgOdP_8GztsuKi9nrraNbKKp4
Clips playlist:
https://www.youtube.com/playlist?list=PLrAXtmErZgOeciFP3CBCIEElOJeitOr41
OUTLINE:
0:00 – Introduction
2:23 – AlexNet paper and the ImageNet moment
8:33 – Cost functions
13:39 – Recurrent neural networks
16:19 – Key ideas that led to success of deep learning
19:57 – What’s harder to solve: language or vision?
29:35 – We’re massively underestimating deep learning
36:04 – Deep double descent
41:20 – Backpropagation
42:42 – Can neural networks be made to reason?
50:35 – Long-term memory
56:37 – Language models
1:00:35 – GPT-2
1:07:14 – Active learning
1:08:52 – Staged release of AI systems
1:13:41 – How to build AGI?
1:25:00 – Question to AGI
1:32:07 – Meaning of life
CONNECT:
– Subscribe to this YouTube channel
– Twitter: https://twitter.com/lexfridman
– LinkedIn: https://www.linkedin.com/in/lexfridman
– Facebook: https://www.facebook.com/LexFridmanPage
– Instagram: https://www.instagram.com/lexfridman
– Medium: https://medium.com/@lexfridman
– Support on Patreon: https://www.patreon.com/lexfridman