
Dwarkesh Podcast


By: Dwarkesh Patel

About this listen

Deeply researched interviews

www.dwarkesh.com
Science
Episodes
  • Terence Tao – Kepler, Newton, and the true nature of mathematical discovery
    Mar 20 2026

    We begin the episode with the absolutely ingenious and surprising way in which Kepler discovered the laws of planetary motion.

    People sometimes say that AI will make especially fast progress at scientific discovery because of tight verification loops.

    But the story of how we discovered the shape of our solar system shows how the verification loop for correct ideas can be decades (or even millennia) long.

    During this time, what we know today as the better theory can actually make worse predictions.

    And the reason it survives this epistemic hell is some mixture of judgment and heuristics that we don’t understand well enough to articulate, much less codify into an RL loop. Hope you enjoy!

    Watch on YouTube; read the transcript.

    Sponsors

    - Jane Street loves challenging my audience with different creative puzzles. One of my listeners, Shawn, solved Jane Street’s ResNet challenge and posted a great walk-through on X. If you want to try one of these puzzles yourself, there’s one live now at janestreet.com/dwarkesh.

    - Labelbox can get you rubric-based evals, no matter your domain. These rubrics allow you to give your model feedback on all the dimensions you care about, so you can train how it thinks, not just what it thinks. Whatever you’re focused on—math, physics, finance, psychology or something else—Labelbox can help. Learn more at labelbox.com/dwarkesh.

    - Mercury just released a new feature called Insights. Insights summarizes your money in and out, showing you your biggest transactions and calling out anything worth paying attention to. It’s a super low-friction way to stay on top of your business. Learn more at mercury.com/insights.

    Timestamps

    (00:00:00) – Kepler was a high temperature LLM

    (00:11:44) – How would we know if there’s a new unifying concept within heaps of AI slop?

    (00:26:10) – The deductive overhang

    (00:30:31) – Selection bias in reported AI discoveries

    (00:46:43) – AI makes papers richer and broader, but not deeper

    (00:53:00) – If AI solves a problem, can humans get understanding out of it?

    (00:59:20) – We need a semi-formal language for the way that scientists actually talk to each other

    (01:09:48) – How Terry uses his time

    (01:17:05) – Human-AI hybrids will dominate math for a lot longer



    Get full access to Dwarkesh Podcast at www.dwarkesh.com/subscribe
    1 hr and 24 mins
  • Dylan Patel — Deep dive on the 3 big bottlenecks to scaling AI compute
    Mar 13 2026

    Dylan Patel, founder of SemiAnalysis, provides a deep dive into the 3 big bottlenecks to scaling AI compute: logic, memory, and power.

    He also walks through the economics of labs, hyperscalers, foundries, and fab equipment manufacturers.

    Learned a ton about every single level of the stack. Enjoy!

    Watch on YouTube; read the transcript.

    Sponsors

    - Mercury has already saved me a bunch of time this tax season. Last year, I used Mercury to request W-9s from all the contractors I worked with. Then, when it came time to issue 1099s this year, I literally just clicked a button and Mercury sent them out. Learn more at mercury.com.

    - Labelbox noticed that even when voice models appear to take interruptions in stride, their performance degrades. To figure out why, they built a new evaluation pipeline called EchoChain. EchoChain diagnoses voice models’ specific failure modes, letting you understand what your model needs to truly handle interruptions. Check it out at labelbox.com/dwarkesh.

    - Jane Street is basically a research lab with a trading desk attached – and their infrastructure backs this up. They’ve got tens of thousands of GPUs, hundreds of thousands of CPU cores, and exabytes of storage. This is what it takes to find subtle signals hidden deep within noisy market data. If this sounds interesting, you can explore open positions at janestreet.com/dwarkesh.

    Timestamps

    (00:00:00) – Why an H100 is worth more today than 3 years ago

    (00:24:52) – Nvidia secured TSMC allocation early; Google is getting squeezed

    (00:34:34) – ASML will be the #1 constraint for AI compute scaling by 2030

    (00:55:47) – Can't we just use TSMC's older fabs?

    (01:05:37) – When will China outscale the West in semis?

    (01:16:01) – The enormous incoming memory crunch

    (01:42:34) – Scaling power in the US will not be a problem

    (01:54:44) – Space GPUs aren't happening this decade

    (02:14:07) – Why aren't more hedge funds making the AGI trade?

    (02:18:30) – Will TSMC kick Apple out from N2?

    (02:24:16) – Robots and Taiwan risk



    Get full access to Dwarkesh Podcast at www.dwarkesh.com/subscribe
    2 hrs and 31 mins
  • The most important question nobody's asking about AI
    Mar 11 2026

    Read the full essay here: https://www.dwarkesh.com/p/dow-anthropic

    Timestamps

    (00:00:00) - Anthropic vs The Pentagon

    (00:04:16) - The overhangs of tyranny

    (00:05:54) - AI structurally favors mass surveillance

    (00:08:25) - Alignment...to whom?

    (00:13:55) - Coordination not worth the costs



    Get full access to Dwarkesh Podcast at www.dwarkesh.com/subscribe
    25 mins