Dwarkesh Podcast

I love Conversations with Tyler, and often think — where are the other CWT-like podcasts? Turns out Dwarkesh Podcast is one of them. 

Dwarkesh Podcast | Dwarkesh Patel | Substack
Deeply researched interviews https://link.chtbl.com/dwarkesh

One of the things I love most about CWT is that Tyler doesn’t take the time to explain concepts mentioned in the conversation to listeners: if you don’t know about something, like the Coase theorem, you’re expected to Google it.

This makes for a more fun and challenging listening experience compared to, for example, Ezra Klein, who himself is in “explainer mode” much of the time and asks his guests to do the same. There’s no way to skim a podcast, so I find myself tuning out or clumsily skipping forward when I’m being instructed on simple concepts.

Dwarkesh is the opposite. When he focuses on AI, I don’t understand a lot of what’s going on, but I can take mental notes and look things up and hopefully learn through repeated exposure.

A few episodes I’ve enjoyed recently—

Zuck

Mark Zuckerberg - Llama 3, Open Sourcing $10b Models, & Caesar Augustus
Mark Zuckerberg on Llama 3, open sourcing towards AGI, what he would have done as CEO of Google+, energy constraints on scaling, Caesar Augustus, the intelligence explosion, bioweapons, $10b models, and much more.

Overall I found Zuckerberg thoughtful and less robotic than in his past podcast appearances. He’s deep in the weeds on AI, and his passion shines through.

Some highlights:

  • Meta : Android :: OpenAI : Apple. He makes some interesting parallels to mobile app development: “One thing that I think generally sucks about the mobile ecosystem is that you have these two gatekeeper companies, Apple and Google, that can tell you what you're allowed to build…. There's a bunch of times when we've launched or wanted to launch features and Apple's just like ‘nope, you're not launching that.’ That sucks, right? So the question is, are we set up for a world like that with AI?”
  • Power generation is a constraint on scaling LLMs: “I think we would probably build out bigger clusters than we currently can if we could get the energy to do it.”
  • Zuck is a builder. “I just really like building things… I don't know how to explain it but I just feel constitutionally that I'm doing something wrong if I'm not building something new.” I wasn’t exactly surprised by this, but it was somehow comforting(?) to hear someone so massively successful talk about the drive to build new things in such straightforward language.

Patrick Collison

Patrick Collison (Stripe CEO) - Craft, Beauty, & The Future of Payments
What it takes to process $1 trillion/year, how to build multi-decade APIs, companies, and relationships, and what’s next for Stripe.

Two big takeaways for me:

  • Speed is a choice: Stripe deploys about 1,000 times per day, backed by what sounds like incredible developer tooling. Deploys are automatic and incremental and include both unit testing and anomaly detection, so good, stable builds get promoted automatically and problematic builds get stopped automatically (a rough sketch of that kind of gating logic follows after this list). This stands in stark contrast to the larger organizations I’ve worked at, where deploys get less frequent and more process-driven as the team gets bigger.
  • Being AI-first is a choice: Stripe has built LLM tooling into all of their internal dashboards and admin tools. They’ve made it super-easy to use different models internally, with the end result being millions of calls to LLMs each day as part of internal workflows (a toy sketch of one possible routing layer also follows below).
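
Patrick doesn’t describe the deploy pipeline in any implementation detail, so everything below is a hypothetical sketch of the general pattern he alludes to: run the tests, roll the build out to a small slice of traffic, watch for anomalies, and promote or roll back automatically. The function names, metrics, and thresholds are all made up.

```python
# Hypothetical sketch of automatic, incremental deploy promotion.
# This is NOT Stripe's tooling; every name and threshold here is invented.

from dataclasses import dataclass


@dataclass
class CanaryMetrics:
    error_rate: float      # fraction of failed requests on the canary slice
    latency_p99_ms: float  # 99th-percentile latency on the canary slice


def run_unit_tests(build_id: str) -> bool:
    # Stand-in: imagine this shells out to the build's test suite.
    return True


def collect_canary_metrics(build_id: str, traffic_fraction: float) -> CanaryMetrics:
    # Stand-in: imagine this samples live metrics from the canary slice.
    return CanaryMetrics(error_rate=0.0002, latency_p99_ms=180.0)


def promote(build_id: str) -> None:
    print(f"{build_id}: promoted to full traffic")


def roll_back(build_id: str) -> None:
    print(f"{build_id}: rolled back")


def deploy(build_id: str, baseline: CanaryMetrics,
           max_error_rate: float = 0.001,
           max_latency_regression: float = 1.2) -> bool:
    """Promote the build only if tests pass and no canary stage looks anomalous."""
    if not run_unit_tests(build_id):
        roll_back(build_id)
        return False

    # Incremental rollout: widen the traffic slice only while metrics stay healthy.
    for fraction in (0.01, 0.10, 0.50):
        metrics = collect_canary_metrics(build_id, fraction)
        anomalous = (
            metrics.error_rate > max_error_rate
            or metrics.latency_p99_ms > baseline.latency_p99_ms * max_latency_regression
        )
        if anomalous:
            roll_back(build_id)
            return False

    promote(build_id)
    return True


if __name__ == "__main__":
    deploy("build-1234", baseline=CanaryMetrics(error_rate=0.0001, latency_p99_ms=150.0))
```

The appeal of a setup like this is that the human decision ("is this build safe?") becomes a threshold check the pipeline makes a thousand times a day instead of a meeting.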
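
Likewise, the episode doesn’t say how Stripe wired LLMs into its dashboards, but "easy to use different models internally" usually implies a thin routing layer that hides the provider behind a logical task name. A toy sketch under that assumption, with made-up names and no real provider SDKs:

```python
# Toy sketch of a thin internal LLM router: tools ask for a task by name
# and never care which provider or model backs it. Not Stripe's actual tooling.

from typing import Protocol


class ChatModel(Protocol):
    def complete(self, prompt: str) -> str: ...


class EchoModel:
    """Stand-in for a real provider client (hosted API, in-house model, etc.)."""

    def __init__(self, name: str) -> None:
        self.name = name

    def complete(self, prompt: str) -> str:
        return f"[{self.name}] response to: {prompt[:40]}..."


# Internal tools pick a model by logical task name, so swapping providers is a config change.
REGISTRY: dict[str, ChatModel] = {
    "summarize": EchoModel("small-fast-model"),
    "analyze": EchoModel("large-accurate-model"),
}


def ask(task: str, prompt: str) -> str:
    """Route a prompt to whichever model is registered for this task."""
    return REGISTRY[task].complete(prompt)


if __name__ == "__main__":
    print(ask("summarize", "Summarize this support ticket for the admin dashboard."))
```

The point of the indirection is that a dashboard never hard-codes a model, which is what makes millions of internal calls a day manageable as the underlying models change.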