Jensen Huang – TPU competition, why we should sell chips to China, & Nvidia’s supply chain moat
Podcast · 1 hr 43 min
Note: AI-generated summary based on third-party content. Not financial advice.
Quick Insights

  • NVIDIA (NVDA): The top conviction play as it shifts to an aggressive one-year product cycle, leveraging a $100B–$250B supply chain moat that makes it nearly impossible for competitors to catch up.
  • Synopsys (SNPS) & Cadence (CDNS): Specialized software providers poised for a volume explosion as AI agents begin using their tools 24/7.
  • Micron (MU): High-bandwidth memory remains a critical bottleneck, positioning Micron as a primary beneficiary of NVIDIA's massive downstream demand.
  • Lumentum (LITE) & Coherent (COHR): Key strategic partners to watch as silicon photonics becomes essential for future AI infrastructure.
  • Energy & electrical infrastructure: The ultimate constraint on this growth is power; any long-term AI portfolio must account for the energy sector and build-out required to fuel "AI Factories."

Detailed Analysis

NVIDIA (NVDA)

NVIDIA is positioned as an "accelerated computing" company rather than just an AI chip maker. CEO Jensen Huang describes the company's mental model as a machine where the input is electrons and the output is tokens, with NVIDIA providing the "insanely hard" transformation layer in between.

  • The "Five-Layer Cake" Strategy: NVIDIA operates across five layers of AI: supply chain (upstream/downstream), computer systems, application developers, model makers, and the broader ecosystem.
  • Supply Chain Moat: NVIDIA has approximately $100B to $250B in purchase commitments. Their moat isn't just the chip design, but the ability to lock up scarce components (HBM memory, logic dies, packaging) years in advance because suppliers trust NVIDIA’s massive downstream demand.
  • TCO (Total Cost of Ownership) Advantage: Huang claims NVIDIA provides the best performance-to-cost and performance-to-watt ratios in the world. He argues that even if competitors offer cheaper chips, the efficiency of NVIDIA’s stack makes it the most profitable for data center operators.
  • Software & Ecosystem (CUDA): CUDA remains the "great treasure." With an install base of hundreds of millions of GPUs, developers write for NVIDIA first to ensure their software runs everywhere (from clouds to robotics).
  • Investment Philosophy: The company follows a "do as much as needed, as little as possible" rule. They partner for everything they don't have to do (like physical manufacturing or financing) to keep the business model lean.
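The TCO argument above can be made concrete with a back-of-the-envelope sketch: what matters to a data center operator is cost per token, which folds together purchase price, power draw, and throughput. All numbers below are hypothetical placeholders chosen for illustration, not figures from the episode.

```python
# Back-of-the-envelope token economics: amortized capex + energy, per million tokens.
# Every input here is a made-up placeholder, not a real system's spec.

def cost_per_million_tokens(capex_usd, lifetime_years, power_kw,
                            usd_per_kwh, tokens_per_second):
    """Amortized hardware cost plus electricity cost per million tokens."""
    seconds = lifetime_years * 365 * 24 * 3600
    capex_per_s = capex_usd / seconds                  # $/s of amortized capex
    energy_per_s = power_kw * usd_per_kwh / 3600       # kW * $/kWh -> $/s
    return (capex_per_s + energy_per_s) / tokens_per_second * 1e6

# A pricier but faster, more power-efficient system (stand-in for the NVIDIA stack)...
premium = cost_per_million_tokens(40_000, 4, 1.0, 0.10, 400)
# ...versus a cheaper system with lower throughput and worse efficiency.
budget = cost_per_million_tokens(25_000, 4, 1.2, 0.10, 180)

print(f"premium: ${premium:.2f}/M tokens, budget: ${budget:.2f}/M tokens")
```

With these placeholder inputs the higher-priced system still produces cheaper tokens, which is the shape of Huang's claim: sticker price is the wrong metric when throughput and efficiency dominate operating cost.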

Takeaways

  • Long-term Growth: NVIDIA is moving to a one-year product cycle (Blackwell, then Blackwell Ultra, then the Rubin generation, then Feynman) to maintain its lead over ASICs.
  • Margin Sustainability: Despite high margins (70%+), Huang argues that custom ASICs (like those from Broadcom) also have high margins (65%+), meaning customers aren't saving as much as perceived by switching.
  • New Market Segments: NVIDIA is expanding into premium inference, targeting "high-response" tokens where speed is more valuable than raw throughput (e.g., AI agents for software engineering).
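The margin argument is easy to check with arithmetic. Using the 65% and 70% gross margins cited above, and assuming (purely for illustration) an identical unit cost of goods for both chips:

```python
# Illustrative margin arithmetic. The 70% / 65% margins come from the discussion;
# the identical $10,000 unit cost is an assumption made only for comparison.

def price_from_margin(cogs, gross_margin):
    """Price p such that (p - cogs) / p == gross_margin."""
    return cogs / (1 - gross_margin)

cogs = 10_000
gpu_price = price_from_margin(cogs, 0.70)    # ~70% margin GPU
asic_price = price_from_margin(cogs, 0.65)   # ~65% margin custom ASIC

savings = 1 - asic_price / gpu_price
print(f"GPU ${gpu_price:,.0f} vs ASIC ${asic_price:,.0f} -> ~{savings:.0%} cheaper")
```

Under these assumptions the ASIC is only about one-seventh cheaper than the GPU, far less than the headline margin gap suggests, which is the arithmetic behind Huang's point that customers aren't saving as much as perceived by switching.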

Software & Tool Makers (Various)

There is a prevailing market fear that AI will "commoditize" software, leading to lower valuations for SaaS companies. Jensen Huang argues the opposite.

  • Exponential Growth in Users: Huang believes the number of "agents" (AI users) will grow exponentially.
  • Increased Tool Utilization: Tools like Excel, PowerPoint, and specialized engineering tools from companies like Synopsys (SNPS) and Cadence (CDNS) will see a "skyrocket" in instances because AI agents will use them 24/7 to explore design spaces that humans couldn't.
  • Current Bottleneck: The reason software hasn't "skyrocketed" yet is that AI agents aren't yet proficient enough at using these tools, but this is expected to change rapidly.

Takeaways

  • Bullish on Specialized Software: Look for companies that provide "workflow codification" or "tool-making" software, as they may see massive volume increases as AI agents become the primary users.

Semiconductor Supply Chain Partners

NVIDIA’s growth is heavily dependent on a specialized group of upstream partners.

  • TSMC (TSM): The primary manufacturer. Huang dismisses concerns about "logic bottlenecks," stating that any bottleneck (CoWoS packaging, logic, etc.) is usually solved within 2–3 years once a clear demand signal is sent.
  • Memory Providers (Micron, SK Hynix, Samsung): Essential for HBM (High Bandwidth Memory). Huang specifically highlighted Micron (MU) for "doubling down" on partnership early.
  • Silicon Photonics: NVIDIA is reshaping this ecosystem through partnerships with Lumentum (LITE) and Coherent (COHR) to prepare for future scaling needs.

Takeaways

  • Supply Chain Visibility: NVIDIA "pre-fetches" bottlenecks. If NVIDIA is moving to a 1-year cycle, these suppliers must scale at an unprecedented rate, providing a massive tailwind for the high-end semiconductor equipment and memory sectors.

AI Infrastructure & "Neo-Clouds"

NVIDIA is intentionally fostering a new class of "AI-native" cloud providers to ensure their chips are accessible outside of the "Big Five" hyperscalers.

  • Key Players: CoreWeave, Crusoe, Lambda, Nscale, and Nebius.
  • Strategic Support: NVIDIA supports these companies with allocations and occasionally investments to ensure the "American tech stack" remains dominant and to prevent any single hyperscaler from having too much leverage.

Takeaways

  • Diversification of Compute: The rise of specialized AI clouds (like Crusoe) provides an alternative for startups that don't want to be locked into Amazon (AWS) or Google (GCP).

Investment Themes & Risks

Bullish Themes

  • Re-industrialization of the US: The shift toward AI factories, EVs, and robots requires a massive build-out of physical infrastructure.
  • Energy as the Ultimate Bottleneck: Huang identifies energy (electricity and "plumbers/electricians") as a harder bottleneck to solve than chip manufacturing. Data centers cannot grow without massive power policy shifts.

Risk Factors

  • Geopolitical/China Export Controls: Huang is vocally critical of total export bans to China. He warns that conceding the Chinese market (the 2nd largest in the world) forces China to build its own ecosystem, which could eventually compete with the US stack globally.
  • Energy Scarcity: If the US cannot provide enough energy for "AI Factories," the industry will be capped regardless of how many chips NVIDIA can produce.
  • Human Capital: A shortage of software engineers and radiologists (due to "AI doomerism" discouraging students) could create labor bottlenecks in sectors AI is meant to augment.
Episode Description
I asked Jensen about TPU competition, Nvidia’s lock on the ever more bottlenecked supply chain needed to make advanced chips, whether we should be selling AI chips to China, why Nvidia doesn’t just become a hyperscaler, how it makes its investments, and much more. Enjoy! Watch on YouTube; read the transcript.

Sponsors

  • Crusoe’s cloud runs on state-of-the-art Blackwell GPUs, with Vera Rubin deployment scheduled for later this year. But hardware is only part of the story: for inference, Crusoe’s MemoryAlloy tech implements a cluster-wide KV cache, delivering up to 10x faster TTFT and 5x better throughput than vLLM. Learn more at crusoe.ai/dwarkesh
  • Cursor helped me build an AI co-researcher over the course of a weekend. Now I have an AI agent that I can collaborate with in Google Docs via inline comment threads! And while other agentic coding tools feel like a total black box, Cursor let me stay on top of the full implementation. You can try my co-researcher out at github.com/dwarkeshsp/ai_coworker, or get started on your own Cursor project today at cursor.com/dwarkesh
  • Jane Street spent ~20,000 GPU hours training backdoors into 3 different language models, then challenged my audience to find the triggers. They received some clever solutions, like comparing the base and fine-tuned versions and extrapolating any differences to reveal the hidden backdoor, but no one was able to solve all 3. So if open problems like this excite you, Jane Street is hiring. Learn more at janestreet.com/dwarkesh

Timestamps

  • (00:00:00) – Is Nvidia’s biggest moat its grip on scarce supply chains?
  • (00:16:25) – Will TPUs break Nvidia’s hold on AI compute?
  • (00:41:06) – Why doesn’t Nvidia become a hyperscaler?
  • (00:57:36) – Should we be selling AI chips to China?
  • (01:35:06) – Why doesn’t Nvidia make multiple different chip architectures?

Get full access to Dwarkesh Podcast at www.dwarkesh.com/subscribe
About Dwarkesh Podcast

By Dwarkesh Patel

Deeply researched interviews. www.dwarkesh.com