FULL INTERVIEW: Dylan Patel Says We’re Still Underestimating AI
Podcast · 43 min 43 sec
Note: AI-generated summary based on third-party content. Not financial advice.
Quick Insights

  • The market may be underestimating how much AI is boosting Meta's (META) core advertising business; significant profit gains from improved ad targeting are already evident.
  • For a long-term investment, consider TSMC (TSM), positioned as the ultimate bottleneck in the AI supply chain thanks to its dominance in advanced chip manufacturing.
  • The current bottleneck in AI is power and data center capacity, creating opportunities for companies like Cummins (CMI) that provide essential power generation solutions.
  • Investors should be cautious with Oracle (ORCL): its recent poor communications and close ties to OpenAI may create near-term stock volatility.
  • Both NVIDIA (NVDA) and Google (GOOGL) are pursuing resilient long-term strategies, building diverse portfolios of specialized AI chips to adapt to an evolving market.

Detailed Analysis

NVIDIA (NVDA)

  • The company is experiencing a "vibe shift" away from its one-size-fits-all GPU strategy. They are now engineering multiple, specialized solutions for different AI workloads.
  • NVIDIA acquired Groq (the chip company, not xAI's Grok model) and is launching a new chip called CPX later this year, specialized for tasks like prompt processing (pre-fill) and video/image generation.
  • This strategy of building different chips for different niches (the standard GPU, CPX, Groq's chips) suggests NVIDIA acknowledges that the future of AI is uncertain and is assembling a portfolio of solutions to cover various potential outcomes. It can be read as a de-risking strategy.
  • The speaker notes that new chip generations have high initial failure rates. For example, the new Blackwell chips have a 10-15% failure rate in the first two weeks, which is standard for the industry and gradually improves over time. This is a known operational cost and not necessarily a red flag.
  • CEO Jensen Huang is described as a "business killer" who has deep knowledge of the entire supply chain, from chip design to data centers, which is a significant leadership advantage.

Takeaways

  • NVIDIA's move to a diversified chip portfolio shows adaptability in a fast-moving market. Instead of betting on one architecture, they are spreading their bets across multiple potential futures for AI, which could make their business more resilient.
  • Investors should understand that high initial failure rates for new, cutting-edge chips are a normal part of the semiconductor industry's manufacturing process and are factored into business models.
  • The company's leadership is perceived as a major strength, with a deep understanding of the technical and business aspects of the entire AI ecosystem.

Google (GOOGL)

  • Similar to NVIDIA, Google is also diversifying its custom silicon roadmap. They are moving away from a single main line of TPUs (Tensor Processing Units).
  • They now have two different TPUs in development, one designed with Broadcom (AVGO) and another with MediaTek, both to be manufactured by TSMC. A third TPU project is also underway.
  • This strategy aims to cover different points on the "Pareto optimal curve": chips focused on raw compute (FLOPs), chips with very fast on-chip memory, and general-purpose AI chips.
  • Google is still considered "way ahead" of competitors in its ability to train AI models across multiple data centers in a region. This is a significant infrastructure advantage, though its importance may be changing as AI training methods evolve.

Takeaways

  • Google is not ceding the custom chip space to anyone. Their multi-pronged TPU strategy shows they are competing aggressively with NVIDIA to provide optimized hardware for their own AI services and Google Cloud customers.
  • Their established, large-scale, and interconnected data center infrastructure remains a key competitive moat that is difficult for others to replicate quickly.

Meta Platforms (META)

  • The speaker is very bullish on Meta, stating they are "making more money from AI than almost any company in the world outside of NVIDIA."
  • This profit comes not from selling AI products directly, but from using AI to dramatically improve the core advertising algorithms. In a recent quarter, Meta's ad prices (CPM) rose 9% despite a weak consumer environment, implying the ad-targeting algorithm improved its effectiveness by a double-digit percentage.
  • Meta is viewed as a platform winner. As AI generates more personalized content, users will spend more time on digital platforms, directly benefiting content marketplaces like Instagram and Facebook.
  • The company is making significant long-term bets beyond social media:
    • They are developing the "best wearables" (e.g., smart glasses) and plan to integrate powerful AI assistants.
    • They have poached top talent from Google's search division to build out their AI assistant and commerce capabilities.
    • They licensed Midjourney's models and data in a deal speculated to be worth over $1 billion.
  • Due to immense demand for computing power, Meta is now buying up capacity from smaller, "long tail" cloud providers because they have already signed deals with all the major players and still need more.
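The implied-lift claim above can be sketched with back-of-envelope arithmetic: if a weak ad market would otherwise have pushed CPMs down, the targeting algorithm must have more than offset that drag to produce a 9% rise. The baseline decline used here is an illustrative assumption, not a figure from the interview.

```python
# Back-of-envelope: implied ad-targeting lift behind the quoted 9% CPM rise.
observed_cpm_change = 0.09       # from the interview: CPMs up 9%
assumed_baseline_change = -0.03  # hypothetical CPM drift in a weak consumer market

# Multiplicative decomposition: observed = baseline drift x targeting lift
implied_lift = (1 + observed_cpm_change) / (1 + assumed_baseline_change) - 1
print(f"Implied targeting improvement: {implied_lift:.1%}")  # → 12.4%
```

Any plausible negative baseline pushes the implied improvement into double digits, which is the shape of the speaker's argument.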

Takeaways

  • Investors may be underestimating the immediate financial impact of AI on Meta's core advertising business. The efficiency gains are already showing up in their financial results.
  • Meta is aggressively positioning itself to be a leader in the next wave of computing (AI-powered wearables and assistants), which could open up massive new revenue streams beyond advertising.
  • Their aggressive pursuit of compute from all available sources signals their deep conviction in AI and their massive scale, but also highlights the current supply constraints in the market.

Oracle (ORCL)

  • The company's recent public communications were described as "terrible comms" that used "bank run language."
  • Oracle released statements to reassure the market about its financing for data centers and its financial relationship with OpenAI. The podcast hosts felt this projected a lack of confidence, comparing it to NVIDIA's unprompted, defensive comments about Google's TPU.
  • The speaker's personal view is that Oracle is "fine" and the issue is primarily bad public relations, not a fundamental business problem.
  • It was noted that Oracle's stock peaked shortly after they announced their major deal with OpenAI. The "NVIDIA-OpenAI trade" has reportedly been performing poorly recently compared to the "TPU-Anthropic-Google-Amazon complex."

Takeaways

  • Poor corporate communications can create stock volatility and negative sentiment, even if the underlying business is sound. Investors should be critical of how companies manage their public image during times of market anxiety.
  • Oracle's fortunes appear closely tied to the success and sentiment surrounding its key partners, particularly OpenAI. Any perceived weakness in OpenAI could negatively impact Oracle's stock.

AI Supply Chain & Key Bottlenecks

  • Data Centers in Space: This is a long-term, speculative idea.
    • Bull Case: Falling launch costs from companies like SpaceX and free solar power could make it economically feasible. The head of compute at xAI is reportedly bullish on this.
    • Bear Case: Major challenges include the inability to service failed chips, heat dissipation, and the difficulty of building large, interconnected chip clusters in space. The speaker believes less than 1% of data center capacity will be in space by 2028.
  • The Shifting Bottleneck: The primary constraint for AI development is a moving target.
    • 2023: The bottleneck was semiconductor chips (e.g., NVIDIA GPUs).
    • 2024-2025: The bottleneck shifted to power and data center capacity, as the energy and construction industries were unprepared for the sudden surge in demand.
    • 2027 and beyond: The bottleneck is expected to swing back fully to semiconductors, specifically leading-edge fab capacity from companies like TSMC and high-bandwidth memory.
  • TSMC (TSM): Positioned as the ultimate bottleneck. Unlike power turbines, which can be sourced from multiple vendors, you "cannot get a three nanometer fab." Building new fabs is incredibly complex and takes years. The speaker notes that in Taiwan, TSMC's fabs are treated as a national priority, receiving water during droughts even when cities face rationing.
  • Power: While a bottleneck now, the industry is "waking up" and finding creative solutions. The speaker mentions that companies like Cummins (CMI) can produce a million diesel engines a year, which can be used for power generation in places like West Texas, bypassing some traditional utility constraints.

Takeaways

  • Investing in the AI boom requires looking at the entire supply chain, not just the famous chip designers. The critical bottleneck shifts over time, creating opportunities in different sectors.
  • TSMC holds a uniquely powerful position as the world's primary manufacturer of advanced chips. Its capacity is the hard limit on how fast the AI industry can grow, giving it immense strategic importance and potential pricing power.
  • The power and data center sectors are in a massive build-out phase. While they are a constraint today, the industry is responding, which could benefit companies involved in power generation equipment, grid infrastructure, and data center construction.

Other Notable Mentions

  • Cerebras (Private): An AI chip company that specializes in reducing latency for long-running inference tasks. OpenAI signed a 750-megawatt deal with them to serve its "most price insensitive customers" who need the absolute fastest results for specific, complex prompts. This highlights the emergence of a premium market for specialized AI computation.
  • Tesla (TSLA): Their custom FSD (Full Self-Driving) chip is mentioned as a model of reliability. It is smaller and less complex than a high-end GPU, but its dependability in cars suggests a potential blueprint for reliable chips in other harsh environments, like space.
  • Adobe (ADBE): Used as a cautionary tale. The company's stock rose on AI hype but later fell as the market began to question whether it was a foundational AI company or just a company using AI features. This illustrates the risk of investing in "AI-themed" stocks without deep analysis of their competitive moat.
Episode Description
This is our full interview with SemiAnalysis Founder and CEO Dylan Patel, recorded live on TBPN at the Cisco AI Summit in San Francisco. We discuss data centers in space, the limits of today's AI hardware, and how chips, power, and geopolitics will shape the future of AI infra. TBPN is a live tech talk show hosted by John Coogan and Jordi Hays, streaming weekdays from 11–2 PT on X and YouTube, with full episodes posted to podcast platforms immediately after. Described by The New York Times as "Silicon Valley's newest obsession," TBPN has recently featured Mark Zuckerberg, Sam Altman, Mark Cuban, and Satya Nadella.
About TBPN
TBPN

By John Coogan & Jordi Hays

Technology's daily show (formerly the Technology Brothers Podcast). Streaming live on X and YouTube from 11 AM to 2 PM PT, Monday through Friday. Available on X, Apple, Spotify, and YouTube.