Exploring the Tech that Enables AGI: Claude Mythos and NVIDIA's Next Generation
Podcast · 22 min 2 sec
Note: AI-generated summary based on third-party content. Not financial advice.
Quick Insights

  • NVIDIA (NVDA) remains the primary high-conviction play as it transitions from the Blackwell architecture to the Vera Rubin (2026) and Feynman (2028) chips, projected to offer up to 50x more compute power.
  • "Neoclouds" like CoreWeave and specialized data center infrastructure offer a "landlord" play, as older GPUs like the H100 maintain high value by pivoting from model training to high-margin inference tasks.
  • The next major leap in AI intelligence is expected between 2026 and 2027, as frontier labs like Anthropic and OpenAI fully migrate their models to Blackwell hardware.
  • While GPUs dominate current headlines, emerging bottlenecks in energy grids and memory suggest Intel (INTC) and utility providers may become "sneaky" secondary plays for the broader hardware ecosystem.
  • Be mindful of regulatory risk and compute scarcity: the inability to meet global token demand currently acts as the primary ceiling on AI software deployment.

Detailed Analysis

NVIDIA (NVDA)

The transcript highlights NVIDIA as the primary driver of the hardware revolution enabling Artificial General Intelligence (AGI). The discussion focuses on the rapid release cycle of their GPU architectures and the "anti-depreciation" nature of their hardware.

  • Product Roadmap: NVIDIA has a clear trajectory of chips that are multiples more powerful than their predecessors:
    • Blackwell (GB200/GB300): Announced March 2024. Despite being "old" in design terms, it is only now powering the most advanced models like Claude Mythos.
    • Vera Rubin: Expected to provide a 2.5x to 5x multiple on compute power.
    • Rubin Ultra: Scheduled for late 2027, offering a 14x multiple.
    • Feynman: Scheduled for 2028, estimated at a 30x to 50x multiple compared to current stacks.
  • Asset Value Retention: Unlike traditional hardware that depreciates, NVIDIA GPUs (like the H100) have maintained or increased in value due to extreme scarcity and the shift toward using older chips for "inference" (running AI models) while newer chips are used for "training."
  • Cost Efficiency: While power increases, the cost per unit of intelligence is dropping. A task costing $1 on Blackwell is projected to cost $0.07 on Rubin Ultra.
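The roadmap figures above can be sanity-checked with simple arithmetic. A minimal sketch, assuming cost per task scales inversely with the compute multiple (our assumption, not a claim from NVIDIA; the multiples are the transcript's estimates):

```python
# Hypothetical sketch: derive per-task cost from the quoted compute
# multiples, assuming cost falls in inverse proportion to compute.
roadmap = {
    "Blackwell (baseline)": 1,   # GB200/GB300, announced March 2024
    "Vera Rubin": 5,             # upper end of the 2.5x-5x range
    "Rubin Ultra": 14,           # scheduled for late 2027
    "Feynman": 50,               # upper end of the 30x-50x range, 2028
}

baseline_cost = 1.00  # $1 per task on Blackwell, per the transcript

for chip, multiple in roadmap.items():
    # Inverse-scaling assumption: double the compute, half the cost.
    print(f"{chip}: ~${baseline_cost / multiple:.2f} per task")
```

Under this assumption, Rubin Ultra's 14x multiple yields $1 / 14 ≈ $0.07 per task, consistent with the $0.07 figure quoted above.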

Takeaways

  • Hardware as the Bottleneck: Investment value remains heavily skewed toward hardware providers because AI labs are currently "compute constrained." The demand for tokens is growing faster than chip supply.
  • Long-Term Moat: NVIDIA’s aggressive roadmap (Vera Rubin to Feynman) suggests they are widening the gap before competitors can catch up to the Blackwell architecture.
  • Inference Market: Older GPUs (H100s) are becoming high-yield assets for "Neoclouds" (e.g., CoreWeave) because they are essential for the high-margin business of processing user prompts.

Anthropic & OpenAI

The discussion identifies these "Frontier Labs" as the primary beneficiaries and consumers of the hardware mentioned above.

  • Claude Mythos (Anthropic): Highlighted as a breakthrough model that discovered decade-old security flaws. It was trained on the Blackwell chip.
  • SPUD (OpenAI): Mentioned as a forthcoming model expected to match or exceed the power of Mythos, also leveraging Blackwell hardware.
  • GPT-5.4: Revealed to still be running on older Hopper chips, suggesting a massive "intelligence jump" is coming once these labs fully transition to Blackwell and Rubin architectures.

Takeaways

  • AGI Timeline: The analysts suggest AGI is no longer a "physics problem" but a "plugging it in" problem. Investors should prepare for models that can autonomously advance industries like finance and medicine by 2026–2027.
  • Regulatory Risk: The power of these models (specifically Mythos) has triggered federal involvement and "emergency meetings" with the Federal Reserve and top banks, indicating high potential for future regulation.

Neoclouds (e.g., CoreWeave)

A new class of investment opportunity: companies that build data centers specifically to rent out GPU power.

  • Business Model: These companies act like a specialized AWS for AI.
  • Contract Strength: Their capacity is often booked 6–12 months in advance, with customers renewing early to guarantee access.
  • Asset Utilization: 70% of their fleet consists of "older" GPUs, which remain highly profitable for inference tasks.

Takeaways

  • Infrastructure Play: For investors looking beyond NVIDIA, Neoclouds represent a way to play the "landlord" of the AI era.
  • Counter-Thesis to Depreciation: The "Big Short" (Michael Burry) thesis that these assets would deflate has proven wrong so far because the demand for inference is insatiable.

Investment Themes & Sectors

Artificial General Intelligence (AGI)

  • Definition: A single model that can outperform the best humans in finance, science (curing diseases), and general cognitive tasks.
  • Investment Insight: The transition from "spiky" AI (good at one thing) to generalized AI is expected to happen as hardware multiples hit the 10x–50x range over the next 3 years.

The "Inference" Economy

  • Context: Training a model is a one-time high cost; inference (answering user questions) is a recurring revenue stream.
  • Insight: As software becomes more efficient, the value generated by a single GPU increases. Chatbot inference cost $3/hour in 2023, but autonomous agents can now generate $30–$300/hour in value using the same or similar hardware.
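The spread between cost and value above implies a large return multiple per GPU-hour. A back-of-envelope sketch using the transcript's figures (the multiple calculation is ours, and it ignores software, energy, and staffing costs):

```python
# Rough value-to-cost multiple for a single GPU-hour, using the
# figures quoted in the transcript.
gpu_hour_cost = 3.0                 # 2023 chatbot inference cost per hour
agent_value_low, agent_value_high = 30.0, 300.0  # value/hour from agents

low_multiple = agent_value_low / gpu_hour_cost
high_multiple = agent_value_high / gpu_hour_cost
print(f"Value per dollar of GPU time: {low_multiple:.0f}x to {high_multiple:.0f}x")
```

That 10x–100x spread is the economic engine behind the "insatiable" inference demand described in the Neoclouds section.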

Emerging Bottlenecks

  • Energy and Memory: While GPUs are the current focus, the transcript hints that CPUs, energy grids, and memory will be the next major constraints.
  • Intel (INTC): Briefly mentioned as a "sneaky" play for future discussion, likely regarding their role in the broader hardware ecosystem or manufacturing.

Summary of Risk Factors Mentioned

  • Compute Scarcity: Labs cannot release models (like Mythos) to the general public because they lack the "tokens" (compute capacity) to support global demand.
  • Malicious Use: The extreme intelligence of upcoming models poses risks for cybersecurity and autonomous malicious actions.
  • Energy Grid: The physical limitation of "plugging in" these massive GPU clusters is a looming barrier to the 2028–2029 projections.
Episode Description
We explore the game-changing release of Claude Mythos and NVIDIA's Blackwell chip, a leap toward artificial general intelligence (AGI). We discuss AI hardware evolution, the rise of "neoclouds," and the ethical implications of Mythos.

🌌 LIMITLESS HQ
NEWSLETTER: https://limitlessft.substack.com/
FOLLOW ON X: https://x.com/LimitlessFT
SPOTIFY: https://open.spotify.com/show/5oV29YUL8AzzwXkxEXlRMQ
APPLE: https://podcasts.apple.com/us/podcast/limitless-podcast/id1813210890
RSS FEED: https://limitlessft.substack.com/

TIMESTAMPS
0:00 The Rise of Claude Mythos
1:18 The Power of Hardware
3:56 The Evolution of AI Models
5:55 Accelerating Towards AGI
9:41 Defining AGI and Its Implications
14:59 The GPU Market Dynamics
17:26 The Role of Neoclouds
19:06 Inference and Its Importance
19:56 Future Prospects and Challenges

RESOURCES
Josh: https://x.com/JoshKale
Ejaaz: https://x.com/cryptopunk7213

Not financial or tax advice. See our investment disclosures here: https://www.bankless.com/disclosures
About Limitless: An AI Podcast
By Limitless

Exploring the frontiers of Technology and AI