Who Should Control A.I.?
Podcast · 1 hr 9 min
Note: AI-generated summary based on third-party content. Not financial advice.
Quick Insights

  • The U.S. government’s shift toward aggressive AI integration in defense creates a high-conviction opportunity for Microsoft (MSFT), which stands to gain substantial Azure cloud revenue as OpenAI secures the military contracts recently rejected by Anthropic.
  • Palantir (PLTR) remains the primary "operating system" for defense AI; its platform can swap restricted models like Claude for approved alternatives, giving investors resilience.
  • Conversely, Anthropic faces significant valuation risk and potential "de-platforming" from federal revenue if the Department of War follows through on designating the firm a supply chain risk.
  • Investors should prioritize AI infrastructure and data integrators over individual private labs, which face "stroke-of-the-pen" regulatory risk and potential nationalization.
  • As a defensive play, consider Privacy-Enhancing Technologies (PETs) as the government scales AI-driven mass surveillance through purchases of bulk commercial data.

Detailed Analysis

This investment analysis focuses on the geopolitical and regulatory fallout from the conflict between the U.S. Department of War (formerly the Department of Defense) and Anthropic, as discussed on The Ezra Klein Show. The dialogue highlights a pivotal shift in how the U.S. government intends to control AI and the resulting risks for private AI labs.


Anthropic (Private)

Anthropic, the creator of the Claude AI model, is currently facing an existential threat from the U.S. Department of War. The conflict stems from Anthropic’s refusal to remove "usage restrictions" from its military contracts—specifically clauses prohibiting the use of its AI for domestic mass surveillance and fully autonomous lethal weapons.

  • Supply Chain Risk Designation: Secretary of War Pete Hegseth has threatened to designate Anthropic a "supply chain risk."
    • This designation is typically reserved for foreign adversaries (e.g., Huawei).
    • If enacted, it could prevent any military contractor or subcontractor from doing business with Anthropic, effectively "de-platforming" them from the massive defense economy.
  • Operational Role: Despite the friction, Claude has already been utilized in high-stakes operations, including the raid against Nicolas Maduro and ongoing intelligence analysis in the Middle East.
  • The "Safety" Moat vs. Liability: Anthropic’s brand is built on "AI Safety" and "Constitutional AI." However, the Trump administration views these safety guardrails as a "woke" attempt by an unelected CEO to exercise veto power over military decisions.

Takeaways

  • Political Risk: Anthropic faces significant "stroke-of-the-pen" risk. If the supply chain designation holds, the company's valuation could crater as it loses access to federal revenue and partnerships with "prime" contractors like Palantir.
  • The "Safety" Discount: Investors should note that "Safety-first" AI companies may struggle to secure massive government contracts if their ethical "constitutions" conflict with state objectives.

OpenAI (Private / Microsoft Partnership)

Following the breakdown of the Anthropic deal, the Department of War signed a contract with OpenAI. This marks a significant win for the company in the "Defense AI" race.

  • Strategic Positioning: OpenAI appears more willing (or politically savvy enough) to navigate the Trump administration's requirements.
  • Internal Friction: The deal has sparked internal controversy among OpenAI researchers. To mitigate this, CEO Sam Altman has proposed terms under which OpenAI retains control over the cloud environment and specific model safeguards.
  • Political Connections: The transcript notes that OpenAI leadership (Greg Brockman) has significant political ties to the current administration, potentially smoothing the path for these contracts.

Takeaways

  • Market Leadership: OpenAI is successfully positioning itself as the "pragmatic" partner for the U.S. government, potentially capturing the market share left behind by Anthropic.
  • Microsoft (MSFT) Synergy: As OpenAI’s primary partner, Microsoft stands to benefit from increased Azure usage for these classified government workloads.

Palantir Technologies (PLTR)

Palantir is mentioned as a "prime contractor" for the Department of War that utilizes frontier models like Claude within its systems.

  • Subcontractor Risk: The threat against Anthropic creates a "cascading risk" for Palantir. If a specific model (Claude) is banned, Palantir must quickly pivot to alternatives it has already integrated (such as OpenAI or internal models) to maintain its government contracts.
  • Integration Power: The discussion reinforces Palantir’s role as the essential "operating system" that sits between raw AI models and military application.

Takeaways

  • Resilience through Pluralism: Palantir’s value lies in its ability to swap different AI "engines." While the Anthropic ban is a headache, Palantir remains the primary gateway for AI in the defense sector.

Investment Themes & Sector Insights

1. The "Defense AI" Sector

The transcript suggests a massive, urgent push to integrate AI into the national security infrastructure.

  • The Problem: The intelligence community collects more data than it can analyze; processing it all manually would require an estimated 8 million human analysts.
  • The Solution: AI is being used for "infinitely scalable" data processing, signals intelligence, and cyber offensive/defensive operations.
  • Insight: Companies that provide the infrastructure for this data processing (cloud providers and data integrators) are safer bets than the individual model labs, which are subject to intense political volatility.

2. Regulatory "Gotchas" and Mass Surveillance

A major takeaway is the legal distinction between "surveillance" and "analyzing commercially available data."

  • The Loophole: The government can buy bulk data (location, search history, etc.) from private brokers. While humans couldn't process this at scale, AI can.
  • Insight: There is a growing market for Privacy-Enhancing Technologies (PETs) and companies that help citizens or corporations shield data from AI-driven "bulk analysis."

3. Nationalization Risk

The guest (Dean Ball) suggests that the logic of the current administration—that AI is too powerful to be independent of U.S. control—leads inevitably toward nationalization.

  • Risk Factor: If frontier AI labs (Anthropic, OpenAI, xAI) are eventually treated like the Manhattan Project, private equity and venture capital investors may face "forced exits" or capped returns.

4. The "Pluralism" Opportunity

The discussion highlights that different models will have different "souls" or political alignments (e.g., xAI's Grok vs. Anthropic's Claude).

  • Insight: There is an investment opportunity in Model Aggregators—platforms that allow users or governments to toggle between different models depending on the task's ethical or operational requirements.
Episode Description
Last Friday, Secretary of Defense Pete Hegseth announced that he was breaking the Pentagon’s contract with the A.I. company Anthropic and would declare the company a supply chain risk, a designation for companies so dangerous that they can’t exist anywhere in the U.S. military supply chain. What makes this so wild is that the military is still using Anthropic’s A.I. system right now. They reportedly used it during the raid to capture Maduro in Venezuela, and are now using it in the war in Iran. This story raises so many questions: Why does the government think Anthropic is so dangerous? How exactly is the government using A.I. right now? How does it want to use A.I.? And who should ultimately control this powerful and uncertain technology?

Dean Ball is a senior fellow at the Foundation for American Innovation and the author of the newsletter Hyperdimensional. He served as a senior policy adviser on A.I. for the Trump White House and was the primary staff writer of its A.I. action plan. But he’s been furious at the Trump administration for how it has been handling the conflict with Anthropic. So I wanted to have him on the show to explain why.

Mentioned:

  • “Hyperdimensional” by Dean Ball
  • “What if Dario Amodei Is Right About A.I.?” The Ezra Klein Show
  • “Stratechery” by Ben Thompson

Book Recommendations:

  • Rationalism in Politics and Other Essays by Michael Oakeshott
  • Empire of Liberty by Gordon S. Wood
  • Roll, Jordan, Roll by Eugene D. Genovese

Thoughts? Guest suggestions? Email us at ezrakleinshow@nytimes.com. You can find transcripts (posted midday) and more episodes of “The Ezra Klein Show” at nytimes.com/ezra-klein-podcast, and you can find Ezra on Twitter @ezraklein. Book recommendations from all our guests are listed at https://www.nytimes.com/article/ezra-klein-show-book-recs.

This episode of “The Ezra Klein Show” was produced by Rollin Hu. Fact-checking by Michelle Harris with Kate Sinclair and Mary Marge Locker. Our senior engineer is Jeff Geld, with additional mixing by Aman Sahota. Our executive producer is Claire Gordon. The show’s production team also includes Marie Cascione, Annie Galvin, Kristin Lin, Emma Kehlbeck, Jack McCordick, Marina King and Jan Kobal. Original music by Pat McCusker. Audience strategy by Kristina Samulewski and Shannon Busta. The director of New York Times Opinion Audio is Annie-Rose Strasser.

Subscribe today at nytimes.com/podcasts or on Apple Podcasts and Spotify. You can also subscribe via your favorite podcast app here https://www.nytimes.com/activate-access/audio?source=podcatcher. For more podcasts and narrated articles, download The New York Times app at nytimes.com/app. Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.
About The Ezra Klein Show
The Ezra Klein Show

By New York Times Opinion

Ezra Klein invites you into a conversation on something that matters. How do we address climate change if the political system fails to act? Has the logic of markets infiltrated too many aspects of our lives? What is the future of the Republican Party? What do psychedelics teach us about consciousness? What does sci-fi understand about our present that we miss? Can our food system be just to humans and animals alike?

Unlock full access to New York Times podcasts and explore everything from politics to pop culture. Subscribe today at nytimes.com/podcasts or on Apple Podcasts and Spotify.