
The U.S. government’s shift toward aggressive AI integration in defense creates a high-conviction opportunity for Microsoft (MSFT), which stands to gain massive Azure cloud revenue as OpenAI secures the military contracts recently rejected by Anthropic. Palantir (PLTR) remains the primary "operating system" for defense AI, offering resilience for investors because its platform can seamlessly swap out restricted models like Claude for approved alternatives. Conversely, Anthropic faces significant valuation risk and potential "de-platforming" from federal revenue if the Department of War follows through on designating the firm a supply chain risk. Investors should prioritize AI infrastructure and data integrators over individual private labs, as the latter face "stroke-of-the-pen" regulatory risks and potential nationalization. For a defensive play, look toward Privacy-Enhancing Technologies (PETs) as the government scales AI-driven mass surveillance through the purchase of bulk commercial data.
This investment analysis focuses on the geopolitical and regulatory fallout from the standoff between the U.S. Department of War (formerly the Department of Defense) and Anthropic, as discussed on The Ezra Klein Show. The dialogue highlights a pivotal shift in how the U.S. government intends to control AI and the resulting risks for private AI labs.
Anthropic, the creator of the Claude AI model, is currently facing an existential threat from the U.S. Department of War. The conflict stems from Anthropic’s refusal to remove "usage restrictions" from its military contracts—specifically clauses prohibiting the use of its AI for domestic mass surveillance and fully autonomous lethal weapons.
Following the breakdown of the Anthropic deal, the Department of War signed a contract with OpenAI. This marks a significant win for OpenAI in the "Defense AI" race.
Palantir is mentioned as a "prime contractor" for the Department of War that utilizes frontier models like Claude within its systems.
The transcript suggests a massive, urgent push to integrate AI into the national security infrastructure.
A major takeaway is the legal distinction the government draws between "surveillance" and "analyzing commercially available data": by purchasing bulk commercial data, agencies can pursue surveillance-scale analysis without it being classified as surveillance in the traditional legal sense.
The guest (Dean Ball) suggests that the logic of the current administration—that AI is too powerful to be independent of U.S. control—leads inevitably toward nationalization.
The discussion highlights that different models will have different "souls" or political alignments (e.g., xAI's Grok vs. Anthropic's Claude).
