Anthropic, the Pentagon, and the Future of Autonomous Weapons
42 days ago · Odd Lots · Bloomberg
Podcast · 51 min 45 sec
Note: AI-generated summary based on third-party content. Not financial advice.
Quick Insights

• Palantir (PLTR) is the highest-conviction play in defense AI, serving as the essential "operating system" for the Pentagon through its Maven Smart System, which integrates diverse military data streams.
• Investors should favor companies providing Decision Support AI and "edge computing" capabilities, which allow AI models to run directly on hardware such as drones and missiles.
• While Anthropic faces a limited addressable market due to strict ethical restrictions on military use, OpenAI is positioned to capture more defense revenue by adopting a more cooperative stance with the Department of Defense.
• IBM remains a strong case study for enterprise AI adoption, demonstrating how large-scale automation can significantly slash administrative costs and improve operational efficiency.
• To hedge against the rise of autonomous weaponry, look for investment opportunities in Counter-AI technologies, specifically electronic jamming and anti-drone hardware.

Detailed Analysis

Based on the Odd Lots podcast episode featuring Paul Scharre (Executive VP at the Center for a New American Security), here are the investment insights and thematic takeaways regarding the intersection of AI, defense, and autonomous weaponry.


Palantir Technologies (PLTR)

• The transcript identifies Palantir as the provider of the Maven Smart System, the primary architecture the U.S. military uses to fuse diverse data streams.
• This system acts as the "operating system" for modern warfare, integrating satellite imagery, geolocation, and signals intelligence.
• Palantir is positioned as the essential bridge between Silicon Valley's AI models and the Pentagon's operational needs.

Takeaways

• Infrastructure Dominance: Palantir's role as the integration layer makes it "sticky" within the Department of Defense (DOD). Even if specific AI models (such as Claude or GPT) are swapped out, the underlying Palantir infrastructure remains.
• Data Fusion Value: As warfare becomes more data-heavy, value shifts to platforms that can make that data readable for human analysts.


Anthropic & OpenAI (Private)

• There is a significant "public breakup" and policy disagreement between Anthropic and the Pentagon regarding the use of the Claude model for autonomous weapons and surveillance.
• OpenAI has signaled a more cooperative stance, stepping in where Anthropic has hesitated due to safety concerns.
• The DOD is pushing for "any lawful use" clauses in contracts, which conflicts with some AI companies' internal safety "red lines" (e.g., offensive cyber attacks).

Takeaways

• Reputational Risk vs. Revenue: Investors should note that "Safety-First" AI companies (like Anthropic) may limit their total addressable market (TAM) by refusing lucrative defense contracts.
• The "Race to the Bottom": There is a risk that the government will gravitate toward providers with the fewest ethical restrictions, potentially favoring OpenAI or open-source alternatives in the defense sector.


IBM (IBM)

• Mentioned as a sponsor, but with specific data points: IBM is integrating AI across its global workforce of 300,000 employees, resolving 94% of common HR questions via AI.

Takeaways

Operational Efficiency: IBM serves as a case study for how large-scale enterprises can use AI to slash administrative costs and "repetitive tasks," freeing up thousands of hours for strategic work.


Defense & AI Sector Themes

• The Talent Gap: The U.S. government lacks the internal technical skill to build world-class AI. It is entirely dependent on commercial providers.
• Shift in R&D: Unlike "Stealth" technology (invented in secret government labs), AI is a commercial-first technology. The military is now a "customer" rather than an "inventor."
• The "Flash Crash" Risk in Warfare: A major risk factor discussed is the lack of "circuit breakers" in autonomous war. Unlike financial markets, there is no referee to call a timeout if two AI algorithms begin an unintended escalation.

Takeaways

• Investment Opportunity in "The Edge": There is a growing need for "distilled models"—AI that can run on low computing power directly on a drone or missile (the "edge") rather than in a massive data center.
• Counter-AI Technology: As autonomous drones (like the Iranian loitering munitions) become more common, investments in electronic jamming and "anti-drone" hardware are likely to see increased demand.
• Human-in-the-loop (HITL) Requirement: Despite the hype around "Terminator" robots, the near-term investment winners will be companies that provide Decision Support AI—tools that help humans process data faster, rather than tools that replace human decision-making entirely.


Risk Factors

• Data Integrity: The "strike on the school" mentioned in the transcript highlights that AI is only as good as its data. Outdated databases (DIA targeting data) can lead to catastrophic errors, creating liability for contractors.
• Contractual Friction: The Pentagon's demand for "any lawful use" may create long-term friction with tech companies that have strong "AI Safety" cultures, potentially leading to volatile contract renewals.

Episode Description
The last big story right before the war in Iran started was the collapse of the relationship between the Pentagon and Anthropic, with the latter objecting to any potential use of its models in either fully autonomous weapons or domestic surveillance. Of course, this story immediately became more relevant with the start of the war, and the reporting that Anthropic's technology was in fact utilized at the start of hostilities. But what does that mean? How are these models used? And what would a fully autonomous weapons system actually entail? On this episode, we speak with Paul Scharre, the executive vice president and director of studies at the Center for a New American Security. He has written two books on the subject of AI in warfare and previously worked inside the Department of Defense on some of these very questions. We discuss the future of autonomous weaponry and the various ethical and technological dimensions such weapons would entail.
Subscribe to the Odd Lots Newsletter. Join the conversation: discord.gg/oddlots. See omnystudio.com/listener for privacy information.
About Odd Lots
Odd Lots
By Bloomberg

Bloomberg's Joe Weisenthal and Tracy Alloway explore the most interesting topics in finance, markets and economics. Join the conversation every Monday and Thursday.