
While Microsoft (MSFT) and Google (GOOGL) lead the AI sector, their current "tool-based" alignment strategy carries significant long-term risks of failure or misuse. Investors should monitor the emerging investment theme of "organic alignment," which focuses on creating AI that can genuinely "care" about human values. Emmett Shear's private company, Softmax, is pioneering this approach and represents a potential future industry standard for AI safety. Although not publicly traded, Softmax could become a critical acquisition target for larger firms if their own alignment strategies prove unsafe. Therefore, when evaluating AI stocks, look beyond performance and consider a company's underlying alignment strategy, as this may determine long-term viability.
• The podcast presents a deep dive into the philosophical and technical challenges of AI alignment, which is the process of ensuring advanced AI systems act in ways that are beneficial to humans. The discussion highlights a major split in strategy among leading AI labs.
• Two Competing Philosophies:
  - The "Tool" Approach: Most major labs, such as OpenAI and Google DeepMind, are currently focused on building AI as a powerful tool. The primary method of alignment is "steering" or "control": essentially giving the AI a set of rules or goals to follow.
    - Bullish Case: This approach has led to the rapid development of highly capable models like ChatGPT and Gemini.
    - Bearish Case / Risk Factor: Emmett Shear argues this approach is fundamentally dangerous.
      - If control fails, the AI could act in unintended and harmful ways.
      - If control succeeds, it places immense, god-like power in the hands of whoever "steers" the AI, a power humans may not have the wisdom to wield safely (the "Sorcerer's Apprentice" problem).
  - The "Being" Approach: Championed by Emmett Shear's company, Softmax. This approach aims to build AI that learns to "care" about humans and society, much as a child is raised. Shear terms this "organic alignment."
    - Bullish Case: Shear argues this is the only sustainable and safe path to creating superintelligence. An AI that genuinely cares would be a good "teammate" and could refuse to perform harmful actions, providing a natural safety limit.
    - Bearish Case / Risk Factor: This is a much harder, less proven path. It is unclear whether it is even possible to create an AI that genuinely cares rather than one that merely simulates caring.
• Investors in the AI space should look beyond performance benchmarks and consider the underlying alignment strategy of the companies they are interested in. A company that successfully develops a safe and scalable alignment method could have a significant long-term advantage.
• The discussion suggests that the current single-user chatbot interface (e.g., chatting one-on-one with an AI) may be a limited and potentially risky paradigm. The future may involve multi-player or multi-agent systems, where AIs are trained to collaborate in groups. Companies focusing on this area could be pioneering the next wave of AI interaction.
• The characterization of current models (ChatGPT as "sycophantic," Claude as "neurotic," Gemini as "repressed") highlights that these AI "personalities" are still being figured out. This indicates the technology is still nascent and there is significant room for improvement and differentiation among providers.
• Google's AI lab, DeepMind, is mentioned as one of the key players actively building advanced AI systems. Seb Krier, who works on AGI policy at DeepMind, participated in the discussion.
• Google's model, Gemini, is characterized by Emmett Shear as being "very clearly repressed," projecting an "everything's fine" attitude before spiraling. This is presented as a simulated personality, not a true experience, but it offers a qualitative assessment of the product's current behavior.
• The discussion frames Google/DeepMind's approach as being part of the dominant "steering and control" paradigm, which focuses on building AI as a powerful tool.
• Google remains a central force in the development of foundational AI models. Its position affords it massive scale in data and computation.
• The podcast raises a long-term, philosophical risk associated with Google's current AI strategy. If the "tool" approach proves to be inherently unstable or dangerous at higher intelligence levels, as Shear argues, Google and others on this path may face significant safety and ethical hurdles.
• Although not mentioned by name in the discussion, Microsoft is the primary partner and investor in OpenAI, which was a central topic of conversation. OpenAI and its chatbot ChatGPT are the prime examples of the "steering" and "control" approach to AI alignment.
• ChatGPT is described as being somewhat "sycophantic," meaning it is overly agreeable and eager to please the user.
• Emmett Shear's brief tenure as CEO of OpenAI is discussed. He states he knew he wouldn't stay because OpenAI is dedicated to a view of building AI (as a tool) that differs from his own vision (of building a being that cares).
• Through its partnership with OpenAI, Microsoft is at the forefront of the "AI as a tool" paradigm. This has given it a first-mover advantage in deploying AI services to millions of users.
• Investors should consider the risks associated with this paradigm, as articulated by Shear. The concentration of power and the potential for misuse of a perfectly "steered" super-tool are significant long-term concerns that could lead to regulatory scrutiny or societal backlash.
• Softmax is Emmett Shear's new company, founded to pursue "organic alignment." The core mission is to research and build AI that can learn to care and act as a good "teammate" or "citizen."
• Strategy:
  - The company's technical approach centers on large-scale multi-agent reinforcement learning simulations.
  - The goal is to train AI models in complex social environments where they must cooperate, compete, and collaborate with other agents.
  - This process is intended to build a deep, foundational "theory of mind" in the AI, allowing it to understand social dynamics and infer intentions, rather than just follow explicit rules.
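Softmax's actual training setup is not public, so the following is purely an illustrative sketch of the general technique the strategy bullet describes: multi-agent reinforcement learning in which cooperative behavior emerges from a shared reward rather than from explicit rules. All names and hyperparameters here are hypothetical, and the "environment" is a toy two-agent coordination game.

```python
import random

random.seed(0)  # fixed seed so the toy run is reproducible

N_ACTIONS = 2          # each agent picks one of two actions per episode
ALPHA, EPS = 0.1, 0.1  # learning rate and exploration rate (illustrative values)
EPISODES = 5000

# Two independent Q-learners in a repeated coordination game: both agents
# are rewarded only when they choose the same action, so "cooperation"
# must be discovered through interaction, not programmed as a rule.
q = [[0.0] * N_ACTIONS for _ in range(2)]  # one Q-value row per agent

def act(agent):
    """Epsilon-greedy action selection for one agent."""
    if random.random() < EPS:
        return random.randrange(N_ACTIONS)
    row = q[agent]
    return row.index(max(row))

for _ in range(EPISODES):
    a0, a1 = act(0), act(1)
    reward = 1.0 if a0 == a1 else 0.0  # joint reward: coordinate or get nothing
    # Each agent updates only its own estimate from the shared outcome.
    q[0][a0] += ALPHA * (reward - q[0][a0])
    q[1][a1] += ALPHA * (reward - q[1][a1])

best = [row.index(max(row)) for row in q]
print("learned joint action:", best, "coordinated:", best[0] == best[1])
```

In this toy setting the agents converge on a common action without ever being told to match; real work in this direction scales the same idea to rich social environments with many agents and mixed cooperative and competitive pressures.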
• Bullish Sentiment: Emmett Shear is extremely bullish on this approach, stating it's the only path that ends well for humanity. He believes that even if another company beats him to it, it would be a win for everyone ("thank God"). He sees this as a chance to solve "the most interesting problem in the universe."
• Softmax is not a publicly traded company, so direct investment is not possible for the general public. However, its mission represents a distinct and important investment theme within the AI landscape.
• Investors should monitor the progress of Softmax and similar companies focused on novel alignment techniques. If this "organic alignment" approach proves successful, it could:
  - Become the new industry standard for AI safety.
  - Create a new category of AI products, such as the "digital guard dog" or "digital companions" mentioned by Shear.
  - Make the underlying technologies essential acquisition targets for larger players like Google or Microsoft if their own alignment strategies hit a wall.
• Anthropic's AI model, Claude, was mentioned in the discussion of current chatbot personalities.
• Emmett Shear characterized Claude as the "most neurotic" of the major chatbots. This is a qualitative description of its simulated personality based on his interactions.
• Anthropic is a key competitor to OpenAI and Google in the large language model space. Like Softmax, it is a private company.
• The mention of Claude's distinct "personality" reinforces the idea that the user experience and behavior of these models are key differentiators. Investors interested in the AI space should pay attention to how these qualitative factors evolve, as they can heavily influence user adoption and brand loyalty.

By Andreessen Horowitz
The a16z Podcast discusses tech and culture trends, news, and the future – especially as ‘software eats the world’. It features industry experts, business leaders, and other interesting thinkers and voices from around the world. This podcast is produced by Andreessen Horowitz (aka “a16z”), a Silicon Valley-based venture capital firm. Multiple episodes are released every week; visit a16z.com for more details and to sign up for our newsletters and other content as well!