Your Claude + ChatGPT prompts aren't private
Note: AI-generated summary based on third-party content. Not financial advice.
Quick Insights

  • Investors should pivot toward the Privacy-Enhancing Technologies (PETs) sector, as growing data retention concerns create a competitive moat for companies offering "Local LLMs" that process data on-device.
  • There is a high-conviction opportunity in cybersecurity firms specializing in AI Firewalls, which scrub sensitive data before it reaches the centralized servers of Microsoft (MSFT) or OpenAI.
  • Be cautious of Microsoft (MSFT) and Alphabet (GOOGL) in the near term, as the planned introduction of advertising within ChatGPT may trigger aggressive regulatory scrutiny under GDPR.
  • Avoid overexposure to centralized LLM providers like Anthropic (Claude) until "Zero-Knowledge" encryption becomes a standard feature, in order to mitigate legal and data-leak liabilities.
  • Monitor the shift in the search advertising market: OpenAI's monetization strategy could disrupt traditional players while simultaneously creating a "privacy premium" for decentralized AI alternatives.

Detailed Analysis

OpenAI (ChatGPT)

The discussion highlights significant privacy and security concerns regarding how OpenAI handles user data. When users interact with ChatGPT, their queries are sent to and stored on servers controlled by the company.

  • Data Retention: Information shared with the LLM is retained by the company, creating a permanent digital footprint of potentially intimate or sensitive queries.
  • Legal Risks: The transcript notes that OpenAI will comply with subpoenas, meaning user data can be turned over to legal authorities. This is particularly risky as laws and definitions of criminality evolve over time.
  • Monetization via Advertising: There is a planned introduction of advertising within ChatGPT. This suggests that user data will likely be used to build consumer profiles for targeted marketing.
  • Professional Impact: Data leaked or analyzed from these platforms could impact an individual’s standing as a job seeker or consumer, as the data often reveals "extremely intimate" details about a person's life and intentions.

Takeaways

  • Privacy as a Moat: As public awareness of these privacy risks grows, companies that offer "Local LLMs" (AI that runs on your own device rather than the cloud) or enhanced privacy features may see a competitive advantage.
  • Corporate Governance Risk: Investors in the AI space should monitor how OpenAI (and its partners) handle data breaches or privacy scandals, as these could lead to significant regulatory backlash or loss of user trust.
  • Caution for Users: From an individual perspective, avoid inputting proprietary business data, trade secrets, or highly personal information into ChatGPT, as that data is essentially "leaving your control" the moment you hit enter.

Microsoft (MSFT)

As a primary partner and infrastructure provider for OpenAI, Microsoft is directly implicated in the data storage and security conversation.

  • Server Control: The transcript specifies that data sent to ChatGPT is often stored on Microsoft servers.
  • Data Vulnerability: Like OpenAI, Microsoft is subject to data leaks and legal subpoenas, making them a central repository for vast amounts of sensitive global AI interaction data.

Takeaways

  • Enterprise Security Demand: There is a growing investment opportunity in cybersecurity firms that specialize in "AI Firewalls" or tools that scrub sensitive data before it reaches Microsoft or OpenAI servers.
  • Regulatory Scrutiny: Microsoft may face increased pressure from global regulators (like the EU's GDPR) regarding how they partition and protect user data generated through AI partnerships.
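The "AI Firewall" concept mentioned above, scrubbing sensitive data client-side before a prompt ever reaches Microsoft or OpenAI servers, can be sketched in a few lines. This is a hypothetical illustration only; the `scrub` function and its regex patterns are assumptions for the sake of example, not any vendor's actual product.

```python
import re

# Illustrative PII patterns; a real tool would use far more robust
# detection (named-entity recognition, dictionaries, context rules).
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "PHONE": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
}

def scrub(prompt: str) -> str:
    """Replace matched sensitive patterns with bracketed placeholders
    before the prompt leaves the user's machine."""
    for label, pattern in PATTERNS.items():
        prompt = pattern.sub(f"[{label}]", prompt)
    return prompt

print(scrub("Email jane.doe@example.com about SSN 123-45-6789"))
# → Email [EMAIL] about SSN [SSN]
```

The design point is simply that redaction happens locally, so the centralized server only ever stores the placeholder, never the original value.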

Anthropic (Claude)

While the discussion focused heavily on the mechanics of ChatGPT, Claude (developed by Anthropic) was identified as a platform carrying similar inherent risks for users.

  • General Risk Profile: The same warnings regarding "being cognizant of what you query" apply to Claude. Users should assume that data is being sent to external servers and is not private by default.

Takeaways

  • Sector-Wide Sentiment: The sentiment is currently cautious/bearish regarding the "privacy" of the major LLM providers. Until these companies implement "Zero-Knowledge" encryption or local processing, they represent a liability for users handling sensitive information.

Investment Theme: AI Privacy & Data Sovereignty

The transcript points toward a broader investment theme: the tension between AI utility and personal/corporate privacy.

  • The "Intimacy" Factor: Because AI queries are more intimate than traditional search engine queries, the data they generate is both more valuable and more dangerous.
  • Profiling Risks: The ability of AI companies to "define you" based on your data creates a new category of risk for individuals in the labor market and the economy.

Takeaways

  • Watch the "Privacy Tech" Sector: Look for investment opportunities in companies developing Privacy-Enhancing Technologies (PETs) or decentralized AI models that do not require sending data to a centralized server.
  • Advertising Shift: If ChatGPT successfully integrates advertising, it could disrupt the traditional search ad market (currently dominated by Google), but it faces the hurdle of doing so without alienating users who are wary of data exploitation.
Video Description
Why your prompts could be subpoenaed. My conversation with Meredith Whittaker, president of Signal, out today.
About The Prof G Pod – Scott Galloway
The Prof G Pod – Scott Galloway

By @theprofgpod

NYU Professor, best-selling author, business leader and serial entrepreneur Scott Galloway cuts through the biggest stories in ...