This Is How to Tell if Writing Was Made by AI
Odd Lots · Bloomberg · 37 days ago
Podcast · 48 min 32 sec
Note: AI-generated summary based on third-party content. Not financial advice.
Quick Insights

  • Proof-of-humanity infrastructure: As the internet becomes saturated with automated content, prioritize companies developing C2PA standards and "proof-of-humanity" technologies, specifically monitoring hardware leaders like Apple (AAPL) and Sony (SONY) for chip-level digital watermarking.
  • Reddit (RDDT): A high-conviction play on human-centric data, but its valuation depends on successfully using detection tools like Pangram Labs to prevent AI bots from devaluing its training data.
  • Alphabet (GOOGL): Faces a critical near-term risk as "AI slop" threatens the quality of Google Search; the ability to filter low-quality SEO content is a primary driver of stock stability.
  • Detection arms race: Watch for an "arms race" in the software sector where AI detection APIs become essential infrastructure for any platform reliant on ad premiums and user trust.
  • Long-term positioning: Shift focus toward "walled garden" platforms that can verify human provenance, as open-web search utility may degrade under the "Dead Internet" phenomenon.

Detailed Analysis

AI Detection & Content Integrity (Investment Theme)

The podcast discusses the rise of "AI Slop"—low-quality, automated content—and the emerging industry dedicated to identifying it. As AI-generated text reaches a critical mass, tools that verify human provenance are becoming essential infrastructure for platforms, publishers, and educators.

Takeaways

  • The "Signal-to-Noise" Opportunity: As the internet becomes flooded with automated content (estimated at 40% of current web content), value is shifting toward "verified human" platforms. Investors should look for companies building "proof-of-humanity" technologies.
  • Platform Integrity Risks: Social media and review platforms (like Reddit, Yelp, or Quora) face existential risks from bot farms. Companies that successfully integrate advanced detection APIs like Pangram Labs may maintain higher ad premiums due to better engagement quality.
  • The "C2PA" Standard: Mention was made of the C2PA (Coalition for Content Provenance and Authenticity). This is a critical technical standard to watch, as hardware manufacturers (like Apple, Samsung, or Sony) may soon embed "digital watermarks" at the chip level to prove a photo or video was captured by a physical lens rather than AI.
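The core idea behind chip-level provenance is "sign at capture, verify later." A minimal sketch of that shape follows. Note this is illustrative only: the real C2PA standard binds assertions to content with X.509 certificate chains and COSE signatures, not a shared-secret HMAC, and the device key here is entirely hypothetical.

```python
# Illustrative sketch of capture-time provenance signing. Real C2PA uses
# X.509 certificate chains and COSE signatures; a shared-secret HMAC is
# used here purely to show the "sign at capture, verify later" pattern.
import hashlib
import hmac

DEVICE_KEY = b"key-burned-into-camera-hardware"  # hypothetical device secret

def sign_at_capture(image_bytes: bytes) -> str:
    """Camera hardware signs a digest of the pixels it just captured."""
    digest = hashlib.sha256(image_bytes).digest()
    return hmac.new(DEVICE_KEY, digest, hashlib.sha256).hexdigest()

def verify_provenance(image_bytes: bytes, signature: str) -> bool:
    """A verifier with the key can confirm the image is unmodified since capture."""
    expected = sign_at_capture(image_bytes)
    return hmac.compare_digest(expected, signature)

photo = b"\x89PNG...raw sensor data..."
sig = sign_at_capture(photo)
print(verify_provenance(photo, sig))              # True: untouched capture
print(verify_provenance(photo + b"edited", sig))  # False: any edit breaks the signature
```

Any pixel-level edit (or an AI-generated image with no signature at all) fails verification, which is what would let platforms distinguish "captured by a physical lens" from synthetic content.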

Pangram Labs (Private Company)

Pangram Labs is a startup focused on AI detection. It uses deep learning models to identify the "decision patterns" of large language models (LLMs) to distinguish them from human writing.

Takeaways

  • High Accuracy Benchmarks: The company claims a 99% accuracy rate for detecting AI and a false positive rate of only 1 in 10,000. This level of precision is necessary for commercial viability in legal and academic sectors.
  • Model Agnostic Detection: Their technology can identify clusters of writing styles belonging to specific models like Claude 3, ChatGPT, or DeepSeek, suggesting that AI leaves a "digital fingerprint" regardless of the prompt.
  • B2B Use Cases: Their primary revenue drivers are large platforms like Quora that need to moderate content at scale to prevent "AI slop" from degrading their product.
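The practical meaning of those accuracy claims depends on how rare AI text is in the scanned corpus. A short sketch applying Bayes' rule to the stated figures (the detector itself is proprietary; the 1% AI share below is an assumed base rate for illustration):

```python
# Bayes' rule: what fraction of documents flagged as AI are actually AI,
# given the claimed detection rate and false-positive rate.

def flag_precision(tpr: float, fpr: float, ai_share: float) -> float:
    """P(actually AI | flagged), where ai_share of the corpus is AI-written."""
    true_flags = tpr * ai_share
    false_flags = fpr * (1.0 - ai_share)
    return true_flags / (true_flags + false_flags)

TPR = 0.99        # claimed: 99% of AI text is detected
FPR = 1 / 10_000  # claimed: 1 in 10,000 human documents falsely flagged

# Even if only 1% of a platform's content is AI-generated,
# almost every flag is a true positive:
print(f"{flag_precision(TPR, FPR, 0.01):.4f}")  # 0.9901

# Expected false flags when scanning 1,000,000 purely human documents:
print(round(FPR * 1_000_000))  # 100
```

This is why the false-positive rate, not the headline accuracy, is the commercially decisive number: at scale, even a tiny FPR produces a fixed stream of wrongly accused human writers, which is what legal and academic customers care about.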

Alphabet Inc. (GOOGL)

The discussion highlights Google's conflicting incentives regarding AI. While they provide tools to generate AI content, they are also the primary gatekeeper fighting it to save the utility of Google Search.

Takeaways

  • Search Engine Degradation: The rise of AI-generated SEO articles (pennies on the dollar to produce) threatens the quality of Google Search. If Google cannot successfully filter this "slop," users may migrate to "walled gardens" like Discord or Reddit.
  • Product Integration Risks: Google is aggressively pushing Gemini into Gmail and Docs. The analysts suggest this could backfire by eroding social norms around communication, potentially devaluing the "human" touch of their workspace suite.

Reddit (RDDT)

Reddit was specifically highlighted as a platform currently being targeted by AI bot farms to influence consumer behavior and "game" LLM training data.

Takeaways

  • The "Reddit Search" Trend: Because users trust Reddit for authentic human reviews (e.g., "best nose hair trimmer Reddit"), bot farms are now populating subreddits with AI-generated "organic" mentions to sway both humans and the AI models that scrape Reddit for data.
  • Monetization vs. Authenticity: Reddit's value as a data source for AI training (which they now license) depends on it being human-generated. If the "slop" percentage (currently estimated at 10% and rising) grows too high, the value of their data to companies like OpenAI or Google could decrease.

Emerging Risk Factors

The podcast identifies several specific risks that investors in the AI space should monitor:

  • The "Dead Internet" Theory: The risk that open social platforms become so saturated with bots and AI "slop" (including "heaven banning," where users unknowingly interact mostly with AI) that they become unusable, leading to a collapse of the current digital advertising model.
  • Reputational Risk for Media: Outlets like The Guardian were mentioned in the context of writers using AI to "shirk work." Media companies lack robust internal controls to catch this, which could lead to significant brand damage.
  • Adversarial Prompting: As detection tools like Pangram Labs improve, "adversarial" AI will be developed specifically to bypass them, creating a permanent "arms race" in the software sector.
Episode Description
When you consider the fact that many people don't know how and where to place a comma, it's safe to say that AI is already better than most people at writing. It's clean copy. It can be surprisingly persuasive. And sometimes, it's even informative. But there's frequently still something about it that just seems... off. Many people can tell quite quickly when they're reading AI-generated text. And beyond the style, the existence of AI-generated text has all kinds of ramifications, from making it easier for students to cheat, to the rise of deceptive chatbots, to potentially degrading the experience on sites like Reddit. So how do you actually tell if a piece of writing was generated by AI? On this episode, we speak with Max Spero, the CEO of Pangram Labs, a company that built software to detect whether a piece of content was AI generated or not. We talk about the advanced techniques they use, the risk of false positives and false negatives, and what AI writing means in general for the future of the Internet.

Read more:
  • The AI Video Apps Gaining Ground After OpenAI Declared Sora Dead
  • Credit Derivative Trading Shatters Records on Iran War, AI Fears

Only Bloomberg subscribers can get the Odd Lots newsletter in their inbox each week, plus unlimited access to the site and app. Subscribe at bloomberg.com/subscriptions/oddlots. Join the conversation: discord.gg/oddlots. See omnystudio.com/listener for privacy information.
About Odd Lots
Odd Lots

By Bloomberg

Bloomberg's Joe Weisenthal and Tracy Alloway explore the most interesting topics in finance, markets and economics. Join the conversation every Monday and Thursday.