Michael Nielsen – How science actually progresses
Podcast · 2 hr 3 min
Note: AI-generated summary based on third-party content. Not financial advice.
Quick Insights

  • Prioritize investments in companies with massive, proprietary experimental datasets, as AI breakthroughs in physical sciences are 90% dependent on high-quality data moats like the Protein Data Bank.
  • Focus on software engineering tools that assist in high-level system design and architecture, as LLMs are rapidly commoditizing basic code syntax.
  • Treat synthetic biology and biomimicry as high-conviction plays; these sectors are effectively "translating" nature's complex biological machines into scalable engineering assets.
  • View quantum computing as a long-horizon "deep tech" investment, focusing on firms developing new algorithms beyond simple encryption-breaking.
  • Use the vibrancy of open-source communities and preprint activity on platforms like arXiv as leading indicators to identify the next commercial breakthroughs before they hit the mainstream market.

Detailed Analysis

This analysis extracts investment insights and thematic trends from the discussion between Michael Nielsen and Dwarkesh Patel, focusing on the history of science, the evolution of AI, and the future of the "technological tree."


Artificial Intelligence (AI) & Machine Learning

The discussion frames AI not just as a tool for automation but as a force that could shift the "political economy" of how scientific discovery is credited and verified.

  • AlphaFold as a Case Study: While AlphaFold is a breakthrough, its success was 90% dependent on the Protein Data Bank (decades of physical experimental data).
    • Insight: Investment in AI for physical sciences is only as valuable as the proprietary or high-quality experimental data available to train it.
  • The "Verification Loop" Bottleneck: AI excels in domains with tight verification loops (e.g., Coding/Software Engineering where unit tests provide immediate feedback).
    • Risk: In "hard" sciences, verification loops can take decades (e.g., the 85-year gap to discover isotopes). AI may struggle to "leap" to correct theories when experimental data is currently "hostile" or misleading.
  • Interpretability as an Asset: Nielsen suggests that models like AlphaZero or AlphaFold are "new types of objects." We are moving from "understanding principles" to "archaeology of models"—extracting insights from AI that the AI itself cannot explain.
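
To make the contrast concrete, here is a minimal, hypothetical sketch (in Python; the candidate functions and tests are illustrative, not from the episode) of what a tight verification loop looks like: each proposed solution gets pass/fail feedback in milliseconds, the opposite of an 85-year experimental gap.

```python
# A minimal, hypothetical sketch of a "tight verification loop": candidate
# implementations (stand-ins for model outputs) get pass/fail feedback from
# unit tests in milliseconds, not decades.

def passes_tests(candidate) -> bool:
    """Return True iff the candidate passes every unit test."""
    tests = [((2, 3), 5), ((0, 0), 0), ((-1, 1), 0)]
    return all(candidate(*args) == expected for args, expected in tests)

# Hypothetical stream of model-proposed implementations of add(a, b).
candidates = [
    lambda a, b: a - b,  # wrong: rejected immediately
    lambda a, b: a * b,  # wrong: rejected immediately
    lambda a, b: a + b,  # correct: verified on the spot
]

for i, candidate in enumerate(candidates):
    verdict = "verified" if passes_tests(candidate) else "rejected"
    print(f"candidate {i}: {verdict}")
```

The speed of this loop is the whole point: the same trial-and-error process that takes milliseconds here can take decades when the "unit test" is a physical experiment.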

Takeaways

  • Focus on Data Moats: Look for companies that own massive, clean, experimental datasets (like the Protein Data Bank) rather than just those with the best algorithms.
  • Software Engineering Shift: The bottleneck in coding is moving from "writing code" to "system design." Tools that assist in high-level architectural decisions may be the next growth area as LLMs commoditize syntax.

Quantum Computing

Nielsen, a pioneer in the field, provides a "ground-truth" perspective on the trajectory of quantum technologies.

  • The "1700s" of Quantum: Nielsen suggests we are currently in the "1700s" of quantum computing. We have the basic "And/Or" logic (primitives), but we cannot yet anticipate the "Bitcoin" or "Deep Learning" of quantum.
  • Beyond Shor’s Algorithm: While breaking encryption (Shor’s Algorithm) is the most cited use case, the real "alpha" lies in the "strictly larger class of computations" quantum systems can perform that we haven't even conceptualized yet.
  • Hardware-Software Co-evolution: The field matured in the 80s only when hardware (ion traps) met theoretical interest.
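
As a concrete illustration of what a quantum "primitive" is, here is a minimal sketch in Python with NumPy (not from the episode): a single-qubit Hadamard gate producing superposition. Knowing such gates exist is the "1700s" level of understanding; the unanticipated "Bitcoin of quantum" would be built from large compositions of them.

```python
import numpy as np

# A minimal sketch of a basic quantum primitive: the Hadamard gate, roughly
# the quantum analogue of having AND/OR in hand without yet knowing what
# large circuits built from it will be good for.

ket0 = np.array([1.0, 0.0])               # qubit initialized to |0>

H = np.array([[1.0, 1.0],
              [1.0, -1.0]]) / np.sqrt(2)  # Hadamard gate

state = H @ ket0                          # equal superposition of |0> and |1>
probs = np.abs(state) ** 2                # Born rule: measurement probabilities
print(probs)                              # [0.5 0.5]
```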

Takeaways

  • Long-Horizon Investment: Quantum remains a "deep tech" play with a timeline measured in decades, not years.
  • Watch for "New Primitives": Investment opportunities will arise not just in hardware, but in the discovery of new quantum algorithms that go beyond simple search or encryption-breaking.

The "Tech Tree" & Alien Technology Stack

A core theme is that the "Tech Tree" (the path of scientific discovery) is much larger and more "path-dependent" than we realize.

  • Path Dependency: Human science is biased by our biology (e.g., being visual creatures). Different civilizations (or AI-led civilizations) might develop entirely different "stacks" of technology.
  • Gains from Trade: If different civilizations (or isolated AI research programs) explore different branches of the tech tree, the "gains from trade" for information become infinite.
  • The "GitHub for Aliens" Concept: Future value will be found in "translating" different technological stacks (e.g., biological machines like the Ribosome or Hemoglobin) into human-usable engineering.

Takeaways

  • Biotech as "Alien Tech": Nielsen views biology as a "gifted library of machines" we don't yet understand. Companies focusing on Biomimicry or Synthetic Biology are essentially "trading" with an alien tech stack (nature).
  • Diversified Research Programs: In a corporate context, the most successful entities will be those that keep "multiple independent research programs" alive, as a priori, you cannot know which scientific "exception" leads to a breakthrough.

Scientific Infrastructure & Open Science

The "Political Economy of Science" is shifting from closed journals to open-source reputation economies.

  • Preprint Culture: The shift toward arXiv and bioRxiv changes how "priority" and "value" are established.
  • The "Market for Follow-ups": Investment in a new field is often signaled by the "market for follow-ups"—how many smart people are picking up shovels to dig in a new area (e.g., the 1990s for Quantum).

Takeaways

  • Open Source as a Signal: For investors, the "vibrancy" of an open-source community or preprint activity is a leading indicator of where the next "AlphaFold-level" commercial breakthrough will occur.
  • Institutional Innovation: There is a massive opportunity for new "Institutes" or platforms that solve the "attribution" problem for code and data, not just papers (a toy sketch of one building block follows).
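
As a toy illustration of one such building block (my assumption, not from the episode): a content-addressed identifier for a dataset, in the spirit of Git's object hashes, which gives data a stable, citable ID that attribution systems could hang credit on.

```python
import hashlib
import json

# Toy sketch of a content-addressed dataset identifier: the same records,
# in any order, always yield the same ID, which can then be cited and
# credited. The scheme and record fields are illustrative.
def dataset_id(records: list[dict]) -> str:
    canonical = json.dumps(
        sorted(records, key=lambda r: json.dumps(r, sort_keys=True)),
        sort_keys=True,
    )
    return "data-" + hashlib.sha256(canonical.encode("utf-8")).hexdigest()[:16]

records = [
    {"sequence": "MKTAYIAKQR", "resolution": 2.1},
    {"sequence": "GAVLIMFWP", "resolution": 1.8},
]
print(dataset_id(records))  # stable ID, independent of record order
```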
Episode Description
The key question in this conversation is: how do we recognize scientific progress? It's especially relevant for closing the RL verification loop for scientific discovery. But it's also a surprisingly mysterious and elusive question when you analyze the history of human science. We approach this question through the stories of Einstein (who claimed that he hadn't even heard of the famous Michelson-Morley experiment, which is supposed to have motivated special relativity, until after he had come up with it), Darwin (why did it take until 1859 to lay out an idea whose essence every farmer since antiquity must have observed?), Prout (how do you recognize that isotopes exist if you cannot chemically separate them?), and many others.

The verification loop on scientific ideas is often extremely long and weirdly hostile. Ancient Athenians dismissed Aristarchus's heliocentrism in the 2nd century BC because it would imply that the stars should shift in the sky as the Earth orbits the sun. The first successful measurement of stellar parallax was in 1838. That's a 2,000-year verification loop. But clearly human science is able to make progress faster than raw experimental falsification/verification would imply, and in cases where experiments are very ambiguous. How?

Michael has some very deep and provocative hypotheses about the nature of progress. One I found especially thought-provoking is that aliens will likely have a VERY different science + tech stack than us. Which contradicts the common-sense picture of a linear tech tree that I was assuming. And it has some interesting implications for how future civilizations might trade and cooperate with each other. So many other interesting ideas. Really hope you enjoy this as much as I did.

Sponsors

  • Labelbox researchers built a new safety benchmark. Why? Current safety benchmarks claim that attacks on top models are successful only a few percent of the time, but the prompts in those benchmarks don't reflect how real bad actors actually write. You can read Labelbox's research here: https://labelbox.com/blog/the-ai-safety-illusion-why-current-safety-datasets-fool-us-on-model-safety/. If this could be useful for your work, reach out at labelbox.com/dwarkesh
  • Mercury has an MCP that lets you give an LLM access to your full transaction history, including things like attached receipts and internal notes. I just used it to categorize my 2025 transactions, and it worked shockingly well. Modern functionality like this is exactly why I use Mercury. Learn more at mercury.com
  • Jane Street's ML engineers presented some of their GPU optimization workflows at GTC, showing how they use CUDA graphs, streams, and custom kernels to shave real time off their training runs. You can watch the full talk here: https://www.nvidia.com/en-us/on-demand/session/gtc26-s82065/. And they open-sourced all the relevant code: https://github.com/janestreet/gtc2026/. If this kind of stuff excites you, Jane Street is hiring; learn more at janestreet.com/dwarkesh

Timestamps

(00:00:00) – How scientific progress outpaces its verification loops
(00:17:51) – Newton was the last of the magicians
(00:23:26) – Why wasn't natural selection obvious much earlier?
(00:29:52) – Could gradient descent have discovered general relativity?
(00:50:54) – Why aliens will have a different tech stack than us
(01:15:26) – Are there infinitely many deep scientific principles left to discover?
(01:26:25) – What drew Michael to quantum computing so early?
(01:35:29) – Does science need a new way to assign credit?
(01:43:57) – Prolificness versus depth
(01:49:17) – What it takes to actually internalize what you learn

Get full access to Dwarkesh Podcast at www.dwarkesh.com/subscribe
About Dwarkesh Podcast
By Dwarkesh Patel

Deeply researched interviews. www.dwarkesh.com