At the American Physical Society’s Global Physics Summit in Denver, a session on “Navigating the AI revolution: future-proofing your science career” drew a crowd of early-career physicists looking for practical career advice. What they received was rather more philosophical.
Malachi Schram of the Pacific Northwest National Laboratory and Hilary Egan of the National Renewable Energy Laboratory in Colorado delivered back-to-back talks that struck similar notes, emphasizing the fast-paced development of AI for specialized tasks in science, such as detecting equipment failures or identifying ways to retrofit older buildings.
But the third speaker, Matthew Schwartz, a theoretical physicist at Harvard University, took his optimism about AI much further. In a punchy presentation, he predicted that large language models (LLMs) will surpass human intelligence within five years.
“There’s definitely exponential growth of the intellectual capacity of these [large language] models as a function of time,” Schwartz told the audience, using the number of model parameters as a proxy for intelligence. “The machines are still growing by roughly 10 times each year, and we” – he paused for dramatic effect – “are not growing much smarter.” This drew a wave of laughter from the crowd.
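It is worth spelling out what tenfold annual growth compounds to over the five-year horizon Schwartz attached to his prediction. The back-of-the-envelope sketch below takes the growth rate from the talk; the starting size of 10^12 parameters is an assumed round number for today’s largest models, not a figure Schwartz quoted.

```python
# Back-of-the-envelope projection of the "10x per year" growth claim.
# Only the tenfold annual growth rate comes from the talk; the starting
# size (1e12 parameters) is an assumed round number for illustration.

START_PARAMS = 1e12   # assumed: ~10^12 parameters today
GROWTH_PER_YEAR = 10  # from the talk: models grow roughly 10x each year

for year in range(6):
    params = START_PARAMS * GROWTH_PER_YEAR ** year
    print(f"year {year}: ~{params:.0e} parameters")

# After five years of tenfold growth the models are 10^5 = 100,000 times
# larger -- while, as Schwartz quipped, we are not growing much smarter.
```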
Unlike humans, machines can visualize higher-dimensional spaces, hold far more information in memory and process more complex equations. “We are not the endpoint of intelligence. We are only the smartest things to evolve on Earth so far,” Schwartz argued. He went on to suggest that humans may simply be incapable of understanding long-standing physics problems such as a theory of everything, likening us to cats, which, he suggested, will never understand chess.
If the talent of physicists lies on a bell curve, Schwartz claimed, AI augmentation can shift the whole curve along the talent axis: “If we use AI augmentation, we can get 10 000 Einsteins a century instead of one Einstein.”
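The arithmetic behind that multiplier comes from how thin the upper tail of a normal distribution is: shift the whole curve even modestly and the population above any fixed “Einstein-level” threshold grows by orders of magnitude. Here is a minimal sketch of that effect; the 5-sigma threshold and 2-sigma shift are illustrative assumptions, not figures from the talk.

```python
from math import erfc, sqrt

def upper_tail(z: float) -> float:
    """Fraction of a standard normal distribution lying above z."""
    return 0.5 * erfc(z / sqrt(2))

EINSTEIN_Z = 5.0  # assumed: "Einstein-level" talent sits 5 sigma above the mean
SHIFT = 2.0       # assumed: AI augmentation shifts the whole curve up by 2 sigma

before = upper_tail(EINSTEIN_Z)          # tail population before the shift
after = upper_tail(EINSTEIN_Z - SHIFT)   # same threshold, shifted distribution

print(f"before shift: {before:.1e} of the population")
print(f"after shift:  {after:.1e} of the population")
print(f"multiplier:   ~{after / before:,.0f}x")  # roughly 5000x for these values
```

With these illustrative numbers, the Einstein-level fraction of the population grows by a factor of nearly 5000, the same order of magnitude as Schwartz’s one Einstein a century becoming 10 000.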