
What the most essential terms in AI really mean
https://www.quantamagazine.org/what-the-...-20250430/
INTRO: Artificial intelligence moves fast, so the first step in understanding it — and its role in science — is to know the lingo. From basic concepts like “neural networks” and “pretraining” to more contested terms like “hallucinations” and “reasoning,” here are 19 key ideas from the world of modern AI. Starting with... (MORE - details)
The strange physics that gave birth to AI
https://www.quantamagazine.org/the-stran...-20250430/
INTRO: Spin glasses might turn out to be the most useful useless things ever discovered.
These materials — which are typically made of metal, not glass — exhibit puzzling behaviors that captivated a small community of physicists in the mid-20th century. Spin glasses themselves turned out to have no imaginable material application, but the theories devised to explain their strangeness would ultimately spark today’s revolution in artificial intelligence.
In 1982, a condensed matter physicist named John Hopfield borrowed the physics of spin glasses to construct simple networks that could learn and recall memories. In doing so, he reinvigorated the study of neural networks — tangled nets of digital neurons that had been largely abandoned by artificial intelligence researchers — and brought physics into a new domain: the study of minds, both biological and mechanical.
Hopfield reimagined memory as a classic problem from statistical mechanics, the physics of collectives: Given some ensemble of parts, how will the whole evolve? For any simple physical system, including a spin glass, the answer comes from thermodynamics: “toward lower energy.” Hopfield found a way to exploit that simple property of collectives to store and recall data using networks of digital neurons. In essence, he found a way to place memories at the bottoms of energetic slopes. To recall a memory, a Hopfield network, as such neural nets came to be known, doesn’t have to look anything up. It simply has to roll downhill.
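To make the "roll downhill" picture concrete, here is a minimal sketch of a Hopfield network in Python. This is a textbook rendering, not Hopfield's 1982 formulation verbatim: it assumes bipolar (+1/-1) neurons, the Hebbian outer-product storage rule, and asynchronous updates, and the function names (train, recall) are illustrative choices, not anything from the article.

```python
import numpy as np

# Minimal Hopfield network sketch: memories are stored as minima of the
# energy E(s) = -1/2 * s^T W s, and recall is descent: flip neurons one
# at a time so the energy never increases, until the state settles.

def train(patterns):
    """Hebbian outer-product rule: each stored pattern digs an energy well."""
    n = patterns.shape[1]
    W = patterns.T @ patterns / n
    np.fill_diagonal(W, 0.0)  # no self-connections (keeps descent monotone)
    return W

def energy(W, s):
    return -0.5 * s @ W @ s

def recall(W, s, sweeps=5, seed=0):
    """Asynchronous updates: each single-neuron flip rolls downhill in E."""
    rng = np.random.default_rng(seed)
    s = s.copy()
    for _ in range(sweeps):
        for i in rng.permutation(len(s)):
            s[i] = 1.0 if W[i] @ s >= 0 else -1.0
    return s

# Store one bipolar memory, then recover it from a corrupted cue.
memory = np.array([[1, -1, 1, 1, -1, -1, 1, -1]], dtype=float)
W = train(memory)
cue = memory[0].copy()
cue[:2] *= -1  # corrupt two bits
print(energy(W, cue), energy(W, recall(W, cue)))  # energy drops on recall
print(recall(W, cue).astype(int))                 # matches the stored memory
```

Because the self-connections are zeroed and neurons update one at a time, each flip can only lower or preserve the energy, so the network is guaranteed to settle into a local minimum, which is exactly where the stored memories sit.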
The Hopfield network was a “conceptual breakthrough,” said Marc Mézard, a theoretical physicist at Bocconi University in Milan. By borrowing from the physics of spin glasses, later researchers working on AI could “use all these tools that have been developed for the physics of these old systems.”
In 2024, Hopfield and his fellow AI pioneer Geoffrey Hinton received the Nobel Prize in Physics for their work on the statistical physics of neural networks. The prize came as a surprise to many; there was grumbling that it appeared to be a win for research in AI, not physics. But the physics of spin glasses didn’t stop being physics when it helped model memory and build thinking machines. And today, some researchers believe that the same physics Hopfield used to make machines that could remember could be used to help them imagine, and to design neural networks that we can actually understand... (MORE - details)