
What water turning to vapour and the way AI learns have in common
The Hindu
How two common techniques engineers use to improve neural networks turn out to be physical phenomena, described by the same laws that govern matter.
Artificial intelligence (AI) models like ChatGPT, Claude, and Gemini often give the impression that there’s a mind at work within the machine. These days they “think” in response to queries, go back and correct themselves, apologise for mistakes, and mimic many tics of human communication.
To this day, however, there is no direct physical evidence that a machine mind exists. In fact, there is good reason to believe that what these machines are doing when they say they are “thinking” is actually a physical phenomenon playing out.
In the 1980s, a group of physicists led by, among others, John Hopfield and Geoffrey Hinton realised that if you have a network with millions of neurons, you can stop treating them as individual ‘particles’ and start addressing them as a system. The behaviour and properties of such systems can be described by the rules of thermodynamics and statistical mechanics.
Hopfield and Hinton won the physics Nobel Prize in 2024 for this work. A pair of studies published in Physical Review E has doubled down on the same idea, showing that two common ‘tricks’ engineers use to make AI models better are also such physical phenomena.
A neural network is a web of simple processors connected to each other like neurons in the human brain, which learns and uses information the way the brain does. The processors can also be stacked in multiple layers, so that one layer prepares the inputs for the next, and so on. Neural networks are at the heart of machine-learning applications like generative AI, self-driving cars, computer vision, and modelling.
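The layered arrangement described above can be sketched in a few lines of code. This is a minimal illustration, not any particular model from the studies: two layers of weighted connections, where the first layer's output becomes the second layer's input. The weights and input values here are arbitrary placeholders.

```python
import numpy as np

def relu(x):
    # A common activation function: a neuron "fires" only for positive input.
    return np.maximum(0.0, x)

rng = np.random.default_rng(42)

# Layer 1: 3 inputs feeding 4 neurons; Layer 2: those 4 feeding 2 outputs.
W1, b1 = rng.standard_normal((4, 3)), np.zeros(4)
W2, b2 = rng.standard_normal((2, 4)), np.zeros(2)

x = np.array([0.5, -1.0, 2.0])       # an arbitrary example input
hidden = relu(W1 @ x + b1)           # layer 1 prepares inputs for layer 2
output = W2 @ hidden + b2            # layer 2 produces the final result
```

During training, the weights `W1` and `W2` are repeatedly nudged so the outputs move closer to known correct answers; with millions of such neurons, it is the collective behaviour of the weights that physicists treat as a thermodynamic system.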
They also have an Achilles heel called overfitting: a network becomes so fixated on specific examples it has seen during training that it fails to learn the broader patterns behind them. Engineers have developed techniques to prevent this. For instance, the October 2025 paper by University of Oxford and Princeton University researchers Francesco Mori and Francesca Mignacco focused on a technique called dropout: during training, the network randomly switches off a certain fraction of its neurons, forcing the remaining ones to work harder and learn the underlying concepts independently.
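The dropout idea can be sketched as follows. This is a generic illustration of the standard “inverted dropout” recipe, not code from the paper; the dropout rate `p` and the inputs are placeholder values.

```python
import numpy as np

def dropout(activations, p, rng, training=True):
    """Randomly zero a fraction p of neuron outputs during training,
    scaling up the survivors so the expected output stays the same."""
    if not training or p == 0.0:
        return activations  # at inference time all neurons are active
    mask = rng.random(activations.shape) >= p  # keep each neuron with prob 1 - p
    return activations * mask / (1.0 - p)

rng = np.random.default_rng(0)
layer_output = np.ones(8)                       # pretend every neuron fired with value 1.0
dropped = dropout(layer_output, p=0.5, rng=rng) # roughly half are silenced each pass
```

Because a different random subset of neurons is silenced on every training pass, no single neuron can rely on its neighbours, which discourages the memorisation behind overfitting.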