
Who wins the science prize when AI makes the discovery?
The Hindu
In 1974, Antony Hewish won the physics Nobel Prize for discovering pulsars. His graduate student, Jocelyn Bell Burnell, had actually spotted the first one in the data: she had built parts of the telescope herself, analysed the charts, noticed the anomaly, and helped confirm that it was real. But she didn’t win the prize. At the time, the Nobel committee reasoned that Hewish had designed the telescope and directed the research programme. The fact that Bell Burnell’s eyes and judgment were what caught the signal didn’t register as the decisive contribution. In the committee’s apparent view, she was doing what graduate students do: executing a senior scientist’s vision.
Now reimagine the scenario with an AI in Bell Burnell’s place. The question stays the same: when a crucial insight or calculation emerges from something that isn’t the senior scientist’s own brain, how do we decide who ‘made’ the discovery?
Suppose an AI system solves a longstanding problem in mathematical physics — say, the existence and smoothness of solutions to the Navier-Stokes equations — and produces a proof. Human mathematicians confirm the proof is correct. Who should win the Nobel Prize?
(“Nobel Prizes” in this article is a stand-in for many prizes of its type, including the Abel Prize, the Wolf Prize, and the Lasker Awards.)
On February 13, OpenAI announced that its AI model GPT-5.2 had helped a group of scientists “derive a new result in theoretical physics”. The (human) scientists posed the original question. GPT-5.2 suggested a potential solution. Then OpenAI built an internal model that fleshed the solution out and, importantly, proved it. The scientists finally verified it (verifiability is also important), and voilà.
The first instinct might be to say the prize should go to the humans, who asked the question, set up the problem, and knew what would count as a solution. On this view, the AI model is just a powerful calculator. When Kenneth Appel and Wolfgang Haken proved the four-colour theorem with computer assistance, nobody suggested the computer should share credit; it was only checking cases the mathematicians had fully specified. But if an AI generates a proof that humans can verify but not fully reconstruct, those humans are more like curators than coauthors, and arguably shouldn’t win the prize. Discovery implies understanding.