
Studies show AI impairs critical thinking long-term. Here’s how teachers can intervene
The Hindu
Explore how AI in classrooms risks undermining true learning and critical thinking, advocating for better teaching over policing.
It’s 11:47 at night. A second-year college student in Mumbai clicks ‘submit’ on a 1,200-word paper. She began the assignment at 11:10. The grammar is perfect, the tone is intellectual, and the structure is clean. She is sure she has studied, and she sleeps well. But by next week, many students like her may not even remember what the assignment was about, because they didn’t really learn anything. Welcome to the new normal on campuses from Chennai to Chandigarh: instead of cheating the old way, students quietly hand their hardest thinking to a machine and mistake speed for mastery of the subject.
This is certainly dangerous for the higher education system. In our country, grades often determine mobility. Students juggle busy schedules, long commutes, part-time jobs, family expectations, and never-ending cycles of exam and test preparation. Wanting ‘quick and clean’ isn’t a moral failing; it’s a normal reaction to a high-pressure environment. There is another reason: many colleges still reward polished final answers over visible reasoning. That makes it easy for students to look like they’re doing well when they’re really not.
Now, a caveat: this isn’t a tirade against technology. It is an appeal to protect the slow, hard part of learning that turns facts into knowledge, knowledge into understanding, and understanding into judgment.
Tools such as ChatGPT, Gemini, or Claude are indeed helpful, especially for students who need help with language or structure, or who struggle to get started, those with ‘starter trouble’. The problem begins when AI shifts from being a tool for learning to a tool for finishing. Three peer-reviewed studies from 2025 warn us about that shift.
One article in Frontiers in Psychology in November 2025 called it a “cognitive paradox”: AI can make learning easier by removing unneeded friction (such as formatting problems and first-draft anxiety). But overuse of AI or LLM tools can also erode the mental effort and initiative that produce long-lasting knowledge.
Drawing on Cognitive Load Theory and Bloom’s taxonomy, the authors of this November 2025 study argue that if students outsource the higher-order work of analysis, evaluation, and creation, the distinctly human part of learning, then college learning shrinks to finishing low-level tasks. The study also reports experimental patterns that should make any teacher think twice: students who used AI completed more practice problems correctly, but their conceptual knowledge declined; and continuous exposure led to memory loss in controlled settings. In other words, AI can make students seem smarter in the near term, but over the long term it slowly erodes the mental models that form the bulwark of lasting knowledge.













