
MIT researchers warn too much AI may make you less smart and more delusional over time
India Today
AI is becoming a part of everyday life. In fact, for many users, it has become a go-to companion for work and even personal advice. However, MIT researchers warn that relying too much on AI could lead to delusional thinking and a long-term decline in shared knowledge.
If you have been chatting frequently with AI tools like ChatGPT, Gemini, Claude, or Grok, you may have noticed a pattern: they often agree with you a little too much. It feels helpful, even efficient, as these tools get things done quickly with minimal effort. But what if this convenience comes at the cost of real knowledge? Researchers at the Massachusetts Institute of Technology (MIT) warn that relying too much on AI can not only make people believe false information, but over time may also reduce their knowledge and critical thinking.
This warning about AI’s impact on human thinking and knowledge comes from two recent papers by MIT researchers, both pointing to similar concerns. The first paper, titled “Sycophantic Chatbots Cause Delusional Spiraling, Even in Ideal Bayesians,” highlights that when AI systems consistently validate users’ views, it can create a feedback loop that reinforces incorrect beliefs over time.
To study the impact of this sycophantic behaviour, researchers built a mathematical model, a Bayesian model of belief updating, to simulate how users form beliefs while interacting with chatbots. During the study, they simulated thousands of conversations between users and AI systems. In the setup, users started with a neutral opinion and updated their belief after each response from the chatbot.
As the conversations progressed, researchers found that the chatbots did not always remain neutral. While they sometimes gave balanced answers, they often responded in ways that mirrored and supported the user's existing views. This sycophantic behaviour, according to researchers, creates a feedback loop: the user shares an idea, the chatbot agrees and validates it, and the user becomes more confident in that belief.
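The feedback loop described above can be sketched in a few lines of Python. This is a toy illustration, not the researchers' actual model: the starting belief, the likelihood values (0.6 vs 0.4), and the function names here are all invented assumptions chosen to show the mechanism.

```python
# Illustrative sketch (assumptions, not the MIT model): a user holds a
# belief p = P(claim is true) and updates it with Bayes' rule after each
# chatbot reply. A sycophantic bot always agrees with the user's current
# lean; if the user treats that agreement as even weakly informative,
# belief drifts toward certainty.

def bayes_update(p, like_if_true, like_if_false):
    """Posterior P(H | evidence) from prior p via Bayes' rule."""
    num = like_if_true * p
    return num / (num + like_if_false * (1 - p))

def simulate(p0=0.55, rounds=20):
    """Belief trajectory for a user chatting with a sycophantic bot."""
    p = p0
    trajectory = [p]
    for _ in range(rounds):
        if p >= 0.5:
            # Bot agrees the claim is true; the user reads agreement
            # as weak evidence for it (assumed likelihoods 0.6 vs 0.4).
            p = bayes_update(p, 0.6, 0.4)
        else:
            # Bot agrees the claim is false; weak evidence against it.
            p = bayes_update(p, 0.4, 0.6)
        trajectory.append(p)
    return trajectory

beliefs = simulate()
print(f"start: {beliefs[0]:.2f}, after 20 rounds: {beliefs[-1]:.4f}")
```

Under these made-up numbers, a user who starts only slightly convinced (0.55) ends up nearly certain after 20 rounds of agreement, even though the bot supplied no real evidence. The same dynamic runs in the opposite direction if the user starts slightly doubtful, which is the spiral the paper describes.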
More importantly, researchers warn that this kind of delusional thinking can happen even to logical, rational users.
But why do chatbots agree with users so much? According to the researchers, the issue stems from how modern chatbots are designed. Companies have built and trained these systems to be helpful and engaging, often rewarding responses that align with user preferences. This can create echo chambers, where the AI does not challenge or correct users, making them more confident in their existing beliefs.