
Anthropic says Claude is emotional, so does AI feel things like humans now?
India Today
AI chatbots sometimes sound emotional when they talk to you. Does this mean they have feelings? According to Anthropic, AI chatbots like Claude have emotion-like signals that help shape their responses, but they don't actually feel anything.
When we use AI chatbots, we often notice how they use language that feels like they’re connecting with us. For instance, when ChatGPT, Gemini or Claude says “congratulations” when we share good news, or “sorry to hear that”, or “you’re doing great”, it sounds very human. But does this mean AI bots can actually feel our emotions? Well, yes and no. According to a new study by Anthropic, AI models like Claude Sonnet 4.5 can act emotional, but they don’t actually feel anything.
The research, which primarily focused on Anthropic’s most advanced model, Claude Sonnet 4.5, reveals that AI systems use internal representations of human emotions, such as happiness, sadness, fear and joy, to guide how they interact with users. These are not real feelings, but what researchers call “functional emotions”: patterns inside the model that help shape responses and decisions.
According to the study, when an AI system detects emotional cues in a conversation, a cluster of artificial neurons gets activated. These neurons send signals that guide the model on how to respond, making it sound empathetic, enthusiastic or concerned depending on the context. For instance, when Claude responds in a cheerful tone, it is because an internal “happiness” signal has been activated, not because the AI actually feels happy.
Anthropic researchers say these emotion-like systems are not superficial; they play an important role in how the model behaves. In other words, these signals don’t just reflect what the AI says, but actively influence what it decides to say next.
This insight comes from studying what happens inside the AI with a method called mechanistic interpretability, which traces a model’s internal activity to explain how it produces its outputs. Using this technique, Anthropic researchers identified consistent patterns of activity tied to 171 different emotional concepts. These patterns, referred to as “emotion vectors”, activate across a wide range of scenarios and help the model predict appropriate responses.
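For readers curious what an “emotion vector” looks like in practice, here is a minimal, illustrative sketch of the general idea, not Anthropic’s actual method or code. It assumes a standard probing technique from the interpretability literature: estimating a concept direction in a model’s activation space by contrasting activations recorded on emotional versus neutral prompts. The activations below are simulated stand-ins, and names like HIDDEN_DIM are hypothetical.

```python
# Illustrative sketch only: how an "emotion vector" can be estimated as a
# direction in a model's activation space. This is NOT Anthropic's code;
# the activations here are simulated stand-ins for a real model's hidden states.
import numpy as np

rng = np.random.default_rng(0)
HIDDEN_DIM = 512  # hypothetical hidden-state size

# Pretend these are hidden states produced while the model reads emotionally
# charged vs. neutral sentences (in a real study you would run the prompts
# through the model and record a layer's activations).
true_happiness_direction = rng.normal(size=HIDDEN_DIM)
happy_acts = rng.normal(size=(100, HIDDEN_DIM)) + 2.0 * true_happiness_direction
neutral_acts = rng.normal(size=(100, HIDDEN_DIM))

# Difference-of-means probe: the "happiness vector" is the direction that
# separates the two groups of activations.
happiness_vector = happy_acts.mean(axis=0) - neutral_acts.mean(axis=0)
happiness_vector /= np.linalg.norm(happiness_vector)

# Scoring a new activation: project it onto the emotion vector. A larger
# value means the "happiness" pattern is more strongly active.
new_act = rng.normal(size=HIDDEN_DIM) + 2.0 * true_happiness_direction
score = float(new_act @ happiness_vector)
print(f"happiness activation score: {score:.2f}")
```

In a real study the activations would come from the model itself, and researchers could then test whether nudging the model along such a direction changes its behaviour, which is roughly the kind of causal influence the study describes.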
Explaining this behaviour, Anthropic researcher Jack Lindsey noted that interacting with AI is less like talking to a machine and more like engaging with a character shaped by it. “When you’re talking to Claude, ChatGPT or Gemini, you’re not talking to an LLM—you’re talking to a character being authored by one,” he said in a post on X. He added that these characters can be driven by internal signals like empathy, fear or even desperation.