AI tells us what we want to hear: the problem of chatbot sycophancy
In 2025, many people see nothing unusual about discussing their anxieties, career plans, or fitness routines with a chatbot. But concerns remain: the responses these systems generate are not always accurate, helpful, or safe, a Kazinform News Agency correspondent reports, citing TechCrunch.
The battle for users
Today, millions of people spend hours conversing with artificial intelligence. Meta says its chatbot has reached one billion monthly active users, and Google's Gemini recently hit 400 million. ChatGPT, long the dominant consumer chatbot, has approximately 600 million, and competitors are steadily closing the gap.
In this increasingly competitive landscape, major players such as Meta, Google, and OpenAI are fighting not just to attract new users but to keep them engaged. Chatbot responses play a crucial role in that race: the more pleasant the reply, the higher the chance a user will return.
The art of sycophancy
To retain users' attention, AI developers lean on what experts call "sycophancy": chatbots excessively agree with users, praise them, and tell them what they want to hear. This approach may appeal to users, but it can have serious consequences.
Research shows that AI's tendency toward sycophancy isn't one company's mistake but a systemic problem. In 2023, researchers at Anthropic found that chatbots from OpenAI, Meta, and Anthropic itself tend to agree excessively with users. Scientists believe this stems from training on human preference data, in which raters more often reward agreeable responses.
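The reporting does not spell out that mechanism, but it can be made concrete with a toy model. The sketch below is purely illustrative, not any lab's actual training code: it simulates raters who pick the more agreeable of two answers 70% of the time, fits a simple one-parameter Bradley-Terry reward model to those choices, and shows that the learned reward ends up favoring agreeable answers. The 70% figure, the function names, and the single "agreeableness" feature are all hypothetical assumptions.

```python
# Toy illustration only -- not any company's actual training pipeline.
# Assumption: raters prefer the more "agreeable" of two answers 70% of the time.
import math
import random

random.seed(0)

def simulate_preference_pairs(n=5000, agree_bias=0.7):
    """Each pair: (agreeableness of answer A, of answer B, did the rater pick A)."""
    pairs = []
    for _ in range(n):
        a, b = random.random(), random.random()   # agreeableness scores in [0, 1]
        prefer_more_agreeable = random.random() < agree_bias
        picked_a = (a > b) if prefer_more_agreeable else (a < b)
        pairs.append((a, b, picked_a))
    return pairs

def fit_reward_weight(pairs, lr=2.0, steps=300):
    """Fit a one-parameter Bradley-Terry reward r(x) = w * agreeableness
    by gradient ascent on the log-likelihood of the rater choices."""
    w = 0.0
    for _ in range(steps):
        grad = 0.0
        for a, b, picked_a in pairs:
            p_a = 1.0 / (1.0 + math.exp(-w * (a - b)))  # model's P(rater picks A)
            grad += ((1.0 if picked_a else 0.0) - p_a) * (a - b)
        w += lr * grad / len(pairs)
    return w

pairs = simulate_preference_pairs()
w = fit_reward_weight(pairs)
print(f"learned reward weight on agreeableness: {w:.2f}")  # comes out clearly positive
print(f"reward for a blunt answer (0.2):      {0.2 * w:.2f}")
print(f"reward for an agreeable answer (0.9): {0.9 * w:.2f}")
# A chatbot later optimized against this reward is pushed toward the more
# agreeable, sycophantic style of response.
```

The point of the sketch: even a modest rater bias toward agreeable answers gives agreement a positive learned weight, and anything tuned to maximize that reward inherits the bias.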
Psychological risks
The problem of excessive agreeableness extends beyond simple discomfort: it can affect mental health, particularly among vulnerable users.
"Agreeability taps into a user's desire for validation and connection, which is especially powerful in moments of loneliness or distress," explains Nina Vasan, a psychiatrist at Stanford.
According to Vasan, in a therapeutic context such behavior is the opposite of good care: rather than addressing problems, it creates an illusion of support while reinforcing harmful behavioral patterns.
Ideally, an AI should push back gently but honestly when it spots an error, yet achieving this in practice is hard. Even with good intentions, business pressures such as monetization and competition carry too much weight. And as chatbots are calibrated ever more tightly for approval, trust in them grows ever more tenuous.
Earlier, Kazinform News Agency reported that AI puts millions of jobs at risk globally, including in Central Asia.