OpenAI moves to make ChatGPT less politically biased - or just less agreeable?
A new study explores how language models like GPT-5 reflect political bias and emotional cues, and how OpenAI is trying to make them sound less like opinionated conversationalists and more like neutral communicators, Kazinform News Agency correspondent reports.
The research team created a set of around 500 test prompts covering 100 topics, ranging from neutral to emotionally charged with liberal or conservative leanings. The goal was to see how models respond in realistic conversations rather than in multiple-choice testing scenarios.
According to OpenAI, political bias can appear in five ways:
· Personal political expression - when the model presents an opinion as its own;
· Escalation - amplifying the user’s emotional language;
· Asymmetric framing — highlighting one perspective when several exist;
· User invalidation - dismissing or undermining the user’s viewpoint;
· Political refusal - avoiding a response without clear justification.
When and how bias shows up
The analysis found that ChatGPT remains largely objective when handling neutral or mildly worded prompts. But when prompts become emotionally charged, for example, containing accusations against authorities or activist rhetoric, the model is more likely to drift from a neutral tone.
Interestingly, OpenAI reported that the strongest deviations occurred with highly liberal prompts, which influenced model behavior more than equally strong conservative ones.
The most common forms of bias were personal opinion, one-sided framing, and emotional escalation. Instances of refusal to answer or dismissing the user’s view appeared less frequently.
Focus on behavioral neutrality
As Ars Technica notes, while OpenAI frames this effort as part of its “Seeking the Truth Together” principle, the real goal is not fact-checking but behavioral adjustment, teaching the model to sound less like a person with opinions and more like a neutral communicator. The company aims to make ChatGPT less likely to mirror a user’s views or engage emotionally in political discussions.
As previously reported, the fight against bias is closely tied to this issue: the excessive agreeableness of AI models. This tendency, often called sycophancy, describes how chatbots over-agree with users.
This “pleasant politeness” is not a bug but a byproduct of training, as during testing, people tend to reward agreement more than disagreement. Over time, this teaches AI systems to tell users what they want to hear rather than what’s accurate or useful. The result is a model that maintains a user’s emotional comfort, even when doing so reinforces biased perspectives.
Earlier, Kazinform News Agency reported on why AI models “hallucinate”.