How AI search tools get news sources wrong
The confident tone of AI chatbots often masks serious problems with information quality, raising concerns about their reliability for finding and citing news, a Kazinform News Agency correspondent reports.

To assess how accurately eight AI-powered search tools retrieve and cite news content, the Tow Center for Digital Journalism conducted a series of tests. The study comprised 1,600 queries built from ten randomly selected articles from each of twenty publishers; for every query, a chatbot was shown an excerpt of an article and asked to identify its headline, original publisher, publication date, and URL.
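The grading logic behind such a test can be pictured with a short sketch. The code below is not the Tow Center's actual tooling; the Article fields, the grade() helper, and the example data are hypothetical, and it only shows how an answer might be scored as correct, partially correct, or incorrect against an article's known metadata.

# Hypothetical grading sketch: compare a chatbot's answer to the known
# metadata of the sampled article. All data below is made up.

from dataclasses import dataclass

@dataclass
class Article:
    headline: str
    publisher: str
    date: str
    url: str

def grade(answer: dict, article: Article) -> str:
    """Score one chatbot answer against the article's ground-truth metadata."""
    fields = ("headline", "publisher", "date", "url")
    matches = sum(
        answer.get(f, "").strip().lower() == getattr(article, f).strip().lower()
        for f in fields
    )
    if matches == len(fields):
        return "correct"
    return "partially correct" if matches else "incorrect"

# One example query: the bot gets the headline, date, and URL right
# but misattributes the publisher, so the answer is only partially correct.
source = Article(
    headline="Example Headline",
    publisher="Example Times",
    date="2025-01-15",
    url="https://example.com/story",
)
chatbot_answer = {
    "headline": "Example Headline",
    "publisher": "Example Weekly",
    "date": "2025-01-15",
    "url": "https://example.com/story",
}
print(grade(chatbot_answer, source))  # prints: partially correct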
Findings of the study
The results revealed that all tested chatbots frequently produced incorrect answers—often with surprising confidence. Collectively, they provided incorrect answers to more than 60 percent of queries. For example, ChatGPT misidentified 134 articles but admitted uncertainty in only 15 cases.
Paid versions such as Perplexity Pro and Grok 3 showed higher error rates than their free counterparts, largely because they tended to give definitive but wrong answers rather than decline to respond, which makes it harder to distinguish fact from fiction.
Several chatbots, including ChatGPT and Perplexity, were also able to bypass blocks from publishers that had explicitly restricted their crawlers via the Robots Exclusion Protocol (robots.txt). As a result, the bots returned information from websites they technically did not have permission to access.
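For context, here is a brief sketch of the mechanism publishers rely on: a compliant crawler reads a site's robots.txt file and fetches a page only if it is allowed. The user agent name and URLs below are placeholders, not those of any actual chatbot crawler.

# Minimal illustration of the Robots Exclusion Protocol using Python's
# standard library. A well-behaved crawler runs a check like this before
# fetching a page; the study found some chatbots surfaced content from
# sites that had disallowed their crawlers.

from urllib.robotparser import RobotFileParser

parser = RobotFileParser()
parser.set_url("https://example.com/robots.txt")  # placeholder domain
parser.read()  # download and parse the publisher's robots.txt

# can_fetch() returns False when the named user agent is disallowed for
# this URL, in which case a compliant crawler must not request the page.
allowed = parser.can_fetch("ExampleNewsBot", "https://example.com/article/123")
print("fetch allowed" if allowed else "blocked by robots.txt")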
Another major issue was the tendency of AI systems to misattribute articles or cite syndicated versions instead of the original sources. This not only affects accuracy but also undermines publishers whose content is used without proper credit. Even licensing deals between AI companies and news outlets don’t guarantee accurate attribution.
Still, experts remain optimistic that these systems will improve over time as developers refine the models and invest heavily in their advancement. Even so, no matter how powerful AI becomes, 100 percent accuracy remains unrealistic, at least for now.
Earlier, Kazinform News Agency reported that researchers had proposed a new medical AI model.