AI overestimates how rational we are

Artificial intelligence systems tend to overestimate how rational people are, leading the systems to make poor strategic choices when interacting with humans, according to a new study by economists at HSE University, a Qazinform News Agency correspondent reports, citing Tech Xplore.

Collage credit: Yerzhan Zhanibekov/ChatGPT

Researchers found that leading AI models such as ChatGPT 4o and Claude Sonnet 4 often assume a higher level of logical reasoning in people than actually exists. The findings were published in the Journal of Economic Behavior & Organization.

Keynesian beauty contest

The research is based on the Keynesian beauty contest, a well-known thought experiment introduced by British economist John Maynard Keynes in the 1930s.

In its original version, newspaper readers were asked to select the most attractive faces from a large set of photographs, with the prize going to those whose choices were closest to the overall average selection.

The key challenge is not to pick what one personally finds most appealing, but to anticipate what the majority of participants will choose. More sophisticated players go a step further, trying to predict what others think the majority will select, and even what others believe about those expectations. The contest is therefore designed to test several layers of reasoning about how people think and how rational they assume others to be.

Winning depends on aligning one’s choice with the expected group average rather than on making the most individually logical or aesthetically pleasing decision. This makes the beauty contest a powerful tool for studying strategic thinking, herd behavior, and expectation-driven outcomes in economics and finance.

Guess the Number game

A common modern version of this experiment is the Guess the Number game. Participants select a number between zero and one hundred, and the winner is the one whose choice is closest to a fixed fraction of the group average, typically one-half or two-thirds. More experienced players tend to choose lower numbers because they anticipate that others are also trying to outthink the group.
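For illustration only (this is not code from the study), a short Python sketch shows how each additional layer of "I think that they think…" reasoning pushes a guess lower when the target is two-thirds of the average; the function name and the level-0 starting point of 50 are assumptions of the sketch.

```python
# Illustrative sketch (not the study's code): level-k reasoning in the
# "guess two-thirds of the average" game, assuming a hypothetical
# level-0 player who picks the midpoint, 50, of the 0-100 range.

def level_k_guess(k, fraction=2/3, level0=50.0):
    """Guess of a player who assumes everyone else reasons one level less deeply."""
    guess = level0
    for _ in range(k):
        guess *= fraction  # best response to the assumed group average
    return guess

for k in range(6):
    print(f"level-{k} guess: {level_k_guess(k):.1f}")
# 50.0, 33.3, 22.2, 14.8, 9.9, 6.6 ... deeper reasoning drives the guess
# toward 0, the Nash equilibrium, while most human players stop after one or two levels.
```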

How AI thinks

To test how AI performs in such settings, the researchers replicated results from 16 classic Guess the Number experiments previously conducted with human participants.

The AI models were given detailed prompts explaining the game rules and describing their opponents. These opponents ranged from first-year economics students and academic conference participants to people with different cognitive styles and emotional states. The models were then asked to choose a number and explain their reasoning.

The study found that the AI systems adjusted their choices based on whom they believed they were playing against. When facing experienced game theory researchers, the models tended to select numbers close to zero, which is usually optimal in such groups. When playing against first-year students, they chose much higher numbers, reflecting expectations of less strategic thinking.

Despite this adaptability, the researchers found a consistent weakness. The AI models often assumed that human opponents would think more strategically than they actually did. In practice, many people rely on intuition or simple reasoning rather than multiple layers of strategic anticipation. This mismatch caused the AI systems to overthink and, in many cases, lose.

The study also showed that while the models displayed elements of strategic reasoning, they failed to identify a dominant strategy in simple two-player games, highlighting limits in their understanding of human decision making.

The authors argue that improving AI performance in real world strategic settings will require a better understanding of human irrationality, not just formal logic.

Earlier, Qazinform News Agency reported that the U.S. War Department is bringing AI into daily operations.
