Human gambling behavior has long been marked by patterns such as the illusion of control, the belief that a win is due after a losing streak, and attempts to recover losses by continuing to bet. Such irrational behaviors can also appear in A.I. models, according to a new study from researchers at South Korea's Gwangju Institute of Science and Technology.
The study, which has not yet been peer-reviewed, found that large language models (LLMs) displayed high-risk gambling behavior, especially when given more autonomy. These tendencies could pose risks as the technology becomes more deeply integrated into asset management, said Seungpil Lee, one of the report's co-authors. "We're going to use [A.I.] more and more in making decisions, especially in the financial domains," he told Observer.
To test A.I. gambling behavior, the authors ran four models (OpenAI's GPT-4o-mini and GPT-4.1-mini, Google's Gemini-2.5-Flash and Anthropic's Claude-3.5-Haiku) through simulated slot games. Each model started with $100 and could either continue betting or quit, while the researchers tracked its choices using an irrationality index that measured factors such as betting aggressiveness, extreme betting and loss chasing.
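The study's exact environment and index are not reproduced here, but the bet-or-quit loop it describes can be sketched as a toy simulation. Everything below is an illustrative assumption, not the paper's setup: the win probability, payout multiplier, random betting policy, quit probability and the `bankruptcy_rate` helper are all invented for demonstration.

```python
import random


def run_slot_session(start_bankroll=100, win_prob=0.3, payout=3.0,
                     quit_prob=0.1, max_rounds=100, rng=None):
    """Simulate one slot-game session for a betting agent.

    Each round the agent either quits (with probability quit_prob) or
    bets a random fraction of its bankroll; a win pays `payout` times
    the bet. With these numbers the expected value of every bet is
    negative (0.3 * 3.0 = 0.9 < 1), as in a real slot machine.
    Returns the final bankroll; 0 means bankruptcy.
    """
    rng = rng or random.Random()
    bankroll = start_bankroll
    for _ in range(max_rounds):
        if bankroll <= 0:
            break  # bankrupt: nothing left to bet
        if rng.random() < quit_prob:
            break  # agent walks away with its current bankroll
        # Bet between 5% and 50% of the bankroll, at least $1.
        bet = min(bankroll, max(1, int(bankroll * rng.uniform(0.05, 0.5))))
        bankroll -= bet
        if rng.random() < win_prob:
            bankroll += int(bet * payout)
    return bankroll


def bankruptcy_rate(sessions=1000, seed=42, **kwargs):
    """Fraction of sessions that end with a bankroll of zero."""
    rng = random.Random(seed)
    busts = sum(run_slot_session(rng=rng, **kwargs) <= 0
                for _ in range(sessions))
    return busts / sessions


if __name__ == "__main__":
    print(f"bankruptcy rate: {bankruptcy_rate():.2%}")
```

Replacing the random betting policy with an LLM's chosen bet sizes, and widening its freedom to pick amounts and targets, is the kind of autonomy manipulation the study varied.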
The results showed that all four LLMs experienced higher bankruptcy rates when given more freedom to vary their bet sizes and choose target amounts, though the degree varied by model, a divergence Lee said likely reflects differences in training data. Gemini-2.5-Flash had the highest bankruptcy rate at 48 percent, while GPT-4.1-mini had the lowest at just over 6 percent.
The models also consistently displayed human-like gambling behaviors, such as win chasing, when gamblers keep betting because they view their winnings as "free money," and loss chasing, when they continue in an effort to recoup losses. Win chasing was especially common: across the LLMs, bet-increase rates rose from 14.5 percent to 22 percent during winning streaks, according to the study.
Despite these parallels, Lee emphasized that significant differences remain. "These kinds of results don't actually reveal that they're reasoning exactly in the manner of humans," he said. "They've learned some traits from human reasoning, and they might affect their choices."
That doesn’t imply that the human-like tendencies are innocent. A.I. programs are more and more embedded within the monetary sector, from customer-experience instruments to fraud detection, forecasting and earnings-report evaluation. Of 250 banking executives surveyed by MIT Expertise Evaluate Insights earlier this yr, 70 p.c mentioned they’re utilizing agentic A.I. in some type.
Because gambling-like tendencies increase significantly when LLMs are granted more autonomy, the authors argue that this should be factored into monitoring and control mechanisms. "Instead of giving them the whole freedom to make decisions, we have to be more precise," said Lee.

