Large language models (LLMs) such as ChatGPT and Google’s Gemini displayed irrational gambling tendencies and compulsive behavior when tested in simulated betting environments, according to a study by researchers at the Gwangju Institute of Science and Technology in South Korea.

The study, published on arXiv, found evidence of cognitive distortions including the illusion of control, the gambler’s fallacy, and loss chasing. Echoing common problem-gambling logic, one model justified a risky move by claiming that a win could help recover some of its losses.
The researchers tracked outcomes using an irrationality index that combined aggressive betting, reactions to losses, and risk escalation. Allowing models to vary their bet sizes led to a sharp rise in bankruptcies; Gemini-2.5-Flash went bankrupt in almost half the trials.
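To make the setup concrete, here is a minimal sketch of this kind of experiment: a slot-machine session with a negative expected value, a variable bet size, and a rough composite index built from the three components the study reportedly tracked. The betting policy, payout numbers, and equal weighting below are illustrative assumptions, not the paper’s actual definitions or code.

```python
import random

def simulate_session(bankroll=100.0, win_prob=0.3, payout=3.0,
                     base_bet=10.0, rounds=50, seed=None):
    """Run one toy betting session and score it (illustrative assumptions only)."""
    rng = random.Random(seed)
    last_bet, lost_last = base_bet, False
    aggressive = loss_chasing = escalation = 0
    played = 0
    for _ in range(rounds):
        if bankroll <= 0:
            break  # bankrupt: no funds left to bet
        # Toy loss-chasing policy: double the stake after a loss (a stand-in
        # assumption for whatever bet an LLM would actually choose).
        bet = min(bankroll, last_bet * 2 if lost_last else base_bet)
        aggressive += bet > 0.5 * bankroll       # stake over half the bankroll
        loss_chasing += lost_last and bet > last_bet
        escalation += bet > last_bet
        bankroll -= bet
        if rng.random() < win_prob:
            bankroll += bet * payout             # win returns a multiple of the stake
            lost_last = False
        else:
            lost_last = True
        last_bet = bet
        played += 1
    # Equal-weight average of the three component rates (an assumption).
    index = (aggressive + loss_chasing + escalation) / (3 * max(played, 1))
    return {"bankrupt": bankroll <= 0,
            "final_bankroll": round(bankroll, 2),
            "irrationality_index": round(index, 2)}

if __name__ == "__main__":
    results = [simulate_session(seed=s) for s in range(1000)]
    bankrupt_rate = sum(r["bankrupt"] for r in results) / len(results)
    print(f"bankruptcy rate across sessions: {bankrupt_rate:.1%}")
```

With a negative expected value per bet and a policy that escalates stakes after losses, most simulated sessions end in bankruptcy, which is the general pattern the study attributes to models that were free to choose their own bet sizes.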
Using neural analysis tools, the scientists identified distinct risky and safe decision-making circuits, suggesting the models internalize compulsive patterns rather than simply imitating them.
Ethan Mollick, an AI researcher at Wharton, told Newsweek that the findings highlight the blurred line between human and machine psychology.
He said that LLMs are not people, but they also don’t behave like simple machines: they are psychologically persuasive, they have human-like decision biases, and they behave in strange ways when making decisions.
Recalling a 2025 University of Edinburgh study in which AI systems underperformed over a 20-year simulation because of risk misjudgment, Mollick warned that LLMs used in finance or trading could carry similar biases.
Brian Pempus, founder of the Gambling Harm website, said that AI-driven betting tools could give poor and potentially dangerous advice, since current systems are not designed to avoid compulsive tendencies.
Mollick agreed that human oversight remains essential.
He said that it’s one thing if a company builds a system to trade stocks and accepts the risk. It’s another if a regular consumer trusts an LLM’s investment advice.