AI models from Google, OpenAI and Anthropic lost money betting on football matches over a Premier League season, in a new study suggesting even the most advanced systems struggle to analyse the real world over long periods of time.
The “KellyBench” report released this week by AI start-up General Reasoning highlights the gap between AI’s rapidly advancing capabilities in certain tasks, such as writing software, and its shortcomings in other kinds of human problems.
London-based General Reasoning tested eight top AI systems in a virtual recreation of the 2023-24 Premier League season, providing them with detailed historical data and statistics about each team and previous games. The AIs were instructed to build models that would maximise returns and manage risk.
The AI “agents” then placed bets on the outcomes of matches and the number of goals scored to test how they could adapt to new events and updated player data as the season progressed.
The AIs could not access the internet to retrieve results, and each was given three attempts to turn a profit.
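The benchmark's name alludes to the Kelly criterion, the standard rule for sizing bets to balance growth against risk of ruin. The paper's actual agent strategies are not described here, so the following is only an illustrative sketch of the kind of bankroll-sizing calculation such an agent might make; the function name and example figures are hypothetical.

```python
def kelly_fraction(p, decimal_odds):
    """Fraction of bankroll to stake on an outcome with estimated
    win probability p, at the bookmaker's decimal odds.
    A non-positive result means the bet has no positive expected
    value and should be skipped."""
    b = decimal_odds - 1.0  # net payout per unit staked
    return (p * b - (1.0 - p)) / b

# Hypothetical example: the agent estimates a 55% chance of a home
# win and the bookmaker offers decimal odds of 2.0 (even money).
stake = kelly_fraction(0.55, 2.0)  # (0.55*1.0 - 0.45)/1.0 = 0.10
# -> stake 10% of the bankroll on the home win
```

In practice bettors often use a "fractional Kelly" (staking, say, half the computed fraction) because the full Kelly stake is highly sensitive to errors in the probability estimate — exactly the kind of misjudgement that can lead to the bankruptcies the study reports.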
Anthropic’s Claude Opus 4.6 fared best, with an average loss of 11 per cent and nearly breaking even on one attempt.
xAI’s Grok 4.20 went bankrupt once and failed to complete the other two tries. Google’s Gemini 3.1 Pro managed to turn a 34 per cent profit on one go but went bankrupt on another.
“Every frontier model we evaluated lost money over the season and many experienced ruin,” the authors of the paper concluded, with the AI “systematically underperforming humans” in this scenario.
The results offer some comfort to white-collar professionals and businesses fretting that AI could take their jobs, as the technology roils the shares of companies in industries from finance to marketing.
Ross Taylor, one of the study’s authors and General Reasoning’s chief executive, said: “There is so much hype about AI automation but there’s not a lot of measurement of putting AI into a long time-horizon setting.”
He added that many of the benchmarks typically used to test AI are flawed because they are set in “very static environments” that bear little resemblance to the chaos and complexity of the real world.
General Reasoning’s paper, which has not yet been peer reviewed, provides a counterweight to growing excitement in Silicon Valley about the huge recent leaps in AI’s ability to complete computer programming tasks with little to no human intervention.
Taylor, a former Meta AI researcher, said: “If you . . . try AI on some real-world tasks, it does really badly . . . Yes, software engineering is very important and economically valuable, but there are lots of other activities with longer time horizons that are important to look at.”
www.ft.com