Samsung Researcher’s Small AI Model Beats ChatGPT, Gemini

As the AI era heats up, major tech companies are competing to develop ever more powerful AI models. A Samsung researcher recently developed the Tiny Recursion Model (TRM), which outperforms many popular large language models (LLMs). The model's code is now available on GitHub under the MIT License, meaning anyone can use it, including in commercial applications.
Samsung researcher develops 7-million-parameter AI model that beats most big LLMs
Alexia Jolicoeur-Martineau, a Senior AI Researcher at the Samsung Advanced Institute of Technology (SAIT) in Montreal, created TRM. While it is a small neural network with only 7 million parameters, it surpasses much larger models, including OpenAI's o3-mini and Google's Gemini 2.5 Pro, on reasoning benchmarks. For example, TRM scores 45% on ARC-AGI-1 and 8% on ARC-AGI-2, outperforming most LLMs.
“The idea that one must rely on massive foundational models trained for millions of dollars by some big corporation in order to solve hard tasks is a trap,” wrote Jolicoeur-Martineau on X. “Currently, there is too much focus on exploiting LLMs rather than devising and expanding new lines of direction.”
TRM builds on the Hierarchical Reasoning Model (HRM), which performs well on structured, visual, grid-based problems like Sudoku and mazes. For the uninitiated, HRM uses two separate networks, one operating at high frequency and the other at low frequency. Jolicoeur-Martineau's approach, however, was to simplify this design: TRM uses a single two-layer network rather than two.
“A tiny model pretrained from scratch, recursing on itself and updating its answers over time, can achieve a lot without breaking the bank,” Jolicoeur-Martineau added. The model starts with a problem and a rough initial answer. It then works through a series of reasoning steps, gradually refining the answer until it reaches a stable output. Each step corrects errors from the previous one, without the need for complicated layers or math.
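The loop structure described above can be sketched in a few lines. This is a toy illustration only, not Samsung's actual model: the real TRM is a small neural network, while here `trm_refine` and its update rules are simple numeric stand-ins chosen to show the shape of the recursion (an inner loop refining a latent "scratchpad" z, then an outer step updating the answer y from it).

```python
# Toy sketch of TRM-style recursive refinement (hypothetical stand-ins,
# not the actual network). The loop shape mirrors the description:
# start from a rough answer, refine a latent state, update the answer,
# and repeat until the output stabilizes.

def trm_refine(x, y0, n_latent=6, n_outer=3):
    """Refine a rough answer y0 for problem x via a latent scratchpad z."""
    y = y0
    for _ in range(n_outer):            # outer steps: update the answer
        z = 0.0
        for _ in range(n_latent):       # inner steps: refine the latent state
            # Toy latent update: z drifts toward a correction term for y.
            # (Here the "problem" is computing sqrt(x), purely for demo.)
            z = 0.5 * (z + (x / y - y) / 2)
        y = y + z                       # fold the refined latent into the answer
    return y

# Usage: refine a rough guess (1.0) toward sqrt(2) over a few recursions.
answer = trm_refine(x=2.0, y0=1.0)
```

The point of the sketch is the control flow, not the arithmetic: each outer pass reuses the same tiny update rule on its own previous output, which is how a very small model can accumulate many effective reasoning steps.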
