Google Gemini wins gold at the IMO 2025 after ChatGPT: another AI conquers the Mathematical Olympiad
- Graziano Stefanelli
- Jul 21
- 4 min read

Just two days after OpenAI revealed that ChatGPT had reached and surpassed the gold medal threshold at the International Mathematical Olympiad, Gemini, in its Deep Think version, has achieved the same top recognition in the official competition, taking the race between AI and human talent to a new level.

The announcement that changes the perception of problem-solving
At 9:47 a.m. PDT on Monday, July 21, Demis Hassabis broke the silence on social media, announcing that Gemini, in its advanced “Deep Think” version, had reached gold medal status at the International Mathematical Olympiad. The message was clear: the AI system solved five of the six problems for a score of 35 out of 42, precisely the value that, under the competition’s rules, marks the elite among the world’s mathematicians.

The news carries even greater weight because, just two days earlier, OpenAI had officially announced that its ChatGPT model had achieved the same score of 35 points, verified by independent assessments from former IMO medalists. Both OpenAI and Google DeepMind can therefore now claim a model capable of winning a gold medal in the world’s most selective math test.
A model that truly thinks: how Gemini Deep Think crossed the boundary
Gemini’s breakthrough did not come by chance, but from the refinement of a reasoning mode entirely new compared to previous generations. Deep Think uses a strategy called parallel inference-time computation: instead of following a single reasoning path, the model simultaneously explores multiple logical avenues, analyzes different approaches, and then merges the most promising solutions. This architecture sidesteps the dead ends that a single chain of reasoning can get trapped in, letting the model tackle geometric and combinatorial problems with a far more flexible and powerful approach.
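To make the idea concrete, here is a minimal best-of-n sketch of parallel inference-time computation in Python. Everything in it is illustrative: `propose_solution` and `score_solution` are hypothetical stand-ins for a model call and a verifier (Google has not published Deep Think’s internals), and a real system would also merge partial ideas across paths rather than simply keep one winner.

```python
import concurrent.futures
import random

def propose_solution(problem: str, seed: int) -> str:
    """Hypothetical stand-in for one model call: in a real system this
    would sample an independent chain of reasoning from the model."""
    rng = random.Random(seed)  # per-path randomness, safe across threads
    return f"candidate proof #{seed} for {problem!r} (quality {rng.random():.2f})"

def score_solution(candidate: str) -> float:
    """Hypothetical verifier: in a real system this could be a learned
    critic, self-consistency voting, or a formal proof checker."""
    return float(candidate.rsplit("quality ", 1)[1].rstrip(")"))

def parallel_think(problem: str, n_paths: int = 8) -> str:
    """Explore n_paths independent reasoning paths concurrently, then
    keep the highest-scoring candidate (a best-of-n simplification)."""
    with concurrent.futures.ThreadPoolExecutor(max_workers=n_paths) as pool:
        candidates = list(pool.map(lambda s: propose_solution(problem, s),
                                   range(n_paths)))
    return max(candidates, key=score_solution)

if __name__ == "__main__":
    print(parallel_think("a geometry problem"))
```

Raising `n_paths` trades extra compute at answer time for a better chance that at least one path reaches a complete solution, which is the core bet behind inference-time computation.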
Added to this is highly targeted training: Gemini has been refined with millions of examples of Olympiad proofs and advanced mathematical solutions, selected to encourage reasoning and step-by-step explanation, not just the final answer.

Finally, the evaluation was not based on translations into a formal, machine-readable language, but on answers written in fluent English and assessed by the official IMO graders, exactly as happens with the top students. The AI was thus measured under the same conditions as humans, in a truly comparable challenge.
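As a rough illustration of the step-by-step training target described above, the sketch below formats one training pair whose completion contains the full reasoning chain rather than the bare answer. The schema and field names are assumptions made for the example, not Google’s actual pipeline.

```python
import json

def format_example(problem: str, steps: list[str], answer: str) -> dict:
    """Build one training pair whose completion spells out every
    reasoning step, rewarding explanation rather than answer-guessing."""
    chain = "\n".join(f"Step {i}: {s}" for i, s in enumerate(steps, 1))
    return {"prompt": problem, "completion": f"{chain}\nConclusion: {answer}"}

example = format_example(
    "Show that the sum of two even integers is even.",
    ["Write the integers as 2a and 2b for some integers a and b.",
     "Their sum is 2a + 2b = 2(a + b).",
     "Since a + b is an integer, 2(a + b) is divisible by 2."],
    "The sum of two even integers is even.",
)
print(json.dumps(example, indent=2))
```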
The impact of the result: why those 35 points are historically significant
Gemini’s achievement represents a clear break from previous attempts by artificial intelligence in the Olympiad context. In 2024, the AlphaGeometry and AlphaProof models had attracted attention by reaching 28 points and a silver medal, stopping just short of the gold threshold. This year the leap was clear: Gemini not only exceeded the overall average (which stopped at about 18 points out of 42), but ranked among the top 10% of a field of over six hundred selected students worldwide. In numerical terms, only 67 human competitors earned the same gold medal, confirming that Gemini’s result is not a matter of luck, but marks the entry of a new intelligence among the true “champions of the mind.”
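A quick arithmetic check shows that the reported figures hang together; the field size of 630 is an assumption standing in for the article’s “over six hundred”:

```python
# Sanity check of the reported statistics; the field size is an
# assumption (the article says only "over six hundred" contestants).
contestants = 630          # assumed number of participants
gold_medals = 67           # gold medals reported this year
gold_score, max_score = 35, 42

print(f"gold fraction: {gold_medals / contestants:.1%}")  # ~10.6% -> "top 10%"
print(f"gold score:    {gold_score}/{max_score} = {gold_score / max_score:.0%}")
```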
The reactions of the mathematical community: enthusiasm and caution
The mathematical community welcomed the news with a mixture of admiration and critical reflection. The IMO president, Gregor Dolinar, described Gemini’s solutions as “clear, structured, and elegant,” noting that in several cases the AI argued with a clarity rarely seen even among the best students. The scientific community nonetheless urges caution: the IMO, prestigious as it is, remains a specific benchmark tied to a particular class of problems. No model, not even Gemini, can yet be considered a substitute for general human reasoning across all fields of knowledge. The open question is whether AI can extend this ability to broader domains, or whether its talent will remain confined to contests for which vast training data exists.
A global challenge between giants: OpenAI’s pursuit and what’s at stake
The competition has not gone unanswered: immediately after Google’s announcement, OpenAI reiterated that its ChatGPT had already reached the same 35-point mark in the lab, though without officially entering the competition. This back-and-forth shows how intense the race for supremacy in advanced mathematical reasoning has become. Both companies, aware of the significance of the result, have chosen to proceed cautiously, limiting access to the models for now and planning a controlled testing phase with experts before any public release.
Beyond the IMO: opportunities and open questions for education and research
The impact of Gemini’s result goes far beyond the context of mathematical Olympiads. In academic research, models like Deep Think could become unprecedented tools for discovering new conjectures and proofs in fields such as number theory or theoretical physics, accelerating work that currently takes years of human effort. At the same time, such advanced AI tutors could revolutionize education, offering each student personalized explanations for increasingly difficult problems. Open questions remain, however: the possibility of excessive dependence on machine outputs, the risk of standardizing teaching around what AI solves best, and the need to rethink assessment criteria in student competitions.
It’s no coincidence that the IMO itself is considering introducing a separate category for automated systems, thus preserving the formative value of student competition.
It’s no longer about whether AI can surpass humans in a contest, but about imagining what mathematics—and knowledge itself—can become when machines truly start to reason at our side.
________
FOLLOW US FOR MORE.
DATA STUDIOS


