That means enterprises deploying generative AI models can tap a more powerful tool.
“Google’s Gemini 2.5 has landed — a masterpiece of reasoning, multimodality and raw computational might.”
In reasoning and knowledge benchmarks, Gemini 2.5 beat OpenAI’s competing models.
The first version of Gemini 2.5 that Google is releasing is an experimental Pro version. It is available in Google AI Studio and, for Gemini Advanced subscribers, in the Gemini app.
Gemini 2.5 is a “thinking,” or reasoning, model, meaning it pauses to work through its logic before responding to improve the accuracy and quality of its answers. It analyzes information, draws logical conclusions, adds context and understands nuance to reach a decision, according to the post.
Google’s rivals have already released their own reasoning models, including offerings from OpenAI, Anthropic, xAI (maker of Grok) and DeepSeek, among others. Google itself has released a reasoning model called Gemini 2.0 Flash Thinking.
However, Gemini 2.5 goes beyond the reasoning capabilities of Gemini 2.0 Flash Thinking, which uses reinforcement learning (rewarding right answers, punishing wrong ones) and chain-of-thought prompting, per the post.
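Chain-of-thought prompting, mentioned above, simply means asking a model to show its intermediate reasoning before committing to a final answer. A minimal sketch of the technique as a prompt-wrapping helper (the function name and prompt wording are illustrative assumptions, not Google's actual training method or API):

```python
def chain_of_thought_prompt(question: str) -> str:
    """Wrap a question so a model is nudged to reason step by step.

    Illustrative sketch of chain-of-thought prompting; the exact
    wording is a common convention, not a Google-specific recipe.
    """
    return (
        f"Question: {question}\n"
        "Let's think step by step, then give the final answer "
        "on a line beginning with 'Answer:'."
    )

# The wrapped prompt asks the model to expose its reasoning chain
# before answering, which tends to improve accuracy on multi-step problems.
prompt = chain_of_thought_prompt("What is 17 * 24?")
print(prompt)
```

Reinforcement learning, by contrast, happens at training time: correct answers are rewarded and wrong ones penalized, so it cannot be shown in a prompt alone.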
With Gemini 2.5, Google was able to reach a new level of performance by combining a “significantly enhanced base model with improved post-training,” the post said.
Going forward, Google will incorporate these thinking capabilities directly into all its models, so they can handle “more complex problems and support even more capable, context-aware agents,” according to the post.
Like Google’s other Gemini models, Gemini 2.5 is natively multimodal, meaning it can analyze and understand text, audio, video, images and code — capabilities built in from the ground up, not bolted on.
Gemini 2.5 also offers a context window of 1 million tokens (about 750,000 English words), so it can accept very long prompts, a capacity few rival models match.
“The context window is incredibly important for the AI race,” Badeev said.
“With a larger context, the model can provide better assistance with programming, answering questions and text generation — anything basically,” Badeev said.
Gemini 2.5 led the field in long-context performance with a score of 83.1%, the blog post said, versus 61.4% for OpenAI’s o3-mini and 64% for its GPT-4.5.
Google plans to double the context window soon, per the post.
“If Google does indeed implement a 2 million token context, it will be an unprecedented advantage over other models, even with lower benchmarks,” Badeev said.