Last year, Google released the first model in the Gemini 2.0 family, Gemini 2.0 Flash, as part of its broader push towards AI agents, where an important focus has been on improving the performance of AI models at scale. And now, last week, Google rolled out the Gemini 2.0 Flash model for all users of the Gemini app, on both desktop and mobile, including free users.
![Google Gemini 2.0: 5 key details about Google’s latest AI models Google Gemini 2.0: 5 key details about Google’s latest AI models](https://www.hindustantimes.com/ht-img/img/2025/02/11/550x309/Screenshot_2023-12-07_011555_1701892002898_1739278043368.png)
Additionally, Google also released an experimental version of Gemini 2.0 Pro last week. This version is designed for code-based tasks and performs best in coding and handling complex prompts.
Here, let us tell you all you need to know about the Gemini 2.0 family of models. Read on.
Also Read: Samsung Galaxy S26 could bring this big shift from the S25 series: Details here
Gemini 2.0 Pro Experimental Is Google’s Strongest Performing Model Yet
Gemini 2.0 Pro Experimental was released on February 5. Google states that it delivers the strongest performance yet, with enhanced reasoning and world knowledge compared to any previous model. It also features Google’s largest context window of 2 million tokens, allowing it to conduct detailed analysis and process vast amounts of information. Moreover, this model can call external tools such as Google Search and code execution. It is currently available in Google AI Studio, Vertex AI, and for Gemini Advanced users.
Gemini 2.0 Flash Lite Is Google’s Most Cost-Effective Model
Gemini 2.0 Flash is Google’s most cost-effective model, designed for speed and efficiency. It is even faster than Gemini 1.5 Flash while maintaining the same cost and speed but with improved quality. Benchmarks indicate that Gemini 2.0 Flash outperforms Gemini 1.5 Flash. Like its predecessor, it has a 1 million token context window and supports multimodal inputs.
At present, it is available in public preview through Google AI Studio and Vertex AI. Additionally, Gemini 2.0 Flash is now accessible within the Gemini app.
Also Read: Samsung Galaxy S25 Ultra review: Almost the perfect Android flagship
Gemini 2.0 Flash Is For High-Volume Tasks
The Gemini 2.0 Flash model is optimised for high-volume, high-frequency tasks at scale. Google states that it has a 1 million token context window and is being made available across various Google products, including the Gemini app, Gemini API, Google AI Studio, and Google Vertex AI.
Gemini 2.0 Critiques Its Own Responses
Google has also introduced new reinforcement techniques to boost the safety and accuracy of its AI models. As these models become more capable and feature-rich, Gemini now has the ability to critique its own responses, improving overall reliability and better handling sensitive prompts.
Google Gemini Is Now Better At Risk Assessment
Google says it is also leveraging “automated red teaming to assess safety and security risks.” This includes risks from indirect prompt injection, a type of cybersecurity attack in which attackers hide malicious instructions in data that is likely to be retrieved by an AI system.