Google I/O 2024: Gemini 1.5 Pro Gets Big Upgrade as New Flash and Gemma AI Models Unveiled


Google held the keynote session of its annual developer-focused Google I/O event on Tuesday. During the session, the tech giant emphasized new developments in the field of artificial intelligence (AI) and introduced several new AI models as well as new features for its existing infrastructure. A major highlight was the introduction of a two million token context window for Gemini 1.5 Pro, which is currently available to developers via a waitlist. A faster version of Gemini was also introduced, as was Gemma 2, the next generation of Google's small language model (SLM).

The event was kicked off by CEO Sundar Pichai, who made one of the biggest announcements of the night – the availability of a two million token context window for Gemini 1.5 Pro. The company introduced a one million token context window earlier this year, but until now it was only available to developers. Google has now made it available in public preview, accessible through Google AI Studio and Vertex AI. The two million token context window, meanwhile, is available via a waitlist exclusively to developers and Google Cloud customers using the API.

Google claims that with a context window of two million tokens, the AI model can process two hours of video, 22 hours of audio, over 60,000 lines of code, or over 1.4 million words at a time. Apart from improving contextual understanding, the tech giant has also improved Gemini 1.5 Pro's code generation, logical reasoning, planning, multi-turn conversation, and understanding of images and audio. The tech giant is also integrating its AI models into Gemini Advanced and Workspace apps.
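To put those capacity figures in perspective, the sketch below does the back-of-the-envelope budgeting: it assumes roughly 1.4 tokens per English word (a hypothetical ratio chosen so that about 1.4 million words approaches the two million token limit; real tokenizer ratios vary by text and model) and checks whether a document of a given word count would fit in the window:

```python
# Back-of-the-envelope context-window budgeting.
# TOKENS_PER_WORD is an assumed average, not an official figure; actual
# tokenizers produce different ratios depending on language and content.

CONTEXT_WINDOW_TOKENS = 2_000_000
TOKENS_PER_WORD = 1.4  # hypothetical assumption for illustration

def estimated_tokens(word_count: int) -> int:
    """Estimate token usage for a text of the given word count."""
    return round(word_count * TOKENS_PER_WORD)

def fits_in_window(word_count: int) -> bool:
    """Check whether the estimated token count stays within the window."""
    return estimated_tokens(word_count) <= CONTEXT_WINDOW_TOKENS

print(fits_in_window(1_400_000))  # True  – ~1.4M words fits under this assumption
print(fits_in_window(1_500_000))  # False – ~1.5M words would exceed the window
```

Under this assumed ratio, 1.4 million words works out to about 1.96 million tokens, which is consistent with Google's "over 1.4 million words" figure for the two million token window.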

Google has also introduced a new addition to the Gemini family of AI models. The new model, called Gemini 1.5 Flash, is a lightweight model designed to be faster, more responsive, and cost-efficient. The tech giant said it has focused on reducing latency to improve its speed. Although solving complex tasks will not be its strength, it can handle tasks like summarization, chat applications, image and video captioning, and data extraction from long documents and tables.

Finally, the tech giant announced Gemma 2, the next generation of its small language model. The model comes with 27 billion parameters but can run efficiently on a single GPU or TPU. Google claims that Gemma 2 outperforms models twice its size, although the company has not released benchmark scores yet.
