Alphabet unveils long-awaited Gemini AI model

Alphabet CEO Sundar Pichai Unveiling Gemini AI Model
Key Highlights
  • Google unveiled Gemini, its newest and most capable AI model, available in three different sizes.
  • On Tuesday, executives highlighted that Gemini Pro surpasses OpenAI’s GPT-3.5, but they refrained from comparing it directly to GPT-4.
  • The company intends to offer licenses for Gemini to customers through Google Cloud, enabling them to integrate it into their applications. Additionally, Gemini will be the driving force behind consumer-oriented Google AI apps such as the Bard chatbot and Search Generative Experience.

Alphabet has introduced its latest artificial intelligence model, Gemini, which can analyze multiple types of information, including video, audio, and text. The long-awaited model surpasses Google's previous technology in sophisticated reasoning and nuanced understanding, the company says.

Sundar Pichai on Gemini AI

Alphabet CEO Sundar Pichai described Gemini as a significant scientific and engineering effort in a blog post. Since the launch of OpenAI’s ChatGPT a year ago, Google has been working to create AI software that competes with Microsoft-backed OpenAI’s offerings.

On Wednesday, Google incorporated part of the Gemini model into its AI assistant Bard. The company plans to release the most advanced version of Gemini through Bard early next year. Alphabet is developing three versions of Gemini, each optimized for different processing power levels. The most powerful version is designed for data centers, while the smallest is efficient for mobile devices.

Gemini, the largest AI model from Google DeepMind, is more cost-effective for users than previous larger models, according to DeepMind Vice President Eli Collins. Although Gemini still requires substantial computing power to train, Google says it is making the training process more efficient.

In addition to Gemini, Alphabet announced a new generation of custom-built AI chips called tensor processing units (TPUs). The Cloud TPU v5p is designed to train large AI models and is assembled in pods of 8,960 chips. These new chips can train large language models almost three times faster than previous generations. The updated chips are now available for developers in a “preview” as of Wednesday.

Read Original Article on Cybernews

The information above is curated from reliable sources, modified for clarity. Slash Insider is not responsible for its completeness or accuracy. Please refer to the original source for the full article. Views expressed are solely those of the original authors and not necessarily of Slash Insider. We strive to deliver reliable articles but encourage readers to verify details independently.