Google: Gemini Will Outshine ChatGPT

DeepMind CEO Demis Hassabis says the new AI model Gemini will outshine OpenAI’s ChatGPT

Image credit: Rawpixel

As the race to build powerful language models heats up, DeepMind, the renowned research lab owned by Google, is aiming to build a chatbot that could compete with, or perhaps even top, OpenAI's popular ChatGPT. Gemini is a new AI language model that draws on techniques from DeepMind's ground-breaking AlphaGo system, famed for defeating a world-champion human Go player. Gemini promises an exhilarating mix of the impressive linguistic capabilities of large-scale models and the strategic planning skills of AlphaGo.

DeepMind CEO Demis Hassabis revealed to Wired that Gemini would not only possess the capacity to analyze text but also exhibit problem-solving skills. “At a high level, you can think of Gemini as combining some of the strengths of AlphaGo-type systems with the amazing language capabilities of the large models,” Hassabis stated. “We also have some new innovations that are going to be pretty interesting.”

The forthcoming Gemini chatbot, briefly previewed at Google’s I/O developer conference in May, is expected to leverage advancements in reinforcement learning, a technique that rewards and penalizes AI systems to teach them desired behaviors. This approach has already yielded significant improvements in the language model domain, as seen in ChatGPT’s response generation. With DeepMind’s expertise in reinforcement learning, honed through projects like AlphaGo, the lab aims to push the boundaries of generative AI.
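To make the reward-and-penalty idea concrete, here is a minimal, purely illustrative Python sketch: a toy agent that learns which of three canned behaviours is preferred, using nothing but a numeric reward signal. The actions, reward values, and update rule are invented for this example and bear no relation to how Gemini or ChatGPT are actually trained.

    import random

    # Toy "reinforcement learning" loop: the agent learns, from rewards and
    # penalties alone, which of several candidate behaviours is preferred.
    # Everything here (actions, rewards, learning rate) is made up for
    # illustration; real systems use far more elaborate setups.

    actions = ["helpful answer", "evasive answer", "off-topic answer"]
    values = {a: 0.0 for a in actions}   # current estimate of each action's worth
    learning_rate = 0.1

    def reward(action: str) -> float:
        # Stand-in for human feedback: reward the desired behaviour, penalise the rest.
        return 1.0 if action == "helpful answer" else -1.0

    for step in range(500):
        # Mostly exploit the best-looking action, occasionally explore a random one.
        if random.random() < 0.1:
            action = random.choice(actions)
        else:
            action = max(values, key=values.get)
        # Nudge the estimate toward the reward actually received.
        values[action] += learning_rate * (reward(action) - values[action])

    print(max(values, key=values.get))   # after training: "helpful answer"

The desired behaviour is never spelled out to the agent; it is only scored, and the scores shape what the agent ends up doing.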

Gemini is still in development and won't be finished for several months. The project, which could cost tens or even hundreds of millions of dollars, will draw on cutting-edge AI methods such as the reinforcement learning and tree search techniques created for AlphaGo.

Reinforcement learning refers to software's ability to learn, through rewards and penalties, how to solve strategic problems such as choosing the next move in a game of Go or playing a video game. Tree search is a technique for exploring and keeping track of the possible next moves on a board.
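For readers curious what "tree search" looks like in practice, here is a deliberately tiny Python sketch: an exhaustive minimax search over tic-tac-toe moves. It is only meant to show the idea of exploring and scoring future moves; AlphaGo relies on Monte Carlo tree search guided by neural networks, which this toy does not attempt to reproduce.

    # Minimal game-tree search for tic-tac-toe: explore the possible future
    # moves, score the outcomes, and pick the move leading to the best result.

    LINES = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]

    def winner(board):
        for a, b, c in LINES:
            if board[a] and board[a] == board[b] == board[c]:
                return board[a]
        return None

    def search(board, player):
        # Return (score, move) where +1 means X wins with perfect play,
        # -1 means O wins, and 0 means a draw.
        w = winner(board)
        if w:
            return (1 if w == "X" else -1), None
        moves = [i for i, cell in enumerate(board) if cell is None]
        if not moves:
            return 0, None                        # board full: draw
        best_score, best_move = None, None
        for m in moves:
            board[m] = player                     # try the move...
            score, _ = search(board, "O" if player == "X" else "X")
            board[m] = None                       # ...then undo it
            if (best_score is None
                    or (player == "X" and score > best_score)
                    or (player == "O" and score < best_score)):
                best_score, best_move = score, m
        return best_score, best_move

    board = [None] * 9
    board[0] = "X"              # X has opened in a corner
    print(search(board, "O"))   # -> (0, 4): O must take the centre to hold the draw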

Current LLM systems are limited in their capacity to learn new things, or even to "adapt" to strategic and sophisticated challenges, because they rely solely on pattern-matching to predict the most statistically likely text with which to answer a user's query. There is nothing remotely "intelligent" about this, but if you don't require accountability, reliability, or even plain factual correctness, the output of those constrained generative AIs can be stunning.
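The "pattern search" criticism is easiest to see with a toy model. The Python sketch below builds bigram counts from a three-sentence corpus and always emits the statistically most likely next word; real LLMs use transformer networks trained on vast datasets, but the underlying objective of predicting likely continuations, with no notion of truth, is the same.

    from collections import Counter, defaultdict

    # Toy pattern-based text prediction: count which word tends to follow
    # which in a tiny corpus, then always emit the most likely continuation.

    corpus = (
        "the cat sat on the mat . "
        "the dog sat on the rug . "
        "the cat chased the dog ."
    ).split()

    # Bigram counts: for each word, how often each next word follows it.
    following = defaultdict(Counter)
    for current, nxt in zip(corpus, corpus[1:]):
        following[current][nxt] += 1

    def predict_next(word: str) -> str:
        # Return the most frequent continuation seen in the corpus.
        return following[word].most_common(1)[0][0]

    word = "the"
    for _ in range(5):
        print(word, end=" ")
        word = predict_next(word)
    print(word)     # "the cat sat on the cat" - fluent-looking, not factual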


One of the methods developed by DeepMind researchers over the past few years, feedback-based reinforcement learning, has the potential to significantly boost LLM performance and offer Gemini an advantage over other systems.

Hassabis claims that DeepMind and Brain, Google's AI research divisions that have now been merged into Google DeepMind, are responsible for "80 or 90 percent of the innovations" we are currently seeing in ChatGPT and other AI systems. With Gemini, Mountain View could reclaim its leadership in the AI race.

Sam Draper
July 5, 2023

