Google: Gemini Will Outshine ChatGPT

New artificial intelligence model Gemini will outshine OpenAI’s ChatGPT, according to DeepMind CEO Demis Hassabis

Image credit: Rawpixel

As the race to build powerful language models heats up, DeepMind, the renowned research lab owned by Google, is aiming to build a chatbot that could compete with, or even surpass, OpenAI's popular ChatGPT. Gemini is a new AI language model that draws on techniques from DeepMind's ground-breaking AlphaGo system, famed for its victory over a top human Go player. Gemini is an exciting mix of the impressive linguistic capabilities of large-scale models and the strategic planning skills of AlphaGo.

DeepMind CEO Demis Hassabis revealed to Wired that Gemini would not only possess the capacity to analyze text but also exhibit problem-solving skills. “At a high level, you can think of Gemini as combining some of the strengths of AlphaGo-type systems with the amazing language capabilities of the large models,” Hassabis stated. “We also have some new innovations that are going to be pretty interesting.”

The forthcoming Gemini chatbot, briefly previewed at Google’s I/O developer conference in May, is expected to leverage advancements in reinforcement learning, a technique that rewards and penalizes AI systems to teach them desired behaviors. This approach has already yielded significant improvements in the language model domain, as seen in ChatGPT’s response generation. With DeepMind’s expertise in reinforcement learning, honed through projects like AlphaGo, the lab aims to push the boundaries of generative AI.
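To give a concrete, purely illustrative sense of how rewarding and penalizing behavior teaches a system, the following minimal Python sketch runs tabular Q-learning on an invented toy corridor task. The environment, reward values, and hyperparameters are assumptions made up for this example and have nothing to do with how Gemini or ChatGPT are actually trained.

```python
import random

# Toy corridor: states 0..4, start at 0, goal at 4.
# Action 0 = step left, action 1 = step right.
# Reaching the goal yields a reward; every other step incurs a small penalty.
N_STATES, GOAL = 5, 4
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.1  # learning rate, discount, exploration

Q = [[0.0, 0.0] for _ in range(N_STATES)]  # Q[state][action]

def step(state, action):
    """Apply an action and return (next_state, reward, done)."""
    nxt = max(0, state - 1) if action == 0 else min(GOAL, state + 1)
    if nxt == GOAL:
        return nxt, 1.0, True      # reward: reached the goal
    return nxt, -0.05, False       # penalty: wasted a step

for episode in range(200):
    state, done = 0, False
    while not done:
        # Epsilon-greedy: mostly exploit what has been learned, sometimes explore.
        if random.random() < EPSILON:
            action = random.randint(0, 1)
        else:
            action = 0 if Q[state][0] > Q[state][1] else 1
        nxt, reward, done = step(state, action)
        # Q-learning update: nudge the estimate toward reward + discounted future value.
        Q[state][action] += ALPHA * (reward + GAMMA * max(Q[nxt]) - Q[state][action])
        state = nxt

print("Learned policy:", ["left" if q[0] > q[1] else "right" for q in Q])
```

After a few hundred episodes the table settles on "always step right," which is exactly the behavior the rewards and penalties were designed to encourage.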

Gemini is still under development and won't be finished for several months. The project, which could cost tens or even hundreds of millions of dollars, will leverage cutting-edge AI methods such as the reinforcement learning and tree search techniques developed for AlphaGo.

Reinforcement learning refers to software's capacity to learn how to solve strategic problems, such as choosing the next move in a game of Go or playing a video game, by being rewarded for good decisions and penalized for bad ones. Tree search is a technique for exploring and storing the possible next moves on a board.
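AlphaGo itself pairs Monte Carlo tree search with neural networks, which is far beyond a short snippet, but the basic idea of exploring and caching future moves can be shown on a toy game. The sketch below is a deliberate simplification using the game of Nim: it exhaustively searches the move tree and stores the value of each position it discovers, which is the essence of tree search rather than anything resembling AlphaGo's actual algorithm.

```python
from functools import lru_cache

# Toy game of Nim: players alternately take 1-3 stones; whoever takes the last stone wins.
# The search explores ("discovers") the tree of possible moves and caches ("stores")
# the value of every position it has already evaluated.

@lru_cache(maxsize=None)
def best_value(stones):
    """Return +1 if the player to move can force a win from this position, else -1."""
    if stones == 0:
        return -1  # no stones left: the previous player took the last stone and won
    # Try every legal move; our value is the best outcome we can force on the opponent.
    return max(-best_value(stones - take) for take in (1, 2, 3) if take <= stones)

def best_move(stones):
    """Pick the move whose resulting position is worst for the opponent."""
    moves = [take for take in (1, 2, 3) if take <= stones]
    return max(moves, key=lambda take: -best_value(stones - take))

if __name__ == "__main__":
    print("From 10 stones, take:", best_move(10))  # 2, leaving the opponent a losing position
```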

Current LLM systems are limited in their capacity to learn new things, or even to "adapt" to strategic and sophisticated challenges, because they rely solely on pattern-matching to predict the statistically most likely text with which to respond to a user's query. There is nothing remotely "intelligent" about that process, yet if you don't require accountability, dependability, or even plain factual correctness, the output of these constrained generative AIs can be stunning.
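That "pattern search" can be illustrated with a deliberately tiny bigram model: count which word follows which in a small corpus, then always emit the most frequent continuation. Real LLMs learn vastly richer statistics with neural networks over enormous corpora, but this toy sketch, with its made-up corpus, captures the "predict the statistically likeliest next token" behavior described above.

```python
from collections import Counter, defaultdict

# Toy next-token prediction: count which word follows which in a tiny corpus,
# then always emit the statistically most likely continuation.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Build bigram counts: follows[word] counts what comes right after that word.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def generate(start, length=5):
    words = [start]
    for _ in range(length):
        options = follows.get(words[-1])
        if not options:
            break  # no observed continuation: the model has nothing to say
        words.append(options.most_common(1)[0][0])  # greedy: pick the most frequent word
    return " ".join(words)

print(generate("the"))  # "the cat sat on the cat"
```

The output is fluent-looking but has no grounding in meaning or fact, which is precisely the limitation the paragraph above points to.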

One of the methods developed by DeepMind researchers over the past few years, feedback-based reinforcement learning, has the potential to significantly boost LLM performance and offer Gemini an advantage over other systems.
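"Feedback-based reinforcement learning" here refers to techniques in the spirit of reinforcement learning from human feedback (RLHF). The sketch below is a toy stand-in: it fits a trivial reward model from invented pairwise preferences and uses it to rank candidate answers. Production systems use neural reward models and policy-gradient updates, so treat every name, example, and number here as an assumption made for illustration.

```python
from collections import defaultdict

# Toy "feedback" loop: learn a reward model from pairwise human preferences,
# then use it to rank candidate answers (a crude best-of-n selection).

def features(text):
    # Toy features: the set of words in the answer.
    return set(text.lower().split())

# Invented pairwise preference data: (preferred answer, rejected answer).
preferences = [
    ("the capital of france is paris", "i am not sure maybe london"),
    ("water boils at 100 degrees celsius", "water boils whenever it wants"),
    ("2 plus 2 equals 4", "2 plus 2 equals 5"),
]

weights = defaultdict(float)  # word -> learned score contribution

def score(text):
    return sum(weights[w] for w in features(text))

# Perceptron-style updates: raise the score of preferred answers, lower the rejected ones.
for _ in range(50):
    for good, bad in preferences:
        if score(good) <= score(bad):
            for w in features(good):
                weights[w] += 1.0
            for w in features(bad):
                weights[w] -= 1.0

# Use the learned reward model to pick the best of several candidate responses.
candidates = ["maybe the capital is london", "the capital of france is paris"]
print(max(candidates, key=score))
```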

Hassabis claims that DeepMind and Google Brain, the AI research divisions that have now been merged into Google DeepMind, are responsible for "80 or 90 percent of the innovations" we are currently seeing in ChatGPT and other AI systems. With Gemini, Mountain View might reclaim its leadership in the AI race.

Sam Draper
July 5, 2023
