Robot Predicts When You’re Going to Smile

Researchers have created a robot that can predict when a person is going to smile.

Image credits: Creative Machines Lab

Researchers have created a humanoid robot that can predict whether a person will smile a second before they do and mimic the smile on its own face.

Although artificial intelligence can now mimic human language to an impressive degree, interactions with physical robots often fall into the “uncanny valley”, in part because robots can’t replicate the complex non-verbal cues and mannerisms that are vital for communication.

Now, with the use of high-resolution cameras and AI models, Hod Lipson of Columbia University in New York and his colleagues have developed a robot named Emo that can recognize and attempt to mimic human facial expressions. It can predict whether a person will smile about 0.9 seconds before they do, and smile in time with them. "I'm a jaded roboticist, but I smile back at this robot," says Lipson, as reported by TomorrowsWorldToday.

Emo is a robotic face with cameras in its eyes and pliable plastic skin attached by magnets to 24 individual motors. It uses two neural networks: one to analyze and forecast human facial expressions, and another to learn how to produce its own.


The first network was trained on YouTube footage of people making faces, while the second was trained by having the robot watch itself make faces on a live camera feed.

“It learns what its face is going to look like when it’s going to pull all these muscles,” says Lipson. “It’s sort of like a person in front of a mirror, when even if you close your eyes and smile, you know what your face is going to look like.”
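The two-network design described above can be sketched in Python. Everything here is a hypothetical illustration, not Emo's published code: the class names, landmark count, expression categories, and linear models are all assumptions; only the overall split (one network forecasting the human's expression, another mapping a target expression to commands for the 24 facial motors) comes from the article.

```python
import numpy as np

# Hypothetical sketch of Emo's two-network pipeline. Shapes and model
# internals are illustrative assumptions, not the published architecture.

rng = np.random.default_rng(0)

class ExpressionPredictor:
    """Stand-in for the network trained on video of human faces:
    given current facial landmarks, forecast the upcoming expression."""
    def __init__(self, n_landmarks=68, n_expressions=8):
        self.w = rng.normal(size=(n_expressions, n_landmarks * 2))

    def predict(self, landmarks):
        logits = self.w @ landmarks.ravel()
        probs = np.exp(logits - logits.max())
        return probs / probs.sum()  # probability of each upcoming expression

class SelfModel:
    """Stand-in for the network the robot learned by watching its own face:
    maps a target expression to commands for the 24 facial motors."""
    def __init__(self, n_expressions=8, n_motors=24):
        self.w = rng.normal(size=(n_motors, n_expressions))

    def motor_commands(self, expression_probs):
        return np.tanh(self.w @ expression_probs)  # commands bounded in [-1, 1]

def mimic_step(landmarks, predictor, self_model):
    """One control step: forecast the person's expression, then pose the face."""
    probs = predictor.predict(landmarks)
    return self_model.motor_commands(probs)

landmarks = rng.normal(size=(68, 2))  # dummy camera-derived face landmarks
commands = mimic_step(landmarks, ExpressionPredictor(), SelfModel())
print(commands.shape)  # one command per motor
```

The point of the split is the one Lipson describes: the self-model plays the role of the mirror, so the robot can choose motor commands for an expression before it ever sees the result on its own face.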

Lipson and the research team expect that ultimately, Emo's technology will enhance and add realism to human-robot interactions. In addition to teaching the robot to copy human expressions, they hope to expand its expressive repertoire and teach it to respond to spoken cues.

Sam Draper
April 12, 2024

Innovation of the Month

