Robot Predicts When You’re Going to Smile

Researchers have created a robot that can predict when a person is going to smile.

Image credits: Creative Machines Lab

Researchers have created a humanoid robot that can predict whether a person will smile a second before they do and mimic the smile on its own face.

Although artificial intelligence can now mimic human language to an impressive degree, interactions with physical robots often fall into the “uncanny valley”, in part because robots can’t replicate the complex non-verbal cues and mannerisms that are vital for communication.

Now, using high-resolution cameras and AI models, Hod Lipson of Columbia University in New York and his colleagues have developed a robot named Emo that can recognize and attempt to mimic human facial expressions. It can predict whether a person will smile about 0.9 seconds before they do, and smile in time with them. “I’m a jaded roboticist, but I smile back at this robot,” says Lipson, TomorrowsWorldToday reports.

Emo is a robotic face with cameras in its eyes and pliable plastic skin attached by magnets to 24 individual motors. It uses two neural networks: one for analyzing and forecasting human facial expressions, and another for learning how to produce its own facial expressions.


The first network was trained on YouTube footage of people making faces, while the second was taught by having the robot watch itself make faces in a live camera feed.

“It learns what its face is going to look like when it’s going to pull all these muscles,” says Lipson. “It’s sort of like a person in front of a mirror, when even if you close your eyes and smile, you know what your face is going to look like.”
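The two-network setup described above can be sketched in miniature. This is a hypothetical illustration only, not the team's actual code: the landmark count, layer sizes, and NumPy implementation are assumptions. One tiny network maps recent frames of observed face landmarks to a predicted future expression; a second "self-model" network maps the 24 motor commands to the face shape they would produce, playing the role of the mirror in Lipson's analogy.

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp(sizes):
    """Build a random-weight multilayer perceptron as a list of (W, b) layers."""
    return [(rng.standard_normal((m, n)) * 0.1, np.zeros(n))
            for m, n in zip(sizes[:-1], sizes[1:])]

def forward(layers, x):
    """Run a forward pass with tanh activations."""
    for W, b in layers:
        x = np.tanh(x @ W + b)
    return x

N_LANDMARKS = 68 * 2   # assumed: 68 (x, y) face landmarks per frame
HISTORY     = 5        # assumed: frames of context fed to the predictor
N_MOTORS    = 24       # from the article: 24 motors drive Emo's skin

# Network 1: predicts the person's upcoming expression from recent frames.
predictor  = mlp([HISTORY * N_LANDMARKS, 64, N_LANDMARKS])
# Network 2: "self-model" mapping motor commands to the robot's own face.
self_model = mlp([N_MOTORS, 64, N_LANDMARKS])

recent_frames  = rng.standard_normal(HISTORY * N_LANDMARKS)
predicted_face = forward(predictor, recent_frames)    # where the human's face is headed

motor_command  = rng.standard_normal(N_MOTORS)
imagined_face  = forward(self_model, motor_command)   # what Emo's face would look like

print(predicted_face.shape, imagined_face.shape)
```

In a real system the predictor would be trained on video (here, the YouTube footage) and the self-model on the robot's own camera feed of itself; the sketch only shows how the two forward passes divide the labor.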

Lipson and the research team expect that ultimately, Emo's technology will enhance and add realism to human-robot interactions. In addition to teaching the robot to copy human expressions, they hope to expand its expressive repertoire and teach it to respond to spoken cues.

Sam Draper
April 12, 2024

