Researchers have created a humanoid robot that can predict whether a person will smile a second before they do and mimic the smile on its own face.
Although artificial intelligence can now mimic human language to an impressive degree, interactions with physical robots often fall into the “uncanny valley”, in part because robots can’t replicate the complex non-verbal cues and mannerisms that are vital for communication.
Now, using high-resolution cameras and AI models, Hod Lipson of Columbia University in New York and his colleagues have developed a robot named Emo that can recognize and attempt to mimic human facial expressions. It can predict whether someone will smile about 0.9 seconds before they do, and smile in time with them. "I'm a jaded roboticist, but I smile back at this robot," says Lipson, TomorrowsWorldToday reports.
Emo is a robotic face with cameras in its eyes and pliable plastic skin attached by magnets to 24 individual motors. It uses two neural networks: one for analyzing and forecasting human facial expressions, and another for learning how to produce its own.
The first network was trained on YouTube footage of people making faces, while the second was taught by having the robot watch itself make expressions on a live camera feed.
“It learns what its face is going to look like when it’s going to pull all these muscles,” says Lipson. “It’s sort of like a person in front of a mirror, when even if you close your eyes and smile, you know what your face is going to look like.”
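The two-network design described above can be illustrated with a toy sketch. This is not the authors' code, and all names, values, and the linear models here are invented for illustration: a "predictor" stands in for the network that forecasts a person's upcoming expression, and a "self-model" stands in for the network Emo trained by watching itself in a camera feed, which the robot can invert to choose motor commands that reproduce a target expression.

```python
def predict_next_expression(history):
    """Toy predictor: forecast the next expression vector by linear
    extrapolation over the last two observed frames. (Stand-in for the
    network trained on YouTube footage of human faces.)"""
    prev, last = history[-2], history[-1]
    return [l + (l - p) for p, l in zip(prev, last)]

def self_model(motor_commands, gain=0.5):
    """Toy self-model: map motor commands to the expression they would
    produce. (Stand-in for the network Emo learned by watching its own
    face, like a person smiling in front of a mirror.)"""
    return [gain * m for m in motor_commands]

def motors_for(expression, gain=0.5):
    """Invert the toy self-model: choose motor commands expected to
    reproduce a target expression on the robot's face."""
    return [e / gain for e in expression]

# Usage: watch a smile starting to form, forecast its next frame,
# then pick motor commands so the robot smiles in time with the person.
history = [[0.0, 0.1], [0.1, 0.3]]        # e.g. mouth-corner features rising
target = predict_next_expression(history) # -> [0.2, 0.5]
commands = motors_for(target)             # -> [0.4, 1.0]
```

The split mirrors the article's description: one model anticipates the other person's face, the other gives the robot a learned sense of what its own motors will do, so the two can be chained to mimic an expression before it fully appears.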
Lipson and his team hope that Emo's technology will ultimately make human-robot interactions more realistic. Beyond teaching the robot to copy human expressions, they aim to expand its expressive repertoire and teach it to respond to spoken cues.