Researchers at Carnegie Mellon University's Human-Computer Interaction Institute have developed an AI-powered tool that lets users control AR/VR interfaces by touching their skin with a finger.
The team ultimately wanted to design a control that would provide tactile feedback using only the sensors that come standard with an AR/VR headset. OmniTouch, a previous method developed by Chris Harrison, an associate professor in the HCII and director of the Future Interfaces Group, came close, but it required a special, clunky, depth-sensing camera. Vimal Mollyn, a Ph.D. student advised by Harrison, had the idea of using a machine learning algorithm to recognize touch from ordinary camera footage, reports Charlotte Hu at Carnegie Mellon University.
"Try taking your finger and see what happens when you touch your skin with it. You'll notice that there are these shadows and local skin deformations that only occur when you're touching the skin," Mollyn said. "If we can see these, then we can train a machine learning model to do the same, and that's essentially what we did."
To gather training data for EgoTouch, Mollyn used a special touch sensor that ran along the palm and the underside of the index finger. Invisible to the camera, the sensor recorded different kinds of touch at varying forces. The model then learned, without human annotation, to associate skin deformations and shadows with touch and force. The team collected hours of data across a variety of scenarios, activities, and lighting conditions, and expanded the training dataset to include 15 participants with varying skin tones and hair densities.
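The sketch below illustrates that general idea in code: camera frames of the finger are paired with labels from the hidden touch sensor, and a small model learns to predict touch and force from the frames alone. It is purely illustrative, not the published pipeline; the random stand-in data, the tiny PyTorch network (here called TouchNet), and the thresholds are all assumptions, since the article does not describe the actual architecture or training details.

```python
# Illustrative sketch only: pair headset-camera crops of the fingertip with
# labels from a finger-mounted touch sensor, then train a model to predict
# touch and force from the camera frames alone.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

# Hypothetical stand-ins for real data.
frames = torch.rand(256, 3, 64, 64)          # camera crops around the fingertip
sensor_force = torch.rand(256)               # ground-truth force from the sensor
touch_labels = (sensor_force > 0.2).float()  # touching vs. hovering (assumed threshold)

class TouchNet(nn.Module):
    """Tiny CNN that predicts touch probability and press force from a frame crop."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.touch_head = nn.Linear(32, 1)  # is the finger in contact with the skin?
        self.force_head = nn.Linear(32, 1)  # how hard is the press?

    def forward(self, x):
        feats = self.features(x)
        return self.touch_head(feats).squeeze(1), self.force_head(feats).squeeze(1)

model = TouchNet()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loader = DataLoader(TensorDataset(frames, touch_labels, sensor_force), batch_size=32)

for epoch in range(3):
    for x, y_touch, y_force in loader:
        touch_logit, force_pred = model(x)
        # The sensor supplies the labels, so no human annotation is needed.
        loss = (nn.functional.binary_cross_entropy_with_logits(touch_logit, y_touch)
                + nn.functional.mse_loss(force_pred, y_force))
        opt.zero_grad()
        loss.backward()
        opt.step()
```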
EgoTouch can detect touch with more than 96% accuracy and has a false positive rate of around 5%. It recognizes pressing down, lifting up and dragging. The model can also classify whether a touch was light or hard with 98% accuracy.
"That can be really useful for having a right-click functionality on the skin," Mollyn said.
By detecting these variations in touch, developers could replicate touchscreen gestures on the skin. For instance, a smartphone can tell when you tap and hold an icon, zoom in, swipe right, or scroll up or down a page. To translate that into a skin-based interface, the camera must be able to distinguish the subtle differences in touch type and force.
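As a rough illustration of that last step, the sketch below maps one press-to-lift contact to a tap, tap-and-hold, or swipe/scroll. The function name, thresholds, and the (touching, x, y) sample format are hypothetical; the article does not describe how EgoTouch's interface layer actually classifies gestures.

```python
# Hypothetical gesture mapping: each sample is a (touching, x, y) tuple for
# one camera frame covering a single contiguous touch.
def classify_gesture(samples, hold_frames=30, drag_threshold=3.0):
    """Label one press-to-lift contact as a tap, tap-and-hold, or swipe/scroll."""
    contact = [(x, y) for touching, x, y in samples if touching]
    if not contact:
        return "none"
    (x0, y0), (x1, y1) = contact[0], contact[-1]
    travel = ((x1 - x0) ** 2 + (y1 - y0) ** 2) ** 0.5
    if travel > drag_threshold:
        return "swipe_or_scroll"   # finger moved while in contact
    if len(contact) >= hold_frames:
        return "tap_and_hold"      # held in place, e.g. a long-press / right-click
    return "tap"

# Example: 40 frames of the finger held in one spot reads as tap-and-hold.
print(classify_gesture([(True, 10.0, 10.0)] * 40))
```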
Accuracy was roughly the same across skin tones and hair densities, and at different locations on the hand and forearm, including the palm and the front and back of the arm. The system performed less well on bony areas such as the knuckles.
"It's probably because there wasn't as much skin deformation in those areas," Mollyn said. "As a user interface designer, what you can do is avoid placing elements on those regions."
Mollyn is exploring ways to use night vision cameras and nighttime illumination to enable the EgoTouch system to work in the dark. He's also collaborating with researchers to extend this touch-detection method to surfaces other than the skin.
"For the first time, we have a system that just uses a camera that is already in all the headsets. Our models are calibration free, and they work right out of the box," said Mollyn. "Now we can build off prior work on on-skin interfaces and actually make them real."