‘Earable’ computing: A new research area in the making

- Zhijian Yang, Yu-Lin Wei, Sheng Shen, Romit Roy Choudhury. Ear-AR: Indoor Acoustic Augmented Reality on Earphones. MobiCom ’20: Proceedings of the 26th Annual International Conference on Mobile Computing and Networking, 2020. DOI: 10.1145/3372224.3419213
- Jay Prakash, Zhijian Yang, Yu-Lin Wei, Haitham Hassanieh, Romit Roy Choudhury. EarSense: Earphones as a Teeth Activity Sensor. MobiCom ’20: Proceedings of the 26th Annual International Conference on Mobile Computing and Networking, 2020. DOI: 10.1145/3372224.3419197
- Sheng Shen, Daguan Chen, Yu-Lin Wei, Zhijian Yang, Romit Roy Choudhury. Voice Localization Using Nearby Wall Reflections. MobiCom ’20: Proceedings of the 26th Annual International Conference on Mobile Computing and Networking, 2020. DOI: 10.1145/3372224.3380884
“The leap from today’s earphones to ‘earables’ would mimic the transformation that we have seen from basic phones to smartphones,” said Romit Roy Choudhury, professor in electrical and computer engineering (ECE). “Today’s smartphones are hardly calling devices anymore, much like how tomorrow’s earables will hardly be a smartphone accessory.”
Instead, the group believes tomorrow’s earphones will continuously sense human behavior, run acoustic augmented reality, have Alexa and Siri whisper just-in-time information, track user motion and health, and offer seamless security, among many other capabilities.
The research questions that underlie earable computing draw from a wide range of fields, including sensing, signal processing, embedded systems, communications, and machine learning. The SyNRG team is on the forefront of developing new algorithms while also experimenting with them on real earphone platforms with live users.
Computer science PhD student Zhijian Yang and other members of the SyNRG group, including his fellow students Yu-Lin Wei and Liz Li, are leading the way. They have published a series of papers in this area, starting with one on the topic of hollow noise cancellation that was published at ACM SIGCOMM 2018. Recently, the group had three papers published at the 26th Annual International Conference on Mobile Computing and Networking (ACM MobiCom) on three different aspects of earables research: facial motion sensing, acoustic augmented reality, and voice localization for earphones.
The first of these papers, Ear-AR: Indoor Acoustic Augmented Reality on Earphones, explores acoustic augmented reality: earphones that play 3D sounds which appear to arrive from specific directions in the user’s surroundings. “If you want to find a store in a mall,” says Zhijian, “the earphone could estimate the relative location of the store and play a 3D voice that simply says ‘follow me.’ In your ears, the sound would appear to come from the direction in which you should walk, as if it’s a voice escort.”
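To make the idea concrete, here is a minimal sketch of how a mono voice cue can be made to sound as if it comes from a chosen direction. It is not the Ear-AR implementation; it simply applies a crude interaural time difference (ITD) and level difference (ILD), and the head radius and 6 dB attenuation figure are illustrative assumptions.

```python
# Minimal sketch (not the Ear-AR pipeline): place a mono cue at a given
# horizontal angle using a crude ITD/ILD approximation.
import numpy as np

def spatialize(mono, angle_deg, fs=44100, head_radius=0.0875, c=343.0):
    """Return a stereo signal that roughly places `mono` at `angle_deg`
    (0 = straight ahead, positive = to the listener's right)."""
    theta = np.radians(angle_deg)
    # Woodworth-style ITD approximation for a spherical head (assumed radius).
    itd = head_radius / c * (theta + np.sin(theta))        # seconds
    delay = int(round(abs(itd) * fs))                       # samples
    # Simple level difference: attenuate the far ear by up to ~6 dB.
    far_gain = 10 ** (-6.0 * abs(np.sin(theta)) / 20.0)

    near = mono
    far = np.concatenate([np.zeros(delay), mono])[: len(mono)] * far_gain
    if angle_deg >= 0:            # source on the right: right ear is near
        left, right = far, near
    else:
        left, right = near, far
    return np.stack([left, right], axis=1)

# Example: a 0.5 s tone placed 40 degrees to the user's left.
fs = 44100
t = np.arange(int(0.5 * fs)) / fs
cue = 0.3 * np.sin(2 * np.pi * 440 * t)
stereo = spatialize(cue, angle_deg=-40, fs=fs)
```

A production system would use proper head-related transfer functions and head tracking; the sketch only shows why knowing the store’s relative direction is enough to render a “follow me” voice from that direction.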
The second paper, EarSense: Earphones as a Teeth Activity Sensor, looks at how earphones could sense facial and in-mouth activities such as teeth movements and taps, enabling a hands-free way to communicate with smartphones. Moreover, various medical conditions manifest in teeth chatter, and the proposed technology would make it possible to identify them simply by wearing earphones during the day. The team also plans to explore sensing facial muscle movements and emotions with earphone sensors.
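As a rough illustration of the sensing idea, and not the EarSense algorithm itself, the sketch below flags candidate teeth-tap events in an in-ear vibration or accelerometer stream using short-time energy with an adaptive threshold. The frame length, threshold factor, and refractory period are illustrative assumptions.

```python
# Minimal sketch (not the EarSense pipeline): detect candidate teeth-tap
# events via short-time energy and a median + k*MAD threshold.
import numpy as np

def detect_taps(signal, fs, frame_ms=10, k=4.0, refractory_ms=150):
    """Return sample indices of frames whose energy exceeds
    median + k * MAD, enforcing a refractory gap between detections."""
    frame = max(1, int(fs * frame_ms / 1000))
    n_frames = len(signal) // frame
    energy = np.array([
        np.sum(signal[i * frame:(i + 1) * frame] ** 2) for i in range(n_frames)
    ])
    med = np.median(energy)
    mad = np.median(np.abs(energy - med)) + 1e-12
    threshold = med + k * mad

    taps, last = [], -np.inf
    refractory = int(fs * refractory_ms / 1000 / frame)     # in frames
    for i, e in enumerate(energy):
        if e > threshold and i - last >= refractory:
            taps.append(i * frame)                           # sample index
            last = i
    return taps
```

Distinguishing taps from chewing, speech, or footsteps is where the real research effort lies; a threshold detector like this only finds impulsive candidates for a classifier to examine.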
The third publication, Voice Localization Using Nearby Wall Reflections, investigates algorithms that infer the direction a voice is arriving from, aided by the voice’s reflections off nearby walls. This means that if Alice and Bob are having a conversation, Bob’s earphones would be able to tune into the direction Alice’s voice is coming from.
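The basic building block behind such systems is estimating the time difference with which sound reaches two microphones. The sketch below uses the standard GCC-PHAT method and a far-field assumption; the paper itself goes much further by exploiting wall reflections, and the microphone spacing here is an illustrative assumption.

```python
# Minimal sketch: direction-of-arrival from a two-microphone time delay
# using GCC-PHAT. Not the paper's method, which also uses wall reflections.
import numpy as np

def gcc_phat(x, y, fs, max_tau=None):
    """Estimate the time delay (seconds) of y relative to x via GCC-PHAT."""
    n = len(x) + len(y)
    X = np.fft.rfft(x, n=n)
    Y = np.fft.rfft(y, n=n)
    R = X * np.conj(Y)
    cc = np.fft.irfft(R / (np.abs(R) + 1e-12), n=n)   # phase transform
    max_shift = n // 2
    if max_tau is not None:
        max_shift = min(int(fs * max_tau), max_shift)
    cc = np.concatenate((cc[-max_shift:], cc[:max_shift + 1]))
    shift = np.argmax(np.abs(cc)) - max_shift
    return shift / fs

def doa_degrees(tau, mic_spacing=0.15, c=343.0):
    """Convert a time delay into a bearing, assuming a far-field source."""
    return np.degrees(np.arcsin(np.clip(tau * c / mic_spacing, -1.0, 1.0)))
```

With only two microphones this yields a bearing with front-back ambiguity; using reflections from nearby walls, as the paper proposes, is one way to resolve the talker’s actual location.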
“We’ve been working on mobile sensing and computing for 10 years,” said Wei. “We have a lot of experience to draw on as we define this emerging landscape of earable computing.”
Haitham Hassanieh, assistant professor in ECE, is also involved in this research. The team has been funded by NSF and NIH, as well as by companies such as Nokia and Google.