UbiComp’18: vocal resonance as a biometric

At the Joint Conference on Pervasive and Ubiquitous Computing (UbiComp), David Kotz presented THaW’s work on a novel biometric approach to identifying and verifying who is wearing a device – an important consideration for a medical device that may collect diagnostic information destined for an electronic health record. The approach uses vocal resonance, i.e., the sound of your voice as it passes through your bones and tissues, to let a device recognize its wearer and verify that it is physically in contact with the wearer… not just nearby. The team implemented the method on a wearable-class computing device and showed high accuracy and low energy consumption.

Rui Liu, Cory Cornelius, Reza Rawassizadeh, Ron Peterson, and David Kotz. Vocal Resonance: Using Internal Body Voice for Wearable Authentication. Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies (IMWUT) (UbiComp), 2(1), March 2018. DOI 10.1145/3191751.

Abstract: We observe the advent of body-area networks of pervasive wearable devices, whether for health monitoring, personal assistance, entertainment, or home automation. For many devices, it is critical to identify the wearer, allowing sensor data to be properly labeled or personalized behavior to be properly achieved. In this paper we propose the use of vocal resonance, that is, the sound of the person’s voice as it travels through the person’s body – a method we anticipate would be suitable for devices worn on the head, neck, or chest. In this regard, we go well beyond the simple challenge of speaker recognition: we want to know who is wearing the device. We explore two machine-learning approaches that analyze voice samples from a small throat-mounted microphone and allow the device to determine whether (a) the speaker is indeed the expected person, and (b) the microphone-enabled device is physically on the speaker’s body. We collected data from 29 subjects, demonstrate the feasibility of a prototype, and show that our DNN method achieved balanced accuracy 0.914 for identification and 0.961 for verification by using an LSTM-based deep-learning model, while our efficient GMM method achieved balanced accuracy 0.875 for identification and 0.942 for verification.
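For readers curious how such a pipeline might look in code, here is a minimal, hypothetical sketch of the GMM-style approach: enroll a wearer by fitting a Gaussian mixture model to MFCC features extracted from body-conducted voice samples, then verify later samples by thresholding the model’s average log-likelihood. This is not the authors’ implementation; the feature parameters, model size, and threshold are illustrative assumptions, and the sketch assumes the librosa and scikit-learn libraries.

```python
# Illustrative sketch only -- not the paper's pipeline or parameters.
# Assumes librosa (MFCC extraction) and scikit-learn (GaussianMixture).
import librosa
from sklearn.mixture import GaussianMixture

def mfcc_features(signal, sr):
    """Frame-level MFCC features, shaped (frames, coefficients)."""
    mfcc = librosa.feature.mfcc(y=signal, sr=sr, n_mfcc=20)
    return mfcc.T

def enroll(wearer_signal, sr, n_components=16):
    """Fit a GMM to the enrolled wearer's body-conducted voice."""
    gmm = GaussianMixture(n_components=n_components, covariance_type="diag")
    gmm.fit(mfcc_features(wearer_signal, sr))
    return gmm

def verify(gmm, test_signal, sr, threshold=-45.0):
    """Accept if the mean per-frame log-likelihood exceeds a threshold.
    The threshold here is purely illustrative; in practice it would be
    tuned on held-out wearer and non-wearer samples."""
    score = gmm.score(mfcc_features(test_signal, sr))  # mean log-likelihood
    return score > threshold, score
```

Since the paper reports results as balanced accuracy, a real deployment would tune the decision threshold on held-out data to balance false accepts and false rejects, rather than using a fixed value as above.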


About David Kotz

David Kotz is the Provost, the Pat and John Rosenwald Professor in the Department of Computer Science, and the Director of Emerging Technologies and Data Analytics in the Center for Technology and Behavioral Health, all at Dartmouth College. He previously served as Associate Dean of the Faculty for the Sciences and as the Executive Director of the Institute for Security Technology Studies. His research interests include security and privacy in smart homes, pervasive computing for healthcare, and wireless networks. He has published over 240 refereed papers, obtained $89m in grant funding, and mentored nearly 100 research students. He is an ACM Fellow, an IEEE Fellow, a 2008 Fulbright Fellow to India, a 2019 Visiting Professor at ETH Zürich, and an elected member of Phi Beta Kappa. He received his AB in Computer Science and Physics from Dartmouth in 1986, and his PhD in Computer Science from Duke University in 1991.
