IoT Two-Factor Neurometric Authentication

Angel Rodriguez, Sara Rampazzi, and Kevin Fu recently had a poster accepted titled IoT Two-Factor Neurometric Authentication System using Wearable EEG:

Abstract: The IoT authentication space suffers from various user-side drawbacks, such as poor password choice, the accidental publication of biometric data, and the practice of disabling authentication completely. This is commonly attributed to the “Security vs Usability” problem – generally, the stronger the authentication, the more inconvenient it is for the user to perform and maintain. Neurometric authentication offers compelling resistance to eavesdropping and replay attacks, and the ability for a user to simply “think to unlock”. Furthermore, the recent increase in popularity of consumer EEG devices, as well as new research demonstrating their accuracy, has made EEG-based neurometric authentication much more viable.

Using a Support Vector Machine and one-time tokens, we present a secure two-factor authentication method that allows a user to authenticate to multiple IoT devices. We perform preliminary trials on the PhysioNet BCI dataset and demonstrate a qualitative comparison of extracted EEG feature sets.

Left: IoT two-factor authentication scheme – (1) After internal user-thought authentication, the wearable device securely sends a one-time token to the IoT device. (2) The IoT device securely communicates with a server to verify the token. (3) If the token is verified, the server sends a secure confirmation reply to the IoT device, authenticating the user. Right: Proof of concept using the PhysioNet BCI dataset – The top row shows the averaged covariance matrices of the extracted features of two different users thinking about the same mental task (imagining closing their fists). The bottom row shows similar features for one user thinking of two different tasks (imagining closing both fists vs. both feet).
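The token flow in the caption can be sketched in a few lines, assuming an HOTP-style HMAC construction (the poster does not specify how tokens are generated, so `SHARED_KEY`, `issue_token`, and `server_verify` are hypothetical names, and a real deployment would provision the key securely and carry the token over an authenticated channel):

```python
import hashlib
import hmac
import secrets

# Hypothetical shared secret provisioned between the wearable and the server.
SHARED_KEY = secrets.token_bytes(32)

def issue_token(counter: int) -> bytes:
    """Step 1: after the wearable's local EEG check passes, derive a
    one-time token from the shared key and a monotonic counter."""
    return hmac.new(SHARED_KEY, counter.to_bytes(8, "big"), hashlib.sha256).digest()

def server_verify(token: bytes, counter: int) -> bool:
    """Steps 2-3: the IoT device forwards the token to the server, which
    recomputes it and replies with a confirmation only if it matches."""
    expected = hmac.new(SHARED_KEY, counter.to_bytes(8, "big"), hashlib.sha256).digest()
    return hmac.compare_digest(token, expected)

counter = 1
token = issue_token(counter)
print(server_verify(token, counter))      # fresh token is accepted
print(server_verify(token, counter + 1))  # stale token replayed later is rejected
```

Because each token is bound to a fresh counter value, an eavesdropper who captures one token cannot replay it in a later session.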

Proceedings of the IEEE Workshop on the Internet of Safe Things (SafeThings), May 2019. Accepted, publication pending.


Ubicomp’18: vocal resonance as a biometric

At the ACM Joint Conference on Pervasive and Ubiquitous Computing (UbiComp), David Kotz presented THaW’s work to develop a novel biometric approach to identifying and verifying who is wearing a device – an important consideration for a medical device that may be collecting diagnostic information that is fed into an electronic health record. Their novel approach is to use vocal resonance, i.e., the sound of your voice as it passes through bones and tissues, for a device to recognize its wearer and verify that it is physically in contact with the wearer… not just nearby. They implemented the method on a wearable-class computing device and showed high accuracy and low energy consumption.

Rui Liu, Cory Cornelius, Reza Rawassizadeh, Ron Peterson, and David Kotz. Vocal Resonance: Using Internal Body Voice for Wearable Authentication. Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies (IMWUT) (UbiComp), 2(1), March 2018. DOI 10.1145/3191751.

Abstract: We observe the advent of body-area networks of pervasive wearable devices, whether for health monitoring, personal assistance, entertainment, or home automation. For many devices, it is critical to identify the wearer, allowing sensor data to be properly labeled or personalized behavior to be properly achieved. In this paper we propose the use of vocal resonance, that is, the sound of the person’s voice as it travels through the person’s body – a method we anticipate would be suitable for devices worn on the head, neck, or chest. In this regard, we go well beyond the simple challenge of speaker recognition: we want to know who is wearing the device. We explore two machine-learning approaches that analyze voice samples from a small throat-mounted microphone and allow the device to determine whether (a) the speaker is indeed the expected person, and (b) the microphone-enabled device is physically on the speaker’s body. We collected data from 29 subjects, demonstrate the feasibility of a prototype, and show that our DNN method achieved balanced accuracy 0.914 for identification and 0.961 for verification by using an LSTM-based deep-learning model, while our efficient GMM method achieved balanced accuracy 0.875 for identification and 0.942 for verification.
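As a rough illustration of the verification step (not the paper’s actual GMM or LSTM models), one can fit a simple Gaussian per feature dimension to an enrolled wearer’s voice features and accept a new sample only if it scores above a threshold; all names, feature values, and the threshold below are hypothetical:

```python
import math

# Hypothetical enrollment data: rows are voice-feature vectors for one wearer.
enrollment = [[1.0, 2.0], [1.1, 2.1], [0.9, 1.9]]

def fit_gaussians(samples):
    """Fit an independent Gaussian per feature dimension (a crude stand-in
    for a GMM over acoustic features)."""
    params = []
    for dim in zip(*samples):
        mu = sum(dim) / len(dim)
        var = sum((x - mu) ** 2 for x in dim) / len(dim) or 1e-6  # avoid zero variance
        params.append((mu, var))
    return params

def log_likelihood(params, x):
    """Log-probability of feature vector x under the per-dimension Gaussians."""
    return sum(
        -0.5 * math.log(2 * math.pi * var) - (xi - mu) ** 2 / (2 * var)
        for (mu, var), xi in zip(params, x)
    )

def verify(params, x, threshold=-10.0):
    """Accept x as the enrolled wearer if it scores above a (hypothetical)
    threshold that would normally be tuned on held-out data."""
    return log_likelihood(params, x) >= threshold

params = fit_gaussians(enrollment)
print(verify(params, [1.0, 2.0]))   # genuine-looking sample -> True
print(verify(params, [5.0, -3.0]))  # impostor-looking sample -> False
```

The paper’s balanced-accuracy numbers come from far richer models and a real threshold-tuning procedure; this sketch only shows the shape of the accept/reject decision.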

WearSys papers, MobiSys posters

THaW researchers are showing off some cool research at this week’s MobiSys conference in Niagara Falls, with three papers at MobiSys workshops and a poster in the poster session.

  • Aarathi Prasad and David Kotz. ENACT: Encounter-based Architecture for Contact Tracing. In ACM Workshop on Physical Analytics (WPA), pages 37-42, June 2017. ACM Press. DOI 10.1145/3092305.3092310.
  • Rui Liu, Reza Rawassizadeh, and David Kotz. Toward Accurate and Efficient Feature Selection for Speaker Recognition on Wearables. In Proceedings of the ACM Workshop on Wearable Systems and Applications (WearSys), pages 41-46, 2017. ACM Press. DOI 10.1145/3089351.3089352.
  • Rui Liu, Cory Cornelius, Reza Rawassizadeh, Ron Peterson, and David Kotz. Poster: Vocal Resonance as a Passive Biometric. In Proceedings of the ACM International Conference on Mobile Systems, Applications, and Services (MobiSys), page 160, 2017. ACM Press. DOI 10.1145/3081333.3089304.
  • Xiaohui Liang and David Kotz. AuthoRing: Wearable User-presence Authentication. In Proceedings of the ACM Workshop on Wearable Systems and Applications (WearSys), pages 5-10, 2017. ACM Press. DOI 10.1145/3089351.3089357.

ZEBRA press

THaW’s article about Zero-Effort Bilateral Recurring Authentication (ZEBRA) triggered a lot of press coverage, including Communications of the ACM (CACM), VICE Motherboard, Dartmouth Now, Gizmag, The Register UK, Planet Biometrics*, Computer Business Review*, Fierce Health IT, Daily Science News, Senior Tech Insider, Homeland Security Newswire, and NFC World. They’re all intrigued by ZEBRA’s ability to continuously authenticate the user of a desktop terminal and to log them out if they leave or if someone else steps in to use the keyboard. Some(*) mistakenly believe our ZEBRA method uses biometrics; quite the contrary, ZEBRA is designed to be user-agnostic and thus requires no per-user training period. (ZEBRA correlates the bracelet wearer’s movements with the keyboard and mouse movements, not with a prior model of the wearer’s movements, as methods built on behavioral biometrics do.) ZEBRA could be combined with biometric authentication of the wearer to the bracelet, and with other methods of initial authentication of the wearer to the system (such as username/password, or fingerprints), making it an extremely versatile tool that adds strength to existing approaches. The Dartmouth THaW team continues to refine ZEBRA. [Note: since the time this paper was published we have learned of a relevant trademark on the name “Zebra”. Thus, we have renamed our approach “BRACE” and will use that name in future publications.]
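The user-agnostic core of the idea can be sketched as follows, with hypothetical event labels and a made-up threshold: the interaction sequence inferred from the bracelet’s motion is compared interval-by-interval against the input events the terminal actually observes, and no per-user model is involved:

```python
def interaction_match(wrist_events, input_events, threshold=0.7):
    """ZEBRA-style check (simplified): keep the session authenticated while
    the fraction of intervals where the bracelet-inferred interaction agrees
    with the terminal's observed interaction stays above a threshold."""
    matches = sum(w == k for w, k in zip(wrist_events, input_events))
    return matches / len(input_events) >= threshold

# Interaction labels inferred from the bracelet vs. observed at the terminal.
wearer_typing = ["typing", "typing", "scrolling", "idle", "typing"]
terminal      = ["typing", "typing", "scrolling", "idle", "typing"]
bystander_arm = ["idle", "scrolling", "idle", "typing", "idle"]

print(interaction_match(wearer_typing, terminal))  # wearer is the typist -> True
print(interaction_match(bystander_arm, terminal))  # someone else typing -> False
```

Because the comparison is between two concurrent event streams rather than against a stored profile, any wearer’s bracelet works immediately, which is exactly why ZEBRA needs no training period.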

Photo: a Shimmer device worn on one wrist, with that hand using a mouse and the other hand using a keyboard.

Our experiments used the Shimmer research device, though in principle ZEBRA could work with any fitness band.