LightTouch – Connecting Wearables to Ambient Displays

Connectivity has reached new extremes as wearable technologies bring smart-device communication to where analogue watches, rings, and vision-enhancing glasses used to sit. The risk of sensitive data being wrongly transmitted, whether through malicious or accidental means, grows alongside these new technologies. To ensure that this continued interconnection of smart devices and wearables is safe and secure, the THaW team devised, published, and patented LightTouch. The technology, conceptually compatible with existing smart-bracelet and display designs, uses optical sensors on the smart device and a digital radio link to create a shared secret key that enables a secure, private connection between the devices.

LightTouch makes it easy for a person to securely connect their wearable device to a computerized device they encounter, so they can view information from their wearable and possibly share that information with nearby acquaintances. To learn more, check out this recent Spotlight in IEEE Computer, or click the links below to read the journal article, the patent specifics, or the conference presentation.
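To make the pairing idea concrete, here is a minimal sketch of the general technique: a display emits a random brightness pattern, the wearable observes that same pattern with its light sensor, and both endpoints derive a symmetric key from the shared observation. All names, parameters, and the key-derivation choice below are illustrative assumptions, not the published LightTouch protocol.

```python
import hashlib
import hmac

def derive_key(brightness_samples, salt=b"lighttouch-demo"):
    """Derive a 256-bit key from a shared sequence of brightness samples.

    Hypothetical sketch: a real protocol would also handle sensor noise,
    sampling misalignment, and authentication of the key agreement.
    """
    observation = bytes(brightness_samples)  # samples assumed in 0..255
    return hashlib.pbkdf2_hmac("sha256", observation, salt, 100_000)

# The display generates and flashes a random brightness pattern...
pattern = [12, 200, 45, 99, 180, 33, 7, 250]

# ...and both endpoints, having observed the same pattern (the display
# because it produced it, the wearable via its light sensor), derive
# the same secret key without sending it over the radio.
display_key = derive_key(pattern)
wearable_key = derive_key(pattern)
assert display_key == wearable_key

# The shared key can then authenticate traffic on the radio link.
tag = hmac.new(display_key, b"hello wearable", "sha256").hexdigest()
```

The point of the sketch is that the optical channel acts as an out-of-band source of shared secrecy, so an eavesdropper on the radio alone never sees the key material.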


Xiaohui Liang, Ronald Peterson, and David Kotz. Securely Connecting Wearables to Ambient Displays with User Intent. IEEE Transactions on Dependable and Secure Computing 17(4), pages 676–690, July 2020. IEEE. DOI: 10.1109/TDSC.2018.2840979

Xiaohui Liang, Tianlong Yun, Ron Peterson, and David Kotz. Secure System for Coupling Wearable Devices to Computerized Devices with Displays. U.S. Patent 10,581,606, USPTO, March 2020. Download from https://patents.google.com/patent/US20170279612A1/en (priority date 2014-08-18; grant date 2020-03-03).

Xiaohui Liang, Tianlong Yun, Ronald Peterson, and David Kotz. LightTouch: Securely Connecting Wearables to Ambient Displays with User Intent. In IEEE International Conference on Computer Communications (INFOCOM), May 2017. IEEE. DOI: 10.1109/INFOCOM.2017.8057210

#NSFStories

Two faces of Mobile Sensing

A PhD dissertation from a recent THaW graduate.

The recent popularization of mobile devices equipped with high-performance sensors has spurred the fast development of mobile sensing technology. Mobile sensing applications, such as gesture recognition, vital-sign monitoring, localization, and identification, analyze the signals generated by human activities and environmental changes, and thus gain a better understanding of the environment and human behavior. While it benefits people’s lives, the growing capability of mobile sensing also spawns new threats to security and privacy. On one hand, although the commercialization of new mobile devices enlarges the design space, it is challenging to design effective mobile sensing systems that use fewer or cheaper sensors while achieving better performance or more functionality. On the other hand, attackers can use the same sensing strategies to track victims’ activities and cause privacy leakage. Mobile sensing attacks usually exploit side channels and target the information hidden in non-textual data. I present the Mobile Sensing Application-Attack (MSAA) framework, a general model showing how mobile sensing applications and attacks are structured, and how the two faces, the benefits and the threats, are connected. MSAA reflects our principles for designing effective mobile sensing systems and for exploring information leakage. Our experimental results show that our applications achieve satisfactory performance, and also confirm the threat of privacy leakage if they are maliciously used, revealing the two faces of mobile sensing.

Tuo Yu. Two faces of Mobile Sensing. PhD thesis, University of Illinois at Urbana-Champaign, May 2020. Download from http://hdl.handle.net/2142/107938

Mobile devices based eavesdropping of handwriting

Recent THaW paper:

When filling out privacy-related forms in public places such as hospitals or clinics, people usually are not aware that the sound of their handwriting can leak personal information. In this paper, we explore the possibility of eavesdropping on handwriting via nearby mobile devices, based on audio signal processing and machine learning. With a proof-of-concept system, WritingHacker, we show how mobile devices can collect the sound of a victim’s handwriting and extract handwriting-specific features for machine-learning-based analysis. An attacker can keep a mobile device, such as a common smartphone, touching the desk used by the victim to record the audio signals of handwriting. The system can then provide a word-level estimate of the handwriting’s content. Moreover, if the relative position between the device and the handwriting is known, a hand-motion-tracking method can be applied to further enhance the system’s performance. Our prototype’s experimental results show that word-recognition accuracy reaches around 70–80 percent under certain conditions, which reveals the danger of privacy leakage through the sound of handwriting.

Tuo Yu, Haiming Jin, and Klara Nahrstedt. Mobile devices based eavesdropping of handwriting. IEEE Transactions on Mobile Computing 19(7), pages 1649–1663, July 2020. IEEE. DOI: 10.1109/TMC.2019.2912747