Light Commands: Laser-Based Audio Injection Attacks on Voice-Controllable Systems

A new THaW paper was presented at USENIX Security last week. It describes using a laser, from a distance of up to 110 meters, to stimulate the microphones in smart speakers and thereby inject audio commands that the devices accept as coming from a legitimate user. The paper also proposes techniques for mitigating this vulnerability.
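The core of the attack is that the laser's intensity is modulated with the spoken-command audio signal, which the target's microphone then converts back into an electrical signal. The sketch below is an illustrative (not the authors') Python rendering of that modulation step: it amplitude-modulates a normalized audio waveform onto a non-negative laser drive level. The function name, bias level, and modulation depth are assumptions for illustration only.

```python
import numpy as np

def am_modulate(audio, depth=0.8):
    """Amplitude-modulate a normalized audio signal onto a DC laser drive
    level (illustrative sketch; real hardware needs a laser driver circuit).

    The result is a non-negative intensity envelope, since light intensity
    cannot go below zero."""
    audio = np.asarray(audio, dtype=float)
    peak = np.max(np.abs(audio))
    if peak > 0:
        audio = audio / peak  # normalize to [-1, 1]
    # DC bias of 1.0 plus audio-proportional modulation, depth < 1
    # keeps the drive strictly positive.
    return 1.0 + depth * audio

# Example: modulate a 1 kHz tone sampled at 44.1 kHz.
fs = 44_100
t = np.arange(fs) / fs
tone = np.sin(2 * np.pi * 1000 * t)
drive = am_modulate(tone)
```

With `depth=0.8`, the drive signal stays within [0.2, 1.8] around the DC bias, so the laser never switches fully off while still carrying the full audio waveform.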

Takeshi Sugawara, Benjamin Cyr, Sara Rampazzi, Daniel Genkin, and Kevin Fu. Light Commands: Laser-Based Audio Injection Attacks on Voice-Controllable Systems. In Proceedings of the USENIX Security Symposium (USENIX Security), pages 2631–2648, August 2020. USENIX Association.

Paper and video presentation at https://www.usenix.org/conference/usenixsecurity20/presentation/sugawara 

With a Laser, Researchers Say They Can Hack Alexa, Google Home or Siri

The New York Times just posted a story about this startling new research from Kevin Fu and his group, funded in part by THaW. The research team posted more details here.


The Evolving Cyberthreat to Privacy

THaW’s A.J. Burns and Eric Johnson recently published a piece in IT Professional:

ABSTRACT: Cyberthreats create unique risks for organizations and individuals, especially regarding breaches of personally identifiable information (PII). However, relatively little research has examined hacking's distinct impact on privacy. The authors analyze cyber breaches of PII and find that they are significantly larger than other breaches, showing that past breaches are useful for predicting future breaches.
IT Professional, vol. 20, no. 3, May/June 2018.