Angel Rodriguez, Sara Rampazzi, and Kevin Fu recently had a poster accepted titled IoT Two-Factor Neurometric Authentication System using Wearable EEG:
Abstract: The IoT authentication space suffers from various user-sided drawbacks, such as poor password choice, the accidental publication of biometric data, and the practice of disabling authentication completely. This is commonly attributed to the “Security vs Usability” problem – generally, the stronger the authentication, the more inconvenient it is to perform and maintain for the user. Neurometric authentication offers a compelling resistance to eavesdropping and replay attacks, and the ability for a user to simply “think to unlock”. Furthermore, the recent increase in popularity of consumer EEG devices, as well as new research demonstrating its accuracy, have made EEG-based neurometric authentication much more viable.
Using a Support Vector Machine and one-time tokens, we present a secure two-factor authentication method that allows a user to authenticate to multiple IoT devices. We perform preliminary trials on the PhysioNet BCI dataset and demonstrate a qualitative comparison of extracted EEG feature sets.
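The abstract does not detail how the classifier is trained, so the following is only a rough illustration of the SVM component: a minimal bias-free linear SVM trained with Pegasos-style subgradient descent on entirely synthetic two-user "feature vectors" (the real system would use covariance-based EEG features and possibly a kernel).

```python
import random

def train_linear_svm(X, y, lam=0.01, epochs=200, seed=0):
    """Train a bias-free linear SVM with Pegasos-style subgradient descent.
    X: list of feature vectors; y: labels in {-1, +1}."""
    rng = random.Random(seed)
    w = [0.0] * len(X[0])
    t = 0
    for _ in range(epochs):
        order = list(range(len(X)))
        rng.shuffle(order)
        for i in order:
            t += 1
            eta = 1.0 / (lam * t)
            margin = y[i] * sum(wj * xj for wj, xj in zip(w, X[i]))
            w = [(1.0 - eta * lam) * wj for wj in w]   # regularization shrink
            if margin < 1.0:                           # hinge-loss subgradient step
                w = [wj + eta * y[i] * xj for wj, xj in zip(w, X[i])]
    return w

def predict(w, x):
    """Decide which user a feature vector belongs to."""
    return 1 if sum(wj * xj for wj, xj in zip(w, x)) >= 0 else -1

# Toy stand-ins for per-user EEG feature vectors (invented numbers)
X = [[2.0, 2.2], [1.8, 2.5], [2.1, 1.9],        # user A
     [-2.0, -1.8], [-2.2, -2.1], [-1.9, -2.3]]  # user B
y = [1, 1, 1, -1, -1, -1]
w = train_linear_svm(X, y)
```

In the envisioned system, a successful "think to unlock" classification would then trigger the one-time-token exchange described in the figure caption.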
Left: IoT two-factor authentication scheme – (1) After internal user-thought authentication, the wearable securely sends a one-time token to the IoT device. (2) The IoT device securely communicates with a server to verify the token. (3) If the token is verified, the server sends a secure confirmation reply to the IoT device, authenticating the user. Right: Proof of concept using the PhysioNet BCI dataset – The top row shows the averaged covariance matrices of the extracted features of two different users thinking about the same mental task (imagining closing both fists). The bottom row shows similar features for one user thinking of two different tasks (imagining closing both fists vs. moving both feet).
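The three-step token flow in the caption can be sketched in a few lines. The abstract does not specify how the tokens are constructed, so an HOTP-style HMAC over a monotonically increasing counter is assumed here, and the "securely sends/communicates" transport steps are elided.

```python
import hmac
import hashlib

def one_time_token(shared_key: bytes, counter: int) -> str:
    """HOTP-style token: truncated HMAC-SHA256 over a message counter."""
    msg = counter.to_bytes(8, "big")
    return hmac.new(shared_key, msg, hashlib.sha256).hexdigest()[:8]

class Server:
    """Verifies tokens; each counter value may be accepted at most once."""
    def __init__(self, shared_key: bytes):
        self.key = shared_key
        self.last_counter = -1

    def verify(self, token: str, counter: int) -> bool:
        if counter <= self.last_counter:          # replayed or stale token
            return False
        expected = one_time_token(self.key, counter)
        if hmac.compare_digest(expected, token):  # constant-time comparison
            self.last_counter = counter
            return True
        return False

# (1) Wearable, after a successful "think to unlock", mints a token...
key = b"provisioned-shared-secret"
token = one_time_token(key, counter=1)
# (2) IoT device forwards (token, counter) to the server; (3) server replies.
server = Server(key)
first = server.verify(token, 1)   # fresh token: accepted
replay = server.verify(token, 1)  # same token replayed: rejected
```

The counter check is what gives the scheme its replay resistance: even an eavesdropper who captures a valid token cannot reuse it.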
Proceedings of the IEEE Workshop on the Internet of Safe Things (SafeThings), May 2019. Accepted, publication pending.
Professor Avi Rubin testified at a Maryland State Senate Finance Committee hearing on February 26, 2019, regarding a bill about IoT security. Below are his remarks.
My name is Avi Rubin, and I am a full professor of Computer Science at Johns Hopkins University and Technical Director of our Information Security Institute. I am also the Founder and Chief Scientist of Harbor Labs, a Maryland cybersecurity company that has developed an IoT security analysis product. I have been an active researcher in the area of computer and network security since 1992. The primary focus of my research is security for the Internet of Things (IoT security). These are the types of connected devices that are addressed in SB 553.
This one-hour talk by David Kotz was presented at ARM Research in Austin, TX at the end of January 2019. The first half covers some recent THaW research about Wanda and SNAP and the second half lays out some security challenges in the Internet of Things. Watch the video below.
Abstract: The homes, offices, and vehicles of tomorrow will be embedded with numerous “Smart Things,” networked with each other and with the Internet. Many of these Things interact with their environment, with other devices, and with human users – and yet most of their communications occur invisibly via wireless networks. How can users express their intent about which devices should communicate – especially in situations when those devices have never encountered each other before? We present our work exploring novel combinations of physical proximity and user interaction to ensure user intent in establishing and securing device interactions.
What happens when an occupant moves out or transfers ownership of her Smart Environment? How does an occupant identify and decommission all the Things in an environment before she moves out? How does a new occupant discover, identify, validate, and configure all the Things in the environment he adopts? When a person moves from smart home to smart office to smart hotel, how is a new environment vetted for safety and security, how are personal settings migrated, and how are they securely deleted on departure? When the original vendor of a Thing (or the service behind it) disappears, how can that Thing (and its data, and its configuration) be transferred to a new service provider? What interface can enable lay people to manage these complex challenges, and be assured of their privacy, security, and safety? We present a list of key research questions to address these important challenges.
Abstract: Providing secure communications between wireless devices that encounter each other on an ad-hoc basis is a challenge that has not yet been fully addressed. In these cases, close physical proximity among devices that have never shared a secret key is sometimes used as a basis of trust; devices in close proximity are deemed trustworthy while more distant devices are viewed as potential adversaries. Because radio waves are invisible, however, a user may believe a wireless device is communicating with a nearby device when in fact the user’s device is communicating with a distant adversary. Researchers have previously proposed methods for multi-antenna devices to ascertain physical proximity with other devices, but devices with a single antenna, such as those commonly used in the Internet of Things, cannot take advantage of these techniques.
We present a theoretical and practical evaluation of a method called SNAP – SiNgle Antenna Proximity – that allows a single-antenna Wi-Fi device to quickly determine proximity with another Wi-Fi device. Our proximity detection technique leverages the repeating nature of Wi-Fi’s preamble and the behavior of a signal in a transmitting antenna’s near-field region to detect proximity with high probability; SNAP never falsely declares proximity at ranges longer than 14 cm.
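As a toy illustration only (the paper's actual detector, features, and threshold are not reproduced here), a decision rule of this general shape might threshold how much the received amplitude fluctuates across the preamble's repeated segments, on the hypothesis that near-field reception shows much larger variation than far-field reception:

```python
def preamble_variation(amplitudes):
    """Coefficient of variation of per-repetition preamble amplitudes
    (hypothetical statistic, not the paper's)."""
    mean = sum(amplitudes) / len(amplitudes)
    var = sum((a - mean) ** 2 for a in amplitudes) / len(amplitudes)
    return (var ** 0.5) / mean

def is_proximate(amplitudes, threshold=0.2):
    """Declare proximity when the repeated preamble segments fluctuate
    more than a far-field signal plausibly would. Threshold is invented."""
    return preamble_variation(amplitudes) > threshold

# Invented amplitude readings across four preamble repetitions
near = is_proximate([1.0, 1.6, 0.7, 1.4])    # strong fluctuation
far = is_proximate([1.0, 1.02, 0.99, 1.01])  # nearly constant
```

The attraction of such a rule for IoT is that it needs only one antenna and a handful of samples from a single packet's preamble.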
Proceedings of the ACM International Conference on Mobile Computing and Networking (MobiCom), October 2019. ACM Press. Accepted for publication. DOI 10.1145/3300061.3300120.
A medical specialty indicates the skills needed by health care providers to conduct key procedures or make critical judgments. However, documentation about specialties may be lacking or inaccurately specified in a health care institution. Thus, we propose to leverage diagnosis histories to recognize medical specialties that exist in practice. Such specialties that are highly recognizable through diagnosis histories are de facto diagnosis specialties. We aim to recognize de facto diagnosis specialties that are listed in the Health Care Provider Taxonomy Code Set (HPTCS) and discover those that are unlisted. First, to recognize the former, we use similarity and supervised learning models. Next, to discover de facto diagnosis specialties unlisted in the HPTCS, we introduce a general discovery-evaluation framework. In this framework, we use a semi-supervised learning model and an unsupervised learning model, from which the discovered specialties are subsequently evaluated by the similarity and supervised learning models used in recognition. To illustrate the potential for these approaches, we collect 2 data sets of 1 year of diagnosis histories from a large academic medical center: one is a subset of the other but includes additional information useful for network analysis. The results indicate that 12 core de facto diagnosis specialties listed in the HPTCS are highly recognizable. Additionally, the semi-supervised learning model discovers a specialty for breast cancer on the smaller data set based on network analysis, while the unsupervised learning model confirms this discovery and suggests an additional specialty for obesity on the larger data set. The potential correctness of these 2 specialties is reinforced by the evaluation results, which show that both are highly recognizable by the similarity and supervised learning models, in comparison with the 12 core de facto diagnosis specialties listed in the HPTCS.
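As a rough sketch of the similarity-model idea (the paper's actual features and models are not reproduced here), a provider's diagnosis-count profile can be matched against per-specialty centroid profiles by cosine similarity. The codes and counts below are invented:

```python
from math import sqrt

def cosine(u, v):
    """Cosine similarity between sparse count vectors stored as dicts."""
    dot = sum(u[k] * v.get(k, 0) for k in u)
    nu = sqrt(sum(x * x for x in u.values()))
    nv = sqrt(sum(x * x for x in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

def recognize_specialty(provider, centroids):
    """Return the specialty whose centroid diagnosis profile best matches
    the provider's own diagnosis-history profile."""
    return max(centroids, key=lambda s: cosine(provider, centroids[s]))

# Hypothetical ICD-style diagnosis counts averaged per specialty
centroids = {
    "cardiology": {"I21": 40, "I50": 30, "I10": 20},
    "endocrinology": {"E11": 50, "E66": 25, "E03": 15},
}
provider = {"I21": 12, "I50": 7, "E11": 1}
best = recognize_specialty(provider, centroids)
```

A specialty is then "highly recognizable" to the extent that providers documented under it are consistently matched back to it by such models.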
Shubhra Kanti Karmaker Santu, Vincent Bindschaedler, ChengXiang Zhai, and Carl A. Gunter recently published a paper titled NRF: A Naive Re-identification Framework:
The promise of big data relies on the release and aggregation of data sets. When these data sets contain sensitive information about individuals, it has been scalable and convenient to protect the privacy of these individuals by de-identification. However, studies show that the combination of de-identified data sets with other data sets risks re-identification of some records. Some studies have shown how to measure this risk in specific contexts where certain types of public data sets (such as voter rolls) are assumed to be available to attackers. To the extent that it can be accomplished, such analyses enable the threat of compromises to be balanced against the benefits of sharing data. For example, a study that might save lives by enabling medical research may be justified in light of a sufficiently low probability of compromise from sharing de-identified data. In this paper, we introduce a general probabilistic re-identification framework that can be instantiated in specific contexts to estimate the probability of compromises based on explicit assumptions. We further propose a baseline of such assumptions that enable a first-cut estimate of risk for practical case studies. We refer to the framework with these assumptions as the Naive Re-identification Framework (NRF). As a case study, we show how we can apply NRF to analyze and quantify the risk of re-identification arising from releasing de-identified medical data in the context of publicly-available social media data. The results of this case study show that NRF can be used to obtain meaningful quantification of the re-identification risk, compare the risk of different social media, and assess risks of combinations of various demographic attributes and medical conditions that individuals may voluntarily disclose on social media.
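The framework's full probabilistic model is developed in the paper; the toy sketch below illustrates only the basic uniqueness idea behind such risk estimates: if an individual voluntarily discloses some attribute values, a naive linkage to a de-identified release succeeds with probability 1/k, where k records in the release match those values. All records and attributes here are invented.

```python
def reident_probability(released_records, disclosed):
    """Naive re-identification risk: 1/k, where k is the number of
    released records consistent with the attacker's observed attributes
    (0.0 if nothing matches)."""
    matches = [r for r in released_records
               if all(r.get(a) == v for a, v in disclosed.items())]
    return 1.0 / len(matches) if matches else 0.0

# Hypothetical de-identified medical records
records = [
    {"age": 34, "zip3": "606", "sex": "F", "dx": "asthma"},
    {"age": 34, "zip3": "606", "sex": "F", "dx": "migraine"},
    {"age": 51, "zip3": "605", "sex": "M", "dx": "diabetes"},
]
# Attributes an individual revealed, e.g., in social media posts
risk = reident_probability(records, {"age": 34, "zip3": "606", "sex": "F"})
```

Records that are unique on the disclosed attributes carry risk 1.0, which is why combinations of demographics and self-disclosed conditions matter so much in the case study.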
ACM Workshop on Privacy in an Electronic Society (WPES ’18), Toronto, Canada, October 2018. DOI: 10.1145/3267323.3268948
Karan Ganju, Qi Wang, Wei Yang, Carl A. Gunter, and Nikita Borisov recently published a paper titled Property Inference Attacks on Fully Connected Neural Networks using Permutation Invariant Representations:
With the growing adoption of machine learning, sharing of learned models is becoming popular. However, in addition to the prediction properties the model producer aims to share, there is also a risk that the model consumer can infer other properties of the training data the model producer did not intend to share. In this paper, we focus on the inference of global properties of the training data, such as the environment in which the data was produced, or the fraction of the data that comes from a certain class, as applied to white-box Fully Connected Neural Networks (FCNNs). Because of their complexity and inscrutability, FCNNs have a particularly high risk of leaking unexpected information about their training sets; at the same time, this complexity makes extracting this information challenging. We develop techniques that reduce this complexity by noting that FCNNs are invariant under permutation of nodes in each layer. We develop our techniques using representations that capture this invariance and simplify the information extraction task. We evaluate our techniques on several synthetic and standard benchmark datasets and show that they are very effective at inferring various data properties. We also perform two case studies to demonstrate the impact of our attack. In the first case study we show that a classifier that recognizes smiling faces also leaks information about the relative attractiveness of the individuals in its training set. In the second case study we show that a classifier that recognizes Bitcoin mining from performance counters also leaks information about whether the classifier was trained on logs from machines that were patched for the Meltdown and Spectre attacks.
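The paper's techniques build on the observation that hidden neurons can be permuted without changing the network's function. A minimal sketch of one canonicalization (sorting a layer's neurons by a deterministic key; the paper also considers set-based representations) with made-up weights:

```python
def canonicalize_layer(weights, biases):
    """Sort a fully connected layer's neurons into a canonical order.
    `weights` holds one input-weight row per neuron; `biases` aligns with
    it. Permuting hidden neurons (with the matching permutation of the
    next layer's columns) leaves the network's function unchanged, so
    sorting by a deterministic key gives a permutation-invariant
    representation for a downstream property-inference meta-classifier."""
    neurons = sorted(zip(weights, biases),
                     key=lambda n: (sum(w * w for w in n[0]), n[1]))
    rows = [list(w) for w, _ in neurons]
    bs = [b for _, b in neurons]
    return rows, bs

# Two layers that differ only by a permutation of their three neurons...
layer_a = ([[1.0, 2.0], [0.5, -0.5], [3.0, 1.0]], [0.1, 0.2, 0.3])
layer_b = ([[3.0, 1.0], [1.0, 2.0], [0.5, -0.5]], [0.3, 0.1, 0.2])
# ...map to the same canonical form:
canon_a = canonicalize_layer(*layer_a)
canon_b = canonicalize_layer(*layer_b)
```

Without such canonicalization, functionally identical networks would look entirely different to the attacker's meta-classifier, which is exactly the complexity the paper's representations remove.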
Juhee Kwon and Eric Johnson recently published an article addressing the question: Does “meaningful-use” attestation improve information security performance?
Certification mechanisms are often employed to assess and signal difficult-to-observe management practices and foster improvement. In the U.S. healthcare sector, a certification mechanism called meaningful-use attestation was recently adopted as part of an effort to encourage electronic health record (EHR) adoption while also focusing healthcare providers on protecting sensitive healthcare data. This new regime motivated us to examine how meaningful-use attestation influences the occurrence of data breaches. Using a propensity score matching technique combined with a difference-in-differences (DID) approach, our study shows that the impact of meaningful-use attestation is contingent on the nature of data breaches and the time frame. Hospitals that attest to having reached Stage 1 meaningful-use standards observe fewer external breaches in the short term, but do not see continued improvement in the following year. On the other hand, attesting hospitals observe short-term increases in accidental internal breaches but eventually see long-term reductions. We do not find any link between malicious internal breaches and attestation. Our findings offer theoretical and practical insights into the effective design of certification mechanisms.
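The difference-in-differences estimator mentioned above compares the attesting hospitals' change in breach outcomes over time against the matched control hospitals' change, netting out the common time trend. A minimal sketch with invented breach counts:

```python
def did_estimate(treat_pre, treat_post, ctrl_pre, ctrl_post):
    """Difference-in-differences: (treated change) minus (control change).
    A negative value means the treated group improved relative to the
    control group's trend."""
    def mean(xs):
        return sum(xs) / len(xs)
    return (mean(treat_post) - mean(treat_pre)) - (mean(ctrl_post) - mean(ctrl_pre))

# Hypothetical external-breach counts per hospital, before/after attestation
attesting_pre = [4, 5, 6]
attesting_post = [3, 4, 2]
control_pre = [5, 4, 6]
control_post = [5, 5, 5]
effect = did_estimate(attesting_pre, attesting_post, control_pre, control_post)
```

In the study, the control group is built by propensity score matching so that attesting and non-attesting hospitals are comparable before the difference is taken.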
THaW’s A.J. Burns and Eric Johnson recently published a piece in IT Professional:
ABSTRACT: Cyberthreats create unique risks for organizations and individuals, especially regarding breaches of personally identifiable information (PII). However, relatively little research has examined hacking’s distinct impact on privacy. The authors analyze cyber breaches of PII and find that they are significantly larger than other breaches, showing that past breaches are useful for predicting future breaches.
Scott Breece, VP and CISO of Community Health Systems, discusses the rising security threat in healthcare with M. Eric Johnson, Dean of Vanderbilt University’s Owen Graduate School of Management. Scott highlights how health IT is transforming healthcare, improving the patient experience and outcomes. However, digitization of healthcare data also creates new risks for the healthcare system. Scott discusses how Community Health Systems is staying ahead of those threats and securing patient data. This video was partially supported by the THaW project, which is co-led by Eric Johnson.