Date of Award

12-2023

Document Type

Dissertation

Degree Name

Doctor of Philosophy (PhD)

Department

Electrical and Computer Engineering (Holcomb Dept. of)

Committee Chair/Advisor

Dr. Yingjie Lao

Committee Member

Dr. Shuangshuang Jin

Committee Member

Dr. Judson Ryckman

Committee Member

Dr. Jon Calhoun

Abstract

The significance of security is often overlooked until a catastrophic event occurs, and this holds true for the security of existing IoT devices. Numerous devices have been deployed and repurposed without adequate consideration of their security vulnerabilities, and comprehensive security protocols are lacking for devices that frequently interact with one another. Even in emerging fields like machine learning (ML), which has found extensive application in strategy optimization, decision-making, data classification, and other domains, discussions of security vulnerabilities are not widespread; the focus tends to revolve around specific topics such as adversarial attacks. However, owing to the remarkable success of ML models, they are now deployed in highly sensitive domains such as healthcare, finance, and the judiciary, raising concerns about potential vulnerabilities. To support practical deployment and enhance trust in these models, explainability methods are often employed alongside black-box neural networks. These methods aim to provide insight into a model's behavior, for example by producing feature importance scores that explain the model's final decision. Crucially, however, these techniques can inadvertently introduce additional attack opportunities for adversaries. In this context, our focus lies in exploiting the vulnerabilities associated with widely used explainability methods.

A further challenge, despite the notable success of ML models, is their growing complexity, which poses new obstacles to designing ML systems and demands innovative approaches to sustain the current pace of progress. In terms of computational resources, deploying ML on resource-constrained devices such as mobile and IoT platforms is hindered by substantial computational complexity and memory requirements, which place additional strain on system performance and memory capacity.

This dissertation provides novel, feasible solutions to these security and computational issues. We propose a device-specific signature generation approach that brings the concept of the physical unclonable function (PUF) to legacy smart devices; experimental results show that the signatures generated by each device are sufficiently unique to serve as secret keys in communication or cryptographic operations. To address the increasing computational demands of ML models, we present a training algorithm that prunes the model during the training phase and applies to both batch and online learning scenarios, reducing the resource requirements of ML models and improving their efficiency during training. We further exploit the membership inference attack (MIA) using popular explainability methods: rather than relying only on classification confidence scores, as prior methods do, we infer whether a particular data point belongs to the training dataset by leveraging explainability results. Finally, we examine the ramifications of quantization for privacy leakage and propose a novel quantization method that enhances a neural network's resistance to MIA.
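As a rough illustration of pruning during training (not the dissertation's specific algorithm), the sketch below applies magnitude-based pruning after each gradient step of a simple least-squares model, so the network is sparsified while it learns rather than after training. The data, hyperparameters, and sparsity schedule are all assumptions made for the example.

# Illustrative sketch: magnitude pruning applied during training.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(256, 20))
true_w = np.zeros(20)
true_w[:5] = rng.normal(size=5)
y = X @ true_w + 0.1 * rng.normal(size=256)

w = 0.01 * rng.normal(size=20)
mask = np.ones(20)
lr, sparsity = 0.01, 0.5           # target: prune 50% of weights

for epoch in range(100):
    # Gradient of the mean squared error, using only surviving weights.
    grad = 2.0 * X.T @ (X @ (w * mask) - y) / len(y)
    w -= lr * grad * mask
    # Recompute the pruning mask from current weight magnitudes,
    # gradually raising sparsity toward the target.
    frac = sparsity * min(1.0, epoch / 50)
    cutoff = np.quantile(np.abs(w), frac)
    mask = (np.abs(w) >= cutoff).astype(float)

print("nonzero weights:", int(mask.sum()), "of", w.size)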
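Similarly, the following is a minimal sketch of how a membership inference attack might leverage explanation outputs rather than confidence scores, assuming a hypothetical logistic-regression target model, a gradient-based feature-importance explanation, and a simple variance threshold; these details are illustrative assumptions, not the attack described in the dissertation.

# Illustrative sketch: threshold MIA on a feature-importance explanation.
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(size=5)             # hypothetical target-model weights

def predict_proba(x):
    return 1.0 / (1.0 + np.exp(-x @ w))

def saliency(x):
    # Gradient of the positive-class probability w.r.t. the input,
    # used here as a simple feature-importance explanation.
    p = predict_proba(x)
    return p * (1.0 - p) * w

def membership_score(x):
    # Lower explanation variance is taken as evidence of membership,
    # assuming the model behaves more confidently on training points.
    return -np.var(saliency(x))

def infer_membership(x, threshold):
    return membership_score(x) >= threshold

# Toy usage: calibrate a threshold on points the adversary knows are
# non-members, then score a candidate point against it.
candidate = rng.normal(size=5)
reference = rng.normal(size=(100, 5))
threshold = np.quantile([membership_score(r) for r in reference], 0.9)
print("predicted member:", infer_membership(candidate, threshold))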

Available for download on Tuesday, December 31, 2024
