Date of Award

5-2024

Document Type

Dissertation

Degree Name

Doctor of Philosophy (PhD)

Department

Electrical and Computer Engineering (Holcomb Dept. of)

Committee Chair/Advisor

Yingjie Lao

Committee Member

Jon Calhoun

Committee Member

Shuhong Gao

Committee Member

Yongkai Wu

Abstract

Deep neural networks (DNNs) have achieved unprecedented success in many fields. However, robustness and trustworthiness have become emerging concerns, since DNNs are vulnerable to various attacks and susceptible to distributional shifts in data. Attacks such as data poisoning, and out-of-distribution scenarios such as natural corruption, significantly undermine the performance and robustness of DNNs during both model training and inference, and introduce uncertainty and insecurity into their deployment in real-world applications. It is therefore crucial to investigate threats and challenges against deep neural networks, develop corresponding countermeasures, and explore design tactics that secure their safety and reliability. The works presented in this dissertation constitute our pioneering efforts toward robust and trustworthy deep learning from the perspectives of attacks, defenses, and designs.

Author ORCID Identifier

https://orcid.org/0000-0003-0372-8198
