Degree Name

Doctor of Philosophy (PhD)


Department

Mechanical Engineering

Committee Chair/Advisor

Dr. Yue Wang

Committee Member

Dr. John R. Wagner

Committee Member

Dr. Mashrur Chowdhury

Committee Member

Dr. Ardalan Vahidi


Abstract

By virtue of vehicular connectivity and automation, vehicles are becoming increasingly intelligent and capable of self-driving. However, no matter what automation level a vehicle achieves, humans will remain in the loop, albeit in different roles. First, treating the manually driven car as a disturbance to the connected and autonomous vehicles (CAVs), a novel string stability criterion is proposed for mixed-traffic platoons consisting of both autonomous and manually driven cars to guarantee acceptable motion fluctuation and platoon safety. Furthermore, humans naturally act as riders in passenger vehicles. A human-centered cooperative adaptive cruise control (CACC) is designed to improve physical and psychological comfort while guaranteeing string stability. Compared with the benchmark CACC, the human-centered CACC substantially enhances driving comfort.
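For context, string stability is commonly formalized in the frequency domain as the requirement that spacing errors attenuate as they propagate upstream through the platoon; the dissertation's novel criterion for mixed traffic may differ, so the condition below is only the standard textbook form:

```latex
% L2 string stability: the spacing error e_i of vehicle i must not be
% amplified when propagating to its follower i+1
\left\lVert \frac{E_{i+1}(s)}{E_{i}(s)} \right\rVert_{\mathcal{H}_\infty}
  = \sup_{\omega} \left\lvert \frac{E_{i+1}(j\omega)}{E_{i}(j\omega)} \right\rvert \le 1
```

where E_i(s) denotes the Laplace transform of vehicle i's spacing error.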

In emergencies and adversarial scenarios, the human operator can act as a supervisor of the autonomous driving system. Accordingly, a human-robot interaction framework is proposed to reduce the vulnerability of a CAV platoon under cyber attacks. To mitigate the effects of cyber attacks, an observer-based resilient controller is first designed, and the corresponding platoon safety conditions are derived. Next, to facilitate human supervision, a decision-making aid system is developed, consisting of an anomaly reporting system (ARS), a trust-based information management system (TIMS), and a graphical user interface (GUI). Representative human-in-the-loop experiments demonstrate that the proposed framework can effectively guide human operators working with CAV platoons under cyber attacks.
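One common building block behind observer-based attack mitigation and anomaly reporting is a state observer whose residual (the gap between the measured and predicted output) is monitored against a threshold. The sketch below illustrates that idea on an invented discrete-time double-integrator vehicle model; the matrices, gains, threshold, and attack model are illustrative assumptions, not the dissertation's actual design:

```python
import numpy as np

# Illustrative discrete-time double-integrator vehicle (position, velocity).
# All matrices, the observer gain L, and the threshold are assumed values.
dt = 0.1
A = np.array([[1.0, dt], [0.0, 1.0]])
B = np.array([[0.5 * dt**2], [dt]])
C = np.array([[1.0, 0.0]])          # only position is measured
L = np.array([[0.5], [0.5]])        # Luenberger observer gain (stabilizing here)

def run_observer(attack_bias=0.0, steps=100, threshold=0.5):
    """Return True if the observer residual ever exceeds the threshold."""
    x = np.zeros((2, 1))            # true state
    xh = np.zeros((2, 1))           # observer estimate
    flagged = False
    for k in range(steps):
        u = np.array([[0.2]])       # constant acceleration command
        y = C @ x
        if k >= 50:                 # sensor attack injected halfway through
            y = y + attack_bias
        r = y - C @ xh              # residual (innovation)
        if abs(r[0, 0]) > threshold:
            flagged = True          # candidate event for an anomaly report
        xh = A @ xh + B @ u + L @ r # observer update
        x = A @ x + B @ u           # true plant update
    return flagged
```

In this toy setup a biased sensor reading immediately inflates the residual, while nominal operation keeps it at zero, which is the intuition behind residual-threshold anomaly detection.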

Moreover, as an expert, an experienced driver can even teach autonomous systems to drive safely. To fulfill this imitation learning task, safety awareness is injected into the adversarial inverse reinforcement learning (AIRL) algorithm, forming a novel safe adversarial inverse reinforcement learning (S-AIRL) algorithm. First, control barrier function (CBF) analysis is used to guide the training of a safety critic. Second, the safety critic is integrated into the AIRL discriminator to impose safety considerations. To further enforce the importance of safety, a safety regulator is introduced to discourage the recovered reward function from assigning high rewards to risky state-action pairs. In a simulated highway-driving scenario, the proposed S-AIRL outperforms the original AIRL algorithm with a much lower collision rate.
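A standard way a CBF can supervise a safety critic is by labeling state-action pairs according to a discrete-time barrier decrease condition, h(x') >= (1 - alpha) * h(x). The sketch below applies that condition to a hypothetical car-following state (gap, relative speed) with h(x) = gap - d_min; the dynamics, barrier, and constants are invented for illustration and are not the dissertation's training code:

```python
# Hypothetical car-following state: x = (gap, dv), dv = lead speed - ego speed.
# Candidate barrier h(x) = gap - d_min; a pair (x, a) is labeled safe (1)
# if it satisfies the discrete-time CBF condition h(x') >= (1 - alpha) h(x).
d_min, alpha, dt = 5.0, 0.2, 0.1

def h(x):
    gap, _ = x
    return gap - d_min

def step(x, a):
    """One-step assumed dynamics under ego acceleration a."""
    gap, dv = x
    dv_next = dv - a * dt        # accelerating shrinks the relative speed
    gap_next = gap + dv * dt
    return (gap_next, dv_next)

def cbf_label(x, a):
    """1 if (x, a) satisfies the CBF decrease condition, else 0."""
    return 1 if h(step(x, a)) >= (1 - alpha) * h(x) else 0
```

Labels produced this way can serve as supervision targets for a safety critic, which is then available to penalize risky state-action pairs inside the discriminator.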

Finally, to accommodate the worst-case adversarial behavior of a neighboring manually driven car, its interaction with the subject autonomous vehicle is modeled as a two-player zero-sum Markov game (TZMG). A TZMG Q-learning algorithm is then adopted to train a safe policy corresponding to the Nash equilibrium. In the human subject test, compared with a conventional policy-learning algorithm, the TZMG-based algorithm produces a much safer driving policy, achieving a lower collision rate against the adversarial behavior of real manually driven cars.
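Q-learning for a two-player zero-sum Markov game differs from ordinary Q-learning in that the bootstrap target uses the maximin value of the successor state, so the learned policy hedges against the worst-case opponent. The sketch below shows that update on a tiny invented two-state game, restricted to pure strategies for brevity (the full algorithm typically solves a matrix game over mixed strategies):

```python
import numpy as np

# Minimax Q-learning sketch for a two-player zero-sum Markov game.
# Q is indexed by (state, ego action a, adversary action o).
# The 2-state game below is invented purely for illustration.
n_states, n_a, n_o = 2, 2, 2
rng = np.random.default_rng(0)
Q = np.zeros((n_states, n_a, n_o))
gamma, lr = 0.9, 0.1

def reward(s, a, o):
    # Arbitrary zero-sum structure: ego gains by matching the adversary
    # in state 0 and by mismatching in state 1.
    return 1.0 if (a == o) == (s == 0) else -1.0

def transition(s, a, o):
    return (s + a + o) % n_states

for _ in range(5000):
    s = int(rng.integers(n_states))
    a = int(rng.integers(n_a))      # exploratory ego action
    o = int(rng.integers(n_o))      # exploratory adversary action
    s2 = transition(s, a, o)
    v2 = np.max(np.min(Q[s2], axis=1))   # maximin value of successor state
    target = reward(s, a, o) + gamma * v2
    Q[s, a, o] += lr * (target - Q[s, a, o])

def safe_policy(s):
    """Ego action maximizing the worst case over adversary moves."""
    return int(np.argmax(np.min(Q[s], axis=1)))
```

Replacing the usual max-over-actions bootstrap with a maximin (or, in general, a matrix-game solution) is what makes the resulting policy robust to adversarial neighbors rather than merely optimal against their average behavior.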
