Date of Award

8-2023

Document Type

Dissertation

Degree Name

Doctor of Philosophy (PhD)

Department

Mechanical Engineering

Committee Chair/Advisor

Yue Wang

Committee Member

Ardalan Vahidi

Committee Member

Atul Kelkar

Committee Member

Amin Khademi

Abstract

Trust plays a crucial role in enabling effective collaboration and decision-making within human-multi-robot teams. In this context, runtime verification techniques and trust inference models have emerged as valuable tools for assessing and quantifying the trustworthiness of individual robots. This dissertation presents a study on trust-based runtime verification and introduces a Bayesian trust inference model for human-multi-robot teams. First, we discuss the concept of runtime verification, which involves monitoring and analyzing the behavior of robots during operation. We highlight the importance of trust as a key factor in determining the reliability and credibility of robot actions. By integrating trust metrics into runtime verification, we enhance the ability to detect and respond to potentially untrustworthy behavior, ultimately improving the overall performance and safety of human-multi-robot teams. Building on this foundation, we propose a Bayesian trust inference model. The model leverages a dynamic Bayesian network (DBN) to capture the dependencies between robot performance, human trust feedback, and human interventions. By employing a categorical Boltzmann machine, we model the conditional relationships within the DBN, enabling an accurate representation of the complex trust dynamics in the team. To validate the proposed model, we conduct experiments involving a human-multi-robot team in a collaborative task. The experiments incorporate human involvement and use the Bayesian trust inference model to infer the degrees of human trust in multiple mobile robots. The model also predicts human interventions, providing a means of validation. The experimental results demonstrate the effectiveness of the Bayesian trust inference model, which achieves an accuracy of 86.5% in predicting human interventions. The findings of this study confirm the potential of trust-based runtime verification and the Bayesian trust inference model to enhance the trustworthiness and performance of human-multi-robot teams. By integrating trust assessment and inference into the runtime verification process, we enable a proactive approach to managing trust, fostering more reliable and efficient collaboration between humans and robots.
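To make the trust-inference idea concrete, the following is a minimal illustrative sketch (not the dissertation's code) of a discrete-state Bayesian filter over a dynamic Bayesian network: trust evolves conditioned on observed robot performance, and observed human interventions serve as evidence. The trust levels, transition probabilities, and intervention probabilities below are assumed values chosen only for illustration.

```python
# Illustrative sketch, assuming a 3-level categorical trust state and
# hand-picked probabilities; none of these numbers come from the dissertation.
import numpy as np

TRUST_LEVELS = ["low", "medium", "high"]  # assumed discrete trust states

# P(trust_t | trust_{t-1}, performance_t): one transition matrix per observed
# robot performance outcome (rows: previous trust, columns: current trust).
TRANSITION = {
    "success": np.array([[0.60, 0.35, 0.05],
                         [0.10, 0.60, 0.30],
                         [0.05, 0.15, 0.80]]),
    "failure": np.array([[0.85, 0.10, 0.05],
                         [0.40, 0.50, 0.10],
                         [0.20, 0.40, 0.40]]),
}

# P(intervention | trust): lower trust -> higher chance the human intervenes.
P_INTERVENE = np.array([0.8, 0.4, 0.1])


def update_trust(belief, performance, intervened):
    """One DBN time step: predict trust using the transition model conditioned
    on robot performance, then correct with the observed human intervention."""
    predicted = belief @ TRANSITION[performance]              # prediction step
    likelihood = P_INTERVENE if intervened else 1.0 - P_INTERVENE
    posterior = predicted * likelihood                        # Bayes correction
    return posterior / posterior.sum()                        # normalize


if __name__ == "__main__":
    belief = np.full(3, 1 / 3)                                # uniform prior over trust
    episode = [("success", False), ("failure", True), ("success", False)]
    for performance, intervened in episode:
        belief = update_trust(belief, performance, intervened)
        print(dict(zip(TRUST_LEVELS, belief.round(3))))
```

In the dissertation, the conditional distributions are learned with a categorical Boltzmann machine rather than specified by hand, and the inferred trust belief is also used to predict interventions; this sketch only shows the filtering structure of the DBN.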

