Date of Award


Document Type


Degree Name

Master of Science (MS)



Committee Chair

Dr. Richard Pak, Committee Chair

Committee Member

Dr. Patrick Rosopa

Committee Member

Dr. Ewart de Visser


ABSTRACT

Trust is a critical component of both human-automation and human-human interactions. Interface manipulations, such as visual anthropomorphism and machine politeness, have been used to affect trust in automation. However, these design strategies have primarily been used to facilitate initial trust formation and have not been examined as a means of actively repairing trust that has been violated by a system failure. Previous research has shown that trust in another party can be effectively repaired after a violation using various strategies, but there is little evidence substantiating such strategies in a human-automation context. The current study examined the effectiveness of trust repair strategies, derived from human-human and human-organizational contexts, in human-automation interaction. During a taxi dispatching task, participants interacted with imperfect automation that either denied or apologized for committing competence- or integrity-based failures. Participants performed two experimental blocks (one for each failure type) and, after each block, reported subjective trust in the automation. Consistent with the interpersonal literature, our analysis revealed that automation apologies repaired trust more successfully following competence-based failures than integrity-based failures. However, user trust in automation did not differ significantly when the automation denied committing competence- or integrity-based failures. These findings provide important insight into the unique ways in which humans interact with machines.