Date of Award

5-2024

Document Type

Dissertation

Degree Name

Doctor of Philosophy (PhD)

Department

Computer Science

Committee Chair/Advisor

Nathan McNeese

Committee Member

Guo Freeman

Committee Member

Kapil Madathil

Committee Member

Carlos Hernandez

Abstract

Ground-breaking advances in artificial intelligence (AI) have led to the possibility of AI agents operating not just as useful tools for teams, but as full-fledged team members with unique, interdependent roles. This possibility is fueled by the drive to create increasingly autonomous systems with computational powers beyond human capability, and by the promise of dramatically increasing the productivity and efficiency of human teams. Yet, for all the promise and potential of these human-AI teams, the inclusion of AI teammates presents several challenges and concerns for both teaming and human-centered AI.

An important part of teaming is the ability of all team members to adjust their autonomy levels based on their tasks, roles, and operating environment. Human beings inherently sense and learn how much additional input and oversight they need from others to complete their tasks, but AI agents are not equipped to do this. AI designers must instead decide the right level of autonomy, or the level of human input, with which to program AI teammates to fulfill their designated purposes. Historically, this has meant that designers choose one fixed level, but advances in AI programming can now enable AI teammates to adapt between autonomy levels in accordance with the team's needs. To design such teammates, it is essential that AI designers consider not only the optimal autonomy levels for AI teammates but also the effects such adaptations have on the human members of the team, as there is a careful balance between the benefits and risks of higher AI autonomy. Thus, through four successive studies, this dissertation considers how a team's processes and characteristics should influence when and how AI teammates adapt their autonomy.

Studies 1 and 2 focus on a team's decision processes, the patterns and steps that teams use to make decisions. Study 1 specifically examines how a team's task processes should influence AI autonomy. This mixed-methods study demonstrates that the more defined a team's standard processes are, the more accurately an AI teammate's autonomy levels can be predetermined and changed automatically; in contrast, less defined team processes require AI teammates to operate under greater human oversight. The study also highlights that adaptive autonomy appears "human-like" and encourages the perception of AI agents as teammates rather than tools.

Building on the qualitative component of Study 1, the next study, conducted in two parts, examines how a team's communication processes and needs should influence AI autonomy, using both a Wizard of Oz experiment and participatory design sessions. As an AI's autonomy level increases, the information it must provide to its human teammates correspondingly decreases. This relationship inspired an investigation into how a human teammate's need for AI explanations changes as the AI's autonomy changes. Study 2a yielded counterintuitive results: participants perceived the AI agent with lower explainability levels as both more trustworthy and more competent. Furthermore, the participatory design portion, Study 2b, revealed that AI autonomy is also linked to how much real-time and retroactive explanation an AI teammate can practically provide to its teammates. Together, the two parts of Study 2 underscore the importance of adaptive autonomy in maintaining the right balance of explainability for AI teammates.

Study 3 examines team decision outcomes, particularly how an AI teammate's autonomy levels influence team performance, failure, and risk. This contextual inquiry study considered how human teammates' perceptions are affected by changing AI autonomy over the course of a team's action cycle within two human-AI teaming contexts. The results reveal the importance of considering how an AI teammate's specific team role affects its teammates' decisions and success: the extent to which an AI teammate's role is coupled with its human teammates' roles plays a significant part in avoiding risk and in defining when an AI teammate needs to adapt its autonomy level.

One of a team's most defining social norms is its ethical code. To maintain a positive team dynamic, team members must hold, and act in line with, a shared ethical ideology. The extent to which an AI teammate should operate with more or less human reasoning is thus an important design consideration, and one where adaptive autonomy can truly shine. As such, Study 4 considered how a team's shared ethical code should influence when and how AI teammates adapt their autonomy. This study examined four categories of ethical principles that a human-AI team may hold and assessed when and how humans would want an AI teammate to adapt its autonomy to best preserve the ethical principle at stake. The findings guide the research and design communities toward the creation of more ethical AI teammates that can recognize and respond appropriately to the ethical dilemmas they will inevitably encounter.

Together, these four studies address a significant research gap in human-AI teaming: understanding how the needs and perceptions of human teammates should be used to determine dynamic, optimal AI teammate autonomy levels in real time. Each study integrates its findings into detailed, actionable design recommendations that can be applied immediately to the development of more effective, ethical AI teammates based on a team's decision processes, decision outcomes, and team characteristics. This research thus directly benefits researchers, designers, and developers of human-centered AI and human-AI teams.

Author ORCID Identifier

0000-0001-7785-5996
