Date of Award

12-2023

Document Type

Dissertation

Degree Name

Doctor of Philosophy (PhD)

Department

Human Centered Computing

Committee Chair/Advisor

Nathan McNeese

Committee Member

Guo Freeman

Committee Member

Bart Knijnenburg

Committee Member

Richard Pak

Abstract

Advances in artificial intelligence (AI) technologies have enabled AI to be applied across a wide variety of new fields, such as cryptography, art, and data analysis. Several of these fields are social in nature, including decision-making and teaming, which introduces a new set of challenges for AI research. While each of these fields has its unique challenges, the area of human-AI teaming is beset with many that center around the expectations and abilities of AI teammates. One such challenge is understanding team cognition in human-AI teams and AI teammates' ability to contribute to, support, and encourage it. Team cognition is defined as any cognitive activity among team members regarding their shared knowledge of the team and task, including concepts such as shared task or team mental models, team situational awareness, and schema similarity. Decades of research on human-only teams have shown that team cognition is fundamental to effective teams, as it is directly linked to the successful and efficient execution of team processes and to subsequent team outcomes. However, the construct is challenging to study in human-AI teams given the significant differences in how we interact with humans versus AI (communication being a notable example). Despite the demonstrated importance of team cognition in human teams, there is little to no empirical research on the construct within human-AI teams. Without such research, it is difficult to understand how findings from human teams compare and apply to human-AI teams, leaving practitioners of human-AI teaming in the dark.

Very few studies directly examine which components of team cognition are most important to human teammates working with AI or how including one or more AI teammates affects team cognition development. Without this knowledge, designing AI teammates that support and encourage the development of team cognition in human-AI teams will remain difficult, making the need for extensive research in this area apparent. As such, the current dissertation presents three studies that iterate upon one another to determine how AI teammates influence team cognition, which aspects of team cognition are most important to human-AI teams, and how AI teammate design may support contributions to team cognition.

The initial study of this dissertation utilized a mixed-methods approach to investigate how including AI teammates affects the development of team cognition in its content, structure, and perception. This study also examined how human teammates' attitudes towards AI teammates changed alongside those manipulations in team composition. Study 1 found that human-AI teams are similar to human-only teams in that team cognition develops over time; however, human-AI teams differ in that communication containing specific task-related information and explicitly shared goals is more beneficial to developing team cognition. Additionally, human teammates trusted AI teammates less when working with only AI teammates and no other humans, making AI contributions to team cognition difficult in teams with a majority-AI composition. Perceived team cognition was also lower toward AI teammates than toward human ones, and human-AI teams showed significantly less consistent team mental model similarity than human-only teams. These findings highlight the importance of AI teammates' information-sharing attributes in contributing to team cognition and motivated the focus of the subsequent study.

Study 2 focused on how AI information-sharing attributes influence team cognition and how human members of human-AI teams want their AI teammates to be designed to contribute to and encourage the growth of various aspects of team cognition. This study comprised two sub-studies: the first used a mixed factorial survey design and structural equation modeling to assess how participants in hypothetical human-AI teams responded to various information-sharing attributes used by an AI teammate. The second sub-study used interviews to ascertain how participants want their AI teammates to be designed to contribute towards and encourage team cognition. The interviews also investigated how AI information-sharing attributes affected participants' attitudes towards their teammates, such as trust and cohesion, to ensure that the contributions of the AI teammates are accepted. Study 2 found that AI design features such as explainability and providing situational awareness updates on intra- and extra-team information changes had the strongest effect on participants' attitudes and perceived levels of team cognition. Additionally, the interview data characterized the relationship between explainability and situational awareness, the heightened importance of situational awareness to human-AI teams, and the benefits of giving AI teammates defined roles with significant degrees of agency.

Lastly, Study 3 explored which AI teammate design features best supported team situational awareness and how AI teammates' participation in team discussions affected team cognition. Team situational awareness was chosen as the component of team cognition to influence based on the results of Studies 1 and 2, which highlighted how vital situational awareness at all levels is to human-AI teams. Study 3 found that AI teammates designed to augment team memory significantly improved participants' perception of a shared mental model with their human and AI teammates. This same AI situational awareness attribute also enhanced teams' situational awareness and their likelihood of overcoming system failures that acted as situational awareness roadblocks. AI participation in team discussions later in the teams' life cycle also enhanced team performance and situational awareness. Study 3's focus group interview data was also qualitatively analyzed, finding that the team-memory-augmenting attribute outperformed the others by demonstrating to human teammates what information was necessary, when it was important, and to whom it was essential, thereby enhancing teams' understanding of the task and building natural resiliency to roadblocks. These findings significantly inform the design of future AI teammates and future research by deepening knowledge of how team cognition functions in human-AI teams.

These three studies contribute to three key research outcomes: 1) developing an understanding of which constructs within team cognition AI should support to drive effective team processes; 2) defining differences in team cognition between human-AI and human-only teams; and 3) demonstrating how AI teammates meant to support team cognition can be designed and how they affect human-AI teams. Investigating these research gaps is necessary to develop effective AI designed to engage in highly social situations such as teams. Thus, to ensure this research is applicable, the three studies also synthesize their results into practical design recommendations that are actionable and supported by the empirical results of their respective studies. As such, this work benefits the research community and developers alike and helps lead to more human-centered AI designs.

Author ORCID Identifier

0000-0003-3704-697X
