Near-Perfect Automation: Investigating Performance, Trust, and Visual Attention Allocation
Objective: Assess performance, trust, and visual attention during the monitoring of a near-perfect automated system.

Background: Research rarely assesses performance, trust, and visual attention in near-perfect automated systems, even though such systems will be relied on in high-stakes environments.

Methods: Seventy-three participants completed a 40-min supervisory control task in which they monitored three search feeds. All search feeds were 100% reliable with the exception of two automation failures: one miss and one false alarm. Eye-tracking and subjective trust data were collected.

Results: Thirty-four percent of participants correctly identified the automation miss, and 67% correctly identified the automation false alarm. Subjective trust increased when participants did not detect the automation failures and decreased when they did. Participants who detected the false alarm had a more complex scan pattern in the 2 min centered on the automation failure than those who did not. Additionally, those who detected the failures had longer dwell times in the center sensor feed and transitioned to it significantly more often.

Conclusion: This work not only highlights the limitations of the human operator when monitoring near-perfect automated systems, it also begins to quantify the operator's subjective experience and attentional cost. It further emphasizes the need to (1) reevaluate the role of the operator in future high-stakes environments and (2) understand the human on an individual level and actively design for that individual when working with near-perfect automated systems.

Application: Multiple operator-level measures should be collected in real time in order to monitor an operator's state and provide real-time, individualized assistance.
figshare SAGE Publications
Sibley, Ciara; Devlin, Shannon; Coyne, Joseph T.; Brown, Noelle L.; Foroughi, Cyrus K.; Pak, Richard (2021). "Near-Perfect Automation: Investigating Performance, Trust, and Visual Attention Allocation." figshare, SAGE Publications. doi: 10.25384/sage.c.5552335.v1