Date of Award

12-2017

Document Type

Dissertation

Degree Name

Doctor of Philosophy (PhD)

Department

School of Computing

Committee Member

Dr. Sabarish V. Babu, Committee Chair

Committee Member

Dr. Christopher C. Pagano

Committee Member

Dr. Larry F. Hodges

Committee Member

Dr. Andrew C. Robb

Abstract

Immersive Virtual Environments (IVEs) are becoming more accessible and more widely used for training. Previous research has shown that matching visual and proprioceptive information is important for calibration. Many state-of-the-art Virtual Reality (VR) systems are designed to train users in tasks that require accurate manual dexterity. Unfortunately, these systems can suffer from technical limitations that may force a decoupling of visual and proprioceptive information due to interference, latency, and tracking error. It has also been suggested that closed-loop feedback during travel and locomotion in an IVE can overcome the compression of visually perceived depth at medium-field distances in the virtual world [33, 47]. Very few experiments have examined the carryover effects of multi-sensory feedback in IVEs during manually dexterous 3D interaction in overcoming distortions in near-field (interaction-space) depth perception, or the relative importance of visual and proprioceptive information in calibrating users' distance judgments. In the first part of this work, we examined the recalibration of movements when the visually reached distance is scaled differently than the physically reached distance. We present an empirical evaluation of how visually distorted movements affect users' reaches to near-field targets in an IVE. In a between-subjects design, participants provided manual reaching distance estimates during three sessions: a baseline measure without feedback (open-loop distance estimation), a calibration session with visual and proprioceptive feedback (closed-loop distance estimation), and a post-interaction session without feedback (open-loop distance estimation).
Subjects were randomly assigned to one of three visual feedback conditions in the closed-loop session, during which they reached to the target while holding a tracked stylus: i) the Minus condition (-20% gain), in which the visual stylus appeared at 80% of the distance of the physical stylus; ii) the Neutral condition (0% or no gain), in which the visual stylus was co-located with the physical stylus; and iii) the Plus condition (+20% gain), in which the visual stylus appeared at 120% of the distance of the physical stylus. In all conditions there was evidence of visuo-motor calibration, in that users' accuracy in physically reaching to the target locations improved over trials. Scaled visual feedback was shown to calibrate distance judgments within an IVE, with estimates being farthest in the post-interaction session after calibrating to visual information appearing nearer (Minus condition), and nearest after calibrating to visual information appearing farther (Plus condition). The same pattern was observed during closed-loop physical reach responses: participants generally tended to physically reach farther in the Minus condition and closer in the Plus condition to the perceived location of the targets, as compared to the Neutral condition, in which participants' physical reaches were more accurate to the perceived location of the target. We then characterized the properties of human reach motion, in the presence or absence of visuo-haptic feedback, in real and virtual environments within a participant's maximum arm reach. Our goal was to understand how physical reaching actions to the perceived location of targets differ between real and virtual viewing conditions in the presence or absence of visuo-haptic feedback. Typically, participants reach to the perceived locations of objects in the 3D environment to perform selection and manipulation actions during 3D interaction in applications such as virtual assembly or rehabilitation.
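The gain manipulation described above can be sketched as a simple scaling of the physical reach distance; the function and condition names below are illustrative assumptions, not the study's actual implementation.

```python
# Illustrative sketch (assumed, not the study's code) of the visuo-motor
# gain manipulation: the visual stylus is rendered at a scaled fraction
# of the physically reached distance.

GAINS = {"minus": -0.20, "neutral": 0.0, "plus": +0.20}

def visual_reach_distance(physical_distance_cm: float, condition: str) -> float:
    """Return the rendered (visual) stylus distance for a physical reach."""
    return physical_distance_cm * (1.0 + GAINS[condition])

# For a 50 cm physical reach, the visual stylus appears at roughly
# 40 cm (Minus), 50 cm (Neutral), or 60 cm (Plus).
```

Under this mapping, calibrating in the Minus condition teaches participants to reach farther than the visual target, consistent with the post-interaction pattern reported above.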
In these tasks, participants typically have distorted perceptual information in the IVE as compared to the real world, in part due to technological limitations such as limited visual field of view, resolution, latency, and jitter. In an empirical evaluation, we asked the following questions: i) how do the perceptual differences between the virtual and real world affect our ability to accurately reach to the locations of 3D objects, and ii) how do participants' motor responses differ between the presence and absence of visual and haptic feedback? We examined factors such as the velocity and distance of physical reaching behavior between the real world and the IVE, both in the presence and absence of visuo-haptic information. The results suggest that physical reach responses vary systematically between real and virtual environments, especially in situations involving the presence or absence of visuo-haptic feedback. Our study provides a methodological framework for analyzing reaching motions for selection and manipulation with novel 3D interaction metaphors, and for characterizing visuo-haptic versus non-visuo-haptic physical reaches in virtual and real-world situations. While research has demonstrated that self-avatars can enhance one's sense of presence and improve distance perception, the effects of self-avatar fidelity on near-field distance estimation have yet to be investigated. Thus, we investigated the effect of the visual fidelity of the self-avatar on users' depth judgments, reach boundary perception, and the properties of physical reach motion. Previous research has demonstrated that a self-avatar representation of the user enhances the sense of presence [37], and that even a static avatar can improve distance estimation at far distances [59, 48]. In this study, performance with a virtual avatar was also compared to real-world performance.
Three levels of fidelity were tested: 1) an immersive self-avatar with realistic limbs, 2) a low-fidelity self-avatar showing only joint locations, and 3) an end-effector only. There were four primary hypotheses. First, we hypothesized that the mere presence of a self-avatar or end-effector would calibrate users' interaction-space depth perception in an IVE, so that participants' distance judgments would improve after the calibration phase regardless of the self-avatar's visual fidelity. Second, the magnitude of the change from pre-test to post-test would differ significantly based on the visual detail of the self-avatar presented to the participants (self-avatar vs. low-fidelity self-avatar and end-effector). Third, we predicted that distance estimation accuracy would be highest in the immersive self-avatar condition and lowest in the end-effector condition. Fourth, we predicted that the properties of physical reach responses would vary systematically between the visual fidelity conditions. The results suggest that reach estimations become more accurate as the visual fidelity of the avatar increases, with accuracy for high-fidelity avatars approaching real-world performance as compared to the low-fidelity and end-effector conditions. In all conditions, reach estimations also became more accurate after receiving feedback during the calibration phase. Lastly, we examined factors such as path length, time to complete the task, and the average velocity and acceleration of physical reach motion, and compared all the IVE conditions with the real world. The results suggest that physical reach responses vary systematically between the VR viewing conditions and the real world.
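The kinematic measures compared above (path length, completion time, average velocity, average acceleration) can be computed from timestamped 3D tracker samples; the sketch below is a hedged illustration of such an analysis, not the dissertation's actual pipeline, and the sample data is hypothetical.

```python
# Illustrative sketch (assumed, not the study's code) of reach-kinematics
# measures computed from a sequence of (time, x, y, z) stylus samples.
import math

def reach_metrics(samples):
    """samples: list of (t_seconds, x, y, z) tracker readings in meters."""
    path_length = 0.0
    speeds, midtimes = [], []
    for (t0, x0, y0, z0), (t1, x1, y1, z1) in zip(samples, samples[1:]):
        step = math.dist((x0, y0, z0), (x1, y1, z1))  # segment length
        path_length += step
        speeds.append(step / (t1 - t0))               # segment speed
        midtimes.append(0.5 * (t0 + t1))              # segment midpoint time
    duration = samples[-1][0] - samples[0][0]
    # mean magnitude of speed change per unit time between segments
    accels = [abs(v1 - v0) / (m1 - m0)
              for (v0, m0), (v1, m1) in zip(zip(speeds, midtimes),
                                            zip(speeds[1:], midtimes[1:]))]
    return {
        "path_length": path_length,
        "duration": duration,
        "avg_velocity": path_length / duration,
        "avg_accel": sum(accels) / len(accels) if accels else 0.0,
    }

# Hypothetical reach: straight line along x at a constant 1 m/s,
# sampled every 0.1 s, so avg_velocity ≈ 1.0 and avg_accel ≈ 0.
trace = [(0.1 * i, 0.1 * i, 0.0, 0.0) for i in range(6)]
metrics = reach_metrics(trace)
```

A longer path length or lower average velocity under a given viewing condition would then surface directly in this kind of per-trial summary when comparing IVE and real-world reaches.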
