Research

Our research aims to verify the learning effectiveness of iVR-based virtual environments (VEs) and to establish virtual-navigation interventions that improve spatial learning. Specifically, our contributions span two related areas: immersion, in the sense of a system's technical features, and geographic scale, i.e., how much of the environment is visible from a single location. We briefly introduce each area below.

Effects of Immersion on Spatial Learning

Travel in real-world settings refers to moving from one location to another, either on foot or by other means of transportation. With the advent of immersive virtual reality (iVR), we can accurately and efficiently create VEs for any place on Earth. Although iVR is attractive to users, we know relatively little about whether higher levels of immersion improve or impair spatial learning outcomes, and equally little about the effects of different modes of travel in virtual reality. Active locomotion fosters spatial learning in the real world, but in iVR, teleportation is often used instead to bridge larger distances, cope with the size limits of the tracked area, and reduce the risk of cybersickness. However, discontinuous travel (i.e., teleportation) is likely to reduce spatial learning because it eliminates both visual flow and bodily cues. The main question we address in this research is how different systems and travel modes affect how well users learn an environment: is it essential to maintain continuous viewpoint transitions, or does an immersive VE confer an advantage even when we must switch to a discrete form of travel? In short, there is much interest within the field of VEs and iVR in how immersion levels and forms of travel affect spatial learning.

Our study takes advantage of the flexible stimulus manipulation afforded by VR systems, integrating VE navigation with well-established metrics of spatial knowledge acquisition (such as distance estimates, direction judgments, and cognitive mapping) into a cross-platform spatial learning paradigm that can be administered on, for example, the HTC Vive, mobile VR, and desktop computers. In this area, we presented one abstract (S55) at the International Conference on Spatial Cognition (2018) and published one paper comparing the Oculus Go to the HTC Vive at the International Conference on Immersive Learning (2019). Another paper comparing desktop computers to the HTC Vive has been submitted to Spatial Cognition & Computation and is currently in revision. In the future, we plan to extend our spatial learning paradigm to probe the role of immersion in many place-based disciplines (e.g., geosciences, geography, and biology). For instance, we have a full paper on the media effects of virtual field trips for learning about geology accepted at the IEEE Virtual Reality 2020 Conference.
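As an illustration of how a direction-judgment measure can be scored in such a paradigm, the sketch below computes absolute pointing error on a 2-D ground plane. The function name and the coordinate convention (bearings in degrees, 0 = north) are our own illustrative assumptions, not code from the studies:

```python
import math

def pointing_error(observer, target, pointed_bearing_deg):
    """Absolute angular error (degrees) between the direction a participant
    pointed and the true bearing from observer to target on a 2-D plane."""
    dx = target[0] - observer[0]
    dy = target[1] - observer[1]
    # Bearing convention: 0 deg = +y (north), increasing clockwise.
    true_bearing = math.degrees(math.atan2(dx, dy)) % 360
    diff = abs(pointed_bearing_deg % 360 - true_bearing)
    return min(diff, 360 - diff)  # wrap so error lies in [0, 180]

# A participant at the origin points at bearing 120 deg toward a landmark
# that actually lies due east (true bearing 90 deg): 30 deg of error.
error = pointing_error((0.0, 0.0), (10.0, 0.0), 120.0)
```

Averaging such errors across landmarks gives a single pointing-accuracy score per participant, which can then be compared across display platforms.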

The ground-level perspective of using teleportation on a desktop screen.

Teleportation using the hand controller in an HTC Vive HMD.


Onsite pointing task on the desktop screen.

Model-building task using the HTC Vive.

Effects of Geographic Scale on Spatial Learning

The second area of our research centers on geographic scale, defined as the spatial extent visually accessible from a single viewpoint, and its impact on spatial learning in an environmental space (i.e., one that requires movement to apprehend). Investigating the relationship between the human body and its spatial environment is a critical component of understanding how spatial memories are acquired. However, few empirical evaluations have examined how the relative visual accessibility of an environment affects spatial learning. Our research aims to use immersive technologies to establish novel virtual-navigation interventions that can improve spatial learning. We manipulated geographic scale by changing the learner's perspective and by modifying the environment to increase visibility from single locations. Our research in this area has produced one conference paper at the IEEE Virtual Reality 2019 Conference, one extended abstract at the IEEE Virtual Reality 2020 Conference, and one journal paper published in Cognitive Research: Principles and Implications.

Change of geographic scale at a single position within a virtual maze. Left: ground perspective (4.5ft/1.4m above ground); the blue flag was the only visible landmark. Right: elevated perspective (17.5ft/5.3m above ground); both the blue flag and Big Ben could be seen from a single viewpoint. The spatial extent visually accessible from a single viewpoint was controlled by hedges along both sides of the path.
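The occlusion logic behind this manipulation can be sketched with simple 2-D line-of-sight geometry: raising the eye above the hedge tops lets the sightline clear them, so distant landmarks become visible. The hedge height and distances below are illustrative assumptions, not the actual maze dimensions:

```python
def min_visible_height(eye_h, hedge_h, hedge_dist, target_dist):
    """Minimum height (m) an object at target_dist must reach to be seen
    over a hedge of height hedge_h at hedge_dist, for an eye at eye_h.
    Uses the 2-D sightline that grazes the hedge top."""
    required = eye_h + (hedge_h - eye_h) * target_dist / hedge_dist
    return max(required, 0.0)  # never below ground level

# Ground perspective (1.4 m eye height): a landmark 50 m away must be
# 7.4 m tall to clear an assumed 2 m hedge 5 m away. From the elevated
# perspective (5.3 m), the sightline passes over the hedge entirely.
ground = min_visible_height(1.4, 2.0, 5.0, 50.0)    # 7.4
elevated = min_visible_height(5.3, 2.0, 5.0, 50.0)  # 0.0
```

This is why only a tall landmark like the blue flag is visible from the ground perspective, while the elevated perspective reveals Big Ben as well.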


Left: six participants in the same session using Oculus Go headsets to learn the virtual maze. Right: participant’s view in the onsite pointing task.