Description: When using a 360° image-based VR tour for education, it can be difficult to identify a described entity (a target) in the 360° image-based scene, particularly because the VR tour observer only sees a small part of the image at any given moment, so targets can be out of view. To help users locate these targets, we designed various mechanisms for guiding a user's visual attention to a particular point of interest in 360° image-based educational VR tour platforms. We compared three different visual mechanisms (arrow, butterfly guide, and radar) against a fourth condition with no guidance. We used four different environments, and the assignment of environments to mechanisms was randomized. To make sure that a target is always noticeable and unambiguously identifiable once it is in the user's view, a circular, slightly transparent green marker is overlaid on top of the target. As the user's view gets closer to the target, the marker shrinks to a tiny circle.
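As an illustration of the angular bookkeeping behind such mechanisms, the sketch below computes a signed horizontal (yaw) offset from the current view direction to a target, which an arrow-style cue could use to decide which way to point, and scales the overlay marker so that it shrinks as the target approaches the center of view. The 60° falloff and the minimum scale are illustrative assumptions, not the values used in the actual platform.

```python
def signed_yaw_offset(view_yaw_deg: float, target_yaw_deg: float) -> float:
    """Signed shortest-path horizontal angle (degrees) from the current
    view direction to the target; positive means 'turn right'."""
    return (target_yaw_deg - view_yaw_deg + 180.0) % 360.0 - 180.0

def marker_scale(offset_deg: float,
                 full_size_deg: float = 60.0,
                 min_scale: float = 0.1) -> float:
    """Shrink the overlay marker as the target nears the view center.
    The 60-degree falloff and minimum scale are illustrative values only."""
    t = min(abs(offset_deg) / full_size_deg, 1.0)
    return min_scale + (1.0 - min_scale) * t

if __name__ == "__main__":
    # Example: the user looks at yaw 10 deg, the target sits at yaw -120 deg.
    offset = signed_yaw_offset(10.0, -120.0)
    print(f"turn {'right' if offset > 0 else 'left'} by {abs(offset):.0f} deg")
    print(f"marker scale: {marker_scale(offset):.2f}")
```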
Each of the four VR tours used in the study is between 5 and 7 minutes long and features a particular type of environment, ranging from completely natural to urban to indoor. Each tour was created from three 360° images taken at different locations at the respective site. In each tour, users are teleported through this sequence of locations, spending a couple of minutes at each location, during which they can look around freely while listening to audio commentary explaining three to five different targets, for a total of 12 per tour. The targets are selected such that the horizontal angle between consecutive targets is at least 100°, meaning that turning the head is required to bring the next target into view. In contrast, the targets' vertical positions do not vary much; all of them are relatively close to the horizontal plane.
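The 100° separation constraint amounts to a small piece of angular arithmetic. The sketch below, using hypothetical target yaw positions rather than values from the study materials, checks whether a sequence of targets satisfies the minimum horizontal separation.

```python
def horizontal_separation(yaw_a_deg: float, yaw_b_deg: float) -> float:
    """Unsigned shortest-path horizontal angle between two targets (degrees)."""
    return abs((yaw_b_deg - yaw_a_deg + 180.0) % 360.0 - 180.0)

def satisfies_separation(target_yaws_deg: list, min_sep_deg: float = 100.0) -> bool:
    """True if every pair of consecutive targets is at least min_sep_deg apart."""
    return all(
        horizontal_separation(a, b) >= min_sep_deg
        for a, b in zip(target_yaws_deg, target_yaws_deg[1:])
    )

if __name__ == "__main__":
    # Hypothetical yaw positions (degrees) of four targets at one location.
    print(satisfies_separation([0.0, 110.0, -140.0, 30.0]))  # True
```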
The following image shows students using an Oculus Go headset and a swivel chair while experiencing the different guiding mechanisms. We also collected head-tracking data in this study; an example of the head-tracking data is shown in the third image.
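Head-tracking data of this kind is essentially a timestamped series of orientation samples. The sketch below shows one hypothetical logging format (a CSV with yaw and pitch columns sampled at a fixed rate); the study's actual data format, sampling rate, and SDK calls are not described here, so the orientation source is a stand-in.

```python
import csv
import time

def record_head_tracking(get_head_orientation, duration_s: float = 5.0,
                         sample_hz: float = 10.0,
                         out_path: str = "head_tracking.csv") -> None:
    """Write timestamped yaw/pitch samples to a CSV file.
    `get_head_orientation` is a stand-in for whatever the headset SDK
    exposes; the column layout and sampling rate are assumptions."""
    interval = 1.0 / sample_hz
    end = time.time() + duration_s
    with open(out_path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["timestamp_s", "yaw_deg", "pitch_deg"])
        while time.time() < end:
            yaw, pitch = get_head_orientation()
            writer.writerow([f"{time.time():.3f}", f"{yaw:.2f}", f"{pitch:.2f}"])
            time.sleep(interval)

if __name__ == "__main__":
    import random
    # Fake orientation source for demonstration; a real app would query the headset.
    fake_orientation = lambda: (random.uniform(-180, 180), random.uniform(-30, 30))
    record_head_tracking(fake_orientation, duration_s=1.0)
```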
Results: While all three mechanisms were perceived as improvements over the no-guidance condition and resulted in significantly improved target-finding times, the arrow mechanism stands out as the most generally accepted and favored approach. In contrast, the other two (butterfly guide and radar) received a more polarized assessment due to their specific strengths and drawbacks. From the perspective of designing future XR applications that will benefit from visual attention-guiding capabilities, the large individual differences are a strong argument for implementing multiple mechanisms and letting each user choose the one they feel most comfortable with.
Collaborators: Jan Oliver Wallgrün, Mahda M. Bagher, Pejman Sajjadi, Alexander Klippel.
Publication:
Wallgrün, J. O., Bagher, M. M., Sajjadi, P., & Klippel, A. (2020, March). A Comparison of Visual Attention Guiding Approaches for 360° Image-Based VR Tours. In 2020 IEEE Conference on Virtual Reality and 3D User Interfaces (VR) (pp. 83-91). IEEE. https://doi.org/10.1109/VR46266.2020.00026