Papers
Session 1: Presence and Immersion
Wednesday, 9th October
11:00 – 12:30
Context Relevant Locations as an Alternative to the Place Illusion in Augmented Reality
Kalila Shapiro, Anthony Steed
Presence is a powerful aspect of Virtual Reality (VR). However, there has been no consensus on how to achieve presence in Augmented Reality (AR), or whether it exists there at all. It can reasonably be argued that the plausibility illusion, a key component of presence, exists in AR. The place illusion, however, cannot be obtained in AR, as there is no way to make users feel transported somewhere else when they are limited to what they can physically see in front of them. Recently, however, it has been argued that coherence or congruence are important parts of the place and plausibility illusions. The implication for AR is that AR content might invoke a stronger plausibility illusion if it is consistent with the physical place in which the content is situated. In this study, we define the concept of a Context-Relevant Location (CRL), a physical place that is congruent with the experience. We present a study in which users interacted with AR objects in a CRL and in a generic environment. The results indicate that presence was higher in the CRL setting than in the generic environment. These findings contribute to the debate about providing a concrete description of presence-like phenomena in AR, and we posit that CRLs play a role similar to that of the place illusion in an AR setting.
Lifter for VR Headset: Enhancing Immersion, Presence, Flow, and Alleviating Mental and Physical Fatigue during Prolonged Use
Jae Hoon Kim, DongYun Joo, Hyemin Shin, Sun-Uk Lee, Gerard Jounghyun Kim, Hanseob Kim
Virtual reality (VR) headsets are still relatively heavy, imposing a significant physical and mental burden and negatively affecting the VR user experience, particularly during extended use. In this paper, we present a prototype design of the “Lifter”, which uses a counterbalanced wire-pulley mechanism to partially relieve the weight of the VR headset (by 50% to 85%). A human subject study confirmed that the Lifter not only relieved physical fatigue (as expected) but also significantly reduced mental burden and improved the sense of immersion, presence, and flow (perception of time passing) during prolonged use (30 minutes or more).
(Reproduction Paper) The Influence of a Low-Resolution Peripheral Display Extension on the Perceived Plausibility and Presence
Larissa Brübach, Marius Röhm, Franziska Westermeier, Carolin Wienrich, Marc Erich Latoschik
The Field of View (FoV) is a central technical display characteristic of Head-Mounted Displays (HMDs) and has been shown to have a notable impact on important aspects of the user experience. For example, an increased FoV has been shown to foster a sense of presence and improve peripheral information processing, but it also increases the risk of VR sickness. This article investigates the impact of a wider but inhomogeneous FoV on perceived plausibility, also measuring its impact on presence, spatial presence, and VR sickness as a comparison to and replication of effects from prior work. We developed a low-resolution peripheral display extension to pragmatically increase the FoV, taking into account the lower peripheral acuity of the human eye. While this design results in inhomogeneous resolution at the display edges of the HMD, it is also a low-complexity and low-cost extension. However, its effects on important VR qualities have to be identified. We conducted a pre-study (30 participants) and a user study (27 participants). In a randomized 2×3 within-subject design, participants played three rounds of bowling in VR, both with and without the display extension. Two rounds contained incongruencies to induce breaks in plausibility. In the user study, we enhanced one incongruency to make it more noticeable and improved the shortcomings of the display extension that had previously been identified. However, no effect of the low-resolution FoV extension on perceived plausibility, presence, spatial presence, or VR sickness was measured in either study. We did find that one of the incongruencies was able to cause a break in plausibility without the extension, confirming the results of a previous study.
MeetingBenji: Tackling Cynophobia with Virtual Reality, Gamification, and Biofeedback
Inês Alves, Augusto Esteves
Phobias, particularly animal phobias like cynophobia (fear of dogs), disrupt the lives of those affected, for instance by limiting outdoor activities. While virtual reality exposure therapy (VRET) has emerged as a potential treatment for this phobia, these efforts have been limited by high dropout rates and a lack of anxiety control in people who suffer from cynophobia. Inspired by these challenges, we present MeetingBenji, a VRET system for cynophobia that uses (i) gamification to enhance motivation and engagement, and (ii) biofeedback to facilitate self-control and reduce physiological responses. In a study (N=10) that compared the effects of displaying dogs in 3D scenes and 360º videos using the Behavioral Approach Test (BAT), in which participants are increasingly exposed to the source of their phobia, participants reported a high level of immersion in the exposure sequence. Further, they reported feeling more anxiety with 3D content than with 360º video (60%), lower heart rates in the presence of biofeedback (between 1.71% and 7.46%), and improved self-control across the three exposure levels. They appreciated the gamified elements and completed all exposure levels. This study suggests that VRET with gamification and biofeedback is an effective approach to promote habituation in people with cynophobia.
iStrayPaws: Immersing in a Stray Animal’s World through First-Person VR to Bridge Human-Animal Empathy
Yao Xu, Ding Ding, Yongxin Chen, Zhuying Li, Xiangyu Xu
While Virtual Reality Perspective-Taking (VRPT) has demonstrated its efficiency in inducing empathy, its application primarily focuses on vulnerable humans, not animals. Existing animal-related work mainly targets farm animals and wildlife. In this work, we focus on stray animals and introduce iStrayPaws, a VRPT system that simulates stray animals’ challenging lives. The system offers users an immersive first-person journey into the world of stray animals as they encounter difficulties such as inclement weather, hunger, and illness. Enriched with audio-visual and kinesthetic design, the system seeks to deepen users’ understanding of stray animals’ lives and foster profound emotional connections. To evaluate the system, we conducted a user study, which showed that VRPT recipients exhibited significant improvement in both state and trait empathy compared to a traditional method. Our research not only delivers a novel, accessible, and interactive animal empathy experience but also provides innovative solutions for addressing stray animal issues and advancing broader animal welfare work.
Exploring Presence in Interactions with LLM-Driven NPCs: A Comparative Study of Speech Recognition and Dialogue Options
Frederik Roland Christiansen, Linus Nørgaard Hollensberg, Niko Bach Jensen, Kristian Julsgaard, Kristian Nyborg Jespersen, Ivan Nikolov
Head-mounted displays (HMDs), such as virtual reality (VR) headsets, are an effective way of immersing the user in a virtual environment (VE). HMDs, like other mediated experiences, seek to make the user forget the mediated nature of the experience. We break down the concept of presence and how to make the user feel present within the VE. We combine theories of how immersive systems create a sense of place illusion (PI) and plausibility illusion (Psi), and how presence can be achieved when engaging with social actors within a medium. By conducting a literature review on the use of large language models (LLMs) for narrative content, we find that research into this topic is scarce, but that interacting with LLM-driven NPCs might improve user experience. A state-of-the-art analysis of VR games was conducted to map out current user-NPC interaction implementations and how they could be improved using LLMs. Dialogue-option interfaces were the most prevalent way of interacting with NPCs, and only one game allowed the player to speak to NPCs using voice commands. Previously, voice commands have been limited in their capability, but advancements in LLMs allow user-NPC interactions to approach the levels of abstraction found in human-to-human interactions. This paper covers the development of a murder mystery game used to test the difference in levels of presence and game experience between two input modalities for user-NPC interactions: dialogue options and speech recognition. Only the challenge component of the Game Experience Questionnaire showed a significant difference between modalities, but interaction with NPCs through speech recognition tended to score higher in all three social presence components.
Session 2: Navigation and Motion
Wednesday, 9th October
13:30 – 15:00
Effects of Different Tracker-driven Direction Sources on Continuous Artificial Locomotion in VR
Christos Lougiakis, Theodoros Mandilaras, Akrivi Katifori, Giorgos Ganias, Ioannis Panagiotis Ioannidis, Maria Roussou
Continuous artificial locomotion in VR typically involves users selecting their direction using controller input, with the forward direction determined by the Head, Hands, or less commonly, the Hip. The effects of these different sources on user experience are under-explored, and Feet have not been used as a direction source. To address these gaps, we compared these direction sources, including a novel Feet-based technique. A user study with 22 participants assessed these methods in terms of performance, preference, motion sickness, and sense of presence. Our findings indicate high levels of presence and minimal motion sickness across all methods. Performance differences were noted in one task, where the Head outperformed the Hand. The Hand method was the least preferred, feeling less natural and realistic. The Feet method was found to be more natural than the Head and more realistic than the Hip. This study enhances understanding of direction sources in VR locomotion and introduces Feet-based direction as a viable alternative.
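To make the notion of a tracker-driven direction source concrete, the sketch below (plain Python, an illustrative assumption rather than the authors' implementation) shows how a continuous locomotion technique can derive its travel direction from whichever tracked device is chosen, head, hand, hip, or foot: the device's forward vector is projected onto the ground plane and combined with the thumbstick deflection.

    import numpy as np

    def locomotion_velocity(device_rotation, joystick, speed=1.5):
        """Velocity (m/s) for continuous artificial locomotion.
        device_rotation: 3x3 rotation matrix of the chosen direction source
                         (head, hand, hip, or foot tracker), Y-up convention.
        joystick: (x, y) thumbstick deflection in [-1, 1]."""
        forward = device_rotation @ np.array([0.0, 0.0, 1.0])  # device's local +Z axis
        forward[1] = 0.0                                       # project onto the ground plane
        norm = np.linalg.norm(forward)
        if norm < 1e-6:                                        # device points straight up or down
            return np.zeros(3)
        forward /= norm
        right = np.cross(np.array([0.0, 1.0, 0.0]), forward)   # horizontal right vector
        x, y = joystick
        return speed * (y * forward + x * right)

    # Example: hip tracker yawed 90 degrees, thumbstick pushed fully forward.
    yaw = np.radians(90)
    rot = np.array([[np.cos(yaw), 0, np.sin(yaw)],
                    [0, 1, 0],
                    [-np.sin(yaw), 0, np.cos(yaw)]])
    print(locomotion_velocity(rot, (0.0, 1.0)))   # moves along world +X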
Influence of Rotation Gains on Unintended Positional Drift during Virtual Steering Navigation in Virtual Reality
Hugo Brument, Arthur Chaminade, Ferran Argelaguet Sanz
Unintended Positional Drift (UPD) is a phenomenon that occurs during navigation in Virtual Reality (VR). It is characterized by unconscious or unintentional physical movements of the user in the workspace while using a locomotion technique (LT) that does not require physical displacement (e.g., steering, teleportation). Recent work showed that factors such as the LT used and the type of trajectory can influence UPD. However, little is known about the influence of rotation gains (which are commonly used in redirection-based LTs) on UPD during navigation in VR. In this paper, we conducted two user studies to assess the influence of rotation gains on UPD. In the first study, participants had to perform consecutive turns in a corridor virtual environment. In the second study, participants had to freely explore a large office floor and collect spheres. We compared conditions with and without rotation gains, and we also varied the turning angle while considering factors such as sensitivity to cybersickness and learning effects. We found that rotation gains and lower turning angles decreased UPD in the first study, but the presence of rotation gains increased UPD in the second study. This work contributes to the understanding of UPD, an often overlooked topic, and discusses the design implications of these results to improve navigation in VR.
Exploring the Impact of Visual Scene Characteristics and Short-term Learning Effects on Rotation Gain Perception in VR
Qi Wen Gan, Sen-Zhe Xu, Fang-Lue Zhang, Song-Hai Zhang
Rotation gain is a subtle manipulation technique commonly employed in Redirected Walking (RDW) methods due to its superior capability to alter a user’s virtual trajectory. Previous studies have reported that the imperceptible range of rotation gains is influenced by various factors, resulting in different detection threshold values, which may alter RDW performance. In this study, we focus on the effects of scene visual characteristics on rotation gain and rotation gain thresholds (RGTs), which have been less explored in this area. In our experiments, we focus on three visual characteristics: visual density, spatial size, and realism. Each characteristic is tested at two different levels, resulting in a design of eight distinct VR scenes. Through extensive statistical analysis, we show that spatial size is a meaningful factor influencing user perception of rotation gain in different virtual environments (VEs). No significant sensitivity differences were found for visual density and realism. We show that the short-term temporal effect is another predominant factor influencing user perception of rotation gain, even when users experience different visual stimuli in VEs, such as the different scene visual characteristic settings in our study. This result indicates that users’ learning effects on rotation gain can occur over intervals as short as overnight, rather than over weeks.
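For readers unfamiliar with the manipulation itself, the following minimal sketch (illustrative only, not code from the paper) shows the core idea behind a rotation gain: the physical yaw change measured each frame is scaled by a gain before being applied to the virtual camera, so gains above 1 make the virtual turn larger than the physical one. Detection-threshold studies such as this one estimate how far the gain can deviate from 1 before users notice.

    def apply_rotation_gain(virtual_yaw_deg, prev_physical_yaw_deg,
                            curr_physical_yaw_deg, gain):
        """Return the updated virtual yaw after scaling the physical yaw delta."""
        delta = curr_physical_yaw_deg - prev_physical_yaw_deg   # physical head rotation this frame
        delta = (delta + 180.0) % 360.0 - 180.0                 # wrap into (-180, 180]
        return virtual_yaw_deg + gain * delta

    # Example: a 90-degree physical turn rendered with gain 1.2 becomes a 108-degree virtual turn.
    virtual = physical = 0.0
    for _ in range(90):                                          # simulate 1-degree physical steps
        virtual = apply_rotation_gain(virtual, physical, physical + 1.0, gain=1.2)
        physical += 1.0
    print(round(virtual, 1))                                     # 108.0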
Wheelchair Proxemics: interpersonal behaviour between pedestrians and power wheelchair drivers in real and virtual environments
Emilie Leblong, Fabien Grzeskowiak, Sebastien Thomas, Louise Devigne, Marie Babel, Anne-Hélène Olivier
Immersive environments provide opportunities to learn and transfer skills to real life. This opens up new areas of application, such as rehabilitation, where people with neurological disabilities can learn to drive a power wheelchair (PWC) with the help of immersive simulators. To expose these specific users to daily-life interaction situations, it is important to ensure realistic interactions with the virtual humans that populate the simulated environment, as PWC users should learn to drive and navigate under everyday conditions. While non-verbal pedestrian-pedestrian interactions have been extensively studied, understanding pedestrian-PWC user interactions during locomotion is still an open research area. Our study aimed to investigate the regulation of interpersonal distance (i.e., proxemics) between a pedestrian and a PWC user in real and virtual situations. We designed two experiments in which (1) participants had to reach a goal by walking (respectively, driving a PWC) and avoid a static PWC confederate (respectively, a standing confederate), and (2) participants had to walk to a goal and avoid a static confederate seated in a PWC, in real and virtual conditions. Our results showed that interpersonal distances differed significantly depending on whether the pedestrian avoided the PWC user or vice versa, but were not affected by the environment (VR vs. real). We also showed an influence of the orientation of the person to be avoided. We discuss these findings with respect to pedestrian-pedestrian interactions, as well as their implications for the design of virtual humans interacting with PWC users in rehabilitation applications. In particular, we propose a proof of concept by adapting existing microscopic crowd simulation algorithms to consider the specificity of pedestrian-PWC user interactions.
Semi-Automated Guided Teleportation through Immersive Virtual Environments
Tim Weissker, Marius Meier-Krueger, Pauline Bimberg, Robert W. Lindeman, Torsten Wolfgang Kuhlen
Immersive knowledge spaces like museums or cultural sites are often explored by traversing pre-defined paths that are curated to unfold a specific educational narrative. To support this type of guided exploration in VR, we present a semi-automated, hands-free path traversal technique based on teleportation that features a slow-paced interaction workflow targeted at fostering knowledge acquisition and maintaining spatial awareness. In an empirical user study with 34 participants, we evaluated two variations of our technique, differing in the presence or absence of intermediate teleportation points between the main points of interest along the route. While visiting additional intermediate points was objectively less efficient, our results indicate significant benefits of this approach regarding the user’s spatial awareness and perception of interface dependability. However, the user’s perception of flow, presence, attractiveness, perspicuity, and stimulation did not differ significantly. The overall positive reception of our approach encourages further research into semi-automated locomotion based on teleportation and provides initial insights into the design space of successful techniques in this domain.
The Effects of Electrical Stimulation of Ankle Tendons on Redirected Walking with the Gradient Gain
Takashi Ota, Keigo Matsumoto, Kazuma Aoyama, Tomohiro Amemiya, Takuji Narumi, Hideaki Kuzuoka
As a redirected walking technique, a method has been proposed to enable users to walk in an undulating virtual space even in a flat physical environment by setting the slope of the floor in the virtual environment to be different from that in the physical environment without causing discomfort. However, the slope range in which discrepancies between visual and proprioceptive sensations are not perceived is limited, restricting the slopes that can be presented. In this study, we proposed redirected walking using electrical stimulation of the Achilles and tibialis anterior muscle tendons, extending the applicable slope range of redirected walking without compromising the natural gait sensation. Electrical stimulation of the ankle tendons affects the proprioceptive sensation and gives the illusion of tilting in the standing posture, expanding the applicable slope range. Two experiments showed that the proposed method improved the experience of uphill and downhill walking in terms of the range of the virtual slope where a high naturalness of gait and a high congruency of visual and proprioceptive sensations are maintained. Notably, electrical stimulation of the Achilles tendons significantly improved the naturalness of the walking experience during virtual downhill walking, which has been considered more challenging in previous studies.
Session 3: Technologies
Wednesday, 9th October
15:30 – 17:00
Neural Motion Tracking: Formative Evaluation of Zero Latency Rendering
Daniel Roth, Valentin Bräutigam, Nidhi Joshi, Constantin Kleinbeck, Hannah Schieber, Julian Kreimeier
Low motion-to-photon latencies between physical movement and rendering updates are crucial for an immersive VR experience and for avoiding user discomfort and sickness. Current methods aim to minimize the delay between motion measurement and rendering at the cost of increased technical complexity and possibly decreased accuracy. Because they rely on capturing physical motion, these strategies will, by nature, not achieve zero-latency rendering, or they will be based on prediction and the resulting uncertainty. In this paper, we present and evaluate an alternative concept and proof of principle for VR motion tracking that enables motion-to-photon latencies of zero or even below zero. In contrast to measuring physical motion, our concept, which we term neural motion tracking, senses and assesses motion through human neural activation of the somatic nervous system. The key principle is to exploit the physiological timeframe between a user’s intention and the physical execution of a motion, and thus to foresee upcoming motion ahead of the physical movement. This is achieved by sampling preceding electromyographic (EMG) signals before the muscle activation. The electromechanical delay (EMD) between a change in muscle activation potential and the actual physical movement opens a gap in which measurements can be taken and evaluated before the physical motion. In a first proof of principle, we evaluated two activities, arm bending and head rotation, and compared them for the first time to professional optical tracking. The measured latencies show that it is possible to predict muscle movement and update the rendering up to 2 ms before its physical execution, which is detected by optical tracking only after approximately 4 ms. However, to make the best use of this advantage, EMG sensor data should be of as high quality as possible (i.e., low noise and from electrodes placed close to the muscle). Our results empirically quantify this characteristic for the first time in comparison to state-of-the-art optical tracking systems for VR. We discuss our results and potential pathways to motivate further work toward markerless and latency-free motion tracking.
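The sketch below illustrates the underlying principle in a simplified form (our own assumptions, not the authors' processing pipeline): a surface-EMG channel is rectified and smoothed, and activation onset is flagged as soon as the envelope exceeds a baseline-derived threshold. Because the electromechanical delay separates activation from movement, such an onset can precede the optically tracked motion by a few milliseconds.

    import numpy as np

    def emg_onset_index(emg, fs, baseline_s=0.5, k=5.0, win_s=0.02):
        """Return the sample index of detected muscle-activation onset, or None."""
        rectified = np.abs(emg - np.mean(emg))                   # remove offset, rectify
        win = max(1, int(win_s * fs))
        envelope = np.convolve(rectified, np.ones(win) / win, mode="same")  # moving-average envelope
        baseline = envelope[: int(baseline_s * fs)]              # assumed rest period at the start
        threshold = baseline.mean() + k * baseline.std()         # statistical onset threshold
        above = np.nonzero(envelope > threshold)[0]
        return int(above[0]) if above.size else None

    # Synthetic example: rest for 1 s, then an 80 Hz burst of activity (fs = 1 kHz).
    fs = 1000.0
    t = np.arange(0, 2, 1 / fs)
    emg = 0.01 * np.random.randn(t.size)
    emg[t >= 1.0] += 0.2 * np.sin(2 * np.pi * 80 * t[t >= 1.0])
    onset = emg_onset_index(emg, fs)
    print(None if onset is None else f"onset at {onset / fs:.3f} s")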
Investigation of Redirection Algorithms in Small Tracking Spaces
Linda Krueger
In virtual reality, redirected walking lets users walk through virtual spaces that are larger than the physical tracking space set aside for their movements. This benefits immersion and spatial navigation compared to virtual locomotion techniques such as teleportation or joystick control. Different algorithms have been proposed to optimise redirected walking. These algorithms have been tested in simulation, in large spaces, and with small user studies. However, few studies have looked at the user experience of these algorithms in small tracking spaces. We conducted a user study to compare the performance of different redirected walking algorithms in a small tracking space of 3.5 m × 3.5 m. Three algorithms were chosen based on their approaches to redirection: Reset Only, Steer to Centre, and Alignment Based Redirection Control. Thirty-six people participated in the study. We found that users preferred Reset Only in this tracking space. Reset Only redirects users less and is easier to implement than Steer to Centre or Alignment Based Redirection Control. Additionally, Reset Only performed similarly to Steer to Centre and achieved better task performance than Alignment Based Redirection Control, despite resetting users more often. Based on these findings, we provide guidelines for developers working in small tracking spaces.
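As background for the algorithms compared above, the sketch below illustrates the general steer-to-centre idea in a generic textbook form (not the exact implementation evaluated in the study): every frame, a small amount of extra virtual yaw is injected, bounded by an assumed imperceptible curvature radius, with its sign chosen relative to the direction of the tracking-space centre. Depending on the engine's conventions, the sign may need flipping so that the user's compensatory physical turn bends their real path toward the centre.

    import math

    def steer_to_centre_rotation(pos_x, pos_z, heading_rad, speed_mps, dt,
                                 min_radius_m=7.5):
        """Extra virtual yaw (radians) to inject this frame, steering relative to (0, 0)."""
        to_centre = math.atan2(-pos_x, -pos_z)                   # bearing from user to centre
        diff = (to_centre - heading_rad + math.pi) % (2 * math.pi) - math.pi  # signed angle
        max_yaw_rate = speed_mps / min_radius_m                  # curvature limit (1 / radius)
        return math.copysign(min(abs(diff), max_yaw_rate * dt), diff)

    # Example: user 2 m from the centre, walking at 1 m/s, 90 degrees off the
    # centre bearing, at a 72 Hz frame rate.
    print(math.degrees(steer_to_centre_rotation(2.0, 0.0, 0.0, 1.0, dt=1 / 72)))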
Interactive Multi-GPU Light Field Path Tracing Using Multi-Source Spatial Reprojection
Erwan Leria, Markku Mäkitalo, Pekka Jääskeläinen, Mårten Sjöström, Tingting Zhang
Path tracing combined with multi-view displays enables progress towards ultrarealistic virtual reality. However, multi-view displays based on light field technology impose a heavy workload for real-time graphics due to the large number of views to be rendered. To achieve low-latency performance, computational effort can be reduced by path tracing only some views (source views) and synthesizing the remaining views (target views) through spatial reprojection, which reuses path-traced pixels from source views in target views. Deciding on the number of source views with respect to the available computational resources is not trivial, since spatial reprojection introduces dependencies in the otherwise trivially parallel rendering pipeline, and path tracing multiple source views increases the computation time. In this paper, we demonstrate how to reach near-perfect linear multi-GPU scalability through a coarse-grained distribution of the light field path tracing workload. Our multi-source method path traces a single source view per GPU, which helps decrease the number of dependencies. Reducing dependencies reduces the overhead of image transfers and G-buffer rasterization used for spatial reprojection. On a node with 4× RTX A6000 GPUs and 4 source views, we reach a light field rendering frequency of 3–19 Hz, which corresponds to an interactive rate. On four test scenes, we outperform state-of-the-art multi-GPU light field path tracing pipelines, achieving speedups of 1.65× up to 4.63× for 1D light fields of dimension 100 × 1 with each view at a resolution of 768 × 432, and 1.51× up to 3.39× for 2D stereo near-eye light fields of size 12 × 6 (left eye: 6 × 6 views; right eye: 6 × 6 views) at 1024 × 1024 per view.
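As a rough illustration of the spatial reprojection step mentioned above (standard pinhole reprojection; the paper's multi-GPU scheduling and G-buffer details are not reproduced here), a path-traced pixel from a source view can be unprojected using its depth and re-projected into a neighbouring target view, so the shaded sample is reused instead of tracing a new path:

    import numpy as np

    def reproject(px, py, depth, K, src_cam_to_world, world_to_tgt_cam):
        """Map pixel (px, py) with depth (metres) from the source view into the target view."""
        # Unproject to a source-camera-space point (depth measured along +Z).
        p_cam = depth * np.linalg.inv(K) @ np.array([px, py, 1.0])
        # Transform to world space and then into the target camera frame.
        p_world = src_cam_to_world @ np.append(p_cam, 1.0)
        p_tgt = world_to_tgt_cam @ p_world
        # Project with the shared intrinsics; return target pixel coordinates.
        uvw = K @ p_tgt[:3]
        return uvw[0] / uvw[2], uvw[1] / uvw[2]

    # Example: two horizontally offset views of a 768 x 432 light field (6 cm baseline).
    K = np.array([[600.0, 0, 384.0], [0, 600.0, 216.0], [0, 0, 1.0]])
    src_to_world = np.eye(4)
    world_to_tgt = np.eye(4); world_to_tgt[0, 3] = -0.06   # target camera 6 cm to the right
    print(reproject(384, 216, 2.0, K, src_to_world, world_to_tgt))   # shifted ~18 px left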
Exploring Visual Conditions in Virtual Reality for the Teleoperation of Robots
Paul Christopher, Stephen Robert Pettifer
In the teleoperation of robots, the absence of proprioception means that visual information plays a crucial role. Previous research has investigated methods to offer optimal vantage points to operators during teleoperation, with virtual reality (VR) being proposed as a mechanism to give the operator intuitive control over the viewpoint for improved visibility and interaction. However, the most effective perspective for robot operation and the optimal portrayal of the robot within the virtual environment remain unclear. This paper examines the impact of various visual conditions on users’ efficiency and preference in controlling a simulated robot via VR. We present a user study that compares two operating perspectives and three robot appearances. The findings indicate mixed user preferences and highlight distinct advantages associated with each perspective and appearance combination. We conclude with recommendations on selecting the most beneficial perspective and appearance based on specific application requirements.
Choose Your Reference Frame Right: An Immersive Authoring Technique for Creating Reactive Behavior
Sevinc Eroglu, Patric Schmitz, Kilian Sinke, David Anders, Torsten Wolfgang Kuhlen, Benjamin Weyers
Immersive authoring enables content creation for virtual environments without a break of immersion. To enable immersive authoring of reactive behavior for a broad audience, we present modulation mapping, a simplified visual programming technique. To evaluate the applicability of our technique, we investigate the role of reference frames in which the programming elements are positioned, as this can affect the user experience. Thus, we developed two interface layouts: “surround-referenced” and “object-referenced”. The former positions the programming elements relative to the physical tracking space, and the latter relative to the virtual scene objects. We compared the layouts in an empirical user study (n = 34) and found the surround-referenced layout faster, lower in task load, less cluttered, easier to learn and use, and preferred by users. Qualitative feedback, however, revealed the object-referenced layout as more intuitive, engaging, and valuable for visual debugging. Based on the results, we propose initial design implications for immersive authoring of reactive behavior by visual programming. Overall, modulation mapping was found to be an effective means for creating reactive behavior by the participants.
Session 4: Time
Thursday, 10th October
9:00 – 10:30
Development and Validation of a 3D Pose Tracking System towards XR Home Training to Relieve Back Pain
Nikolai Hepke, Moritz Scherer, Jörg Lohscheller, Steffen Mueller, Benjamin Weyers
Back pain significantly impacts society, leading to substantial economic costs and reducing individuals’ quality of life. A digital XR physiotherapist could support adherence to home-based training programs, thereby potentially enhancing treatment effectiveness. For the system to provide accurate biofeedback, which is crucial to its success, it must be capable of tracking exercise execution reliably and accurately. In this paper, we present the design of a robust 4-Kinect system capable of tracking human 3D pose, to be used in an autonomous, home-based XR rehabilitation program. The system is evaluated against OpenPose and validated using a marker-based Vicon system, considered the gold standard, in a study involving 20 healthy participants. The results show that the Kinect system overall has a lower absolute positional error, with a median of 1.2 cm, than OpenPose, with a median of 2.0 cm, and a lower median angular error of 5.2° over all keypoints (OpenPose: 5.9°). Furthermore, the time courses of the Kinect joint positions exhibit a higher correlation with the gold standard than those of the OpenPose system, as confirmed by a Bland-Altman analysis. Generally, the joints of the lower body could be tracked with a higher level of accuracy than those of the upper body. The study reveals that the multi-Kinect system is overall more robust and tracks exercises with higher accuracy than the multi-OpenPose system, making it better suited for a quantitative XR training program for home use.
Motion Passwords
Christian Rack, Lukas Schach, Felix Achter, Yousof Shehada, Jinghuai Lin, Marc Erich Latoschik
This paper introduces “Motion Passwords”, a novel biometric authentication approach where virtual reality users verify their identity by physically writing a chosen word in the air with their hand controller. This method allows combining three layers of verification: knowledge-based password input, handwriting style analysis, and motion profile recognition. As a first step towards realizing this potential, we focus on verifying users based on their motion profiles. We conducted a data collection study with 48 participants, who performed over 3800 Motion Password signatures across two sessions. We assessed the effectiveness of feature-distance and similarity-learning methods for motion-based verification using the Motion Passwords as well as specific and uniform ball-throwing signatures used in previous works. In our results, the similarity-learning model was able to verify users with the same accuracy for both signature types. This demonstrates that Motion Passwords, even when applying only the motion-based verification layer, achieve reliability comparable to previous methods. This highlights the potential for Motion Passwords to become even more reliable with the addition of knowledge-based and handwriting style verification layers. Furthermore, we present a proof-of-concept Unity application demonstrating the registration and verification process with our pretrained similarity-learning model. We publish our code, the Motion Password dataset, the pretrained model, and our Unity prototype on https://drive.google.com/drive/folders/1Zy2avoab6EZchMcxJszhvybleUBGk75N?usp=share_link .
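To make the verification idea concrete, here is a deliberately simplified feature-distance sketch (a generic formulation under our own assumptions; the paper's similarity-learning model is a trained network and is not reproduced here): each Motion Password is reduced to a fixed-length feature vector, and a claimed identity is accepted when the distance to the enrolled template falls below a tuned threshold.

    import numpy as np

    def motion_features(samples):
        """samples: (T, 3) controller positions over time -> simple summary features."""
        velocity = np.diff(samples, axis=0)
        return np.concatenate([
            samples.mean(axis=0), samples.std(axis=0),        # spatial extent of the signature
            velocity.mean(axis=0), velocity.std(axis=0),      # dynamics of the writing motion
        ])

    def verify(enrolled, attempt, threshold):
        """Accept the attempt iff its features lie close to the enrolled template."""
        template = np.mean([motion_features(s) for s in enrolled], axis=0)
        distance = np.linalg.norm(motion_features(attempt) - template)
        return distance < threshold

    # Toy usage with synthetic trajectories (T = 200 samples of x, y, z).
    rng = np.random.default_rng(0)
    genuine = [rng.standard_normal((200, 3)) * 0.1 + np.linspace(0, 1, 200)[:, None]
               for _ in range(3)]
    print(verify(genuine[:2], genuine[2], threshold=0.5))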
Out-Of-Virtual-Body Experiences: Virtual Disembodiment Effects on Time Perception in VR
Fabian Unruh, Jean-Luc Lugrin, Marc Erich Latoschik
This paper presents a novel experiment investigating the relationship between virtual disembodiment and time perception in Virtual Reality (VR). Recent work demonstrated that the absence of a virtual body in a VR application changes users’ perception of time. However, the effects of simulating an out-of-body experience (OBE) in VR on time perception are still unclear. We designed an experiment with two types of virtual disembodiment techniques based on gradual viewpoint transitions: a transition to a view behind the virtual body and a transition to a view facing it. We investigated their effects on forty-four participants in an interactive scenario in which a lamp was repeatedly activated and time intervals were estimated. Our results show that, while both techniques elicited a significant perception of virtual disembodiment, time duration estimations in the minute range were shorter only in the facing-view condition compared to the eye-view condition. We believe that reduced agency in the facing view is a key factor in this alteration of time perception. This provides a novel approach to manipulating time perception in VR, with potential applications for mental health treatments, such as for schizophrenia or depression, and for improving our understanding of the relation between body, virtual body, and time.
Some Times Fly: The Effects of Engagement and Environmental Dynamics on Time Perception in Virtual Reality
Sahar Niknam, Stéven Picard, Valentina Rondinelli, Jean Botev
An hour spent with friends seems shorter than an hour waiting for a medical appointment. Many physiological and psychological factors, such as body temperature and emotions, have been shown to correlate with our subjective perception of time. Experiencing virtual reality (VR) has been observed to make users significantly underestimate elapsed time. This paper explores the effect of virtual environment characteristics on time perception, focusing on two key parameters: user engagement and environmental dynamics. We found that increased presence and interaction with the environment significantly decreased users’ estimates of the VR experience duration. Furthermore, while a dynamic environment did not significantly shift perception in one specific direction, that is, toward underestimation or overestimation of durations, it significantly distorted perceived temporal length. Exploiting the influence of these two factors constitutes a powerful tool for designing intelligent and adaptive virtual environments that can reduce stress, alleviate boredom, and improve well-being by adjusting the pace at which we experience the passage of time.
Enhancing VR Sketching with a Dynamic Shape Display
Wen Ying, Seongkook Heo
Sketching on virtual objects in Virtual Reality (VR) can be challenging due to the lack of a physical surface that constrains hand movement and provides haptic feedback on contact. While using a flat physical drawing surface has been proposed, it creates a significant discrepancy between the physical and virtual surfaces when sketching on non-planar virtual objects. We propose using a dynamic shape display that physically mimics the shape of a virtual surface, allowing users to sketch on a virtual surface as if they were sketching on a physical object’s surface. We demonstrate this using VRScroll, a shape-changing device featuring seven independently controlled flaps that automatically imitate the shape of a virtual surface. Our sketching study showed that participants exhibited higher precision when tracing simple shapes with the dynamic shape display and produced clearer sketches. We also provide several design implications for dynamic shape displays aimed at enabling precise sketching in VR.
Session 5: Multimodality
Thursday, 10th October
11:00 – 12:30
Simulating Object Weight in Virtual Reality: The Role of Absolute Mass and Weight Distributions
Alexander Kalus, Johannes Klein, Tien-Julian Ho, Niels Henze
Weight interfaces enable users of Virtual Reality (VR) to perceive the weight of virtual objects, significantly enhancing realism and enjoyment. While research on these systems has primarily focused on their implementation, little attention has been given to determining the weight they should render: as the perceived weight of objects is influenced not only by their absolute mass but also by their weight distribution and prior expectations, it is currently unknown which simulated mass provides the most realistic representation of a given object. We conducted a study in which 30 participants chose the best-fitting weight for 54 virtual object configurations, in which we systematically varied the virtual objects’ visual mass, their weight distribution, and the position of the physical mass on the grip. Our Bayesian analysis suggests that the visual weight distribution of objects does not affect which absolute physical mass best represents them, whereas the position of the provided physical mass does. Additionally, participants overweighted virtual objects with lower visual mass while underweighting objects with higher visual mass. We discuss how these findings can be leveraged by designers of weight interfaces and VR experiences to optimize realism.
Enriching Industrial Training Experience in Virtual Reality with Pseudo-Haptics and Vibrotactile Stimulation
Chiwoong Hwang, Tiare Feuchtner, Ian Oakley, Kaj Grønbæk
Virtual Reality (VR) technology facilitates effective, flexible, and safe industrial training for novice technicians when on-site training is not feasible. However, researchers have found that training in VR may be less successful than traditional learning approaches in real-world settings, and enriching haptic interactions may be key to improving virtual training. In this study, we integrated pseudo-haptic feedback based on motion delay with vibrotactile stimulation to enhance the sense of presence, enjoyment, and the perception of physical properties in VR, which may be crucial factors when rendering faithful simulations. The impact of the combined haptic support was assessed in a complex industrial training procedure involving a variety of tasks, such as vacuum cleaning. The results indicate that vibrotactile cues are beneficial for presence and enjoyment, whereas pseudo-haptic illusions effectively enable kinesthetic sensations. Furthermore, multimodal haptic feedback combining the two yielded the most advantageous outcomes, with the modalities complementing each other. Our findings highlight the potential of fusing pseudo-haptics and vibrotactile feedback in industrial training scenarios and present practical implications of state-of-the-art haptic technologies for virtual learning.
Investigating the Impact of Odors and Visual Congruence on Motion Sickness in Virtual Reality
Lisa Reichl, Martin Kocur
Motion sickness is a prevalent side effect of exposure to virtual reality (VR). Previous work found that pleasant odors can be effective in alleviating symptoms of motion sickness such as nausea. However, it is unknown whether pleasant odors that do not match the anticipated scent of the virtual environment are also effective, as they could, in turn, amplify symptoms such as disorientation. Therefore, we conducted a study with 24 participants experiencing a pleasant odor (rose) and an unpleasant odor (garlic) while being immersed in a virtual environment featuring either virtual roses or garlic. We found that participants experienced lower motion sickness with the rose odor, but only in the rose environment. Accordingly, we also showed that the sense of disorientation was lower for the rose odor, but only while participants were immersed in the rose environment. The results indicate that whether pleasant odors are effective in alleviating motion sickness symptoms depends on the visual appearance of the virtual environment. We discuss possible explanations for these effects. Our work contributes to the goal of mitigating visually induced motion sickness in VR.
Generative Terrain Authoring with Mid-air Hand Sketching in Virtual Reality
Yushen Hu, Keru Wang, Zhu Wang, Yuli Shao, Jan Plass, Ken Perlin
Terrain generation and authoring in Virtual Reality (VR) offers unique benefits, including 360-degree views, improved spatial perception, an immersive and intuitive design experience, and natural input modalities. Yet even in VR, it can be challenging to integrate natural input modalities, preserve artistic control, and lower the effort of landscape prototyping. To tackle these challenges, we present a VR-based terrain generation and authoring system that uses hand tracking and a generative model to allow users to quickly prototype natural landscapes, such as mountains, mesas, canyons, and volcanoes. Via positional hand tracking and hand gesture detection, users draw mid-air strokes to indicate the desired shapes of the landscape. A Conditional Generative Adversarial Network, trained on real-world terrains and their height maps, then generates a realistic landscape that combines features of the training data and the mid-air strokes. In addition, users can use their hands to further manipulate their mid-air strokes to edit the landscapes. In this paper, we explore this design space and present various terrain generation scenarios. Additionally, we evaluate our system with a diverse user base varying in VR experience and professional background. The study results indicate that our system is feasible, user-friendly, and capable of fast prototyping.
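One step of such a pipeline can be sketched as follows (an illustrative assumption about how strokes might be turned into a conditioning input, not the authors' code): the mid-air strokes are rasterised into a 2D conditioning map, which a trained conditional GAN generator would then translate into a full heightmap.

    import numpy as np

    def strokes_to_condition_map(strokes, map_size=256, extent_m=100.0):
        """strokes: list of (N, 3) arrays of world-space points (x, y=height, z)."""
        cond = np.zeros((map_size, map_size), dtype=np.float32)
        for stroke in strokes:
            for x, h, z in stroke:
                # Map world x/z into grid coordinates; keep the maximum sketched height.
                u = int(np.clip((x / extent_m + 0.5) * (map_size - 1), 0, map_size - 1))
                v = int(np.clip((z / extent_m + 0.5) * (map_size - 1), 0, map_size - 1))
                cond[v, u] = max(cond[v, u], h)
        if cond.max() > 0:
            cond /= cond.max()                 # normalise to [0, 1] for the generator input
        return cond
        # Assumed next step: heightmap = generator(cond), using a cGAN trained on
        # real-world terrain heightmaps, followed by meshing the result in the VR scene.

    # Example: a single ridge-like stroke across the middle of the terrain.
    stroke = np.stack([np.linspace(-40, 40, 200),                   # x
                       20 * np.exp(-np.linspace(-2, 2, 200) ** 2),  # sketched height
                       np.zeros(200)], axis=1)                      # z
    print(strokes_to_condition_map([stroke]).max())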
How Different Is the Perception of Vibrotactile Texture Roughness in Augmented versus Virtual Reality?
Erwan Normand, Claudio Pacchierotti, Eric Marchand, Maud Marchal
Wearable haptic devices can modify the haptic perception of an object touched directly with the finger in a portable and unobtrusive way. In this paper, we investigate whether such wearable haptic augmentations are perceived differently in Augmented Reality (AR) vs. Virtual Reality (VR), as well as the effect of touching the augmented environment with a virtual hand instead of one’s own hand. We first designed a system for real-time rendering of vibrotactile virtual textures without constraints on hand movements, integrated with an immersive visual AR/VR headset. We then conducted a psychophysical study with 20 participants to evaluate the haptic perception of virtual roughness textures on a real surface touched directly with the finger (1) without visual augmentation, (2) with a realistic virtual hand rendered in AR, and (3) with the same virtual hand in VR. On average, participants overestimated the roughness of haptic textures when touching with their real hand alone (without visual augmentation) and underestimated it when touching with a virtual hand in AR, with VR in between. Their exploration behaviour was also slower in VR than with their real hand alone, although their subjective evaluation of the texture was not affected. This suggests that the same augmentation is perceived as less rough when touched with a virtual hand in AR than when touched with the real hand alone. Finally, we discuss how the perceived visual delay of the virtual hand might produce this effect.
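For readers unfamiliar with vibrotactile texture rendering, the sketch below shows a common generic model (offered only to make the idea concrete; the paper's actual rendering method may differ): the vibration phase is tied to the finger's position on a virtual grating, so the perceived frequency follows sliding speed divided by the grating's spatial period, and the drive amplitude scales with the roughness level being rendered.

    import math

    def vibration_sample(finger_pos_m, spatial_period_m=0.002, amplitude=1.0):
        """Instantaneous actuator drive for a 1D sinusoidal grating under the finger."""
        # Tying phase to position (rather than time) keeps the signal consistent
        # when the finger slows down, stops, or reverses direction.
        phase = 2 * math.pi * finger_pos_m / spatial_period_m
        return amplitude * math.sin(phase)

    # Example: a 10 cm/s swipe over a 2 mm grating yields a 50 Hz vibration.
    fs = 2000                                  # actuator update rate (Hz)
    speed = 0.10                               # finger sliding speed (m/s)
    samples = [vibration_sample(i * speed / fs) for i in range(fs)]
    print(len(samples), "samples; expected frequency:", speed / 0.002, "Hz")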
Session 6: Collaboration and Games
Thursday, 10th October
13:30 – 15:00
TeenWorlds: Supporting Emotional Expression for Teenagers with their Parents and Peers through a Collaborative VR Experience
Evropi Stefanidi, Nadine Wagener, Dustin Augsten, Andy Augsten, Leon Reicherts, Paweł W. Woźniak, Johannes Schöning, Yvonne Rogers, Jasmin Niess
Adolescence is a period of growth and exploration, marked by influential relationships with peers and parents. These relationships are essential for teenagers’ well-being, highlighting the need to support their interpersonal interactions. Emotional expression is key in resolving conflicts that can frequently arise. This paper investigates the potential of TeenWorlds, a Virtual Reality (VR) application, to facilitate emotional expression and shared understanding among teenagers and their peers and parents. In our study, teenagers, accompanied by either a peer or a parent (total n=42), used TeenWorlds to visually represent their emotions during a shared conflict, discuss them, and collaborate on a joint VR drawing. Our findings indicate that TeenWorlds can foster communication and reflection and strengthen interpersonal relationships. However, notable differences were observed in interactions with peers versus parents. We contribute insights into designing VR systems that support reflective experiences and meaningful family interactions, ultimately enhancing the well-being of adolescents, parents, and families.
HistoLab VR: A User Elicitation Study Exploring the Potential of Virtual Reality Game-based Learning for Hazard Awareness
Robin Timon Hänni, Tiffany Luong, Dr Julia Chatain, Felix Mangold, Holger Dressel, Christian Holz
Occupational medicine is a vital field for workplace safety and health but often encounters challenges in engaging students and effectively communicating subtle yet critical workplace hazards. To tackle these issues, we developed HistoLab VR, a virtual reality (VR) game that immerses participants in a histology lab environment based on real-world practice. Our comprehensive user study with 17 students and experts assessed the game’s impact on hazard awareness, interest in occupational medicine, and user experience through quantitative and qualitative measures. Our findings show that HistoLab VR not only immersed participants in a relatable histology lab worker experience but also effectively raised awareness about subtle hazards and conveyed the inherent stress of the job. We discuss our results and highlight the potential of VR as a valuable educational tool for occupational medicine training.
Game-Based Motivation: Enhancing Learning with Achievements in a Customizable Virtual Reality Environment
Michael Holly, Sandra Brettschuh, Ajay Shankar Tiwari, Kaushal Kumar Bhagat, Dr. Johanna Pirker
Digital learning experiences that promote interactive learning and engagement are becoming increasingly relevant. Educational games can be used to create an engaging learning atmosphere that allows knowledge acquisition through hands-on activities. Combining these games with virtual reality (VR) technology allows users to interact with virtual environments, leading to a highly immersive learning experience. In this study, we explore how game achievements impact students’ motivation and learning in a customizable VR learning environment. In an A/B study involving 50 students, we used an interactive wave simulation to assess motivation, engagement, and the overall learning experience. Data collection involved standardized questionnaires, along with tracking interaction time and the different interactions within the virtual environment. The findings revealed that users who earned game achievements to unlock customization features felt significantly more accomplished when they successfully mastered challenges and obtained all achievements. However, we observed that adding achievements could also create pressure on students, leading to feelings of embarrassment when facing task failures. While achievements have the potential to enhance engagement and motivation, their excessive use may lead to distraction, anxiety, and reduced overall engagement. This shows that it is crucial to strike a good balance when employing game achievements in educational environments to ensure they contribute positively to the learning experience without causing undue stress or deterring learners.
Hands or Controllers? How Input Devices and Audio Impact Collaborative Virtual Reality
Alex Adkins, Ryan Canales, Sophie Joerg
Advancing virtual reality technologies are enabling real-time face-to-face communication between virtual representations of users. Hand tracking systems integrated into Head-Mounted Displays (HMDs) enable users to interact directly with their environments and with each other using their hands instead of controllers. Because these technologies are still new, our understanding of how they affect our interactions is limited. In this paper, we investigate the consequences of using different interaction control systems, hand tracking or controllers, when interacting with others in a virtual environment. We designed and implemented NASA’s Survival on the Moon teamwork evaluation exercise in virtual reality (VR) and tested for effects with and without allowing verbal communication. We evaluated social presence, perceived comprehension, team cohesion, group synergy, and task workload, as well as task performance and duration. Our findings reveal that audio communication significantly enhances social presence, perceived comprehension, and team cohesion, but it also increases effort workload and negatively impacts group synergy. The choice of interaction control system has limited impact on most aspects of virtual collaboration in this scenario, although participants using hand tracking reported lower effort workload, while participants using controllers reported lower mental workload in the absence of audio.
Exploring User Placement for VR Remote Collaboration in a Constrained Passenger Space
Dr Daniel Medeiros, Dr Graham Wilson, Mauricio Sousa, Nadia Pantidi, Dr Mark McGill, Professor Stephen Anthony Brewster
Extended Reality (XR) offers the potential to transform the passenger experience by allowing users to inhabit varied virtual spaces for entertainment, work, or social interaction, whilst escaping the constrained transit environment. XR allows remote collaborators to feel like they are together and enables them to perform complex 3D tasks. However, the social and physical constraints of the passenger space pose unique challenges to productive and socially acceptable collaboration. Using a collaborative VR puzzle task, we examined the effects of five different f-formations of collaborator placement and orientation in an interactive workspace on social presence, task workload, and implications for social acceptability. Our quantitative and qualitative results showed that face-to-face formations were preferred for tasks with a high need for verbal communication but may lead to social collisions, such as inadvertently staring at a neighbouring passenger, or physical intrusions, such as gesturing in another passenger’s personal space. More restrictive f-formations, however, were preferred for passenger use as they caused fewer intrusions on other passengers’ visual and physical space.
Stand Alone or Stay Together: An In-situ Experiment of Mixed-Reality Applications in Embryonic Anatomy Education
Danny Schott, Matthias Kunz, Florian Heinrich, Jonas Mandel, Anne Albrecht, Rüdiger Braun-Dullaeus, Christian Hansen
Understanding the location and function of anatomical structures is essential in medical education. Where traditional educational media and methods reach their limits, mixed-reality (MR) environments can provide effective learning support because of their high interactivity and spatial visualization capabilities. However, the underlying design and pedagogical requirements are as diverse as the technologies themselves. This paper examines the effectiveness of individual and collaborative learning environments for anatomy education, using embryonic heart development as an example. Both applications deliver the same content using identical visualizations and hardware but differ in interactivity and pedagogical approach. The environments were evaluated in a user study with medical students (n = 90) during their examination phase, assessing usability, user experience, social interaction/co-presence, cognitive load, and personal preference. Additionally, we conducted a knowledge test before and after an MR learning session to determine educational effects compared to a conventional anatomy seminar. Results indicate that the individual learning environment was generally preferred. However, no significant difference in learning effectiveness could be shown between the conventional approach and the two MR applications. This suggests that both can effectively complement traditional seminars despite their different natures. Moreover, the novelty of MR applications seemed to overshadow usability and user experience measures. This study contributes to understanding how different MR settings could be tailored for anatomical education.
Session 7: Cognitive Aspects
Friday, 11th October
9:00 – 10:30
Contextual Matching Between Learning and Testing Within VR Does Not Always Enhance Memory Retrieval
Takato Mizuho, Takuji Narumi, Professor Hideaki Kuzuoka
Episodic memory is influenced by environmental contexts, such as location and auditory stimuli. The most well-known effect is the reinstatement effect, which refers to the phenomenon where contextual matching between learning and testing enhances memory retrieval. Previous studies have investigated whether the reinstatement effect can be observed within immersive virtual environments. However, only a limited number of studies have reported a significant reinstatement effect using virtual reality, while most have failed to detect it. In this study, we re-examined the reinstatement effect using 360-degree video-based virtual environments. Specifically, we carefully selected virtual environments to elicit different emotional responses, which has been suggested as a key factor in inducing a robust reinstatement effect in the physical world. Surprisingly, we found a significant reversed reinstatement effect with a large effect size. This counter-intuitive result suggests that contextual congruence does not necessarily enhance memory and may even interfere with it. This outcome may be explained by the retrieval-induced forgetting phenomenon, but further exploration is needed. This finding is particularly important for virtual reality-based educational applications and highlights the need for a deeper understanding of the complex interactions between memory and contextual cues within virtual environments.
Toward Facilitating Search in VR With the Assistance of Vision Large Language Models
Chao LIU, Chi Cheung, Mingqing XU, Zhongyue Zhang, Mingyang Su, Mingming Fan
While search is a common need in Virtual Reality (VR) applications, current approaches are cumbersome and tend to require users to type on a mid-air keyboard using controllers in VR or to remove the VR equipment to search on a computer. We first performed a literature review and a formative study, from which we identified six common searching needs: knowing about one object, knowing about an object’s partial details, knowing about objects in their environmental context, knowing about interactions with objects, and finding objects within the field of view (FOV) and out of the FOV in the VR scene. Informed by these needs, we designed technology probes that leverage recent advances in Vision Large Language Models and conducted a probe-based study with users to elicit feedback. Based on the findings, we derived design principles for VR designers and developers to consider when designing user-friendly search interfaces in VR. While prior work on VR search tended to address specific aspects of search, our work contributes design considerations toward making search easy in VR, as well as potential future directions.
Evaluating Gaze Interactions within AR for Nonspeaking Autistic Users
Ahmadreza Nazari, Dr. Lorans Alabood, Molly Kay Rathbun, Dr. Vikram K. Jaswal, Dr. Diwakar Krishnamurthy
Nonspeaking autistic individuals often face significant inclusion barriers in various aspects of life, mainly due to a lack of effective communication means. Specialized computer software, particularly delivered via Augmented Reality (AR), offers a promising and accessible way to improve their ability to engage with the world. While research has explored near-hand interactions within AR for this population, gaze-based interactions remain unexamined. Given the fine motor skill requirements and potential for fatigue associated with near-hand interactions, there is a pressing need to investigate the potential of gaze interactions as a more accessible option. This paper presents a study investigating the feasibility of eye gaze interactions within an AR environment for nonspeaking autistic individuals. We utilized the HoloLens 2 to create an eye gaze-based interactive system, enabling users to select targets either by fixating their gaze for a fixed period or by gazing at a target and triggering selection with a physical button (referred to as a ‘clicker’). We developed a system called HoloGaze that allows a caregiver to join an AR session to train an autistic individual in gaze-based interactions as appropriate. Using HoloGaze, we conducted a study involving 14 nonspeaking autistic participants. The study had several phases, including tolerance testing, calibration, gaze training, and interacting with a complex interface: a virtual letterboard. All but one participant were able to wear the device and complete the system’s default eye calibration; 10 participants completed all training phases that required them to select targets using gaze only or gaze-click. Interestingly, the 7 users who chose to continue to the testing phase with gaze-click were much more successful than those who chose to continue with gaze alone. We also report on challenges and improvements needed for future gaze-based interactive AR systems for this population. Our findings pave the way for new opportunities for specialized AR solutions tailored to the needs of this under-served and under-researched population.
Exploring Immersive Debriefing in Virtual Reality Training: A Comparative Study
Kelly Minotti, Guillaume Loup, Thibault Harquin, Samir Otmane
Simulation and debriefing are two essential and inseparable phases of virtual reality training. With the widespread adoption of these training tools, it is crucial to define the best pedagogical approaches for trainers and learners to maximize their effectiveness. However, despite their educational benefits, virtual reality-specific debriefing methods remain underexplored in research. This article proposes an architecture and interface for an all-in-one immersive debriefing module that is adaptable to different types of training, including a complete system for recording, replaying, and redoing actions. A study with 36 participants compared this immersive debriefing system with traditional discussion-based and video-supported debriefing. Participants were divided into three groups to evaluate the effectiveness of each method. The results showed no significant differences between these debriefing methods across several criteria, such as satisfaction, motivation, and information retention. In this context, immersive debriefing is thus as usable, and supports retention as well, as traditional or video debriefing. The next step will be to evaluate the Redo system in other training courses involving more dynamic scenarios.
Reviewing the Applications, Benefits and Vulnerabilities of Virtual and Extended Reality Immersive Police Training
Dr Lena Podoletz, Niamh Healy, Dr Mark McGill, Professor David McIlhatton, Jill Marshall, Marina Heilbrunn, Dr Leonie Maria Tanczer
Training is integral to a police officer’s ability to respond swiftly and effectively to both commonly occurring and emergency situations. Virtual and Extended Reality headsets promise to enhance police training through the delivery of immersive simulations that can be conducted anywhere, anytime, with increasing degrees of perceptual realism and plausibility. However, whilst we know much about the benefits of XR training from individual examples, little consideration has been given to holistically reviewing the available evidence regarding not just those benefits but, crucially, also the potential issues posed by XR police training. In this paper, we summarise the evidenced benefits and types of XR police training through a formative targeted literature review (n=41 publications). Informed by this review, we then reflect on the prospective technical, security, social and legal issues posed by XR police training, identifying four areas where issues or vulnerabilities exist: training content, trainees/trainers, systems and devices, and state and institutional stakeholders. Across these, we highlight significant concerns around, for example, the validity and efficacy of training; the short- and long-term psychological impact and risks of trauma; the safety and privacy risks posed to trainees and trainers, in particular through data leakage, breaches and immersive attacks; and the risks to the institutions leveraging such training, which could undermine public trust through unvalidated training. We aim to provoke further consideration of the risks posed by immersive training, encouraging end-user communities (e.g. police forces) to reflect more openly on these risks, so we can ultimately move towards transparent, validated, trusted training that is evidenced to improve policing outcomes.
The Impact of Task-Responsibility on User Experience and Behaviour under Asymmetric Knowledge Conditions
Pauline Bimberg, Daniel Zielasko, Benjamin Weyers
Virtual Reality presents a promising tool for knowledge transfer, allowing users to learn in different environments and with the help of three-dimensional visualizations. At the same time, having to learn new ways of interacting with their environment can present a significant hurdle for novice users. When users enter a virtual space to receive knowledge from a more experienced person, the question arises as to whether they benefit from learning VR-specific interaction techniques instead of letting the expert take over some or all interactions. Based on related work on expert-novice interaction in virtual spaces, this paper presents a user study comparing three different distributions of interaction responsibilities between participants and an expert user. The Role-Based interaction mode gives the expert full interaction responsibility. The Shared interaction mode gives both users the same interaction capabilities, allowing them to share the responsibility of interacting with the virtual space. Finally, the Parallel interaction mode gives participants full interaction responsibility, while the expert can provide guidance through oral communication and visual demonstration. Our results indicate that assuming interaction responsibility led to higher task loads but also increased participants’ engagement and feeling of presence. For most participants, sharing interaction responsibilities with the expert represented the best trade-off between engagement and challenge. While we did not measure a significant increase in learning success, participant comments indicated that they paid more attention to details when assuming more interaction responsibility.
Session 8: Displays
Friday 11th, October
11:00 – 12:30
Evaluating the effects of Situated and Embedded Visualisation in Augmented Reality Guidance for Isolated Medical Assistance
Frederick George VICKERY, Sébastien Kubicki, Dr Charlotte Hoareau, Lucas Brand, Aurelien Duval, Seamus Thierry, Ronan Querrec
A major advantage of Augmented Reality (AR) is the many possibilities it offers for displaying information in the physical world, especially when applying Situated Analytics (SitA). AR devices and their respective interaction techniques allow for supplementary guidance to assist an operator carrying out complex procedures such as medical diagnosis and surgery or industrial maintenance and assembly. Their usage promotes user autonomy by presenting relevant information when the operator may not possess expert knowledge of every procedure and may not have access to external help, such as in a remote or isolated situation (e.g., International Space Station, middle of an ocean, desert). In this paper, we propose a comparison of two forms of AR visualisation: an Embedded Visualisation and a Situated Projected Visualisation, with the aim of assisting operators with the most appropriate visualisation format when carrying out procedures (medical in our case). To evaluate these forms of visualisation, we carried out an experiment involving 23 participants possessing latent/novice medical knowledge of similar medical certification (encompassing workplace safety risk management under EN ISO 14971). These participant profiles were representative of operators who are medically trained yet do not apply their knowledge every day (e.g., an astronaut in orbit, a sailor out at sea, or a soldier in hostile territory). We discuss our findings, which include the precision advantages of embedded visualised information compared to situated/projected information, the accompanying limitations, and future improvements to our proposition. We conclude with the prospects of our work, notably the possibility of evaluating our proposition in a less controlled, real-world context in collaboration with our national space agency.
Optimizing spatial resolution in head-mounted displays: evaluating characteristics of peripheral visual field
Masamitsu Harasawa Ph.D., Yamato Miyashita, Kazuteru Komine
An ideal head-mounted display (HMD) is defined as a device that provides a visual experience indistinguishable from that given by the naked eye. Such an HMD must display images with spatial resolution surpassing that of the human visual system. However, excessively high spatial resolution is resource-wasting and inefficient. To optimize this balance, we evaluated the spatial resolution characteristics of the human visual system in the peripheral visual field. Our experiment was performed in a head-centered coordinate system, acknowledging that users can move their eyes freely within the HMD housing fixed on their head. We measured the threshold eccentricities at which low-pass filtered noise patterns could be distinguished from intact noise patterns, manipulating the cut-off spatial frequency between one and eight cycles per degree. The results revealed clear asymmetries between the temporal and nasal visual fields, as well as between the upper and lower visual fields. In the temporal and lower fields, lower cut-off spatial frequencies resulted in higher eccentricity thresholds. Notably, the smaller impact of spatial frequency in the nasal and upper visual fields is likely due to visual obstruction by facial structures such as the nose. Our findings can serve as a standard for pixel arrangement design in an ideal HMD.
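A stimulus of the kind described, a noise pattern low-pass filtered at a given cut-off in cycles per degree, can be produced by zeroing spatial frequencies above the cut-off in the Fourier domain. The sketch below illustrates this generically; the image size, pixel density, and filter shape are assumptions and not the authors' stimulus code.

# Sketch: low-pass filtering a white-noise pattern at a cut-off spatial frequency
# given in cycles per degree. Parameters are illustrative, not the authors' stimuli.
import numpy as np

def lowpass_noise(size_px=512, pixels_per_degree=40.0, cutoff_cpd=4.0, seed=0):
    rng = np.random.default_rng(seed)
    noise = rng.standard_normal((size_px, size_px))
    # Spatial-frequency axes in cycles per degree (pixel spacing = 1/ppd degrees).
    f = np.fft.fftfreq(size_px, d=1.0 / pixels_per_degree)
    fx, fy = np.meshgrid(f, f)
    radial = np.sqrt(fx**2 + fy**2)
    spectrum = np.fft.fft2(noise)
    spectrum[radial > cutoff_cpd] = 0.0          # ideal low-pass filter
    return np.real(np.fft.ifft2(spectrum))

# e.g. stimuli spanning the cut-off range mentioned above (1 to 8 cycles per degree)
patterns = {c: lowpass_noise(cutoff_cpd=c) for c in (1, 2, 4, 8)}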
An Evaluation of Targeting Methods in Spatial Computing Interfaces with Visual Distractions
Fabian Räthel, Susanne Schmidt, Jenny Gabel, Lukas Posniak, Frank Steinicke
With the advancements in spatial computing technologies, the question of the most suitable object targeting and selection methods in virtual environments is becoming increasingly relevant for end users. Although there is a plethora of empirical studies on this question, there is just as much variation in the methods users are confronted with in current consumer devices. For example, the recently introduced Apple Vision Pro utilizes eye gaze as the primary technique for targeting objects and interface elements. This makes it the latest addition to a line-up of mixed reality headsets that already feature diverse default interaction mechanisms, such as hand gestures (cf. Microsoft HoloLens 2), touch gestures (cf. Google Glass Enterprise Edition 2), and external controllers (cf. Magic Leap 2). A limiting factor of several previous studies on object selection (such as classical Fitts’ law studies) is their partly artificial setups, which do not adequately account for influences present in practical settings. In this paper, we present a user study comparing four (i.e., two hand-based and two gaze-based) state-of-the-art selection methods, particularly differing in their targeting approach, using the HoloLens 2. We utilized a Fitts’ law-inspired task and extended the traditional study design by incorporating a visual task that simulates changes in the user interface after a successful selection. An analysis of the results revealed a significant decrease in movement times for all methods when the visual task was present. While gaze-based techniques were on average faster than hand-based techniques without the visual task, this performance gain was eliminated (for head gaze) or even reversed (for eye gaze) when the visual task was active. These findings underscore the value of continued practice-oriented research to obtain more realistic results on the performance of targeting methods in real-world scenarios.
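For readers unfamiliar with Fitts' law-inspired tasks, the quantities typically reported are the index of difficulty and throughput. The sketch below uses the standard Shannon formulation with placeholder values; it is background illustration, not the study's analysis code or data.

# Sketch: standard Fitts' law quantities (Shannon formulation).
# The numbers below are placeholders, not results from the study.
import math

def index_of_difficulty(distance, width):
    """ID in bits: log2(D/W + 1)."""
    return math.log2(distance / width + 1)

def throughput(distance, width, movement_time_s):
    """Throughput in bits per second for one selection."""
    return index_of_difficulty(distance, width) / movement_time_s

# e.g. a 0.40 m reach to a 0.05 m target selected in 0.9 s gives roughly 3.5 bit/s
print(throughput(distance=0.40, width=0.05, movement_time_s=0.9))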
Evaluation of AR Pattern Guidance Methods for a Surface Cleaning Task
Jeroen Ceyssens, Mathias Jans, Gustavo Alberto Rovelo Ruiz, Kris Luyten, Prof. dr. Fabian Di Fiore
In this study, we investigate the efficacy of augmented reality (AR) in enhancing a cleanroom cleaning task by implementing various pattern guidance designs. Cleanroom cleaning is an example of a surface coverage task that is hard to execute correctly: the prescribed pattern must be followed and the entire surface must be covered. We developed an AR guidance system for the cleaning procedure and evaluated four distinct pattern guidance methods: (1) breadcrumbs, (2) examples, (3) middle lines, and (4) outlines. We also varied the scale of the instructions, presenting information either across the entire surface at once or as a single step only when it is needed. To measure the performance, accuracy, and user satisfaction associated with each guidance method, we conducted a large-scale (n=864) between-subjects study. Our findings indicate that the single-step instructions proved more intuitive and efficient than the full instructions, especially for the breadcrumbs method. We also discuss the implications of our results for the development of AR applications for surface coverage and pattern optimization, providing a multitude of observations related to instruction behaviors.
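As a rough illustration of what a breadcrumb-style coverage pattern can look like, the sketch below lays out a serpentine ("lawnmower") sequence of waypoints over a rectangular surface. The spacing and layout are assumptions for illustration only; the study's actual cleaning patterns and AR renderings are not specified here.

# Sketch: serpentine breadcrumb waypoints covering a rectangular surface.
# Spacing and layout are illustrative assumptions, not the study's patterns.
def breadcrumb_waypoints(width_m, height_m, lane_spacing_m=0.10, step_m=0.05):
    points = []
    y, lane = 0.0, 0
    while y <= height_m:
        xs = [i * step_m for i in range(int(width_m / step_m) + 1)]
        if lane % 2 == 1:
            xs.reverse()                      # alternate direction each lane
        points.extend((x, y) for x in xs)
        y += lane_spacing_m
        lane += 1
    return points

# e.g. breadcrumbs for a 0.6 m x 0.4 m surface
crumbs = breadcrumb_waypoints(0.6, 0.4)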
Aerial Imaging System to Reproduce Reflections in Specular Surfaces
Ayaka Sano, Motohiro Makiguchi, Ayami Hoshi, Mr. Hiroshi Chigira, Takayoshi Mochiduki
A table-top reflective aerial imaging system can display digital information as if it existed on a specular horizontal surface, such as a marble or acrylic table-top. This system allows a user sitting in a chair to naturally look down at an aerial image displayed on a table, thus integrating the aerial image into daily life. However, although such a system displays an aerial image on a specular surface, it does not reproduce the reflected image, thus failing to achieve optical consistency. To improve the presence of aerial images, we propose a new table-top reflective aerial imaging system that not only displays aerial images on a specular surface but also reproduces the reflected images in that surface. The proposed system consists of an aerial imaging optical system that can display independent aerial images on and in a specular surface, and a reflected image reproduction method that designs the luminance of the aerial image inside the specular surface according to the actual measured material reflectance. We implemented a prototype to verify the principle of the aerial imaging optics and evaluated the naturalness of the reflected image reproduced by our method through user experimentation. The results show that the material of the specular surface and the shape of the aerial image affect whether the reflected image produced by the proposed method is perceived as natural.
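The basic idea of designing the reflected image's luminance from a measured reflectance can be illustrated with a simple linear model: mirror the image and attenuate its linear luminance by the surface's reflectance. The sketch below assumes a display gamma of 2.2 and a single scalar reflectance; the authors' actual luminance design may be more involved.

# Sketch: scaling the in-surface (reflected) aerial image by a measured reflectance.
# Gamma value and scalar reflectance are simplifying assumptions.
import numpy as np

GAMMA = 2.2

def reflected_image(rgb, reflectance):
    """rgb: HxWx3 float array in [0, 1]; reflectance: measured factor in [0, 1]."""
    mirrored = rgb[::-1]                      # mirror across the horizontal surface
    linear = np.power(mirrored, GAMMA)        # pixel values -> relative luminance
    attenuated = reflectance * linear         # attenuate by the surface reflectance
    return np.power(np.clip(attenuated, 0.0, 1.0), 1.0 / GAMMA)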
Superpixel-guided Sampling for Compact 3D Gaussian Splatting
Myoung Gon Kim, SeungWon Jeong, Seohyeon Park, JungHyun Han
In this paper, we propose to integrate superpixel-guided sampling into the framework of 3D Gaussian Splatting for novel view synthesis. Given a sequence of frames, where each frame is a pair of an RGB-D image and the camera pose from which it was captured, we first select the keyframes. Then, we decompose each keyframe image into superpixels, back-project each superpixel’s center into 3D space, and initialize a 3D Gaussian at the back-projected position. This superpixel-guided sampling produces a set of sparse but well-distributed Gaussians, which enables the optimization process to converge quickly. The experimental results show that, at a significantly reduced computing cost, we can synthesize novel views whose quality is comparable to those generated by state-of-the-art methods.
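The seeding step described above (superpixel decomposition, back-projection of centers, Gaussian initialization) can be sketched as follows. SLIC is used purely for illustration; the paper's superpixel method, keyframe selection, and Gaussian initialization details may differ.

# Sketch: seeding 3D Gaussian positions from superpixel centres of an RGB-D keyframe.
# SLIC and the parameter values are illustrative assumptions, not the paper's pipeline.
import numpy as np
from skimage.segmentation import slic

def superpixel_seeds(rgb, depth, K, cam_to_world, n_segments=2000):
    """rgb: HxWx3 float image, depth: HxW metres, K: 3x3 intrinsics,
    cam_to_world: 4x4 pose. Returns Nx3 world-space seed positions."""
    labels = slic(rgb, n_segments=n_segments, start_label=0)
    seeds = []
    for s in range(labels.max() + 1):
        ys, xs = np.nonzero(labels == s)
        if len(xs) == 0:
            continue
        u, v = xs.mean(), ys.mean()                  # superpixel centre in pixels
        z = depth[int(round(v)), int(round(u))]
        if z <= 0:                                   # skip invalid depth
            continue
        # Back-project through the pinhole model, then move to world space.
        x = (u - K[0, 2]) * z / K[0, 0]
        y = (v - K[1, 2]) * z / K[1, 1]
        p_world = cam_to_world @ np.array([x, y, z, 1.0])
        seeds.append(p_world[:3])
    return np.asarray(seeds)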