VRST ’24: 30th ACM Symposium on Virtual Reality Software and Technology

SESSION: Session 1: Presence and Immersion

Context-Relevant Locations as an Alternative to the Place Illusion in Augmented Reality

  • Kalila Shapiro
  • Anthony Steed

Presence is a powerful aspect of Virtual Reality (VR). However, there is no consensus on how to achieve presence in Augmented Reality (AR), or whether it exists there at all. The Place Illusion, a key component of presence as defined in VR, cannot be obtained in AR: there is no way to make users feel transported somewhere else when they are limited to what they can physically see in front of them. Recently, however, it has been argued that coherence or congruence are important parts of the Place and Plausibility Illusions. The implication for AR is that AR content might invoke a higher Plausibility Illusion if it is consistent with the physical place in which the content is situated. In this study, we define the concept of a Context-Relevant Location (CRL), a physical place that is congruent with the experience. We present a between-subjects study in which users interacted with AR objects in a CRL and in a generic environment. The results indicate that presence was higher in the CRL setting than in the generic environment; they contribute to the debate about concretely describing presence-like phenomena in AR and suggest that CRLs play a role similar to that of the Place Illusion in an AR setting.

Lifter for VR Headset: Enhancing Immersion, Presence, Flow, and Alleviating Mental and Physical Fatigue during Prolonged Use

  • JaeHoon Kim
  • DongYun Joo
  • Hyemin Shin
  • Sun-Uk Lee
  • Gerard Jounghyun Kim
  • Hanseob Kim

Virtual reality (VR) headsets are still relatively heavy, causing a significant physical and mental burden and negatively affecting the VR user experience, particularly during extended periods of use. In this paper, we present a prototype design of the “Lifter,” which utilizes a counterbalanced wire-pulley mechanism to partially relieve the weight of the VR headset (between 50% and 85%). A human-subject study confirmed that the Lifter not only relieved physical fatigue but also significantly reduced mental burden and improved the sense of immersion, presence, and flow (perception of time passing) during prolonged usage (30 minutes or more).

The Influence of a Low-Resolution Peripheral Display Extension on the Perceived Plausibility and Presence

  • Larissa Brübach
  • Marius Röhm
  • Franziska Westermeier
  • Carolin Wienrich
  • Marc Erich Latoschik

The Field of View (FoV) is a central technical display characteristic of Head-Mounted Displays (HMDs) and has been shown to have a notable impact on important aspects of the user experience. For example, an increased FoV has been shown to foster a sense of presence and improve peripheral information processing, but it also increases the risk of VR sickness. This article investigates the impact of a wider but inhomogeneous FoV on perceived plausibility, measuring its effects on presence, spatial presence, and VR sickness as a comparison to and replication of effects from prior work. We developed a low-resolution peripheral display extension that pragmatically increases the FoV by exploiting the lower peripheral acuity of the human eye. While this design results in an inhomogeneous resolution at the display edges, it is also a low-complexity, low-cost extension whose effects on important VR qualities have to be identified. We conducted two experiments with 30 and 27 participants, respectively. In a randomized 2×3 within-subject design, participants played three rounds of bowling in VR, both with and without the display extension. Two rounds contained incongruencies to induce breaks in plausibility. In experiment 2, we enhanced one incongruency to make it more noticeable and addressed previously identified shortcomings of the display extension. In neither study, however, did the low-resolution FoV extension show a measurable effect on perceived plausibility, presence, spatial presence, or VR sickness. We found that one of the incongruencies could cause a break in plausibility without the extension, confirming the results of a previous study.

MeetingBenji: Tackling Cynophobia with Virtual Reality, Gamification, and Biofeedback

  • Inês Alves
  • Augusto Esteves

Phobias, particularly animal phobias like cynophobia (fear of dogs), disrupt the lives of those affected, for instance by limiting outdoor activities. While virtual reality exposure therapy (VRET) has emerged as a potential treatment for this phobia, these efforts have been limited by high dropout rates and by the difficulty people with cynophobia have in handling stressful situations. Motivated by these challenges, we present MeetingBenji, a VRET system for cynophobia that uses (i) gamification to enhance motivation and engagement, and (ii) biofeedback to facilitate self-control and reduce physiological responses. In a study (N=10) that compared the effects of displaying dogs in 3D scenes and 360º videos using the Behavioral Approach Test (BAT) – in which participants are increasingly exposed to the source of their phobia – participants reported a high level of immersion during the exposure sequence. Further, they reported feeling more anxiety with 3D content than with 360º video (60%), showed lower heart rates in the presence of biofeedback (between 1.71% and 7.46%), and improved their self-control across the three exposure levels. They appreciated our gamified elements, completing all exposure levels. This study suggests that VRET with gamification and biofeedback is an effective approach to stimulating habituation in people with cynophobia.

iStrayPaws: Immersing in a Stray Animal’s World through First-Person VR to Bridge Human-Animal Empathy

  • Yao Xu
  • Ding Ding
  • Yongxin Chen
  • Zhuying Li
  • Xiangyu Xu

While Virtual Reality Perspective-Taking (VRPT) has demonstrated its effectiveness in inducing empathy, its application primarily focuses on vulnerable humans, not animals. Existing animal-related work mainly targets farm animals and wildlife. In this work, we focus on stray animals and introduce iStrayPaws, a VRPT system that simulates stray animals’ challenging lives. The system offers users an immersive first-person journey into the world of stray animals as they encounter difficulties such as inclement weather, hunger, and illness. Enriched with audio-visual and kinesthetic design, the system seeks to deepen users’ understanding of stray animals’ lives and foster profound emotional connections. To evaluate the system, we conducted a user study, which showed that VRPT recipients exhibited significant improvement in both state and trait empathy compared to a traditional method. Our research not only delivers a novel, accessible, and interactive animal empathy experience but also provides innovative solutions for addressing stray animal issues and advancing broader animal welfare work.

Exploring Presence in Interactions with LLM-Driven NPCs: A Comparative Study of Speech Recognition and Dialogue Options

  • Frederik Roland Christiansen
  • Linus Nørgaard Hollensberg
  • Niko Bach Jensen
  • Kristian Julsgaard
  • Kristian Nyborg Jespersen
  • Ivan Nikolov

Combining modern technologies like large language models (LLMs), speech-to-text, and text-to-speech can enhance immersion in virtual reality (VR) environments. However, challenges exist in effectively implementing LLMs and educating users. This paper explores implementing LLM-powered virtual social actors and facilitating user communication. We developed a murder mystery game where users interact with LLM-based non-playable characters (NPCs) through interrogation, clue-gathering, and exploration. Two versions were tested: one using speech recognition and another with traditional dialog boxes. While both provided similar social presence, users felt more immersed with speech recognition but found it overwhelming, whereas the dialog version was more challenging. Slow NPC response times were a source of frustration, highlighting the need for faster generation or better masking to achieve a seamless experience.

SESSION: Session 2: Navigation and Motion

Effects of Different Tracker-driven Direction Sources on Continuous Artificial Locomotion in VR

  • Christos Lougiakis
  • Theodoros Mandilaras
  • Akrivi Katifori
  • Giorgos Ganias
  • Ioannis-Panagiotis Ioannidis
  • Maria Roussou

Continuous artificial locomotion in VR typically involves users selecting their direction using controller input, with the forward direction determined by the Head, Hands, or less commonly, the Hip. The effects of these different sources on user experience are under-explored, and Feet have not been used as a direction source. To address these gaps, we compared these direction sources, including a novel Feet-based technique. A user study with 22 participants assessed these methods in terms of performance, preference, motion sickness, and sense of presence. Our findings indicate high levels of presence and minimal motion sickness across all methods. Performance differences were noted in one task, where the Head outperformed the Hand. The Hand method was the least preferred, feeling less natural and realistic. The Feet method was found to be more natural than the Head and more realistic than the Hip. This study enhances understanding of direction sources in VR locomotion and introduces Feet-based direction as a viable alternative.

Influence of Rotation Gains on Unintended Positional Drift during Virtual Steering Navigation in Virtual Reality

  • Hugo Brument
  • Arthur Chaminade
  • Ferran Argelaguet

Unintended Positional Drift (UPD) is a phenomenon that occurs during navigation in Virtual Reality (VR). It is characterized by the user’s unconscious or unintentional physical movements in the workspace while using a locomotion technique (LT) that does not require physical displacement (e.g., steering, teleportation). Recent work showed that factors such as the LT used and the type of trajectory can influence UPD. However, little is known about the influence of rotation gains (commonly used in redirection-based LTs) on UPD during navigation in VR. In this paper, we report two user studies assessing this influence. In the first study, participants performed consecutive turns in a virtual corridor environment. In the second study, participants freely explored a large office floor and collected spheres. We compared conditions with and without rotation gains and also varied the turning angle, while considering factors such as sensitivity to cybersickness and learning effects. We found that rotation gains and lower turning angles decreased UPD in the first study, but the presence of rotation gains increased UPD in the second study. This work contributes to the understanding of UPD, a topic that tends to be overlooked, and discusses the design implications of these results for improving navigation in VR.

Exploring the Impact of Visual Scene Characteristics and Adaptation Effects on Rotation Gain Perception in VR

  • Qi Wen Gan
  • Sen-Zhe Xu
  • Fang-Lue Zhang
  • Song-Hai Zhang

Rotation gain is a subtle manipulation technique commonly employed in Redirected Walking (RDW) methods due to its superior capability to alter a user’s virtual trajectory. Previous studies have reported that the imperceptible ranges of rotation gains are influenced by various factors, resulting in different detection threshold values, which may alter RDW performance. In this study, we focus on the effects of scene visual characteristics on rotation gain and rotation gain thresholds (RGTs), which have been less explored in this area. Our experiments cover three visual characteristics: visual density, spatial size, and realism. Each characteristic is tested at two levels, resulting in a design of eight distinct VR scenes. Through extensive statistical analysis, we find that spatial size may influence user perception of rotation gain in different virtual environments (VEs), though the effect appears to be small. No significant sensitivity differences were found for visual density or realism. We show that the short-term temporal effect is another predominant factor influencing user perception of rotation gain, even when users experience different visual stimuli in VEs, such as the different scene visual characteristic settings in our study. This result indicates that users’ adaptation to rotation gain can occur over intervals as short as overnight, rather than over weeks.

Wheelchair Proxemics: Interpersonal Behaviour between Pedestrians and Power Wheelchair Drivers in Real and Virtual Environments

  • Emilie Leblong
  • Fabien Grzeskowiak
  • Sebastien Thomas
  • Louise Devigne
  • Marie Babel
  • Anne-Hélène Olivier

Immersive environments provide opportunities to learn and to transfer skills to real life. This opens up new areas of application, such as rehabilitation, where people with neurological disabilities can learn to drive a power wheelchair (PWC) through the use of immersive simulators. To expose these users to daily-life interaction situations, it is important to ensure realistic interactions with the virtual humans that populate the simulated environment, as PWC users should learn to drive and navigate under everyday conditions. While non-verbal pedestrian-pedestrian interactions have been extensively studied, understanding pedestrian-PWC user interactions during locomotion is still an open research area. Our study investigates the regulation of interpersonal distance (i.e., proxemics) between a pedestrian and a PWC user in real and virtual situations. We designed two experiments in which 1) participants had to reach a goal by walking (respectively, driving a PWC) and avoid a static PWC confederate (respectively, a standing confederate), and 2) participants had to walk to a goal and avoid a static confederate seated in a PWC, in real and virtual conditions. Our results showed that interpersonal distances differed significantly depending on whether the pedestrian avoided the PWC user or vice versa. We also showed an influence of the orientation of the person to be avoided. We discuss these findings with respect to pedestrian-pedestrian interactions, as well as their implications for the design of virtual humans interacting with PWC users in rehabilitation applications. In particular, we propose a proof of concept that adapts existing microscopic crowd simulation algorithms to account for the specificity of pedestrian-PWC user interactions.

Semi-Automated Guided Teleportation through Immersive Virtual Environments

  • Tim Weissker
  • Marius Meier-Krueger
  • Pauline Bimberg
  • Robert W. Lindeman
  • Torsten Wolfgang Kuhlen

Immersive knowledge spaces like museums or cultural sites are often explored by traversing pre-defined paths that are curated to unfold a specific educational narrative. To support this type of guided exploration in VR, we present a semi-automated, hands-free path traversal technique based on teleportation that features a slow-paced interaction workflow targeted at fostering knowledge acquisition and maintaining spatial awareness. In an empirical user study with 34 participants, we evaluated two variations of our technique, differing in the presence or absence of intermediate teleportation points between the main points of interest along the route. While visiting additional intermediate points was objectively less efficient, our results indicate significant benefits of this approach regarding the user’s spatial awareness and perception of interface dependability. However, the user’s perception of flow, presence, attractiveness, perspicuity, and stimulation did not differ significantly. The overall positive reception of our approach encourages further research into semi-automated locomotion based on teleportation and provides initial insights into the design space of successful techniques in this domain.

The Effects of Electrical Stimulation of Ankle Tendons on Redirected Walking with the Gradient Gain

  • Takashi Ota
  • Keigo Matsumoto
  • Kazuma Aoyama
  • Tomohiro Amemiya
  • Takuji Narumi
  • Hideaki Kuzuoka

A redirected walking technique has been proposed that enables users to walk through an undulating virtual space in a flat physical environment, by setting the slope of the virtual floor to differ from that of the physical floor without causing discomfort. However, the slope range within which discrepancies between visual and proprioceptive sensations go unnoticed is limited, restricting the slopes that can be presented. In this study, we propose redirected walking using electrical stimulation of the Achilles and tibialis anterior muscle tendons, extending the applicable slope range without compromising the natural gait sensation. Electrical stimulation of the ankle tendons affects proprioception and gives the illusion of tilting in the standing posture, expanding the applicable slope range. Two experiments showed that the proposed method improved the experience of uphill and downhill walking in terms of the range of virtual slopes over which a high naturalness of gait and a high congruency of visual and proprioceptive sensations are maintained. Notably, electrical stimulation of the Achilles tendons significantly improved the naturalness of the walking experience during virtual downhill walking, which previous studies have considered more challenging.

SESSION: Session 3: Technologies

Neural Motion Tracking: Formative Evaluation of Zero Latency Rendering

  • Daniel Roth
  • Valentin Bräutigam
  • Nidhi Joshi
  • Constantin Kleinbeck
  • Hannah Schieber
  • Julian Kreimeier

Low motion-to-photon latencies between physical movement and rendering updates are crucial for an immersive virtual reality (VR) experience and for avoiding users’ discomfort and sickness. Current methods aim to minimize the delay between motion measurement and rendering at the cost of increased technical complexity and possibly decreased accuracy. Because they rely on capturing physical motion, these strategies will, by nature, either not achieve zero-latency rendering or be based on prediction and its resulting uncertainty. This paper presents and evaluates a novel alternative and proof of principle for VR motion tracking that could enable motion-to-photon latencies of zero or even below zero. We term our concept Neural Motion Tracking, which we define as the sensing and assessment of motion through human neural activation of the somatic nervous system. In contrast to measuring physical activity, the key principle is to utilize the physiological timeframe between a user’s intention and the execution of motion. We aim to foresee upcoming motion ahead of the physical movement by sampling the preceding electromyographic signals before muscle activation. The electromechanical delay (EMD) between a potential change in muscle activation and the actual physical movement opens a gap in which measurements can be taken and evaluated before the physical motion. In a first proof of principle, we evaluated the concept with two activities, arm bending and head rotation, measured with a binary activation measure. Our results indicate that it is possible to predict movement and update a rendering up to 2 ms before its physical execution, which is detected by optical tracking only after approximately 4 ms. However, to make the best use of this advantage, electromyography (EMG) sensor data should be of the highest possible quality (i.e., low noise and from muscle-near electrodes). Our results empirically quantify this characteristic for the first time in comparison to state-of-the-art optical tracking systems for VR. We discuss our results and potential pathways to motivate further work toward marker- and latency-less motion tracking.

Investigation of Redirection Algorithms in Small Tracking Spaces

  • Linda Krueger
  • Charles Markham
  • Ralf Bierig

In virtual reality, redirected walking lets users walk in virtual spaces larger than the physical tracking space set aside for their movements. This benefits immersion and spatial navigation compared to virtual locomotion techniques such as teleportation or joystick control. Various algorithms have been proposed to optimise redirected walking. These algorithms have been tested in simulations of large spaces and in small user studies. However, few studies have looked at the user experience of these algorithms in small tracking spaces.

We conducted a user study to compare the performance of different redirected walking algorithms in a small tracking space of 3.5 m × 3.5 m. Three algorithms were chosen based on their approaches to redirection – Reset Only, Steer to Centre and Alignment Based Redirection Control. Thirty-six people participated in the study.

We found that users preferred Reset Only in this tracking space. Reset Only redirects users less and is easier to implement than Steer to Centre or Alignment Based Redirection Control. Additionally, Reset Only performed similarly to Steer to Centre and achieved better task performance than Alignment Based Redirection Control, despite resetting users more often. Based on these findings, we provide guidelines for developers working in small tracking spaces.

Interactive Multi-GPU Light Field Path Tracing Using Multi-Source Spatial Reprojection

  • Erwan Leria
  • Markku Mäkitalo
  • Pekka Jääskeläinen
  • Mårten Sjöström
  • Tingting Zhang

Path tracing combined with multiview displays enables progress towards ultrarealistic virtual reality. However, multiview displays based on light field technology impose a heavy workload for real-time graphics due to the large number of views to be rendered. To achieve low-latency performance, the computational effort can be reduced by path tracing only some views (source views) and synthesizing the remaining views (target views) through spatial reprojection, which reuses path-traced pixels from source views in target views. Deciding the number of source views with respect to the available computational resources is not trivial, since spatial reprojection introduces dependencies into the otherwise trivially parallel rendering pipeline, and path tracing multiple source views increases the computation time.

In this paper, we demonstrate how to reach near-perfect linear multi-GPU scalability through a coarse-grained distribution of the light field path tracing workload. Our multi-source method path traces a single source view per GPU, which helps decrease the number of dependencies. Reducing dependencies reduces the overhead of the image transfers and G-buffer rasterization used for spatial reprojection. On a node of 4 × RTX A6000 GPUs, given 4 source views, we reach a light field rendering frequency of 3–19 Hz, which corresponds to an interactive rate. On four test scenes, we outperform state-of-the-art multi-GPU light field path tracing pipelines, achieving speedups of 1.65× up to 4.63× for 1D light fields of dimension 100 × 1 with a resolution of 768 × 432 per view, and of 1.51× up to 3.39× for 2D stereo near-eye light fields of size 12 × 6 (left eye: 6 × 6 views; right eye: 6 × 6 views) at 1024 × 1024 per view.

Exploring Visual Conditions in Virtual Reality for the Teleoperation of Robots

  • Paul Christopher Gloumeau
  • Stephen Robert Pettifer

In the teleoperation of robots, the absence of proprioception means that visual information plays a crucial role. Previous research has investigated methods to offer optimal vantage points to operators during teleoperation, with virtual reality (VR) being proposed as a mechanism to give the operator intuitive control over the viewpoint for improved visibility and interaction. However, the most effective perspective for robot operation and the optimal portrayal of the robot within the virtual environment remain unclear. This paper examines the impact of various visual conditions on users’ efficiency and preference in controlling a simulated robot via VR. We present a user study that compares two operating perspectives and three robot appearances. The findings indicate mixed user preferences and highlight distinct advantages associated with each perspective and appearance combination. We conclude with recommendations on selecting the most beneficial perspective and appearance based on specific application requirements.

Choose Your Reference Frame Right: An Immersive Authoring Technique for Creating Reactive Behavior

  • Sevinc Eroglu
  • Patric Schmitz
  • Kilian Sinke
  • David Anders
  • Torsten Wolfgang Kuhlen
  • Benjamin Weyers

Immersive authoring enables content creation for virtual environments without a break in immersion. To make immersive authoring of reactive behavior accessible to a broad audience, we present modulation mapping, a simplified visual programming technique. To evaluate the applicability of our technique, we investigate the role of the reference frames in which the programming elements are positioned, as this can affect the user experience. We developed two interface layouts: “surround-referenced” and “object-referenced”. The former positions the programming elements relative to the physical tracking space, the latter relative to the virtual scene objects. We compared the layouts in an empirical user study (n = 34) and found the surround-referenced layout faster, lower in task load, less cluttered, easier to learn and use, and preferred by users. Qualitative feedback, however, revealed the object-referenced layout to be more intuitive, engaging, and valuable for visual debugging. Based on the results, we propose initial design implications for the immersive authoring of reactive behavior by visual programming. Overall, participants found modulation mapping to be an effective means of creating reactive behavior.

SESSION: Session 4: Time

Development and Validation of a 3D Pose Tracking System towards XR Home Training to Relieve Back Pain

  • Nikolai Hepke
  • Moritz Scherer
  • Jörg Lohscheller
  • Steffen Müller
  • Benjamin Weyers

Back pain significantly impacts society, leading to substantial economic costs and reducing individuals’ quality of life. A digital XR physiotherapist could support adherence to home-based training programs, thereby potentially enhancing treatment effectiveness. To provide the accurate biofeedback that is crucial to its success, such a system must be capable of tracking exercise execution reliably and accurately.

In this paper, we present the design of a robust multi-Kinect system capable of tracking human 3D pose, to be used in an autonomous, home-based XR rehabilitation program. The system is evaluated against OpenPose and validated against a marker-based Vicon system, considered the gold standard, in a study involving 20 healthy participants.

The results show that the multi-Kinect system has an overall lower absolute positional error (median 1.2 cm) than OpenPose (median 2.0 cm), as well as a lower median angular error over all keypoints (5.2° vs. 5.9°). Furthermore, the time courses of the Kinect joint positions correlate more strongly with the gold standard than those of the OpenPose system, as confirmed by the results of a Bland-Altman analysis. Generally, the joints of the lower body could be tracked more accurately than those of the upper body.

The study reveals that the multi-Kinect system is overall more robust and tracks exercises with higher accuracy than the multi-OpenPose system, making it better suited for a quantitative XR training program for home use. It furthermore shows that a Kinect aimed at the person’s back does not benefit the overall tracking accuracy.

Motion Passwords

  • Christian Rack
  • Lukas Schach
  • Felix Achter
  • Yousof Shehada
  • Jinghuai Lin
  • Marc Erich Latoschik

This paper introduces “Motion Passwords”, a novel biometric authentication approach in which virtual reality users verify their identity by physically writing a chosen word in the air with their hand controller. This method allows combining three layers of verification: knowledge-based password input, handwriting style analysis, and motion profile recognition. As a first step towards realizing this potential, we focus on verifying users based on their motion profiles. We conducted a data collection study with 48 participants, who performed over 3800 Motion Password signatures across two sessions. We assessed the effectiveness of feature-distance and similarity-learning methods for motion-based verification using the Motion Passwords as well as the specific and uniform ball-throwing signatures used in previous works. In our results, the similarity-learning model was able to verify users with the same accuracy for both signature types. This demonstrates that Motion Passwords, even when applying only the motion-based verification layer, achieve reliability comparable to previous methods, and highlights their potential to become even more reliable with the addition of the knowledge-based and handwriting style verification layers. Furthermore, we present a proof-of-concept Unity application demonstrating the registration and verification process with our pretrained similarity-learning model. We publish our code, the Motion Password dataset, the pretrained model, and our Unity prototype at https://github.com/cschell/MoPs.

Out-Of-Virtual-Body Experiences: Virtual Disembodiment Effects on Time Perception in VR

  • Fabian Unruh
  • Jean-Luc Lugrin
  • Marc Erich Latoschik

This paper presents a novel experiment investigating the relationship between virtual disembodiment and time perception in Virtual Reality (VR). Recent work demonstrated that the absence of a virtual body in a VR application changes the perception of time. However, the effects on time perception of simulating an out-of-body experience (OBE) in VR are still unclear. We designed an experiment with two types of virtual disembodiment techniques based on gradual viewpoint transitions: a transition to a view from behind the virtual body and a transition to a view facing it. We investigated their effects on forty-four participants in an interactive scenario in which a lamp was repeatedly activated and time intervals were estimated. Our results show that, while both techniques elicited a significant perception of virtual disembodiment, time duration estimations in the minute range were shorter only in the facing view compared to the eye view condition. We believe that the reduced agency in the facing view is a key factor in this alteration of time perception. This provides a first step towards a novel approach to manipulating time perception in VR, with potential applications in mental health treatments for conditions such as schizophrenia or depression, and for improving our understanding of the relation between body, virtual body, and time.

Some Times Fly: The Effects of Engagement and Environmental Dynamics on Time Perception in Virtual Reality

  • Sahar Niknam
  • Stéven Picard
  • Valentina Rondinelli
  • Jean Botev

An hour spent with friends seems shorter than an hour waiting for a medical appointment. Many physiological and psychological factors, such as body temperature and emotions, have been shown to correlate with our subjective perception of time. Experiencing virtual reality (VR) has been observed to make users significantly underestimate the duration of the experience. This paper explores the effect of virtual environment characteristics on time perception, focusing on two key parameters: user engagement and environmental dynamics. We found that increased presence and interaction with the environment significantly decreased users’ estimates of the VR experience duration. Furthermore, while a dynamic environment does not significantly shift perception in one specific direction, that is, towards underestimation or overestimation of durations, it significantly distorts perceived temporal length. Exploiting the influence of these two factors constitutes a powerful tool for designing intelligent and adaptive virtual environments that can reduce stress, alleviate boredom, and improve well-being by adjusting the pace at which we experience the passage of time.

Enhancing VR Sketching with a Dynamic Shape Display

  • Wen Ying
  • Seongkook Heo

Sketching on virtual objects in Virtual Reality (VR) can be challenging due to the lack of a physical surface that constrains the movement and provides haptic feedback for contact and movement. While using a flat physical drawing surface has been proposed, it creates a significant discrepancy between the physical and virtual surfaces when sketching on non-planar virtual objects. We propose using a dynamic shape display that physically mimics the shape of a virtual surface, allowing users to sketch on a virtual surface as if they are sketching on a physical object’s surface. We demonstrate this using VRScroll, a shape-changing device that features seven independently controlled flaps to imitate the shape of a virtual surface automatically. Our user study showed that participants exhibited higher precision when tracing simple shapes with the dynamic shape display and produced clearer sketches. We also provided several design implications for dynamic shape displays aimed at enabling precise sketching in VR.

SESSION: Session 5: Multimodality

Simulating Object Weight in Virtual Reality: The Role of Absolute Mass and Weight Distributions

  • Alexander Kalus
  • Johannes Klein
  • Tien-Julian Ho
  • Niels Henze

Weight interfaces enable users of Virtual Reality (VR) to perceive the weight of virtual objects, significantly enhancing realism and enjoyment. While research on these systems has primarily focused on their implementation, little attention has been given to determining the weight they should render: as the perceived weight of an object is influenced not only by its absolute mass but also by its weight distribution and prior expectations, it is currently unknown which simulated mass provides the most realistic representation of a given object. We conducted a study in which 30 participants chose the best-fitting weight for a virtual object across 54 experimental trials. Across these trials, we systematically varied the virtual objects’ visual mass (three levels), their weight distribution (six levels), and the position of the physical mass on the grip (three levels). Our Bayesian analysis suggests that the visual weight distribution of objects does not affect which absolute physical mass best represents them, whereas the position of the provided physical mass does. Additionally, participants overweighted virtual objects with lower visual mass while underweighting objects with higher visual mass. We discuss how designers of weight interfaces and VR experiences can leverage these findings to optimize realism.

Enriching Industrial Training Experience in Virtual Reality with Pseudo-Haptics and Vibrotactile Stimulation

  • Chiwoong Hwang
  • Tiare Feuchtner
  • Ian Oakley
  • Kaj Grønbæk

Virtual Reality (VR) technology facilitates effective, flexible, and safe industrial training for novice technicians when on-site training is not feasible. However, previous research has shown that training in VR may be less successful than traditional learning approaches in real-world settings, and haptic interaction may be the key to improving virtual training. In this study, we integrated pseudo-haptic feedback from motion delay with vibrotactile stimulation to enhance the sense of presence, enjoyment, and the perception of physical properties in VR, which may be crucial for achieving faithful simulations. The impact of the combined haptic support was assessed in a complex industrial training procedure involving a variety of tasks, such as component assembly and cleaning. The results indicate that vibrotactile cues are beneficial for presence and enjoyment, whereas pseudo-haptic illusions effectively enable kinesthetic sensations. Furthermore, multimodal haptic feedback mixing the two yielded the most advantageous outcomes. Our findings highlight the potential of fusing pseudo-haptics and vibrotactile stimulation in industrial training scenarios, presenting practical implications of state-of-the-art haptic technologies for virtual learning.

Investigating the Impact of Odors and Visual Congruence on Motion Sickness in Virtual Reality

  • Lisa Reichl
  • Martin Kocur

Motion sickness is a prevalent side effect of exposure to virtual reality (VR). Previous work found that pleasant odors can be effective in alleviating symptoms of motion sickness such as nausea. However, it is unknown whether pleasant odors that do not match the anticipated scent of the virtual environment are also effective, as they could, in turn, amplify symptoms such as disorientation. Therefore, we conducted a study with 24 participants experiencing a pleasant odor (rose) and an unpleasant odor (garlic) while being immersed in a virtual environment containing either virtual roses or garlic. We found that participants had lower motion sickness when experiencing the rose odor, but only in the rose environment. Accordingly, the sense of disorientation was also lower for the rose odor, again only while immersed in the rose environment. The results indicate that whether pleasant odors are effective in alleviating motion sickness symptoms depends on the visual appearance of the virtual environment. We discuss possible explanations for these effects. Our work contributes to the goal of mitigating visually induced motion sickness in VR.

Generative Terrain Authoring with Mid-air Hand Sketching in Virtual Reality

  • Yushen Hu
  • Keru Wang
  • Yuli Shao
  • Jan Plass
  • Zhu Wang
  • Ken Perlin

Terrain generation and authoring in Virtual Reality (VR) offers unique benefits, including 360-degree views, improved spatial perception, an immersive and intuitive design experience, and natural input modalities. Yet even in VR it can be challenging to integrate natural input modalities, preserve artistic control, and lower the effort of landscape prototyping. To tackle these challenges, we present a VR-based terrain generation and authoring system that utilizes hand tracking and a generative model to let users quickly prototype natural landscapes such as mountains, mesas, canyons, and volcanoes. Via positional hand tracking and hand gesture detection, users draw mid-air strokes to indicate the desired shapes of the landscape. A Conditional Generative Adversarial Network, trained on real-world terrains and their height maps, then generates a realistic landscape that combines features of the training data with the mid-air strokes. In addition, users can manipulate their mid-air strokes to further edit the landscapes. In this paper, we explore this design space and present various scenarios of terrain generation. Additionally, we evaluate our system with a diverse user base that varies in VR experience and professional background. The study results indicate that our system is feasible, user-friendly, and capable of fast prototyping.

How Different Is the Perception of Vibrotactile Texture Roughness in Augmented versus Virtual Reality?

  • Erwan Normand
  • Claudio Pacchierotti
  • Eric Marchand
  • Maud Marchal

Wearable haptic devices can modify the haptic perception of an object touched directly by the finger in a portable and unobtrusive way. In this paper, we investigate whether such wearable haptic augmentations are perceived differently in Augmented Reality (AR) versus Virtual Reality (VR), and when touching with a virtual hand instead of one’s own hand. We first designed a system for real-time rendering of vibrotactile virtual textures without constraints on hand movements, integrated with an immersive visual AR/VR headset. We then conducted a psychophysical study with 20 participants to evaluate the haptic perception of virtual roughness textures on a real surface touched directly with the finger (1) without visual augmentation, (2) with a realistic virtual hand rendered in AR, and (3) with the same virtual hand in VR. On average, participants overestimated the roughness of the haptic textures when touching with their real hand alone and underestimated it when touching with a virtual hand in AR, with VR in between. Exploration behaviour was also slower in VR than with the real hand alone, although subjective evaluation of the texture was not affected. We discuss how the perceived visual delay of the virtual hand may produce this effect.

SESSION: Session 6: Collaboration and Games

TeenWorlds: Supporting Emotional Expression for Teenagers with their Parents and Peers through a Collaborative VR Experience

  • Evropi Stefanidi
  • Nadine Wagener
  • Dustin Augsten
  • Andy Augsten
  • Leon Reicherts
  • Paweł W. Woźniak
  • Johannes Schöning
  • Yvonne Rogers
  • Jasmin Niess

Adolescence is a period of growth and exploration, marked by influential relationships with peers and parents. These relationships are essential for teenagers’ well-being, highlighting the need to support their interpersonal interactions. Emotional expression is key to resolving the conflicts that can frequently arise. This paper investigates the potential of TeenWorlds, a Virtual Reality (VR) application, to facilitate emotional expression and shared understanding between teenagers and their peers and parents. In our study, teenagers, accompanied by either a peer or a parent (total n=42), used TeenWorlds to visually represent their emotions during a shared conflict, discuss them, and collaborate on a joint VR drawing. Our findings indicate that TeenWorlds can foster communication and reflection and strengthen interpersonal relationships. However, notable differences were observed in interactions with peers versus parents. We contribute insights into designing VR systems that support reflective experiences and meaningful family interactions, ultimately enhancing the well-being of adolescents, parents, and families.

HistoLab VR: A User Elicitation Study Exploring the Potential of Virtual Reality Game-based Learning for Hazard Awareness

  • Robin Timon Hänni
  • Tiffany Luong
  • Julia Chatain
  • Felix Mangold
  • Holger Dressel
  • Christian Holz

Occupational medicine is a vital field for workplace safety and health but often encounters challenges in engaging students and effectively communicating subtle yet critical workplace hazards. To tackle these issues, we developed HistoLab VR, a Virtual Reality (VR) game that immerses participants in a histology lab environment based on real-world practice. Our comprehensive user study with 17 students and experts assessed the game’s impact on hazard awareness, interest in occupational medicine, and user experience through quantitative and qualitative measures. Our findings show that HistoLab VR not only immersed participants in a relatable histology lab worker experience but also effectively raised awareness of subtle hazards and conveyed the inherent stress of the job. We discuss our results and highlight the potential of VR as a valuable educational tool for occupational medicine training.

Game-Based Motivation: Enhancing Learning with Achievements in a Customizable Virtual Reality Environment

  • Michael Holly
  • Sandra Brettschuh
  • Ajay Shankar Tiwari
  • Kaushal Kumar Bhagat
  • Johanna Pirker

Digital learning experiences that promote interactive learning and engagement are becoming increasingly relevant. Educational games can be used to create an engaging learning atmosphere that allows knowledge acquisition through hands-on activities. Combining them with virtual reality (VR) allows users to interact with virtual environments, leading to a highly immersive learning experience. In this study, we explore how game achievements impact motivation and learning in a customizable VR learning environment. Using an A/B test involving 50 students, we utilized an interactive wave simulation to assess motivation, engagement, and the overall learning experience. Data collection involved standardized questionnaires, along with tracking interaction time and interactions within the virtual environment. The findings revealed that users who earned game achievements to unlock customization features felt significantly more accomplished when they mastered challenges and obtained all achievements. However, adding achievements could also put pressure on students, leading to feelings of embarrassment when facing task failures. While achievements have the potential to enhance engagement and motivation, their excessive use may lead to distraction, anxiety, and reduced overall engagement. This shows that it is crucial to find a good balance in employing game achievements within educational environments to ensure they contribute positively to the learning experience without causing undue stress or deterring learners.

Hands or Controllers? How Input Devices and Audio Impact Collaborative Virtual Reality

  • Alex Adkins
  • Ryan Canales
  • Sophie Jörg

Advancing virtual reality technologies are enabling real-time virtual face-to-face communication. Hand tracking systems integrated into Head-Mounted Displays (HMDs) enable users to interact directly with their environments and with each other using their hands rather than controllers. Due to the novelty of these technologies, our understanding of how they impact our interactions is limited. In this paper, we investigate the consequences of using different interaction control systems, hand tracking or controllers, when interacting with others in a virtual environment. We design and implement NASA’s Survival on the Moon teamwork evaluation exercise in virtual reality (VR) and test for effects with and without allowing verbal communication. We evaluate social presence, perceived comprehension, team cohesion, group synergy, and task workload, as well as task performance and duration. Our findings reveal that audio communication significantly enhances social presence, perceived comprehension, and team cohesion, but it also increases effort workload and negatively impacts group synergy. The choice of interaction control system has limited impact on most aspects of virtual collaboration in this scenario, although participants using hand tracking reported lower effort workload, while participants using controllers reported lower mental workload in the absence of audio.

Exploring User Placement for VR Remote Collaboration in a Constrained Passenger Space

  • Daniel Medeiros
  • Graham Wilson
  • Mauricio Sousa
  • Nadia Pantidi
  • Mark McGill
  • Diego Drago
  • Stephen Brewster

Extended Reality (XR) offers the potential to transform the passenger experience by allowing users to inhabit varied virtual spaces for entertainment, work or social interaction, whilst escaping the constrained transit environment. XR allows remote collaborators to feel like they are together and enables them to perform complex 3D tasks. However, the social and physical constraints of the passenger space pose unique challenges to productive and socially acceptable collaboration. Using a collaborative VR puzzle task, we examined the effects of five different f-formations of collaborator placement and orientation in an interactive workspace on social presence, task workload, and implications for social acceptability. Our quantitative and qualitative results showed that face-to-face formations were preferred for tasks with a high need for verbal communication but may lead to social collisions, such as inadvertently staring at a neighbouring passenger, or physical intrusions, such as gesturing in another passenger’s personal space. More restrictive f-formations, however, were preferred for passenger use as they caused fewer intrusions on other passengers’ visual and physical space.

Stand Alone or Stay Together: An In-situ Experiment of Mixed-Reality Applications in Embryonic Anatomy Education

  • Danny Schott
  • Matthias Kunz
  • Florian Heinrich
  • Jonas Mandel
  • Anne Albrecht
  • Rüdiger Braun-Dullaeus
  • Christian Hansen

Where traditional media and methods reach their limits in anatomy education, mixed-reality (MR) environments can provide effective learning support because of their high interactivity and spatial visualization capabilities. However, the underlying design and pedagogical requirements are as diverse as the technologies themselves. This paper examines the effectiveness of individual and collaborative learning environments for anatomy education, using embryonic heart development as an example. Both applications deliver the same content using identical visualizations and hardware but differ in interactivity and pedagogical approach. The environments were evaluated in a user study with medical students (n = 90) during their examination phase, assessing usability, user experience, social interaction/co-presence, cognitive load, and personal preference. Additionally, we conducted a knowledge test before and after an MR learning session to determine educational effects compared to a conventional anatomy seminar. Results indicate that the individual learning environment was generally preferred. However, no significant difference in learning effectiveness could be shown between the conventional approach and the MR applications. This suggests that both can effectively complement traditional seminars despite their different natures. Our study contributes to understanding how different MR settings could be tailored to anatomy education.

SESSION: Session 7: Cognitive Aspects

Contextual Matching Between Learning and Testing Within VR Does Not Always Enhance Memory Retrieval

  • Takato Mizuho
  • Takuji Narumi
  • Hideaki Kuzuoka

Episodic memory is influenced by environmental contexts, such as location and auditory stimuli. The best-known effect is the reinstatement effect: the phenomenon whereby contextual matching between learning and testing enhances memory retrieval. Previous studies have investigated whether the reinstatement effect can be observed within immersive virtual environments. However, only a limited number of studies have reported a significant reinstatement effect using virtual reality, while most have failed to detect it. In this study, we re-examined the reinstatement effect using 360-degree video-based virtual environments. Specifically, we carefully selected virtual environments to elicit different emotional responses, which has been suggested as a key factor in inducing a robust reinstatement effect in the physical world. Surprisingly, we found a significant reversed reinstatement effect with a large effect size. This counter-intuitive result suggests that contextual congruence does not necessarily enhance memory and may even interfere with it. This outcome may be explained by the retrieval-induced forgetting phenomenon, but further exploration is needed. The finding is particularly important for virtual reality-based educational applications and highlights the need for a deeper understanding of the complex interactions between memory and contextual cues within virtual environments.

Toward Facilitating Search in VR With the Assistance of Vision Large Language Models

  • Chao Liu
  • Chi San (Clarence) Cheung
  • Mingqing Xu
  • Zhongyue Zhang
  • Mingyang Su
  • Mingming Fan

While search is a common need in Virtual Reality (VR) applications, current approaches are cumbersome, often requiring users to type on a mid-air keyboard using controllers in VR or to remove the VR equipment and search on a computer. We first conducted a literature review and a formative study, identifying six common search needs: knowing about one object, knowing about an object’s partial details, knowing objects in their environmental context, knowing about interactions with objects, and finding objects within the field of view (FOV) and out of the FOV in the VR scene. Informed by these needs, we designed technology probes that leverage recent advances in Vision Large Language Models and conducted a probe-based study with users to elicit feedback. Based on the findings, we derived design principles for VR designers and developers to consider when designing a user-friendly search interface in VR. While prior work on VR search has tended to address specific aspects of search, our work contributes design considerations aimed at enhancing the ease of search in VR, along with potential future directions.

Evaluating Gaze Interactions within AR for Nonspeaking Autistic Users

  • Ahmadreza Nazari
  • Lorans Alabood
  • Molly Kay Rathbun
  • Vikram K. Jaswal
  • Diwakar Krishnamurthy

Nonspeaking autistic individuals often face significant inclusion barriers in various aspects of life, mainly due to a lack of effective communication means. Specialized computer software, particularly delivered via Augmented Reality (AR), offers a promising and accessible way to improve their ability to engage with the world. While research has explored near-hand interactions within AR for this population, gaze-based interactions remain unexamined. Given the fine motor skill requirements and potential for fatigue associated with near-hand interactions, there is a pressing need to investigate the potential of gaze interactions as a more accessible option. This paper presents a study investigating the feasibility of eye gaze interactions within an AR environment for nonspeaking autistic individuals. We utilized the HoloLens 2 to create an eye gaze-based interactive system, enabling users to select targets either by fixating their gaze for a fixed period or by gazing at a target and triggering selection with a physical button (referred to as a ‘clicker’). We developed a system called HoloGaze that allows a caregiver to join an AR session to train an autistic individual in gaze-based interactions as appropriate. Using HoloGaze, we conducted a study involving 14 nonspeaking autistic participants. The study had several phases, including tolerance testing, calibration, gaze training, and interacting with a complex interface: a virtual letterboard. All but one participant were able to wear the device and complete the system’s default eye calibration; 10 participants completed all training phases that required them to select targets using gaze only or gaze-click. Interestingly, the 7 users who chose to continue to the testing phase with gaze-click were much more successful than those who chose to continue with gaze alone. We also report on challenges and improvements needed for future gaze-based interactive AR systems for this population. Our findings pave the way for new opportunities for specialized AR solutions tailored to the needs of this under-served and under-researched population.

Exploring Immersive Debriefing in Virtual Reality Training: A Comparative Study

  • Kelly Minotti
  • Guillaume Loup
  • Thibault Harquin
  • Samir Otmane

Simulation and debriefing are two essential and inseparable phases of virtual reality training. With the widespread adoption of these training tools, it is crucial to define the best pedagogical approaches for trainers and learners to maximize their effectiveness. However, despite their educational benefits, virtual reality-specific debriefing methods remain underexplored in research. This article proposes an architecture and interface for an all-in-one immersive debriefing module that is adaptable to different types of training, including a complete system for recording, replaying, and redoing actions. A study with 36 participants compared this immersive debriefing system with traditional discussion-based and video-supported debriefing. Participants were divided into three groups to evaluate the effectiveness of each method. The results showed no significant differences between the debriefing methods across several criteria, such as satisfaction, motivation, and information retention. In this context, immersive debriefing is thus as usable and as effective for retention as traditional or video-supported debriefing. The next step will be to evaluate the redo system in other training courses involving more dynamic scenarios.

A Critical Review of Virtual and Extended Reality Immersive Police Training: Application Areas, Benefits & Vulnerabilities

  • Lena Podoletz
  • Mark McGill
  • David McIlhatton
  • Jill Marshall
  • Niamh Healy
  • Leonie Maria Tanczer

Virtual and Extended Reality (VR/XR) headsets have promised to enhance police training through the delivery of immersive simulations able to be conducted anywhere, anytime. However, little consideration has been given to reviewing the evidenced benefits and potential issues posed by XR police training. In this paper, we summarise the evidenced usage and benefits of XR police training through a formative targeted literature review (n=41 publications). We then reflect on the prospective technical, security, social and legal issues posed by XR police training, identifying four areas where issues or vulnerabilities exist: training content, trainees and trainers, systems and devices, and state and institutional stakeholders. We highlight significant concerns around e.g. the validity of training; the psychological impact and risks of trauma; the safety and privacy risks posed to trainees and trainers; and the risks to policing institutions. We aim to encourage end-user communities (e.g. police forces) to more openly reflect on the risks of immersive training, so we can ultimately move towards transparent, validated, trusted training that is evidenced to improve policing outcomes.

The Impact of Task-Responsibility on User Experience and Behaviour under Asymmetric Knowledge Conditions

  • Pauline Bimberg
  • Daniel Zielasko
  • Benjamin Weyers

Virtual Reality presents a promising tool for knowledge transfer, allowing users to learn in different environments and with the help of three-dimensional visualizations. At the same time, having to learn new ways of interacting with their environment can present a significant hurdle for novice users. When users enter a virtual space to receive knowledge from a more experienced person, the question arises as to whether they benefit from learning VR-specific interaction techniques instead of letting the expert take over some or all interactions. Based on related work about expert-novice interaction in virtual spaces, this paper presents a user study comparing three different distributions of interaction responsibilities between participants and an expert user. The Role-Based interaction mode gives the expert the full interaction responsibility. The Shared interaction mode gives both users the same interaction capabilities, allowing them to share the responsibility of interacting with the virtual space. Finally, the Parallel interaction mode gives participants full interaction responsibility, while the expert can provide guidance through oral communication and visual demonstration. Our results indicate that assuming interaction responsibility led to higher task loads but also increased the participants’ engagement and feeling of presence. For most participants, sharing interaction responsibilities with the expert represented the best trade-off between engagement and challenge. While we did not measure a significant increase in learning success, participant comments indicated that they paid more attention to details when assuming more interaction responsibility.

SESSION: Session 8: Displays

Evaluating the effects of Situated and Embedded Visualisation in Augmented Reality Guidance for Isolated Medical Assistance

  • Frederick George Vickery
  • Sébastien Kubicki
  • Charlotte Hoareau
  • Lucas Brand
  • Aurelien Duval
  • Seamus Thierry
  • Ronan Querrec

A major advantage of Augmented Reality (AR) is the wide range of possibilities it offers for displaying information in the physical world, especially when applying Situated Analytics (SitA). AR devices and their respective interaction techniques allow for supplementary guidance to assist an operator carrying out complex procedures, such as medical diagnosis and surgery. Their usage promotes user autonomy by presenting relevant information when the operator may not possess expert knowledge of every procedure and may not have access to external help, such as in a remote or isolated situation (e.g., the International Space Station, the middle of an ocean, a desert).

In this paper, we compare two different forms of AR visualisation, an embedded visualisation and a situated projected visualisation, with the aim of assisting operators with the most appropriate visualisation format when carrying out procedures (medical, in our case). To evaluate these forms of visualisation, we carried out an experiment involving 23 participants with latent/novice medical knowledge. These participant profiles are representative of operators who are medically trained yet do not apply their knowledge every day (e.g., an astronaut in orbit or a sailor out at sea). We discuss our findings, which include the advantages of embedded visualised information in terms of precision compared to situated projected information, along with the accompanying limitations and future improvements to our proposition. We conclude with the prospects of our work, notably the possibility of evaluating our proposition in a less controlled, real-world context in collaboration with our national space agency.

Optimizing spatial resolution in head-mounted displays: evaluating characteristics of peripheral visual field

  • Masamitsu Harasawa
  • Yamato Miyashita
  • Kazuteru Komine

An ideal head-mounted display (HMD) is a device that provides a visual experience indistinguishable from that given by the naked eye. Such an HMD must display images with spatial resolution surpassing that of the human visual system. However, excessively high spatial resolution is resource-wasting and inefficient. To optimize this balance, we evaluated the spatial resolution characteristics of the human visual system in the peripheral visual field. Our experiment was performed based on a head-centered coordinate system, acknowledging that users can move their eyes freely within the HMD housing fixed on the user’s head. We measured threshold eccentricities at which low-pass filtered noise patterns could be distinguished from intact noise patterns, manipulating cut-off spatial frequencies from one to eight cycles per degree. The results revealed clear asymmetries between the temporal and nasal, as well as between the upper and lower visual fields. In the temporal and lower fields, lower cut-off spatial frequencies resulted in higher eccentricity thresholds. Notably, the smaller impact of spatial frequencies in the nasal and upper visual fields is likely due to visual obstruction by facial structures, such as the nose. Our results can serve as a standard for pixel arrangement design in an ideal HMD.
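
For concreteness, the stimulus manipulation described above can be sketched as white noise low-pass filtered at a cut-off given in cycles per degree; the display geometry (pixels per degree) and the hard-edged Fourier filter below are illustrative assumptions, not the authors’ stimulus code.

```python
import numpy as np

def lowpass_noise(size_px, pixels_per_degree, cutoff_cpd, rng=None):
    """Generate a noise patch with no energy above cutoff_cpd (cycles/degree)."""
    rng = np.random.default_rng() if rng is None else rng
    noise = rng.standard_normal((size_px, size_px))
    f = np.fft.fftfreq(size_px)                        # cycles per pixel
    fx, fy = np.meshgrid(f, f)
    radius_cpd = np.hypot(fx, fy) * pixels_per_degree  # convert to cycles/degree
    spectrum = np.fft.fft2(noise)
    spectrum[radius_cpd > cutoff_cpd] = 0.0            # hard low-pass in Fourier domain
    return np.real(np.fft.ifft2(spectrum))

# Example: a 512-px patch at an assumed 20 px/degree, cut off at 4 cpd.
patch = lowpass_noise(512, pixels_per_degree=20, cutoff_cpd=4)
```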

An Evaluation of Targeting Methods in Spatial Computing Interfaces with Visual Distractions

  • Fabian Räthel
  • Susanne Schmidt
  • Jenny Gabel
  • Lukas Posniak
  • Frank Steinicke

In modern spatial computing devices, users are confronted with diverse methods for object selection, including eye gaze (cf. Apple Vision Pro), hand gestures (cf. Microsoft HoloLens 2), touch gestures (cf. Google Glass Enterprise Edition 2), and external controllers (cf. Magic Leap 2). Although there is a plethora of empirical studies on which selection techniques perform best, a common limiting factor stems from their partly artificial setups. These typically exclude practical influences such as visual distraction.

In this paper, we present a user study comparing two hand-based and two gaze-based state-of-the-art selection methods, using the HoloLens 2. We extended a traditional Fitts’ law-inspired study design by incorporating a visual task that simulates changes in the user interface after a successful selection. Without a visual task, gaze-based techniques were on average faster than hand-based techniques. This performance gain was eliminated (for head gaze) or even reversed (for eye gaze) when the visual task was active. These findings underscore the value of continued practice-oriented research on targeting methods in virtual environments.
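
For context, Fitts’-law-style selection studies typically quantify target difficulty and throughput with the Shannon formulation; the snippet below computes these standard quantities (the example numbers are illustrative, not results from this paper).

```python
import math

def index_of_difficulty(distance, width):
    """Shannon formulation of Fitts' index of difficulty, in bits."""
    return math.log2(distance / width + 1)

def throughput(distance, width, movement_time_s):
    """Throughput in bits/s for one selection; distance and width in the same unit."""
    return index_of_difficulty(distance, width) / movement_time_s

# Example: a target 0.6 m away and 0.05 m wide, selected in 0.9 s.
print(index_of_difficulty(0.6, 0.05))  # ~3.70 bits
print(throughput(0.6, 0.05, 0.9))      # ~4.11 bits/s
```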

Evaluation of AR Pattern Guidance Methods for a Surface Cleaning Task

  • Jeroen Ceyssens
  • Mathias Jans
  • Gustavo Rovelo Ruiz
  • Kris Luyten
  • Fabian Di Fiore

Cleanroom cleaning is a surface coverage task in which a prescribed pattern must be followed correctly and the entire surface must be covered. We investigate the efficacy of augmented reality (AR) by implementing various pattern guidance designs to enhance a cleanroom cleaning task. We developed an AR guidance system for cleaning procedures and evaluated four distinct pattern guidance methods: (1) breadcrumbs, (2) examples, (3) middle lines, and (4) outlines. We also vary whether instructions are shown for the entire surface at once or one step at a time. To measure the performance, accuracy, and user satisfaction associated with each guidance method, we conducted a large-scale (n=864) between-subjects study. Our findings indicate that single-step instructions were more intuitive and efficient than full-surface instructions, especially for the breadcrumbs. We also discuss the implications of our results for the development of AR applications for surface coverage and pattern optimization.

Aerial Imaging System to Reproduce Reflections in Specular Surfaces

  • Ayaka Sano
  • Motohiro Makiguchi
  • Ayami Hoshi
  • Hiroshi Chigira
  • Takayoshi Mochizuki

A table-top reflective aerial imaging system can display digital information as if it existed on a specular horizontal surface, such as a marble or acrylic table-top. This system allows a user sitting in a chair to naturally look down at an aerial image displayed on a table, thus integrating the aerial image into daily life. However, although it displays an aerial image on a specular surface, it does not reproduce the reflected image, thus failing to achieve optical consistency. To improve the presence of aerial images, we propose a new table-top reflective aerial imaging system that not only displays aerial images on a specular surface but also reproduces the reflected images in that surface. The proposed system consists of an aerial imaging optical system that can display independent aerial images on and in a specular surface, and a reflected image reproduction method that designs the luminance of the aerial image inside the specular surface according to the actual measured material reflectance. We implemented a prototype to verify the principle of aerial imaging optics and evaluated the naturalness of the reflected image reproduced by our method through user experimentation. The results show that the difference between the material of the specular surface and the shape of the aerial image affects whether the reflected image produced by the proposed method is perceived as natural or not.

Superpixel-guided Sampling for Compact 3D Gaussian Splatting

  • Myoung Gon Kim
  • SeungWon Jeong
  • Seohyeon Park
  • JungHyun Han

In this paper, we propose to integrate superpixel-guided sampling into the framework of 3D Gaussian Splatting for novel view synthesis. Given a sequence of frames, where each frame pairs an RGB-D image with the camera pose from which it was captured, we first select the keyframes. Then, we decompose each keyframe image into superpixels, back-project each superpixel’s center into 3D space, and initialize a 3D Gaussian at the back-projected position. This superpixel-guided sampling produces a set of sparse but well-distributed Gaussians, which enables the optimization procedure to converge quickly. The experimental results show that, at a significantly reduced computing cost, we can synthesize novel views whose quality is comparable to that of state-of-the-art methods.
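
A minimal sketch of the described initialization step, assuming SLIC superpixels (via scikit-image) and a pinhole camera model; the function name, intrinsics layout, and camera-to-world pose convention are our assumptions.

```python
import numpy as np
from skimage.segmentation import slic

def superpixel_seeds(rgb, depth, K, cam_to_world, n_segments=2000):
    """rgb: (H,W,3) float image in [0,1]; depth: (H,W) in metres; K: 3x3
    intrinsics; cam_to_world: 4x4 pose. Returns (N,3) Gaussian seed positions."""
    labels = slic(rgb, n_segments=n_segments, start_label=0)
    fx, fy, cx, cy = K[0, 0], K[1, 1], K[0, 2], K[1, 2]
    seeds = []
    for label in np.unique(labels):
        ys, xs = np.nonzero(labels == label)
        u, v = xs.mean(), ys.mean()          # superpixel centre in pixel coordinates
        z = float(np.median(depth[ys, xs]))  # robust depth for the segment
        if z <= 0:
            continue                         # skip invalid depth
        # Pinhole back-projection into camera space, then into world space.
        p_cam = np.array([(u - cx) * z / fx, (v - cy) * z / fy, z, 1.0])
        seeds.append((cam_to_world @ p_cam)[:3])
    return np.asarray(seeds)
```

Seeding one Gaussian per superpixel keeps the initial set sparse while respecting image structure, which matches the convergence argument above.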

SESSION: Poster Abstracts

Editing Immersive Recordings: An Elicitation Study

  • Anton Benjamin Lammert
  • Jonas Linß
  • Juan Camilo Garcia Cano
  • Tony Jan Zoeppig
  • Bernd Froehlich

Immersive recordings capture virtual reality interactions and are used in various contexts such as education and entertainment. However, there has been only limited research on requirements and techniques for editing such recordings. We interviewed expert editors of video recordings to understand their workflows, familiarised them with immersive recordings, and asked them about what editing challenges and capabilities they can envision for immersive recordings. The experts identified several functionalities they considered relevant for editing, including viewer placement, control over the viewer’s size, support for live and asynchronous collaboration, and different transition types.

Towards an Avatar Customization System for Semi-realistic Ethnically-diverse Virtual Reality Avatars

  • Nathanaël Lambert
  • Matthew Lakier

Due to the Proteus effect, in which people modify their behaviour based on their avatar, participant avatar representation is an important factor in virtual reality (VR) studies. We develop an open source prototype avatar customization system that enables quick customization of semi-realistic, ethnically-diverse avatars. The prototype provides options for customizing body and face shape, hairstyle, glasses, religious clothing, and skin, eye, and hair colour. The prototype generates avatar assets that are fully rigged and textured for incorporation into VR study code, and it serves as a step towards designing more inclusive VR research studies.

Comparing Tracking Accuracy in Standalone MR-HMDs: Apple Vision Pro, Hololens 2, Meta Quest 3, and Pico 4 Pro

  • Long Cheng
  • Michael Schreiner
  • Andreas Kunz

Modern Mixed Reality Head-Mounted Displays (MR-HMDs) can track user movements across large spaces without external markers. This study evaluates the tracking accuracy and the loop closure capabilities of four commercially available MR-HMDs across four distinct scenarios. We found consistent tracking performance in well-lit and expansive environments for all devices. Tracking accuracy remained stable even in outdoor nighttime conditions. Furthermore, most HMDs demonstrated effective error correction during loop closure, with errors in non-loop scenarios consistently exceeding those in loop scenarios.

Exploring Alternative Text Input Modalities in Virtual Reality: A Comparative Study

  • Mathias Jans
  • Jeroen Ceyssens
  • Kris Luyten

Text input in Virtual Reality (VR) is crucial for communication, search, and productivity. We compared four keyboard designs for VR text entry, leveraging the flexibility and tracking options of a 3D environment. We used the Dvorak layout to control for experience differences. The designs were: (a) a floating keyboard with touch input, (b) a keyboard attached to the back of the hand with touch input, (c) a floating keyboard with eye tracking and pinch input, and (d) a keyboard laid out over a rolling shape with touch input. Designs (b), (c), and (d) can move in 3D space, while design (a) is static. Design (d) had similar efficiency to design (a) but with better usability and lower Physical Demand. Design (b) led to higher Physical Demand, Effort, and Frustration. Design (c) had lower Physical Demand but higher Mental Demand, Effort, and error rates. Typing speeds averaged 6.51 WPM (1.24% error rate) for (a), 5.56 WPM (3.82% error rate) for (b), 5.33 WPM (1.43% error rate) for (c), and 6.70 WPM (1.64% error rate) for (d).
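
For reference, text-entry rates such as those above are conventionally computed by counting five characters as one word; a minimal sketch assuming that standard definition (the example values are illustrative, not taken from the study).

```python
def words_per_minute(transcribed: str, seconds: float) -> float:
    """Standard text-entry metric: five characters count as one word."""
    return (len(transcribed) / 5.0) / (seconds / 60.0)

def error_rate_percent(errors: int, total_chars: int) -> float:
    """Simple character-level error rate, in percent."""
    return 100.0 * errors / total_chars

# Example: 65 characters entered in 120 s with one erroneous character.
print(words_per_minute("x" * 65, 120.0))  # 6.5 WPM
print(error_rate_percent(1, 65))          # ~1.54 %
```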

A Study of Haptic Interaction Techniques Utilizing Body Possession by Virtual Characters

  • Wakana Oshiro
  • Haruno Kataoka
  • Masanori Yokoyama
  • Ryuji Yamamoto

To realize touch communication with virtual characters, we propose a new tactile presentation method that uses the user’s own body part to convey a sense of touch, along with the sense that the body part is possessed by a virtual character. In this paper, as an initial study, we created a system in which the user’s and a virtual character’s hands are visually fused and a pseudo-high-five is performed. The applicability of the proposed method and points for improvement are discussed based on the results of user experiments.

A Study on the Effectiveness of Augmented Reality Signal-Integrated Camera Monitor Systems for Safe Lane Changing

  • Haechan Lee
  • Uijong Ju

This study investigates the effectiveness of augmented reality (AR) signals in camera monitor systems (CMS) for enhancing safety during lane changes. Seventy participants were tested under seven side-mirror conditions, including traditional side mirrors and six CMS conditions with and without AR signals. Results showed that CMS with AR signals significantly reduced the number of collisions and the reaction time compared to CMS without AR signals.

Exploring Influencers’ and Users’ Experiences in Douyin’s Virtual Reality Live-Streaming

  • Rongyi Chen
  • Jingjia Xiao
  • Zilu Wang
  • Menghan Yin
  • Xianzhe Fan
  • Zihe Ran
  • Qing Xiao

VR live-streaming has become an emerging phenomenon on Douyin. This study explores the technical modes, content strategies, and user experiences in Douyin’s VR live-streaming. Through interviews and focus groups, we found that VR technology is recognized by influencers and has become an essential part of their creative practice. For some influencers, VR technology is a key factor in enhancing audience engagement and immersive experiences, although technical literacy barriers may arise when setting up VR scenes. We also provide dimensions for improving user adoption and experience of VR technology in social media environments.

Digital Eyes: Social Implications of XR EyeSight

  • Maurizio Vergari
  • Tanja Kojic
  • Wafaa Wardah
  • Maximilian Warsinke
  • Sebastian Möller
  • Jan-Niklas Voigt-Antons
  • Robert Spang

The EyeSight feature, introduced with the new Apple Vision Pro XR headset, promises to revolutionize user interaction by simulating real human eye expressions on a digital display. This feature could enhance XR devices’ social acceptability and social presence when communicating with others outside the XR experience. In this pilot study, we explore the implications of the EyeSight feature by examining social acceptability, social presence, emotional responses, and technology acceptance. Eight participants engaged in conversational tasks in three conditions to contrast experiencing the Apple Vision Pro with EyeSight, the Meta Quest 3 as a reference XR headset, and a face-to-face setting. Our preliminary findings indicate that while the EyeSight feature improves perceptions of social presence and acceptability compared to the reference headset, it does not match the social connectivity of direct human interactions.

SOLDAR: Supporting Low-Volume PCB Prototyping Using Collaborative Robots and Augmented Reality

  • Xander Vaes
  • Dries Cardinaels
  • Mannu Lambrichts
  • Raf Ramakers

Printed circuit boards (PCBs) are fundamental to modern electronics and are present in almost every electronic device. However, despite their ubiquity, current PCB assembly methods can be time-consuming and lack flexibility for one-off designs. This poster investigates how low-volume PCB prototyping can be enhanced by integrating collaborative robots (cobots) and Augmented Reality (AR). Specifically, we introduce SOLDAR, a system that facilitates the soldering of electronic through-hole components on PCBs. By using a cobot for optimal PCB positioning and AR glasses for step-by-step guidance, SOLDAR aims to streamline the assembly process. The expected outcomes are increased efficiency, reduced assembly time, and greater flexibility for low-volume PCB prototyping designs. To validate these hypotheses, user experiments are necessary.

Earscape: A VR Auditory Educational Escape Room

  • Christian Tsalidis
  • Ali Adjorlu
  • Lone Marianne Percy-Smith
  • Stefania Serafin

According to the World Health Organisation’s World Report on Hearing, there is a strong need to provide better education on hearing loss from a young age. This project aims to educate the Danish young population (13- to 17-year-olds) about the sense of hearing through an educational multiplayer virtual reality escape room, drawing on the benefits of educational escape rooms. In collaboration with relevant audiologist stakeholders, this project follows an iterative process of design, implementation, and evaluation of the application. The developed solution will undergo several user studies in the following months.

Testing 360-Degree Video Communication in a Debate Training between Teens: An Exploratory Field Study

  • Marta Orduna
  • Pablo Perez
  • Kamil Koniuch
  • Ester Gonzalez-Sosa
  • Alvaro Villegas

This paper evaluates a virtual teleportation system based on 360-degree video communication in a field study conducted at a secondary school during a guided debate training session. Socioemotional aspects, including social presence and avatar representation, were assessed by eight teens. Key findings include a higher “wow” effect for those joining the immersive environment and challenges with avatar interaction, such as raising a hand. The system shows promise as an educational tool for debate training and student engagement.

3D Human Pose Estimation Using Egocentric Depth Data

  • Seongmin Baek
  • Youn-Hee Gil
  • Yejin Kim

In this paper, we present a novel approach for 3D human pose estimation using depth data from egocentric viewpoints. Depth data has the advantage of being less sensitive to color and lighting changes. We acquired depth data streamed from multiple depth cameras attached to a user’s head and calibrated the streams into a single depth map. For joint detection, a ResNet-based network was optimized using the skeletal joints provided by a Kinect camera. Unlike previous approaches, the proposed approach can track 3D human poses in an egocentric setup with a small dataset.

Haptic and Auditory Feedback on Immersive Media in Virtual Reality

  • Vanessa Pfeiffer
  • Sebastian von Mammen
  • Daniel Pohl

In Virtual Reality (VR), visual and auditory sensations are effectively leveraged to create immersive experiences. However, touch is significantly underutilized in immersive media. We enhance the VR image viewing experience by integrating haptic and auditory feedback into 3D environments constructed from immersive media. We address the challenges of utilizing depth maps from various image formats to create interactable environments. The VR experience is enhanced using vibrohaptic feedback and audio cues triggered by controller collisions with haptic materials.

From Ground to Sky: Flying-motion Generation via Motion Dataset Adaptation

  • Jinwoo Jeong
  • Youngho Chai

We conducted a study utilizing a lightweight generative network to create flying motions. The existing datasets used for training did not include any data on flying motions. Therefore, we selected certain classes from existing motion datasets and transformed these motions to resemble flying actions. By training the generative network on the modified dataset, we were able to generate motions that closely resemble flying. The results of this study demonstrate the potential for generating flying motions. The generation of flying motions for human avatars is expected to be a critical technology not only in the 3D animation and game industries but also in virtual environments, enabling users to experience various activities through their avatars.

Walking of uphill slopes in immersive virtual environments

  • Michelle Meyer
  • Markus Zank

We explore three visual manipulation techniques aiming to create a realistic feeling of walking up a slope while, in reality, being on flat ground. The techniques are based on real physical visual perception and consist of modifying the height and display of virtual shoes, modifying the walking speed, and modifying the view pitch. Quantitative and qualitative evaluation indicated that the speed and pitch modifications contributed to user discomfort, as did a general increase in discomfort correlating with the slope’s increasing inclination. However, the height manipulation was well received and can be used in future projects for more realistic landscapes.

Enhanced Wayfinding Insights Through VR and Eye-Tracking Analysis

  • Gerard T Mulvany
  • Christian John Dyson Hayes
  • Tuba Kocaturk
  • Victoria Duckett
  • Thuong Hoang
  • Stefan Greuter

This paper presents a novel method for evaluating wayfinding within a public building to provide meaningful insights for stakeholders. Our approach features unique methods for both data collection and evaluation, with a holistic digital capture of the entire virtual environment experienced by participants, maintained in an interactive format for in-depth analysis. We also captured and output data in point cloud formats, raw data text files, and task-specific metrics, which support interactive replays of participants’ experiences. We developed algorithms to extract meaningful insights from the raw data based on assumptions about wayfinding characteristics. The contribution is a flexible framework that can be easily adapted for future projects with adjustable variables to suit specific applications.

Pipelining Processors for Decomposing Character Animation

  • Christian Merz
  • Jonathan Tschanter
  • Florian Kern
  • Jean-Luc Lugrin
  • Carolin Wienrich
  • Marc Erich Latoschik

This paper presents an openly available implementation of a modular pipeline architecture for character animation. It effectively decomposes frequently necessary processing steps into dedicated character processors, such as copying data from various motion sources, applying inverse kinematics, or scaling the character. Processors can easily be parameterized, extended (e.g., with AI), and freely arranged or even duplicated in any order necessary, greatly reducing side effects and fostering fine-tuning, maintenance, and reusability of the complex interplay of real-time animation steps.

Study of inpainting based on generative AI for noise-canceling HMDs

  • Nobuchika Sakata
  • Waki Youei

Entering a small space, such as an elevator or a crowded train, with a stranger can cause discomfort and a feeling of suffocation, because the stranger is intruding on the individual’s personal space. However, it is difficult to maintain an appropriate interpersonal distance from others at all times in various situations. Therefore, a noise-canceling HMD [2][3] that uses AR to change the size of the person in the field of vision has been proposed as a means of reducing “noise” such as the discomfort caused by inappropriate interpersonal distance. In this paper, we propose an improvement method that uses generative AI for background completion in noise-canceling HMDs.

A Comparison between Vibrotactile Error-correction Feedback on Upper and Lower Body in the VR Snowboard Balancing Task

  • Jaewan Lim
  • Jiyoung Park
  • Dohoon Kwak
  • Yongjae Yoo

This study investigated the effect of vibrotactile stimulus location on a balancing task in virtual reality (VR). Using a virtual snowboarding system with wearable haptic devices, we conducted a between-subjects user study comparing the effectiveness of two body locations: the upper body (UB; torso vibrations) and the lower body (LB; ankle vibrations). The real-time vibrotactile balance-correction feedback was generated from the Center of Pressure (CoP) calculated from the sensor array on insoles. Initial results showed that UB feedback improved users’ balancing ability more effectively than LB feedback.
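
As a rough sketch of how such feedback can be derived, the snippet below computes the CoP as a pressure-weighted average of sensor positions and turns its deviation from a neutral point into a correction direction; the sensor layout, neutral point, and dead zone are our assumptions, not the authors’ implementation.

```python
import numpy as np

def center_of_pressure(sensor_xy, pressures):
    """sensor_xy: (N,2) sensor positions in metres; pressures: (N,) readings."""
    pressures = np.asarray(pressures, dtype=float)
    total = pressures.sum()
    if total <= 0:
        return None  # no load detected
    return (pressures[:, None] * np.asarray(sensor_xy)).sum(axis=0) / total

def correction_direction(cop, neutral_xy=(0.0, 0.0), deadzone_m=0.02):
    """Unit vector pointing back toward neutral posture, or None inside the dead zone."""
    error = np.asarray(neutral_xy, dtype=float) - cop
    dist = float(np.linalg.norm(error))
    return None if dist < deadzone_m else error / dist

# Example: four insole sensors with more pressure toward the toes (positive y).
cop = center_of_pressure([(-0.03, -0.1), (0.03, -0.1), (-0.03, 0.1), (0.03, 0.1)],
                         [1.0, 1.0, 3.0, 3.0])
print(correction_direction(cop))  # direction to signal, here [0, -1]
```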

Towards Effective Sensorimotor Skill Transfer: Initial Comparison between Haptic Guidance and Disturbance on VR Engraving Arts

  • Jiyoung Park
  • Jaewan Lim
  • Taeyoon Lee
  • Yongjae Yoo

This paper compares force-feedback methods for sensorimotor skill transfer in a VR engraving art task. Using a 3-DoF force-feedback device, we implemented a VR engraving art system that provides two haptic feedback methods: haptic guidance and haptic disturbance. For haptic guidance, the force profile we gathered from an art expert was presented as is, while for haptic disturbance the profile was applied in the inverse direction. We evaluated users’ task performance with both methods against a no-haptic-feedback baseline, and the results showed that haptic disturbance led to slightly better performance than guidance.

A Transfer Learning Approach for Music-driven 3D Conducting Motion Generation with Limited Data

  • Jisoo Oh
  • Jinwoo Jeong
  • Youngho Chai

Generating motions from audio using deep learning has been studied steadily. However, previous research has mainly focused on speech-driven 3D gesture generation and music-driven 3D dance motion generation. We aim to generate 3D motions for specific scenarios, such as conducting. To address the lack of existing training datasets, we constructed a multi-modal 3D conducting motion dataset, which contains 1.43 hours of data and is therefore small in scale. Furthermore, we propose a novel approach that uses transfer learning with a model pre-trained on a speech gesture dataset to generate 3D conducting motions. We evaluate the generated motions both with and without transfer learning, using quantitative and qualitative metrics. Our results show that the proposed method improves performance in both aspects compared to the baseline without transfer learning.

The MASTER XR Platform for Robotics Training in Manufacturing

  • László Kopácsi
  • Panagiotis Karagiannis
  • Sotiris Makris
  • Johan Kildal
  • Andoni Rivera-Pinto
  • Judit Ruiz de Munain
  • Jesús Rosel
  • Maria Madarieta
  • Nikolaos Tseregkounis
  • Konstantina Salagianni
  • Panagiotis Aivaliotis
  • Michael Barz
  • Daniel Sonntag

The MASTER project introduces an open Extended Reality (XR) platform designed to enhance human-robot collaboration and train workers in robotics within manufacturing settings. It includes modules for creating safe workspaces, intuitive robot programming, and user-friendly human-robot interactions (HRI), including eye-tracking technologies. The development of the platform is supported by two open calls targeting technical SMEs and educational institutes to enhance and test its functionalities. By employing the learning-by-doing methodology and integrating effective teaching principles, the MASTER platform aims to provide a comprehensive learning environment, preparing students and professionals for the complexities of flexible and collaborative manufacturing settings.

Object-Specific and Generic Difference Detection for Non-Destructive Testing Methods

  • Andreas Dietze
  • Yvonne Jung
  • Paul Grimm

In this article, we present a concept for a tool suite that combines generic and object-specific difference detection between 3D measurement and 3D planning data (cf. Figure 1). In a Generic Difference Detection (GDD) between measurement and planning data, the objects of interest to be compared are represented by the acquired 3D data (often the solid of a 3D reconstruction) and the 3D planning data in their entirety. An example of this is a shape analysis used as a non-destructive testing method for products from an additive (e.g., 3D printing) or subtractive (e.g., CNC milling) manufacturing process, in which a 3D reconstruction of the produced object is compared with its original 3D planning data [2]. The range of applications in this area is very broad and includes quality control, plagiarism checks, and measuring wear and tear. In an Object-Specific Difference Detection (OSDD), on the other hand, the objects of interest are located within the measurement and planning data and are represented by specific components that have to be compared. Here, the sector of digital construction monitoring can be mentioned as an example, in which the construction process and associated specific construction components, such as walls, passages, and window openings, are checked against the planning data based on acquired measurement data [1]. Both difference detection techniques work regardless of whether the object of interest is present in both the measurement and planning data or only in one of them (e.g., after a building refurbishment). Besides the benefit of quality assurance following a production or construction process, which is covered by both difference detection concepts, early detection of errors is of great advantage. For example, follow-up costs after a construction process (e.g., a building) can be reduced or avoided by identifying errors during the construction process. In addition, this can also help to ensure that the schedule is adhered to. Depending on the application, there is also the option of feeding identified deviations or errors back into the planning data in order to synchronize the planning data with the actual state of the object, in case the differences are intentional. While our approach for GDD is limited to a shape similarity analysis of two 3D objects and a subsequent real-time visualization of detected differences, our method for OSDD can already feed detected deviations and errors back into the 3D planning data and allows a collaborative result visualization based on a multi-codal presentation (e.g., tabular data or 3D rendering).
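
As a rough illustration of the generic (GDD) case, shape differences between measurement and planning data can be expressed as per-point deviations between point clouds; the nearest-neighbour formulation and tolerance below are our assumptions, not the authors’ method.

```python
import numpy as np
from scipy.spatial import cKDTree

def deviation_map(measured_pts, planned_pts):
    """Distance from each measured point to its nearest planned point."""
    tree = cKDTree(np.asarray(planned_pts))
    distances, _ = tree.query(np.asarray(measured_pts))
    return distances

def flag_deviations(measured_pts, planned_pts, tolerance_m=0.005):
    """Boolean mask marking measured points that deviate beyond the tolerance."""
    return deviation_map(measured_pts, planned_pts) > tolerance_m
```

A mask of this kind could then drive a colour-coded real-time visualization of detected differences, as described above.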

VR4UrbanDev: An Immersive Virtual Reality Experience for Energy Data Visualization

  • Saeed Safikhani
  • Georg Arbesser Rastburg
  • Anna Schreuer
  • Jürgen Suschek-Berger
  • Hermann Edtmayer
  • Johanna Pirker

In this demonstration paper, we present our interactive virtual reality (VR) experience, designed to facilitate interaction with energy-related information. The experience offers two main modes: a world-in-miniature mode for large-scale visualizations and a first-person mode for real-world-scale visualizations. Additionally, we presented our approach to potential target groups in interviews. The results of these interviews can help developers tailor future implementations to the requirements of each group.

Learning From Yourself: Effects of Doppelgängers on Foreign Language Anxiety, Trust, and Learning Outcomes

  • Milana Wolff
  • Jennifer LaVanchy
  • Yvonne Swader
  • Stephanie May
  • Cade W Anderson
  • Amy Banić

We investigate the role of doppelgängers in foreign language anxiety, trustworthiness, and Spanish vocabulary recall. Participants (n = 31) completed three Spanish language lessons presented by a doppelgänger, a generic virtual human (VH), or a disembodied voice in immersive or desktop VR; completed the Foreign Language Classroom Anxiety Scale (FLCAS) and the Ohanian trustworthiness inventory; and were assessed on vocabulary recall. Our findings suggest potential avenues for leveraging doppelgängers in the design of VHs for foreign language learning.

UXR-kit: An Ideation Kit and Method for Collaborative and User-Centered Design about Extended Reality systems.

  • Juliette Vauchez
  • Charles Bailly
  • Julien Castet

Emerging kits and methods for Extended Reality (XR) systems are mainly centered on the prototyping phase. The ideation phase, which comes before prototyping, is still under-explored. In this work, we propose UXR-kit, a toolkit and method for the co-design of ideas for XR systems. UXR-kit is based on an approach inspired by design studios and generative techniques and highlights the specificities of XR systems. Results from an experimental study suggest that UXR-kit supports the emergence of ideas for XR designs through both World-in-Miniature representations and first-person representations at 1:1 scale.

Cultural Windows: Towards Immersive Journeys into Global Living Spaces

  • Hessam Djavaherpour
  • Pierre Dragicevic
  • Yvonne Jansen

“Cultural Windows” is a research initiative aimed at enhancing cross-cultural understanding through immersive extended reality (XR) experiences. The project deploys AR and VR platforms to allow users to explore diverse living spaces, bridging the gap between preconceived notions and the actual appearance of these spaces. By using 3D scanning to create accurate models of culturally significant objects and integrating them into immersive systems, the project provides insights into the use of immersive technologies in cultural education, promoting engagement with global living designs.

Single Vs Dual: Influence of the Number of Displays on User Experience within Virtually Embodied Conversational Systems

  • Navid Ashrafi
  • Francesco Vona
  • Sina Hinzmann
  • Philipp Graf
  • Philipp Harnisch
  • Jan-Niklas Voigt-Antons

The current research evaluates user experience and preference when interacting with a patient-reported outcome measure (PROM) healthcare application displayed on a single tablet, compared to interacting with the same application distributed across two tablets. We conducted a within-subject user study with 43 participants who engaged with and rated the usability of our system and participated in a post-experiment interview to collect subjective data. Our findings showed significantly higher usability and pragmatic quality ratings for the single-tablet condition. However, some users attributed a higher level of presence to the avatar and preferred it to be placed on a second tablet.

Single Distance Clipping in Vertex Shader for VR Portal

  • Guodong Rong
  • Yichao Wang
  • Steven Lansel

The VR portal is a powerful tool for connecting multiple VR spaces and providing impressive visual effects. The implementation of VR portals requires the VR compositor to perform a clipping operation. For performance reasons, the clipping needs to be performed in the vertex shader, which prevents most existing algorithms from being used here. This paper proposes a novel algorithm that performs the clipping operation with a single distance value. The proposed algorithm is well suited to the vertex shader and can handle both normal clipping and inverse clipping. This enables users to view a VR portal from both sides. The proposed algorithm also frees up clip planes for other purposes, and its performance is superior to the traditional algorithm using multiple clip planes.
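
Since the paper’s shader code is not reproduced here, the following reference-math sketch (in Python rather than shader code) illustrates clipping by a single signed distance, the value one would write to gl_ClipDistance[0] in a vertex shader; negating it yields the inverse clip, so one scalar serves both sides of the portal.

```python
import numpy as np

def clip_distance(vertex_world, plane_point, plane_normal, inverse=False):
    """Signed distance from a vertex to the portal plane; vertices with a
    negative value are discarded by the rasterizer's clipping stage."""
    n = np.asarray(plane_normal, dtype=float)
    n /= np.linalg.norm(n)
    d = float(np.dot(np.asarray(vertex_world, dtype=float)
                     - np.asarray(plane_point, dtype=float), n))
    return -d if inverse else d

# Normal clip keeps geometry in front of the portal plane...
print(clip_distance([0, 0, 1], plane_point=[0, 0, 0], plane_normal=[0, 0, 1]))  # 1.0 -> kept
# ...while the inverse clip keeps geometry behind it.
print(clip_distance([0, 0, 1], [0, 0, 0], [0, 0, 1], inverse=True))             # -1.0 -> clipped
```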

VReflect: Designing VR-Based Movement Training with Perspectives, Mirrors and Avatars

  • Dennis Dietz
  • Fabian Berger
  • Changkun Ou
  • Francesco Chiossi
  • Giancarlo Graeber
  • Andreas Martin Butz
  • Matthias Hoppe

Physical training in virtual environments, such as VR, has gained popularity, especially due to the coronavirus pandemic. VR training offers new opportunities compared to traditional methods, including the use of different perspectives, mirrors, and avatars to enhance the understanding of personal movements. However, the interaction of these elements has been less studied. To address this, we developed VReflect, a VR environment that uses mirrors and avatars as virtual self-visualization techniques (VSVT) to improve self-awareness during movement training. In a preliminary study on learning beginner karate movements, we tested four combinations of perspectives and VSVTs. The results indicate that neither first-person nor third-person perspectives can be universally recommended, which is in alignment with previous work. Interviews revealed a preference for the traditional combination of mirrors and first-person perspective.

A Volumetric Video Application to Enhance Museum Experiences

  • Anh Nguyen
  • Jan Lässig
  • Anna F. Kälin
  • Ariana Huwiler
  • Philipp Haslbauer
  • Gareth W. Young
  • Lam Kit Yung
  • Aljosa Smolic

Volumetric video (VV) is an emerging 3D format that allows the integration of real people into XR (extended reality) applications. Recent cost-effective AI-based methods have enabled VV capture using single handheld cameras or mobile phones. This study addresses the quality, integration, and acceptance of AI-based VV content creation in an augmented reality (AR) application designed to enhance museum experiences. The main result reveals that, although the current VV quality is lower than professional standards, users still find significant added value and enjoy its immersive experience.

Effectiveness of Adaptive Difficulty Settings on Self-efficacy in VR Exercise

  • Yusuke Goutsu
  • Tetsunari Inamura

Difficulty is a fundamental factor in a user’s motivation and engagement in a task. Dynamic difficulty adjustment (DDA) systems provide users with an optimal level of challenge. Previous studies have developed DDA systems that can set a task’s difficulty to any level, but they did not investigate the influence of the difficulty levels on psychological aspects. For this purpose, we consider a difficulty setting that consists of stepwise difficulty levels (e.g., hard, normal, and easy) adapted to each user’s skill, and we evaluate it using self-efficacy. In the experiment, we employ a Kendama task in a VR space where the difficulty level can be easily adjusted. The results show that the difficulty levels in our method can be set according to the user’s skill. Moreover, we experimentally establish a strong correlation between successful experiences in imagination and the enhancement of self-efficacy in this difficulty setting, which means that adapting difficulty levels to the user’s skill has the potential to enhance self-efficacy effectively.

Investigation of Simulator Sickness in Walking with Multiple Locomotion Technologies in Virtual Reality

  • Yu Wang
  • Jakob Eckkrammer
  • Martin Kocur
  • Philipp Wintersberger

With the rapid development of Virtual Reality, locomotion has become an essential component of interaction in VR. Various locomotion technologies have been developed to provide users with a natural walking experience in virtual environments, but different walking techniques affect the walking experience in different ways. Simulator sickness is a common issue in VR experiences, and different walking methods may influence it differently. We therefore conducted a user study evaluating simulator sickness with three relevant walking methods: real walking, arm-swinging, and an omnidirectional treadmill. The results indicated that the three walking methods caused different levels of simulator sickness, with participants perceiving stronger sickness when walking on the omnidirectional treadmill.

Exploring an XR Indoor Navigation System for Remote Collaboration

  • Rishab Bhattacharyya
  • Linda Hirsch
  • Leif Oppermann

While collaboration in shared extended reality spaces has been extensively explored, larger environments like entire floors or buildings have garnered less attention. To address this gap, spatial navigation and collaboration across realities must be made possible so that users can find each other and foster shared spatial understanding independent of reality. Current developments target either navigation or collaboration but not their combination. In this poster, we present an extended reality remote collaboration system that uses augmented reality (AR)-based indoor navigation for on-site users and a Virtual Reality (VR) system based on a Building Information Model (BIM) of the physical environment for remote users. We conducted a user study with ten participants (five pairs) to gather initial insights into the system’s usability and preferences for collaborative tools. The results offer initial insights into creating shared spatial understanding across realities. Our work contributes a collaborative XR navigation system for extensive shared spaces.

Wheel-Based Attachable Footwear for VR: Challenges and Opportunities in Seated Walking-in-Place Locomotion

  • Zheyu Zhang
  • Syed Masum Billah

This poster explores the potential of Cybershoes, a foot-based consumer input device, used with a swivel chair to enable seated walking-in-place (WIP) locomotion in virtual reality (VR). Through a qualitative study with 12 participants, we investigated the effects of Cybershoes on user comfort, presence, motion sickness, and overall experience during various sightseeing tasks. Our findings reveal both opportunities and challenges for Cybershoes as a seated-WIP solution. Participants perceived Cybershoes as more natural for navigation compared to handheld controllers, with most reporting reduced motion sickness. However, challenges included perceived slower movement speed, ergonomic issues, and limited action detection. Our work also highlights Cybershoes’ potential beyond gaming, including applications in exercise, professional training, remote work, and accessibility.

White Lies in Virtual Reality: Impact on Enjoyment and Fatigue

  • Haruka Murakami
  • Vittorio Fiscale
  • Agata Marta Soccini
  • Tetsunari Inamura

This study examined the impact of a “white lie” designed to boost motivation during virtual reality exercise on enjoyment and mental fatigue. Participants engaged in a ball-throwing or ball-targeting task and were randomly assigned to groups with or without the white lie. Results indicated that both groups experienced similar levels of enjoyment and fatigue, suggesting the white lie had minimal effect on these factors. All participants, regardless of group, reported high levels of enjoyment, with 17 out of 18 indicating they had fun; no significant differences in mental fatigue were found between groups, and participants generally favored the white lie. The positive experience across all participants highlights the potential of Virtual Reality for promoting exercise engagement.

Dynamic Difficulty Adjustment in Virtual Reality Exergaming to Regulate Exertion Levels via Heart Rate Monitoring

  • Lucas Küntzer
  • Moritz Scherer
  • Tilo Mentler
  • Georg Rock

By regulating exertion levels, dynamic difficulty adjustment (DDA) has the potential to enhance user experience and optimize exercise in Virtual Reality (VR) exergames. This pilot study assesses the effectiveness of adjusting the difficulty of gameplay challenges based on heart rate (HR) data to control the intensity of physical activity in VR exergaming. Observational results from 13 participants indicate that the HR-based DDA maintained target heart rate zones more effectively than randomized adjustments. Improved perceived exertion and increased enjoyment underline the potential of this approach for VR-based exercise and rehabilitation programs.
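
As a rough illustration of such a heart-rate-driven control loop, the sketch below nudges the difficulty up or down to keep HR inside a target zone; the zone bounds (from the common 220 - age estimate of maximum heart rate), step size, and difficulty range are our assumptions, not the study’s implementation.

```python
def target_zone(age_years, lower=0.64, upper=0.76):
    """Moderate-intensity zone derived from the common 220 - age estimate of max HR."""
    hr_max = 220 - age_years
    return lower * hr_max, upper * hr_max

def adjust_difficulty(difficulty, heart_rate_bpm, zone, step=0.1,
                      min_d=0.0, max_d=1.0):
    """Nudge difficulty so the player's heart rate drifts into the target zone."""
    low, high = zone
    if heart_rate_bpm < low:     # under-exerted: make the game harder
        difficulty += step
    elif heart_rate_bpm > high:  # over-exerted: ease off
        difficulty -= step
    return max(min_d, min(max_d, difficulty))

# Example: a 30-year-old at 110 bpm (zone ~121.6-144.4 bpm) gets a harder game.
zone = target_zone(30)
print(adjust_difficulty(0.5, 110, zone))  # 0.6
```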

SESSION: Demo Abstracts

The Guide’s Apprentices: Engaging Visitors of Virtual Museums through Appropriate Agent Embodiments

  • Ephraim Schott
  • Irene López García
  • Tony Jan Zoeppig
  • Manuel Hartmann
  • Bernd Froehlich

Embodied conversational agents are an established approach for immersively conveying the narratives and information of virtual museums. Recent advances in large language models and text-to-speech systems enable the active guidance of users by agents and raise the question of how such assistants should be visually embodied to engage visitors and promote knowledge transfer. This demo showcases a stylized humanoid guide, a novel animism-based approach with speaking objects, and a combination of both concepts to interactively guide users through a digital replica of the famous Gropius Room at the Bauhaus University in Weimar.

Hybrid Input Technique for VR Combining Head Mounted Keyboard with Head Pointing

  • Ryoto Nohara
  • Hironori Ishikawa
  • Hiroyuki Manabe

The Head Mounted Keyboard (HMK) is a text input method that attaches a physical split keyboard to the head mounted display (HMD). It allows high-speed text input by utilizing the touch-typing skills that many PC users have. However, numeric input with the HMK is challenging because the user cannot see the keyboard. Our solution is a hybrid technique that combines the HMK with head pointing: letters are entered via the HMK, while numeric input uses head pointing on a virtual keyboard. In an experiment on typing tasks, the proposed technique outperformed conventional alternatives. Some application examples were implemented to explore the application space of the technique.

Rendering Diffraction Phenomena on Rough Surfaces in Virtual Reality

  • Olaf Clausen
  • Arnulph Fuhrmann
  • Martin Mišiak
  • Marc Erich Latoschik
  • Ricardo Marroquim

Wave-optical phenomena, such as diffraction, significantly impact the visual appearance of surfaces. Despite their importance, wave-optical reflection models are rare and computationally expensive. Recently, we presented a real-time model that accounts for diffraction-induced color shifts and speckle. Given that diffraction phenomena are highly dependent on illumination and viewing directions, as well as stereoscopic vision, we developed a VR demo to evaluate the new model. This demo shows the substantial impact of diffraction on the appearance of rough surfaces, particularly in stereoscopic viewing.

Supporting Wildfire Evacuation Preparedness through a Virtual Reality Simulation

  • Alison Crosby
  • MJ Johns

This demo presents a virtual reality simulation of a wildfire evacuation. Players are tasked with going through a home environment and collecting items they believe they would need and want to take if they were under an evacuation notice. The experience is playable on the Meta Quest 2 headset.

Row your boat in VR and solve thinking exercises on the way: The Brain-Row Challenge

  • Christoph Lürig
  • Tilo Mentler
  • Sven Karstens

In this demo, we showcase the Brain-Row Challenge, a research prototype for dual-task training in Virtual Reality (VR). Dual-task training combines a mental and a physical task and is relevant in neurodegenerative diseases, especially Parkinson’s disease. The user rows with a Concept 2 ergometer across a Nordic lake, follows a marked route, and answers multiple-choice questions by rowing through gates. Steering is done with an inertial measurement unit attached to the handlebar. The VR experience can also be compared to a less immersive representation of the rowing course on a TV screen.

ChronoShore: Diegetic Temporal Exploration in a Simulated Virtual Coast Environment

  • Yuen C. Law
  • Lucca Troll
  • Daniel Zielasko

This paper introduces ChronoShore, an immersive virtual reality (VR) experience designed to explore diegetic time manipulation mechanics within a semi-realistic coastal environment. Traditional 2D video scrubbing methods fall short in immersive settings, particularly for understanding time-bound processes such as simulations of geology or biology. ChronoShore addresses this by allowing users to interact with celestial bodies to dynamically control and experience the passage of time, currently showcasing different weather events and atmospheric phenomena.

Virtual Lab – A VR Showroom for Biosignals Research

  • Asmus Eike Eilks
  • Ahmed Seyit Kücük
  • Felix Putze
  • Tanja Schultz

In this demo, we present Virtual Lab, a VR Showroom for Biosignals devices and experiments. In Virtual Lab, visitors can explore a digital twin of the real Biosignals Lab at the Cognitive Systems Lab at the University of Bremen, and interact with digital clones of real biosignals devices, including visualizations of their function and purpose.

Hands-On Plant Root System Reconstruction in Virtual Reality

  • Dirk Norbert Baker
  • Tobias Selzner
  • Jens Henrik Göbbert
  • Hanno Scharr
  • Morris Riedel
  • Ebba Þóra Hvannberg
  • Andrea Schnepf
  • Daniel Zielasko

VRoot is an immersive extended reality tool for reconstructing root system architectures (RSAs) from 3D volumetric scans of soil columns. We conducted a laboratory user study to assess the performance of new users with our software in comparison to established software. We utilize a plant model to derive a synthetic root architecture, providing a baseline for reconstruction. This demo showcases the processes and techniques contributing to exact and efficient manual root architecture reconstruction in Virtual Reality. The extraction task is typically the extraction of a sparse graph structure from a 3D magnetic resonance imaging (MRI) dataset. We visualize the RSA directly within the MRI data and offer selection-set-based methods for adapting and augmenting the root architecture. The application is in productive use at our partner institute, where it is used to analyze complex root images.

Cybersicker: An Open Source VR Sickness Testbed – Do you still have fun, or are you already sick?

  • Daniel Zielasko
  • Yuen C. Law

Cybersickness poses a significant barrier to the widespread adoption of VR applications, particularly when virtual movement occurs without corresponding physical movement. While current strategies to mitigate cybersickness exist, none are entirely effective, especially if virtual movement is essential. Further research is needed, and this requires controlled induction of cybersickness in empirical studies. In this demo, we introduce Cybersicker, a simulator designed to bridge the gap between high-intensity, proprietary simulators and less engaging alternatives. The simulator provides controlled translational and rotational vection while encouraging user participation.

Off-The-Shelf: Exploring 3D Arrangements of See-Through Masks to Switch between Virtual Environments

  • Kevin Linne
  • Sven Thomas
  • Martin Weigel

This demo explores prioritization techniques to arrange see-through masks in virtual reality (VR). The oval masks show live previews of different virtual environments (VEs) and allow for seamless teleportation into the corresponding VE by putting the mask on the face. Each environment includes a mini-game (e.g., basketball and archery) in which the user has to perform a small task. The arrangement of the masks changes depending on a calculated rating that considers the time since a game was last played and the game score. We envision this system helping users multitask in VR, for example, to control multiple characters in VR games, to experience multi-strand (nonlinear) narratives, and to supervise semi-autonomous agents in different VEs.

EcoDive: Enhancing Presence and Ambient Environmental Awareness in a Virtual Reality Experience for Underwater Marine Debris Collection

  • Michael Feldmann
  • Philipp Geier
  • Jan Niclas Ruppenthal
  • Daniel Zielasko

This paper presents a VR-based serious game. The game aims to raise awareness about ocean pollution by immersing players in a virtual underwater world where they collect trash to prevent coral bleaching and save marine life. Despite their efforts, players inevitably face game over, highlighting the futility of merely collecting trash and underscoring the need to prevent waste from entering oceans. The game uses various diegetic feedback mechanisms and enhanced user presence features to deepen emotional engagement and promote pro-environmental behavior.

GazeLock: Gaze- and Lock Pattern-Based Authentication

  • László Kopácsi
  • Tobias Sebastian Schneider
  • Chiara Karr
  • Michael Barz
  • Daniel Sonntag

Password entry is a common authentication approach in Extended Reality (XR) applications due to its simplicity and familiarity, but it faces challenges in public and dynamic environments because of its cumbersome nature and susceptibility to observation attacks. Manual password input can be disruptive and prone to theft through shoulder surfing or surveillance. While alternative knowledge-based approaches exist, they often require complex physical gestures and are impractical for frequent public use. We present GazeLock, an eye-tracking and lock-pattern-based authentication method. It aims to provide an easy-to-learn and efficient alternative by leveraging familiar lock patterns operated through gaze. It is resilient to external observation, as no physical interaction is necessary and the eyes are obscured by the headset. Its hands-free, discreet nature makes it suitable for secure public use. We demonstrate the method by simulating the unlocking of a smart lock via an XR headset, showcasing its potential applications and benefits in real-world scenarios.
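
A hypothetical sketch of the core matching step, mapping dwelled-on cells of a 3x3 grid to an ordered lock pattern and comparing it against an enrolled digest; the grid encoding, dwell threshold, and hashing scheme are illustrative assumptions, not GazeLock’s actual implementation.

```python
import hashlib
import hmac

SALT = b"gazelock-demo"  # illustrative; a real system would use per-user salts

def pattern_from_fixations(fixations, min_dwell_s=0.4):
    """fixations: list of (cell_index, dwell_seconds) on a 3x3 grid (cells 0-8).
    Returns the pattern as the ordered cells fixated long enough."""
    pattern = []
    for cell, dwell in fixations:
        if dwell >= min_dwell_s and (not pattern or pattern[-1] != cell):
            pattern.append(cell)
    return tuple(pattern)

def verify(pattern, enrolled_digest):
    """Constant-time comparison against the enrolled pattern digest."""
    digest = hashlib.sha256(SALT + bytes(pattern)).digest()
    return hmac.compare_digest(digest, enrolled_digest)

# Enrolment and a matching gaze trace (0 = top-left cell, 8 = bottom-right).
enrolled = hashlib.sha256(SALT + bytes((0, 4, 8, 5))).digest()
observed = pattern_from_fixations([(0, 0.6), (4, 0.5), (8, 0.7), (5, 0.5)])
print(verify(observed, enrolled))  # True
```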

Us Xtended – Tracking and Sensing through Embedded and Embodied Design in Virtual Reality

  • Michaela Pnacekova

This short paper presents an embodied and embedded design method based on biometric data tracking, using the virtual reality prototype Us Xtended as an example. Users are taken through different immersive worlds, and their task is to manipulate the environments via a certain type of physiological interaction (i.e., heart rate, gaze, voice, cognitive load). By employing biofeedback, the system tailors the immersive environment to the user’s psycho-physiological responses via audiovisual and haptic stimuli and reflects them on a scale that is part of the virtual environment. By recording their voice, users can self-assess their own affects. In the finale, users stand in a pastiche-like world filled with artifacts of the psycho-physiological evaluations they co-created with the biofeedback system throughout their journey.

Make America Great Again and Again: How to Adapt Interactive Installation Art for Virtual Reality

  • Byeongwon Ha

Make America Great Again and Again features a large, fluttering American flag accompanied by the Star-Spangled Banner, with sixty small screens displaying one-minute video clips in sequence. This one-hour loop continues until participants upload their own videos, transforming the flag into a collage of visitor selfies. By providing a public sphere for local visitors, this interactive art project encourages them to share their opinions on this controversial issue. To capture global perspectives on the topic, the project was adapted into a virtual reality environment using the metaverse platform Styly. This paper outlines the process of converting the installation into virtual reality artwork.

Real-Time Scent Prediction and Release for Video Games

  • Yuchen Zhang
  • Henry Raymond
  • Ayça Takmaz
  • Börge Scheel
  • Henning Metzmacher
  • Fabio Zünd

This demo explores the use of computer vision technologies for the integration of scent in video games and interactive applications. We present an extendable system that is domain-independent and allows for customization and debugging based on the targeted game. Using Minecraft as a case study, we optimized the system configuration and evaluated its performance. Our aim is to advance the exploration of scent integration in gaming and inspire future designs for olfactory experiences.
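
As a rough illustration of such a pipeline, the sketch below maps predicted scene labels to scent channels, with a cooldown to avoid re-releasing the same scent in quick succession; the label set, the classifier stub, and the device interface are assumptions made for illustration and are not taken from the demo system.

    import time

    # Hypothetical mapping from detected in-game context to scent channels.
    SCENT_MAP = {"forest": 0, "ocean": 1, "lava": 2}
    COOLDOWN_S = 10.0  # minimum seconds between releases on the same channel

    class ScentController:
        def __init__(self):
            self._last_release = {}

        def release(self, channel: int) -> None:
            # Placeholder for a serial/USB command to a scent-release device.
            print(f"release scent on channel {channel}")

        def on_prediction(self, label: str) -> None:
            channel = SCENT_MAP.get(label)
            if channel is None:
                return  # no scent configured for this label
            now = time.monotonic()
            if now - self._last_release.get(channel, float("-inf")) >= COOLDOWN_S:
                self._last_release[channel] = now
                self.release(channel)

    def classify_frame(frame) -> str:
        # Stub standing in for a real-time vision model labeling the game frame.
        return "forest"

    controller = ScentController()
    controller.on_prediction(classify_frame(None))  # -> "release scent on channel 0"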

ICELab Demo: An Industrial Digital Twin and Simulator in VR

  • Deborah Pintani
  • Marco Emporio
  • Ariel Caputo
  • Dong Seon Cheng
  • Lorenzo Genghini
  • Nicola Tomasoni
  • Andrea Giachetti

In this demo, we present an application that integrates Virtual Reality (VR) technologies with the demonstration laboratory (ICELab) built around Industry 4.0/5.0 concepts. In particular, we showcase a digital twin of the real laboratory that allows the user to explore its environment in VR and interact with the different machines to retrieve various data and information.

SESSION: Reproduction Challenge Abstracts

Travel Speed, Spatial Awareness, and Implications for Egocentric Target-Selection-Based Teleportation – A Replication Design

  • Daniel Zielasko
  • Tim Weissker
  • Doug Bowman

Virtual travel in Virtual Reality experiences is common, offering users the ability to explore expansive virtual spaces. Various interfaces exist for virtual travel, with speed playing a crucial role in user experience and spatial awareness. Teleportation-based interfaces provide instantaneous transitions, whereas continuous and semi-continuous methods vary in speed and control. Prior research by Bowman et al. highlighted the impact of travel speed on spatial awareness, demonstrating that instantaneous travel can lead to user disorientation. However, additional cues, such as visual target selection, can aid in reorientation. This study replicates and extends Bowman et al.’s experiment, investigating the influence of travel speed and visual target cues on spatial orientation.

Walking > Walking-in-Place > Flying/Steering > Teleportation? Designing Locomotion Research for Replication and Extension

  • Daniel Zielasko
  • Gerd Bruder
  • Gregor Domes
  • Richard Skarbez
  • Mary C. Whitton
  • Anthony Steed

In this abstract, we discuss the demand for replication and extension efforts related to two seminal studies of virtual reality (VR) locomotion interfaces, both centered around a VR implementation of the Visual Cliff, often referred to as the Virtual Pit. The original experiments by Slater et al. (1995) and Usoh et al. (1999) compared different locomotion methods, including Real Walking, Walking-in-Place, and Flying/Steering, with a focus on presence and ease of use. We discuss the importance of these studies for the field, motivate replication efforts, identify potential confounding factors, and present considerations for a concerted effort to reproduce the findings with state-of-the-art VR systems and measures, for extensions to locomotion methods such as Teleportation, and for means to support future replications and extensions.

Fade-to-Black Duration in Egocentric Target-Selection-Based Teleport – A Replication Design

  • Matthias Wölwer
  • Daniel Zielasko

Fade-to-black animations are a commonly used technique to visualize transitions during teleportation. However, their duration varies across implementations and has not been extensively researched. This abstract details a study design for understanding how the level of environmental detail affects the preferred duration of fade-to-black animations. We propose a within-subject study comparing participants’ preferred duration across three virtual environments with varying levels of detail, and we discuss improvements to the task design of an existing study. Beyond the level of environmental detail, we motivate research into the effects of different task types (e.g., hurried or calm) on the preferred duration.

Generative Multi-Modal Artificial Intelligence for Dynamic Real-Time Context-Aware Content Creation in Augmented Reality

  • Majid Behravan
  • Denis Gracanin

We introduce a framework that uses generative Artificial Intelligence (AI) for dynamic, context-aware content creation in Augmented Reality (AR). By integrating Vision Language Models (VLMs), our system detects and understands the physical space around the user and recommends contextually relevant objects. These objects are transformed into 3D models using text-to-3D generative AI techniques, allowing for real-time content inclusion within the AR space. This approach enhances the user experience by enabling intuitive customization through spoken commands while reducing costs and improving accessibility to advanced AR interactions. The framework’s vision and language capabilities support the generation of comprehensive, context-specific 3D objects.
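
A minimal sketch of such a pipeline, with every model call stubbed out, might look as follows; all function names, return types, and the example command are assumptions for illustration rather than the framework’s actual API.

    from dataclasses import dataclass

    @dataclass
    class Mesh:
        name: str      # label of the generated object
        vertices: int  # placeholder for real geometry data

    def describe_scene(camera_frame) -> str:
        """Stand-in for a VLM call that captions the physical environment."""
        return "a wooden desk next to a window with an empty corner"

    def recommend_object(scene_description: str, user_request: str) -> str:
        """Stand-in for a language-model step that picks a fitting object."""
        return "potted plant"

    def generate_3d(prompt: str) -> Mesh:
        """Stand-in for a text-to-3D model producing an AR-ready asset."""
        return Mesh(name=prompt, vertices=12_000)

    def handle_spoken_command(camera_frame, command: str) -> Mesh:
        # Scene understanding -> object recommendation -> 3D generation.
        scene = describe_scene(camera_frame)
        obj = recommend_object(scene, command)
        return generate_3d(obj)

    asset = handle_spoken_command(None, "decorate my desk")
    print(asset)  # Mesh(name='potted plant', vertices=12000)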