Virtual Reality, Augmented Reality, and Human-Computer Interaction

A special issue of Big Data and Cognitive Computing (ISSN 2504-2289).

Deadline for manuscript submissions: closed (31 May 2022)

Special Issue Editors

Prof. Dr. Achim Ebert
Computer Graphics and Human Computer Interaction Lab, University of Kaiserslautern, Gottlieb-Daimler-Str., 67663 Kaiserslautern, Germany
Interests: human–computer interaction; information visualization and applications
Prof. Dr. Peter Dannenmann
Faculty of Engineering, RheinMain University of Applied Sciences, Am Brückweg 26, 65428 Rüsselsheim, Germany
Interests: human–computer interaction; information visualization and simulation
Prof. Dr. Gerrit van der Veer
Department of Computer Science, Vrije Universiteit Amsterdam, NU Building Level 10, De Boelelaan 1111, 1081 HV Amsterdam, The Netherlands
Interests: experience design; multimedia and multimodality; design for cultural heritage and fine arts

Special Issue Information

Dear Colleagues,

Extended Reality (also known as Cross Reality, XR) is an umbrella term that covers Virtual Reality (VR), Augmented Reality (AR), and Mixed Reality (MR). The three technologies are closely related: on the so-called Reality–Virtuality continuum (Milgram94), AR lies close to the real environment, while VR lies at the fully virtual end. XR technologies offer the possibility of visualizing entities that are not perceptible in reality, such as structures and processes, using virtual overlays. This makes them interesting for a huge variety of application areas, for example, assistance and training systems (Psotka95) (e.g., in the fields of production, logistics, and surgery), navigation systems (Krolewski11), virtual room design and furnishing (Kaleja17), and computer games (Fahn13). However, the development of new XR technologies and applications is not the only key issue; it is also of utmost relevance to ensure optimal utilization of the user's cognitive resources and to detect cognitive overload at an early stage.

In this context, we must consider the different modalities of human perception. Vision and sound are the most important modalities for human perception, and they are also the prevailing ones in XR environments. These modalities can represent a large fraction of a (real or artificial) world. They permit good immersion of the user: the user may, for example, move through the (real or artificial) world wearing goggles, or explore the world on a display device using input devices such as a mouse or the position and orientation sensors integrated into the device (as in a smartphone or tablet).
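
As a minimal illustration of the sensor-driven exploration mode, the Python sketch below converts a device's yaw and pitch readings into a view direction for a virtual camera. The axis convention (yaw around the vertical axis, pitch up/down, roll ignored) is an assumption made for the example, not a property of any particular device API.

```python
import math

def view_direction(yaw_deg: float, pitch_deg: float) -> tuple[float, float, float]:
    """Convert device yaw/pitch (in degrees) into a unit view-direction vector.

    Assumed convention: yaw rotates around the vertical (y) axis, pitch
    tilts the view up or down; roll is ignored for simplicity.
    """
    yaw, pitch = math.radians(yaw_deg), math.radians(pitch_deg)
    x = math.cos(pitch) * math.sin(yaw)
    y = math.sin(pitch)
    z = math.cos(pitch) * math.cos(yaw)
    return (x, y, z)

# Device held level and rotated 90 degrees to the right:
print(view_direction(90.0, 0.0))  # -> approximately (1.0, 0.0, 0.0)
```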

However, when dealing with these modalities, consistency of view and sound is an important topic. For a long time, sound was considered a second-rate addition to VR. Today, we recognize the importance of giving sound equal weight with visuals in VR and XR environments. Modern XR solutions require the rendering of a full VR soundscape, including background sound as well as sound tied to relevant actions, such as speech and the sounds of phenomena that matter for experiencing the visuals (Johansson19).
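
To make the distinction between flat stereo and spatialized sound concrete, here is a small Python sketch contrasting equal-power stereo panning (a pure level difference) with one of the cues a binaural renderer reproduces, the interaural time difference, approximated with Woodworth's formula. A real XR soundscape would use full HRTF rendering; this shows only the underlying arithmetic.

```python
import math

SPEED_OF_SOUND = 343.0  # m/s
HEAD_RADIUS = 0.0875    # m, an average adult head

def stereo_gains(azimuth_deg: float) -> tuple[float, float]:
    """Equal-power stereo panning: only the left/right level balance changes."""
    pan = (azimuth_deg + 90.0) / 180.0  # map [-90, 90] degrees to [0, 1]
    return math.cos(pan * math.pi / 2), math.sin(pan * math.pi / 2)

def interaural_time_difference(azimuth_deg: float) -> float:
    """Woodworth's approximation of the ITD: the extra travel time (seconds)
    to the far ear, one of the localization cues a binaural renderer adds."""
    az = math.radians(azimuth_deg)
    return (HEAD_RADIUS / SPEED_OF_SOUND) * (az + math.sin(az))

# A source 45 degrees to the right: stereo changes only the level balance,
# while a binaural cue additionally delays the far ear by roughly 0.4 ms.
print(stereo_gains(45.0), interaural_time_difference(45.0))
```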

Other modalities can represent the world only through direct contact (smell, taste, tactile information, temperature, gravity, pressure). Internal phenomena may also be caused directly by interaction with an external world (muscle tension, pain). Newer VR applications employ some of these modalities, e.g., using wearable effectors for pressure, vibration, or temperature.
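
How such effectors are driven is device-specific; as a purely hypothetical sketch, the Python snippet below packs vibration and temperature commands into a compact binary frame that a wearable sleeve might accept over a serial link. The command IDs, field widths, and value ranges are invented for illustration.

```python
import struct

# Hypothetical one-byte command IDs for a wearable haptic sleeve.
CMD_VIBRATE = 0x01      # payload: amplitude (0-255), duration (ms)
CMD_TEMPERATURE = 0x02  # payload: target temperature (tenths of deg C)

def encode_vibration(amplitude: int, duration_ms: int) -> bytes:
    """Pack a vibration command: 1-byte ID, 1-byte amplitude, 2-byte duration."""
    return struct.pack(">BBH", CMD_VIBRATE, amplitude, duration_ms)

def encode_temperature(deg_c: float) -> bytes:
    """Pack a temperature command: 1-byte ID, signed 2-byte tenths of a degree."""
    return struct.pack(">Bh", CMD_TEMPERATURE, round(deg_c * 10))

# 80% vibration for half a second, then warm the contact pad to 36.5 deg C.
frames = [encode_vibration(204, 500), encode_temperature(36.5)]
```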

We notice that truly intuitive and usable virtual worlds are hard to design and implement when the considerations described above are taken into account. Researchers must therefore solve multiple problems, e.g.:

  • Development of new improved XR hardware;
  • Development of new interaction paradigms;
  • Supporting efficient collaborations in virtual worlds;
  • Context awareness;
  • Modalities (tactile and haptic simulated reality are now realistic options);
  • Intuitive interaction and integrated haptic feedback;
  • Scalability: from 2D to AR to VR;
  • Ensuring real-time capabilities;
  • Acknowledging safety aspects (e.g., use of AR in real environments);
  • Transferring XR technologies to new application areas;
  • Soundscapes (VR and AR are not only in a visual world);
  • Support for people with impairments;
  • Translating reality (e.g., using visuals or tactile cues to replace sound);
  • Evaluating XR settings.

In addition to these topics related to the design and implementation of virtual worlds, user experience is another relevant field in XR. Experience is now generally considered the concept that joins perceived ease of use with all aspects of a human's direct, holistic reaction to any incoming stimulation. It may include some or all of the following:

  • A meaning that a person attributes to what is perceived;
  • The complex of emotions and feelings triggered;
  • A positive or negative valuation in terms of being attracted;
  • A tendency to act, including the action of paying attention to or focusing on the perceived phenomena; mental actions such as information processing and decision making; and physical actions such as moving the head, closing the eyes, withdrawing the hand, walking, talking, etc. (Vyas11).

This Special Issue will provide insight into the current state of the art of Extended Realities. It will describe current research on how to evaluate and guarantee their usability and provide a positive user experience, and it will show recent work in related fields as well as trends for future development.

Researchers are invited to submit recent unpublished work in the field of Extended Reality and Human-Computer Interaction. The scope of contributions to this Special Issue includes, but is not limited to, the research problems listed above.

Related Work (excerpt)

Fahn, C.-S., Wu, M.-L., Liu, W.-T. On the Use of Augmented Reality Technology for Creating Interactive Computer Games. In: Shumaker, R. (ed.) Virtual, Augmented and Mixed Reality: Systems and Applications. VAMR 2013. Lecture Notes in Computer Science, vol. 8022. Springer, Berlin, Heidelberg, 2013. https://doi.org/10.1007/978-3-642-39420-1_37

Johansson, M. VR for Your Ears: Dynamic 3D Audio Is Coming Soon. A truly realistic experience in VR requires immersive audio. IEEE Spectrum, 24 January 2019.

Kaleja, P., Kozlovská, M. Virtual Reality as Innovative Approach to the Interior Designing. Selected Scientific Papers – Journal of Civil Engineering 12, 2017. https://doi.org/10.1515/sspjce-2017-0011

Krolewski, J., Gawrysiak, P. The Mobile Personal Augmented Reality Navigation System. In: Czachórski, T., Kozielski, S., Stańczyk, U. (eds.) Man-Machine Interactions 2. Advances in Intelligent and Soft Computing, vol. 103. Springer, Berlin, Heidelberg, 2011. https://doi.org/10.1007/978-3-642-23169-8_12

Milgram, P., Takemura, H., Utsumi, A., Kishino, F. Augmented Reality: A Class of Displays on the Reality–Virtuality Continuum. In: Telemanipulator and Telepresence Technologies, Proc. SPIE vol. 2351, 1994.

Psotka, J. Immersive Training Systems: Virtual Reality and Education and Training. Instructional Science 23, 405–431, 1995. https://doi.org/10.1007/BF00896880

Vyas, D. Designing for Awareness: An Experience-Focused HCI Perspective. PhD thesis, University of Twente, Enschede, The Netherlands, 2011. ISBN 978-90-365-3135-1

Prof. Dr. Achim Ebert
Prof. Dr. Peter Dannenmann
Prof. Dr. Gerrit van der Veer
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to the website; once registered, authors can proceed to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers are published continuously in the journal (as soon as accepted) and are listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and a short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Big Data and Cognitive Computing is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1800 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • Virtual Reality
  • Augmented Reality
  • Mixed Reality
  • Extended/Cross-Reality
  • Interactive visualization
  • Intuitive interaction
  • Real-time interaction
  • Input and output modalities
  • Sound and haptics
  • XR hardware development
  • Collaborative environments
  • Scalability (e.g., usage on different devices)
  • Experience aspects
  • Measuring usability in XR
  • Cognition and cognitive overload
  • Application environments

Published Papers (11 papers)

Research

25 pages, 18716 KiB  
Article
Island Design Camps—Interactive Video Projections as Extended Realities
by Bert Bongers
Big Data Cogn. Comput. 2023, 7(2), 71; https://doi.org/10.3390/bdcc7020071 - 12 Apr 2023
Abstract
Over the course of seven years and during ten events, the author explored real-time interactive audiovisual projections, using ad hoc and portable projection and audio systems. This was done in the specific location of Cockatoo Island in Sydney Harbour, Australia. The island offers a unique combination of the remnants of a shipyard industrial precinct, other buildings, and an increasingly restored natural environment. The project explored real-time audiovisual responses through projected overlays recalling the island's rich history and past events, interactively resonating with the current landscape and built environment. This included the maritime industrial history as well as other historical layers, such as the convict barracks and school, and the significance of the location for Australia's original inhabitants before colonisation by the British started in 1788. Most prominent, however, was the recent use of the island for large-scale art projects (such as the Outpost street art festival in 2011, over a decade of use as part of the Sydney Biennale of Art, and use of the island for film sets). These provided a rich source of image material, collected by the author and used to extend and reflect on current realities. By using the projections, overlaying and extending the present reality with historical data in the form of sounds and video, dialogues were facilitated and a conflation of past and present explored. The main activity was the VideoWalks, in which the author, using a custom-built portable audiovisual projection system and a bank of audiovisual material, was able to re-place sound and video of previous events in the present context, in some instances while delivering a performative lecture along the way. The explorations are part of the author's Traces project, which investigates traces and remnants of past events and how these can inform design approaches. Over the years, the project also developed an element of recursion: footage of an earlier projection was used in the current one, which was in turn filmed and used in the next event, and so on, up to five layers of extended reality.

19 pages, 4377 KiB  
Article
Exploring Extended Realities in Environmental Artistic Expression through Interactive Video Projections
by Bert Bongers
Big Data Cogn. Comput. 2022, 6(4), 125; https://doi.org/10.3390/bdcc6040125 - 25 Oct 2022
Cited by 1
Abstract
The paper discusses the role of artistic expression and practices in extending realities. It presents a reflection on and framework for VR/AR/MxR (Virtual, Augmented, Mixed, or Extended Realities) as a continuum from completely virtual to mixed to completely real. It also reflects on contemporary art practices and their role in extending or augmenting realities, from figurative to abstract, from street art to the scale of landscape art. Furthermore, the paper discusses the notion of live performance in art, common in music, contrasting a real-time 'improvisation' practice with composition practices, again a continuum, as several mixed modes exist. An effective technique for extending realities is video projection. This is discussed with examples of artistic expression in various forms and illustrated with examples from the author's practice in interactive audiovisual installations and real-time mobile projection performances and activities.

21 pages, 5657 KiB  
Article
Computational Techniques Enabling the Perception of Virtual Images Exclusive to the Retinal Afterimage
by Staas de Jong and Gerrit van der Veer
Big Data Cogn. Comput. 2022, 6(3), 97; https://doi.org/10.3390/bdcc6030097 - 13 Sep 2022
Cited by 2
Abstract
The retinal afterimage is a widely known effect in the human visual system, which has been studied and used in the context of a number of major art movements. When considering the general role of computation in the visual arts, the question therefore arises whether this effect, too, may be induced using partly automated techniques. If so, it may become a computationally controllable ingredient of (interactive) visual art, and thus take its place among the many other aspects of visual perception that have already preceded it in this sense. The present moment provides additional inspiration to lay the groundwork for extending computer graphics in general with the retinal afterimage: historically, we are in a phase where some head-mounted stereoscopic AR/VR technologies provide eye tracking by default, thereby allowing real-time monitoring of the processes of visual fixation that can induce the retinal afterimage. A logical starting point for general investigation is then shape display via the retinal afterimage, since shape recognition lends itself well to unambiguous reporting. Shape recognition, however, may also occur due to normal vision, which happens simultaneously. Carefully and rigorously excluding this possibility, we develop computational techniques enabling shape display exclusive to the retinal afterimage.

10 pages, 554 KiB  
Article
How Does AR Technology Adoption and Involvement Behavior Affect Overseas Residents’ Life Satisfaction?
by Nargis Dewan, Md Billal Hossain, Gwi-Gon Kim, Anna Dunay and Csaba Bálint Illés
Big Data Cogn. Comput. 2022, 6(3), 80; https://doi.org/10.3390/bdcc6030080 - 25 Jul 2022
Abstract
This study aims to better understand foreign residents' life satisfaction by exploring residents' AR technology adoption behavior (a combination of transportation applications' usefulness and ease of use) and travel involvement. Data were collected randomly from 400 respondents through a questionnaire-based survey and analyzed with SPSS and AMOS. Overall life satisfaction serves as the operationalized dependent variable measuring a traveler's sense of satisfaction, while traveler involvement and AR adoption of necessary transportation apps are constructed as independent variables. The model was proposed to explore the impact of travel satisfaction on overall life satisfaction, focusing on the role of traveling involvement as a first variable. AR technology adoption behavior here means that people use traveling apps before and during travel to fulfill travel needs, obtain details about locations, make proper arrangements, and access other facilities. Transportation apps and travelers' involvement were both found to play significant roles in the development of travel satisfaction and overall life satisfaction; both variables had a positive effect on travel satisfaction and life satisfaction. The results also revealed that AR mobile travel applications combined with traveler involvement can help improve overseas residents' travel satisfaction, and travel satisfaction in turn fosters greater satisfaction with life in South Korea.

29 pages, 5206 KiB  
Article
Virtual Reality-Based Stimuli for Immersive Car Clinics: A Performance Evaluation Model
by Alexandre Costa Henriques, Thiago Barros Murari, Jennifer Callans, Alexandre Maguino Pinheiro Silva, Antonio Lopes Apolinario, Jr. and Ingrid Winkler
Big Data Cogn. Comput. 2022, 6(2), 45; https://doi.org/10.3390/bdcc6020045 - 20 Apr 2022
Cited by 2
Abstract
This study proposes a model to evaluate the performance of virtual reality-based stimuli for immersive car clinics. The model considered Attribute Importance, Stimuli Efficacy, and Stimuli Cost factors, and the method was divided into three stages: we defined the importance of fourteen attributes relevant to a car clinic based on the perceptions of Marketing and Design experts; we then defined the efficacy of five virtual stimuli based on the perceptions of Product Development and Virtual Reality experts; and we used a cost factor to calculate the efficiency of the five virtual stimuli relative to the physical stimulus. The Marketing and Design experts identified a new attribute, Scope; eleven of the fifteen attributes were rated Important or Very Important, while four were removed from the model as irrelevant. According to our performance evaluation model, virtual stimuli have the same efficacy as physical stimuli. However, when cost is considered, virtual stimuli outperform physical stimuli, particularly virtual stimuli with glasses. We conclude that virtual stimuli have the potential to reduce the cost and time required to develop new stimuli for car clinics, though concerns remain related to hardware, software, and other definitions.
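
One plausible way to combine the three factors, shown only as an illustration in Python (the paper's exact weighting scheme is not reproduced here, and the attribute names and numbers below are invented):

```python
def stimulus_efficiency(importance: dict[str, float],
                        efficacy: dict[str, float],
                        cost_factor: float) -> float:
    """Illustrative score: importance-weighted mean efficacy across
    attributes, divided by the stimulus's relative cost factor."""
    weighted = sum(importance[a] * efficacy[a] for a in importance)
    return (weighted / sum(importance.values())) / cost_factor

# Invented numbers: a VR-glasses stimulus matching physical efficacy
# (4.0 on a 5-point scale) at 40% of the physical stimulus's cost.
importance = {"styling": 5.0, "roominess": 4.0, "visibility": 3.0}
efficacy = {"styling": 4.0, "roominess": 4.0, "visibility": 4.0}
print(stimulus_efficiency(importance, efficacy, cost_factor=0.4))  # 10.0
```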

19 pages, 14228 KiB  
Article
Comparison of Object Detection in Head-Mounted and Desktop Displays for Congruent and Incongruent Environments
by René Reinhard, Erinchan Telatar and Shah Rukh Humayoun
Big Data Cogn. Comput. 2022, 6(1), 28; https://doi.org/10.3390/bdcc6010028 - 7 Mar 2022
Cited by 1
Abstract
Virtual reality technologies, including head-mounted displays (HMDs), can benefit psychological research by combining high degrees of experimental control with improved ecological validity. This is due to the strong feeling of being in the displayed environment (presence) experienced by VR users. As of yet, it is not fully explored how using HMDs impacts basic perceptual tasks, such as object perception. In traditional display setups, the congruency between background environment and object category has been shown to impact response times in object perception tasks. In this study, we investigated whether this well-established effect is comparable when using desktop and HMD devices. Twenty-one participants used both desktop and HMD setups to perform an object identification task, and their subjective presence in two distinct virtual environments (a beach and a home environment) was subsequently evaluated. Participants were quicker to identify objects in the HMD condition, independent of object-environment congruency, while congruency effects were not impacted. Furthermore, participants reported significantly higher presence in the HMD condition.

16 pages, 6769 KiB  
Article
Spatial Sound in a 3D Virtual Environment: All Bark and No Bite?
by Radha Nila Meghanathan, Patrick Ruediger-Flore, Felix Hekele, Jan Spilski, Achim Ebert and Thomas Lachmann
Big Data Cogn. Comput. 2021, 5(4), 79; https://doi.org/10.3390/bdcc5040079 - 13 Dec 2021
Cited by 4
Abstract
Although the focus of Virtual Reality (VR) lies predominantly on the visual world, acoustic components enhance the functionality of a 3D environment. To study the interaction between visual and auditory modalities in a 3D environment, we investigated the effect of auditory cues on visual searches in 3D virtual environments containing both visual and auditory noise. In an experiment, we asked participants to detect visual targets in a 360° video in conditions with and without environmental noise. Auditory cues indicating the target location were either absent or provided as simple stereo or binaural audio, both of which assisted sound localization. To investigate the efficacy of these cues in distracting environments, we measured participant performance using a VR headset with an eye tracker. We found that the binaural cue outperformed both the stereo and no-cue conditions in terms of target detection, irrespective of the environmental noise. We used two eye movement measures and two physiological measures to evaluate task dynamics and mental effort. We found that the absence of a cue increased target search duration and target search path, measured as time to fixation and gaze trajectory length, respectively. Our physiological measures of blink rate and pupil size showed no difference between the different stadium and cue conditions. Overall, our study provides evidence for the utility of binaural audio in a realistic, noisy, virtual environment for performing a target detection task, which is a crucial part of everyday behaviour: finding someone in a crowd.
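
The two eye movement measures can be computed from raw gaze samples in a few lines. The Python sketch below assumes a simple log of timestamps and 2D gaze points plus a known circular target region; it is illustrative, not the study's implementation.

```python
import numpy as np

def gaze_measures(t, gaze, target_center, target_radius):
    """Time to first fixation on the target and total gaze trajectory length.

    t:    (n,) timestamps in seconds
    gaze: (n, 2) gaze points, e.g., in degrees of visual angle
    """
    dists = np.linalg.norm(gaze - target_center, axis=1)
    on_target = np.nonzero(dists <= target_radius)[0]
    time_to_fixation = t[on_target[0]] - t[0] if on_target.size else np.inf
    path_length = np.linalg.norm(np.diff(gaze, axis=0), axis=1).sum()
    return time_to_fixation, path_length

# Example with synthetic gaze data and a target 5 degrees to the right.
t = np.linspace(0.0, 5.0, 300)
gaze = np.random.default_rng(2).normal(0.0, 10.0, size=(300, 2))
print(gaze_measures(t, gaze, np.array([5.0, 0.0]), target_radius=2.0))
```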

16 pages, 2196 KiB  
Article
Effects of Neuro-Cognitive Load on Learning Transfer Using a Virtual Reality-Based Driving System
by Usman Alhaji Abdurrahman, Shih-Ching Yeh, Yunying Wong and Liang Wei
Big Data Cogn. Comput. 2021, 5(4), 54; https://doi.org/10.3390/bdcc5040054 - 13 Oct 2021
Cited by 9
Abstract
Understanding the ways different people perceive and apply acquired knowledge, especially when driving, is an important area of study. This study introduced a novel virtual reality (VR)-based driving system to determine the effects of neuro-cognitive load on learning transfer. In the experiment, easy and difficult routes were introduced to the participants, and the VR system recorded eye gaze, pupil dilation, and heart rate, as well as driving performance data. The main purpose was to apply multimodal data fusion, several machine learning algorithms, and strategic analytic methods to measure neuro-cognitive load for user classification. A total of 98 university students (49 male, 49 female) participated in the experiment. The results showed that data fusion methods achieved higher accuracy than other classification methods. These findings highlight the importance of physiological monitoring for measuring mental workload during the process of learning transfer.
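
As a generic sketch of feature-level fusion of this kind (not the authors' exact pipeline; all feature values below are synthetic), one can concatenate per-participant feature blocks from several modalities and compare a fused classifier against a single-modality baseline with scikit-learn:

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n = 98  # participants, as in the study; the features here are synthetic

gaze = rng.normal(size=(n, 8))       # e.g., fixation and pupil statistics
heart = rng.normal(size=(n, 4))      # e.g., heart-rate variability measures
driving = rng.normal(size=(n, 6))    # e.g., lane deviation, braking errors
labels = rng.integers(0, 2, size=n)  # high vs. low neuro-cognitive load

clf = make_pipeline(StandardScaler(), SVC())
fused = np.hstack([gaze, heart, driving])  # feature-level fusion
print("gaze only:", cross_val_score(clf, gaze, labels, cv=5).mean())
print("fused    :", cross_val_score(clf, fused, labels, cv=5).mean())
```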

17 pages, 1968 KiB  
Article
Exploration of Feature Representations for Predicting Learning and Retention Outcomes in a VR Training Scenario
by Alec G. Moore, Ryan P. McMahan and Nicholas Ruozzi
Big Data Cogn. Comput. 2021, 5(3), 29; https://doi.org/10.3390/bdcc5030029 - 12 Jul 2021
Cited by 1
Abstract
Training and education for real-world tasks in Virtual Reality (VR) have seen growing use in industry. The motion-tracking data that are intrinsic to immersive VR applications are rich and can be used to improve learning beyond standard training interfaces. In this paper, we present machine learning (ML) classifiers that predict outcomes from a VR training application. Our approach uses the data from the tracked head-mounted display (HMD) and handheld controllers during VR training to predict whether a user will exhibit high or low knowledge acquisition, knowledge retention, and performance retention. We evaluated six different sets of input features and found varying degrees of accuracy depending on the predicted outcome. By visualizing the tracking data, we determined that users with higher acquisition and retention outcomes made movements with more certainty and with greater velocities than users with lower outcomes. Our results demonstrate that it is feasible to develop VR training applications that dynamically adapt to a user by using commonly available tracking data to predict learning and retention outcomes.
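
Deriving velocity-style features from raw tracking logs can be sketched as follows, assuming a simple per-device log of timestamps and 3D positions (the paper's actual feature sets are not reproduced here):

```python
import numpy as np

def velocity_features(t: np.ndarray, pos: np.ndarray) -> dict[str, float]:
    """Summary statistics of movement speed from one tracked device.

    t:   (n,) timestamps in seconds
    pos: (n, 3) HMD or controller positions in metres
    """
    speeds = np.linalg.norm(np.diff(pos, axis=0), axis=1) / np.diff(t)
    return {
        "mean_speed": float(speeds.mean()),
        "max_speed": float(speeds.max()),
        # A lower spread may indicate movements made with more certainty.
        "speed_std": float(speeds.std()),
    }

# Example: 10 seconds of synthetic 90 Hz head tracking.
t = np.linspace(0.0, 10.0, 900)
pos = np.cumsum(np.random.default_rng(1).normal(0.0, 1e-3, (900, 3)), axis=0)
print(velocity_features(t, pos))
```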

Review

17 pages, 337 KiB  
Review
Scalable Extended Reality: A Future Research Agenda
by Vera Marie Memmesheimer and Achim Ebert
Big Data Cogn. Comput. 2022, 6(1), 12; https://doi.org/10.3390/bdcc6010012 - 26 Jan 2022
Cited by 7
Abstract
Extensive research has outlined the potential of augmented, mixed, and virtual reality applications. However, little attention has been paid to scalability enhancements fostering practical adoption. In this paper, we introduce the concept of scalable extended reality (XRS), i.e., spaces scaling between different displays and degrees of virtuality that can be entered by multiple, possibly distributed users. The development of such XRS spaces concerns several research fields. To provide bidirectional interaction and maintain consistency with the real environment, virtual reconstructions of physical scenes need to be segmented semantically and adapted dynamically. Moreover, scalable interaction techniques for selection, manipulation, and navigation, as well as world-stabilized rendering of 2D annotations in 3D space, are needed to let users intuitively switch between handheld and head-mounted displays. Collaborative settings should further integrate access control and awareness cues indicating the collaborators' locations and actions. While many of these topics have been investigated by previous research, very few studies have considered their integration to enhance scalability. Addressing this gap, we review related previous research, list current barriers to the development of XRS spaces, and highlight dependencies between them.

Other

22 pages, 2156 KiB  
Brief Report
A Dataset for Emotion Recognition Using Virtual Reality and EEG (DER-VREEG): Emotional State Classification Using Low-Cost Wearable VR-EEG Headsets
by Nazmi Sofian Suhaimi, James Mountstephens and Jason Teo
Big Data Cogn. Comput. 2022, 6(1), 16; https://doi.org/10.3390/bdcc6010016 - 28 Jan 2022
Cited by 29
Abstract
Emotions are viewed as an important aspect of human interactions and conversations and allow effective and logical decision making. Emotion recognition uses low-cost wearable electroencephalography (EEG) headsets to collect brainwave signals and interprets these signals to provide information on a person's mental state. With the implementation of virtual reality environments in different applications, the gap between human and computer interaction, as well as in the understanding process, would shorten, providing an immediate response to an individual's mental health. This study aims to use a virtual reality (VR) headset to induce four classes of emotions (happy, scared, calm, and bored), to collect brainwave samples using a low-cost wearable EEG headset, and to run popular classifiers to compare the most feasible ones for this particular setup. Firstly, we attempt to build an immersive VR database that is accessible to the public and that can potentially assist with emotion recognition studies using virtual reality stimuli. Secondly, we use a low-cost wearable EEG headset that is both compact and small and can be attached to the scalp without any hindrance, allowing participants freedom of movement to view their surroundings inside the immersive VR stimulus. Finally, we evaluate the emotion recognition system using popular machine learning algorithms and compare them for both intra-subject and inter-subject classification. The results show that the prediction model for the four-class emotion classification performed well, including on the more challenging inter-subject classification, with a class-weighted support vector machine (SVM) obtaining 85.01% classification accuracy. This shows that using fewer electrode channels, but with proper parameter tuning and feature selection, still yields well-performing classifications.
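
The class-weighted SVM and the intra- vs. inter-subject distinction can be set up along these lines with scikit-learn (a generic sketch on synthetic data, not the study's pipeline); GroupKFold keeps each subject's samples out of the training folds, which is what makes inter-subject classification the harder test:

```python
import numpy as np
from sklearn.model_selection import GroupKFold, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(800, 32))           # synthetic EEG band-power features
y = rng.integers(0, 4, size=800)         # happy, scared, calm, bored
subjects = np.repeat(np.arange(20), 40)  # 20 subjects, 40 samples each

# class_weight="balanced" reweights classes inversely to their frequency,
# analogous to the class-weighted SVM reported in the paper.
clf = make_pipeline(StandardScaler(), SVC(class_weight="balanced"))

# Inter-subject evaluation: each test fold contains only unseen subjects.
scores = cross_val_score(clf, X, y, groups=subjects, cv=GroupKFold(n_splits=5))
print(scores.mean())
```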