Social Interaction and Psychology in XR

A special issue of Multimodal Technologies and Interaction (ISSN 2414-4088).

Deadline for manuscript submissions: closed (30 April 2021) | Viewed by 17,664

Special Issue Editors


Dr. Kangsoo Kim
Guest Editor
University of Central Florida, 4000 Central Florida Blvd., Orlando, FL 32816, USA
Interests: extended reality (XR)—virtual/augmented/mixed reality (VR/AR/MR); interactive virtual humans (avatars/agents); social interaction and psychology in XR; perception and cognition in XR; pervasive and distributed XR

Prof. Dr. Sun Joo (Grace) Ahn
Guest Editor
Center for Health and Risk Communications and Department of Advertising and Public Relations, Grady College of Journalism and Mass Communication, University of Georgia, Athens, GA 30602, USA

Prof. Dr. Gerd Bruder
Guest Editor
Institute for Simulation and Training, University of Central Florida, Orlando, FL, USA
Interests: virtual and augmented reality; surrogates and social interaction; perception and cognition; 3D user interfaces

Special Issue Information

Dear Colleagues,

Recent dramatic advances in eXtended Reality (XR) technology, covering the entire reality–virtuality continuum, enable us to create immersive experiences with intelligent and interactive virtual entities in physical or virtual environments. The convergence with advanced artificial intelligence enables users to experience realistic social interactions with intelligent virtual entities in XR, and such interactions can exert a high level of social influence over users while affecting their thought processes and behaviors during or after the experience. Moreover, social interactions among remote or co-located users can be enhanced perceptually and cognitively through additional multimodal social cues in XR—for example, understanding collaboration partners’ intentions through augmented facial or gestural information. XR technology provides a powerful platform to study social psychology and extended embodied cues. Understanding how people use and perceive this new technology in social contexts is timely and important for the development of interactive XR systems and to foster acceptance of the technology in our lives.

This Special Issue aims to provide a collection of high-quality research manuscripts that introduce novel findings on social interaction and psychology in XR and address new ways to achieve socially influential interactions among users and virtual/physical entities in XR. The scope of this Special Issue includes (but is not limited to):

  • Social (conversational) agents and avatars in XR;
  • Emotion, personality, and cultural differences in XR;
  • Virtual humans and non-human characters in XR;
  • Inter-agent communication in XR;
  • Social perception, cognition, and behavior in XR;
  • Crowd interaction in XR;
  • (Tele-/social-/co-)presence and engagement in XR;
  • Verbal and nonverbal behavior in XR;
  • Theoretical and empirical research in social XR;
  • (Enhanced) social cues and signals in XR;
  • Multi-user or multi-agent interactions in XR;
  • Multimodal social interactions in XR;
  • Social applications in XR;
  • Ethics in social XR.

Dr. Kangsoo Kim
Prof. Dr. Sun Joo (Grace) Ahn
Prof. Dr. Gerd Bruder
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to the website. Once registered, authors can access the online submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Multimodal Technologies and Interaction is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Published Papers (3 papers)


Research

23 pages, 2207 KiB  
Article
Perspective-Taking in Virtual Reality and Reduction of Biases against Minorities
by Vivian Hsueh Hua Chen, Sarah Hian May Chan and Yong Ching Tan
Multimodal Technol. Interact. 2021, 5(8), 42; https://doi.org/10.3390/mti5080042 - 31 Jul 2021
Cited by 8 | Viewed by 4585
Abstract
This study examines the effect of perspective-taking via embodiment in virtual reality (VR) on reducing biases against minorities. It tests theoretical arguments about the affective and cognitive routes underlying perspective-taking and examines the moderating role of self-presence in VR through experiments. In Study 1, participants embodied an ethnic minority avatar and experienced workplace microaggression from a first-person perspective in VR. They were randomly assigned to affective (focus on emotions) vs. cognitive (focus on thoughts) perspective-taking conditions. Results showed that ingroup bias improved comparably across both conditions and that this effect was driven by more negative perceptions of the majority rather than more positive perceptions of minorities. In Study 2, participants experienced the same VR scenario from the third-person perspective. Results replicated those from Study 1 and extended them by showing that the effect of condition on ingroup bias was moderated by self-presence. At high self-presence, participants in the affective condition reported higher ingroup bias than those in the cognitive condition. The study showed that in VR, embodiment of an ethnic minority avatar is somewhat effective in improving perceptions of minority groups, and that it is difficult to clearly distinguish the effects of the affective and cognitive routes underlying perspective-taking.
(This article belongs to the Special Issue Social Interaction and Psychology in XR)

26 pages, 8602 KiB  
Article
Building an Emotionally Responsive Avatar with Dynamic Facial Expressions in Human–Computer Interactions
by Heting Wang, Vidya Gaddy, James Ross Beveridge and Francisco R. Ortega
Multimodal Technol. Interact. 2021, 5(3), 13; https://doi.org/10.3390/mti5030013 - 20 Mar 2021
Cited by 8 | Viewed by 6253
Abstract
The role of affect has long been studied in human–computer interactions. Unlike previous studies that focused on seven basic emotions, this work introduced an avatar named Diana, who expresses a higher level of emotional intelligence. To adapt to users' various affects during interaction, Diana simulates emotions with dynamic facial expressions. When two people collaborated to build blocks, their affects were recognized and labeled using the Affdex SDK, and a descriptive analysis was provided. When participants turned to collaborate with Diana, their subjective responses were collected and the time to completion was recorded. Three modes of Diana were involved: a flat-faced Diana, a Diana that used mimicry facial expressions, and a Diana that used emotionally responsive facial expressions. Twenty-one responses were collected through a five-point Likert scale questionnaire and the NASA TLX. Results from the questionnaires were not statistically different. However, the emotionally responsive Diana obtained more positive responses, and people spent the longest time with the mimicry Diana. In post-study comments, most participants perceived the facial expressions on Diana's face as natural, while four mentioned uncomfortable feelings caused by the Uncanny Valley effect.
(This article belongs to the Special Issue Social Interaction and Psychology in XR)

18 pages, 16532 KiB  
Article
Intelligent Blended Agents: Reality–Virtuality Interaction with Artificially Intelligent Embodied Virtual Humans
by Susanne Schmidt, Oscar Ariza and Frank Steinicke
Multimodal Technol. Interact. 2020, 4(4), 85; https://doi.org/10.3390/mti4040085 - 27 Nov 2020
Cited by 8 | Viewed by 5252
Abstract
Intelligent virtual agents (VAs) already support us in a variety of everyday tasks such as setting up appointments, monitoring our fitness, and organizing messages. Adding a humanoid body representation to these mostly voice-based VAs has enormous potential to enrich the human–agent communication process but, at the same time, raises expectations regarding the agent's social, spatial, and intelligent behavior. Embodied VAs may be perceived as less human-like if they, for example, do not return eye contact or do not show plausible collision behavior with the physical surroundings. In this article, we introduce a new model that extends human-to-human interaction to interaction with intelligent agents and covers the different multimodal and multisensory channels that are required to create believable embodied VAs. Theoretical considerations of the different aspects of human–agent interaction are complemented by implementation guidelines to support the practical development of such agents. In this context, we particularly emphasize one aspect that is distinctive of embodied agents, i.e., interaction with the physical world. Since previous studies indicated negative effects of implausible physical behavior of VAs, we were interested in the initial responses of users interacting with a VA with virtual–physical capabilities for the first time. We conducted a pilot study to collect subjective feedback regarding two forms of virtual–physical interactions. Both were designed and implemented in preparation for the user study and represent two different approaches to virtual–physical manipulation: (i) displacement of a robotic object, and (ii) writing on a physical sheet of paper with thermochromic ink. The qualitative results of the study indicate positive effects of agents with virtual–physical capabilities in terms of their perceived realism as well as the emotional responses they evoke in users. We conclude with an outlook on possible future developments of different aspects of human–agent interaction in general and the physical simulation in particular.
(This article belongs to the Special Issue Social Interaction and Psychology in XR)
