3D Vision, Virtual Reality and Serious Games

A special issue of Applied Sciences (ISSN 2076-3417). This special issue belongs to the section "Computing and Artificial Intelligence".

Deadline for manuscript submissions: closed (30 June 2022) | Viewed by 12513

Special Issue Editors


Guest Editor
Information Technologies Institute, Centre for Research and Technology Hellas, Thessaloniki, Greece
Interests: computer vision; machine learning and artificial intelligence; multi-dimensional signal processing; intelligent systems and applications; environmental informatics and remote sensing; ICT for civil protection

Special Issue Information

Dear Colleagues,

In recent years, there has been enormous progress in 3D vision for 3D scene understanding, such as scene segmentation, 3D reconstruction, human motion analysis, and 3D object detection and tracking. Furthermore, virtual reality technologies have attracted considerable attention and have been applied to a wide variety of fields, such as entertainment, education, medicine, architectural and urban design, engineering and robotics, fine arts, and cultural heritage. The combination of virtual reality with game-based approaches has led to the development of serious games for purposes other than entertainment. Serious games focus mainly on developing the skills and knowledge of their players and can provide educational content along with interactive, engaging, and immersive gaming experiences.

There is a need to highlight the latest exciting developments in these areas to promote the creation of realistic, intelligent, and sophisticated 3D interactive environments and serious games applications. This Special Issue aims to bring together researchers in these three fields, i.e., 3D computer vision, virtual reality, and serious games, to discuss the unique challenges and opportunities for synergies that can lead to new achievements in these areas.

Dr. Kosmas Dimitropoulos
Dr. Nikos Grammalidis
Dr. Nikolaos Doulamis
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to the website. Once registered, authors can proceed to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for the submission of manuscripts are available on the Instructions for Authors page. Applied Sciences is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2400 CHF (Swiss Francs). Submitted papers should be well formatted and written in good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • virtual environments
  • 3D vision
  • serious games
  • 3D interactive environments
  • immersive environments
  • AI game adaptation algorithms
  • 3D user interaction
  • multimodal capturing and reconstruction
  • computer graphics and reality
  • human factors and ergonomics
  • data visualization
  • novel human–computer interaction techniques
  • user-centered computing
  • affective gaming

Published Papers (5 papers)


Research

15 pages, 593 KiB  
Article
Virtual Reality Retooling Humanities Courses: Finance and Marketing Experience at a Czech University
by Lilla Koreňová, Petr Gurný, Jozef Hvorecký, Petr Lůžek and Petr Rozehnal
Appl. Sci. 2022, 12(19), 10170; https://0-doi-org.brum.beds.ac.uk/10.3390/app121910170 - 10 Oct 2022
Cited by 4 | Viewed by 1763
Abstract
Virtual reality environments (VRE) allow users to visualize both real-life and imaginary activities. For this reason, they also make appropriate training environments at universities. However, the positive or negative effects of VRE are still a subject of research. There is a need to verify methods of their deployment, student responses, and the impact of VRE implementation. Science and medicine courses frequently exploit VRE, whereas their use in the humanities is much less common. In our paper, we describe and evaluate their application in finance and marketing courses. Both courses were designed and developed as part of a larger, potentially university-wide project. The courses were enriched by mazes including 3-D rooms with course content elements. Students could explore them and communicate with their lecturers and classmates. To allow anytime/anywhere access, the VRE does not require any special interface. The finance course was organized as a pedagogical experiment with test and control groups. For organizational and scheduling reasons, the VRE in marketing served just as enrichment. At the end of the term, all students using the VRE were given a questionnaire assessing their satisfaction. The majority expressed satisfaction. In the finance course, positive opinion was also supported by students' improved grades. In total, 87.5% of students agreed that the application of VRE contributed to gaining knowledge. Based on the positive experience and outcomes, the university plans to expand and intensify its VRE-supported education. Full article
(This article belongs to the Special Issue 3D Vision, Virtual Reality and Serious Games)

17 pages, 29509 KiB  
Article
The Cube Surface Light Field for Interactive Free-Viewpoint Rendering
by Xiaofei Ai and Yigang Wang
Appl. Sci. 2022, 12(14), 7212; https://0-doi-org.brum.beds.ac.uk/10.3390/app12147212 - 18 Jul 2022
Cited by 2 | Viewed by 1910
Abstract
Free-viewpoint rendering has always been one of the key motivations of image-based rendering and has broad application prospects in the field of virtual reality and augmented reality (VR/AR). Existing methods mainly adopt traditional image-based rendering or learning-based frameworks, which have limited viewpoint freedom and poor time performance. In this paper, the cube surface light field is utilized to encode scenes implicitly, and an interactive free-viewpoint rendering method is proposed to solve the above two problems simultaneously. The core of this method is a pure light ray-based representation using the cube surface light field. Using a fast single-layer ray casting algorithm to compute each light ray's parameters, the rendering is achieved by a GPU-based three-dimensional (3D) compressed texture mapping that converts the corresponding light rays to the desired image. Experimental results show that the proposed method can render novel views at arbitrary viewpoints outside the cube surface in real time while preserving high image quality. This research provides a valid experimental basis for the potential of content generation in VR/AR. Full article
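A cube surface light field parameterizes each light ray by where it crosses the cube surface, so a ray-casting step must first locate that crossing. The sketch below is an illustrative slab-method ray/cube intersection (the function name and interface are my own, not the paper's implementation):

```python
import numpy as np

def ray_cube_hit(origin, direction, half=1.0):
    """Slab-method intersection of a ray with the axis-aligned cube
    [-half, half]^3. Returns the entry point on the cube surface,
    or None if the ray misses the cube entirely.
    Illustrative sketch only, not the paper's ray caster."""
    origin = np.asarray(origin, dtype=float)
    direction = np.asarray(direction, dtype=float)
    # Division by a zero direction component yields +/-inf, which the
    # min/max logic below handles; suppress the harmless warning.
    with np.errstate(divide="ignore"):
        inv = 1.0 / direction
    t1 = (-half - origin) * inv
    t2 = (half - origin) * inv
    tmin = np.max(np.minimum(t1, t2))  # latest entry across the three slabs
    tmax = np.min(np.maximum(t1, t2))  # earliest exit across the three slabs
    if tmax < max(tmin, 0.0):
        return None  # the ray misses the cube (or it lies behind the origin)
    t = tmin if tmin > 0 else tmax
    return origin + t * direction
```

The returned surface point (together with the ray direction) is the kind of parameter a cube surface light field would use to look up the stored radiance.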
(This article belongs to the Special Issue 3D Vision, Virtual Reality and Serious Games)

10 pages, 877 KiB  
Article
Affinity-Point Graph Convolutional Network for 3D Point Cloud Analysis
by Yang Wang and Shunping Xiao
Appl. Sci. 2022, 12(11), 5328; https://0-doi-org.brum.beds.ac.uk/10.3390/app12115328 - 25 May 2022
Cited by 1 | Viewed by 1491
Abstract
Efficient learning of 3D shape representations from point clouds is one of the biggest requirements in 3D computer vision. In recent years, convolutional neural networks have achieved great success in 2D image representation learning. However, unlike images, which have a Euclidean structure, 3D point clouds are irregular, since the neighbors of each node are inconsistent. Many studies have developed various graph convolutional networks to overcome this problem and have achieved great results. Nevertheless, these studies simply took the centroid point and its corresponding neighbors as the graph structure, thus ignoring structural information. In this paper, an Affinity-Point Graph Convolutional Network (AP-GCN) is proposed to learn the graph structure for each reference point. In this method, the affinity between points is first defined using the features of each point. Then, a graph with affinity information is built. After that, edge-conditioned convolution is performed between the graph vertices and edges to obtain stronger neighborhood information. Finally, the learned information is used for recognition and segmentation tasks. Comprehensive experiments demonstrate that AP-GCN learned much more reasonable features and achieved significant improvements in 3D computer vision tasks such as object classification and segmentation. Full article
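The first step the abstract describes, building a neighbor graph whose edges carry feature affinity rather than raw proximity alone, can be sketched as follows. This is a minimal illustration with names of my own choosing, not the AP-GCN reference code:

```python
import numpy as np

def affinity_knn_graph(feats, k=3):
    """Build a k-nearest-neighbor graph over points, with edges
    weighted by feature affinity (a Gaussian kernel on squared
    Euclidean distance in feature space). Returns, per point, the
    indices of its k most-affine neighbors and the edge weights.
    Illustrative sketch only, not the paper's implementation."""
    feats = np.asarray(feats, dtype=float)
    n = feats.shape[0]
    # Pairwise squared Euclidean distances between point features.
    d2 = ((feats[:, None, :] - feats[None, :, :]) ** 2).sum(axis=-1)
    # Affinity: more similar features -> weight closer to 1.
    affinity = np.exp(-d2)
    # Exclude self-loops before selecting neighbors.
    np.fill_diagonal(affinity, -np.inf)
    neighbors = np.argsort(-affinity, axis=1)[:, :k]
    weights = np.take_along_axis(affinity, neighbors, axis=1)
    return neighbors, weights
```

An edge-conditioned convolution would then aggregate each point's neighbor features using filters conditioned on these edge weights.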
(This article belongs to the Special Issue 3D Vision, Virtual Reality and Serious Games)

19 pages, 924 KiB  
Article
A Fully-Automatic Gap Filling Approach for Motion Capture Trajectories
by Diana Gomes, Vânia Guimarães and Joana Silva
Appl. Sci. 2021, 11(21), 9847; https://0-doi-org.brum.beds.ac.uk/10.3390/app11219847 - 21 Oct 2021
Cited by 1 | Viewed by 2328
Abstract
Missing marker information is a common problem in Motion Capture (MoCap) systems. Commercial MoCap software provides several methods for reconstructing incomplete marker trajectories; however, these methods still rely on manual intervention. Current alternatives proposed in the literature still present drawbacks that prevent their widespread adoption. The lack of fully automated and universal solutions for gap filling is still a reality. We propose an automatic frame-wise gap filling routine that simultaneously explores restrictions between markers' distances and markers' dynamics in a least-squares minimization problem. This algorithm constitutes the main contribution of our work by simultaneously overcoming several limitations of previous methods: it requires no manual intervention, prior training, or training data; it needs no information about the skeleton or a dedicated calibration trial; and it can reconstruct all gaps, even those located in the initial and final frames of a trajectory. We tested our approach on a set of artificially generated gaps, using the full body marker set, and compared the results with three methods available in commercial MoCap software: spline, pattern and rigid body fill. Our method achieved the best overall performance, presenting lower reconstruction errors in all tested conditions. Full article
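For contrast with the least-squares formulation above, one of the simplest baselines it is compared against is interpolation fill. The sketch below is a toy per-coordinate linear gap fill (my own illustration, not the paper's method); unlike the paper's approach it cannot exploit inter-marker distance constraints, and it merely holds the nearest observed value for gaps at the start or end of a trajectory:

```python
import numpy as np

def fill_gaps_linear(traj):
    """Fill NaN gaps in a (frames, dims) marker trajectory by linear
    interpolation along the time axis, one coordinate at a time.
    Toy baseline for illustration only."""
    filled = np.asarray(traj, dtype=float).copy()
    frames = np.arange(filled.shape[0])
    for c in range(filled.shape[1]):
        col = filled[:, c]
        missing = np.isnan(col)
        if missing.any() and not missing.all():
            # np.interp clamps to the endpoint values, so leading and
            # trailing gaps are held at the nearest observed sample.
            col[missing] = np.interp(frames[missing],
                                     frames[~missing],
                                     col[~missing])
    return filled
```

A full reconstruction method would instead solve for the missing positions jointly, penalizing both deviations from expected inter-marker distances and implausible marker dynamics.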
(This article belongs to the Special Issue 3D Vision, Virtual Reality and Serious Games)

16 pages, 3597 KiB  
Article
Scrum VR: Virtual Reality Serious Video Game to Learn Scrum
by Jesus Mayor and Daniel López-Fernández
Appl. Sci. 2021, 11(19), 9015; https://0-doi-org.brum.beds.ac.uk/10.3390/app11199015 - 28 Sep 2021
Cited by 13 | Viewed by 3611
Abstract
Education is crucial for the growth of society, and the usage of effective learning methods is key to transmitting knowledge to young students. Some initiatives present Virtual Reality technologies as a promising medium to provide active, effective, and innovative teaching. In turn, the use of this technology seems to be very attractive to students, making it possible to acquire knowledge through it. On the other hand, agile methodologies have taken an essential role within information technologies, and they are key in Software Engineering education. This paper combines both areas: it presents prior research on Virtual Reality experiences for educational purposes and introduces a serious VR video game that aims to promote the learning of agile methodologies in Software Engineering education, specifically the Scrum methodology. This application tries to bring students closer to their first days of work within a software development team that uses the Scrum methodology. Two evaluation processes performed with university teachers and students indicate that the developed video game meets the proposed objectives and looks promising. Full article
(This article belongs to the Special Issue 3D Vision, Virtual Reality and Serious Games)
