State-of-the-Art Virtual/Augmented Reality and 3D Modeling Techniques for Virtual Urban Geographic Experiments

A special issue of ISPRS International Journal of Geo-Information (ISSN 2220-9964).

Deadline for manuscript submissions: closed (30 April 2018) | Viewed by 100333

Special Issue Editors

Dr. Jianming Liang
Environmental Systems Research Institute, 380 New York Street, Redlands, CA 92373, USA
Interests: urban environment; anthropogenic carbon emissions; urban climate; sustainable energy; GIScience

Prof. Jianhua Gong
State Key Laboratory of Remote Sensing Science, Institute of Remote Sensing and Digital Earth, Chinese Academy of Sciences, No. 20 Datun Road, Chaoyang District, Beijing 100101, China
Interests: virtual geographic environment; virtual reality; 3D GIS

Prof. Yu Liu
Institute of Remote Sensing and Geographical Information Systems, Peking University, Beijing 100871, China
Interests: GIScience; spatiotemporal big data; human geography

Special Issue Information

Dear Colleagues,

Computer-aided geographic experiments (CAGEs) rely on virtual geographic environments (VGEs) to assist in solving deep-level geographic problems and contribute to the understanding of complex geographic scenarios. We define a virtual urban geographic experiment (VUGE) as a special type of CAGE that focuses on urban geography.

Recent breakthroughs in virtual reality (VR), augmented reality (AR), artificial intelligence (AI), and remote sensing (RS) have demonstrated great promise for delivering more advanced urban VGEs. Rapid 3D modeling of indoor and outdoor urban environments can be achieved with UAV imagery, LiDAR point clouds, and panoramic photography. New sensing techniques, such as social sensing, provide an unprecedented level of access to spatiotemporal big data. Recently developed VR and AR solutions, such as the Oculus Rift, HTC Vive, and Microsoft HoloLens, can greatly improve human-machine communication and human perception of virtual geographic space. Furthermore, AI-based human behavior modeling can ultimately transform urban VGEs into populated, intelligent social systems. With these technological advancements, we believe that VUGEs can effectively help solve urban geographic problems at a higher level of physical and human complexity.

This Special Issue invites submissions that integrate a combination of these state-of-the-art techniques to better serve knowledge extraction, discovery, and sharing in urban geography. We encourage contributions on, but not limited to, the following themes:

  • New theories, conceptual frameworks and paradigms of VUGE
  • Novel applications of VR/AR, urban sensing and 3D modeling
  • New theories and practices of VR/AR-based cartography and geovisualization
  • Spatial cognition in urban spaces
  • Modeling and simulation of human spatial behavior
  • Digital humanities for exploring and understanding urban historical geography
  • Places in virtual space
  • Remote sensing and 3D modeling for better understanding of urban climate, ecosystems and energy balance

Dr. Jianming Liang
Prof. Jianhua Gong
Prof. Yu Liu
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com after registering and logging in to the website. Once registered, authors can proceed to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers are published continuously in the journal (as soon as accepted) and listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. ISPRS International Journal of Geo-Information is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1700 CHF (Swiss francs). Submitted papers should be well formatted and written in good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Published Papers (18 papers)


Research


16 pages, 2243 KiB  
Article
Construction and Optimization of Three-Dimensional Disaster Scenes within Mobile Virtual Reality
by Ya Hu, Jun Zhu, Weilian Li, Yunhao Zhang, Qing Zhu, Hua Qi, Huixin Zhang, Zhenyu Cao, Weijun Yang and Pengcheng Zhang
ISPRS Int. J. Geo-Inf. 2018, 7(6), 215; https://doi.org/10.3390/ijgi7060215 - 14 Jun 2018
Cited by 25 | Viewed by 4901
Abstract
Because mobile virtual reality (VR) is both mobile and immersive, three-dimensional (3D) visualizations of disaster scenes based in mobile VR enable users to perceive and recognize disaster environments faster and better than is possible with other methods. To achieve immersion and prevent users from feeling dizzy, such visualizations require a high scene-rendering frame rate. However, the existing related visualization work cannot provide a sufficient solution for this purpose. This study focuses on the construction and optimization of a 3D disaster scene in order to satisfy the high frame-rate requirements for the rendering of 3D disaster scenes in mobile VR. First, the design of a plugin-free browser/server (B/S) architecture for 3D disaster scene construction and visualization based in mobile VR is presented. Second, certain key technologies for scene optimization are discussed, including diverse modes of scene data representation, representation optimization of mobile scenes, and adaptive scheduling of mobile scenes. By means of these technologies, smartphones with various performance levels can achieve higher scene-rendering frame rates and improved visual quality. Finally, using a flood disaster as an example, a plugin-free prototype system was developed, and experiments were conducted. The experimental results demonstrate that a 3D disaster scene constructed via the methods addressed in this study has a sufficiently high scene-rendering frame rate to satisfy the requirements for rendering a 3D disaster scene in mobile VR. Full article
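The adaptive scheduling the abstract describes is not specified in detail here; as a purely hypothetical illustration, a distance-based level-of-detail (LOD) pass with a per-frame triangle budget might look like the following sketch (all names and thresholds are invented):

```python
import math

def select_lod(camera_pos, objects, budget):
    """Pick a mesh level of detail (LOD) per object from camera distance,
    then schedule nearest-first until a per-frame triangle budget is spent."""
    scored = []
    for obj in objects:
        d = math.dist(camera_pos, obj["pos"])
        lod = 0 if d < 50 else 1 if d < 200 else 2  # 0 = finest mesh
        scored.append((d, obj["name"], lod, obj["tris"][lod]))
    scored.sort()  # nearest objects are scheduled first
    chosen, used = [], 0
    for _, name, lod, tris in scored:
        if used + tris > budget:
            break  # defer the remaining objects to a later frame
        chosen.append((name, lod))
        used += tris
    return chosen
```

Lowering the budget on a weaker smartphone trades visual completeness for a stable frame rate, which is the trade-off the paper's optimization targets.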

13 pages, 23587 KiB  
Article
Validity of VR Technology on the Smartphone for the Study of Wind Park Soundscapes
by Tianhong YU, Holger Behm, Ralf Bill and Jian Kang
ISPRS Int. J. Geo-Inf. 2018, 7(4), 152; https://doi.org/10.3390/ijgi7040152 - 18 Apr 2018
Cited by 14 | Viewed by 3879
Abstract
Virtual reality representations of the landscape environment supply a high level of realism and may improve public awareness and acceptance of wind park projects. The soundscape around wind parks can strongly influence their acceptance and the annoyance they cause. To explore the realism of this VR technology and subjective responses to different wind park soundscapes, three types of smartphone-based virtual reality tests were performed: aural only, visual only, and aural–visual combined. In total, 21 aural and visual combinations were presented to 40 participants. The aural and visual information used were of near wind park settings and rural spaces. Perceived annoyance levels and the realism of the wind park environment were measured. Results indicated that most simulations were rated with relatively strong realism. Perceived realism was strongly correlated with the light, color, and vegetation of the simulation. Most wind park landscapes were enthusiastically accepted by the participants. The addition of aural information was found to have a strong impact on whether a participant was annoyed. Furthermore, evaluation of the soundscape on a multidimensional scale revealed that the key components influencing an individual’s annoyance by wind parks were the factors of “calmness/relaxation” and “naturality/pleasantness”. The “diversity” of the soundscape might correlate with perceived realism. Finally, dynamic aural–visual stimuli using virtual reality technology could improve the environmental assessment of wind park landscapes and thus support more comprehensible, scientific decisions than conventional tools. In addition, this study could improve the participatory planning process for more acceptable wind park landscapes. Full article

15 pages, 3787 KiB  
Article
An Indoor Scene Recognition-Based 3D Registration Mechanism for Real-Time AR-GIS Visualization in Mobile Applications
by Wei Ma, Hanjiang Xiong, Xuefeng Dai, Xianwei Zheng and Yan Zhou
ISPRS Int. J. Geo-Inf. 2018, 7(3), 112; https://doi.org/10.3390/ijgi7030112 - 15 Mar 2018
Cited by 18 | Viewed by 6232
Abstract
Mobile Augmented Reality (MAR) systems are becoming ideal platforms for visualization, permitting users to better comprehend and interact with spatial information. Subsequently, this technological development, in turn, has prompted efforts to enhance mechanisms for registering virtual objects in real world contexts. Most existing AR 3D Registration techniques lack the scene recognition capabilities needed to describe accurately the positioning of virtual objects in scenes representing reality. Moreover, the application of such registration methods in indoor AR-GIS systems is further impeded by the limited capacity of these systems to detect the geometry and semantic information in indoor environments. In this paper, we propose a novel method for fusing virtual objects and indoor scenes, based on indoor scene recognition technology. To accomplish scene fusion in AR-GIS, we first detect key points in reference images. Then, we perform interior layout extraction using a Fully Connected Networks (FCN) algorithm to acquire layout coordinate points for the tracking targets. We detect and recognize the target scene in a video frame image to track targets and estimate the camera pose. In this method, virtual 3D objects are fused precisely to a real scene, according to the camera pose and the previously extracted layout coordinate points. Our results demonstrate that this approach enables accurate fusion of virtual objects with representations of real world indoor environments. Based on this fusion technique, users can better grasp virtual three-dimensional representations on an AR-GIS platform. Full article

20 pages, 15083 KiB  
Article
Social Force Model-Based Group Behavior Simulation in Virtual Geographic Environments
by Lin Huang, Jianhua Gong, Wenhang Li, Tao Xu, Shen Shen, Jianming Liang, Quanlong Feng, Dong Zhang and Jun Sun
ISPRS Int. J. Geo-Inf. 2018, 7(2), 79; https://doi.org/10.3390/ijgi7020079 - 24 Feb 2018
Cited by 45 | Viewed by 7551
Abstract
Virtual geographic environments (VGEs) are extensively used to explore the relationship between humans and environments. Crowd simulation provides a method for VGEs to represent crowd behaviors that are observed in the real world. The social force model (SFM) can simulate interactions among individuals, but it has not sufficiently accounted for inter-group and intra-group behaviors, which are important components of crowd dynamics. We present the social group force model (SGFM), based on an extended SFM, to simulate group behaviors in VGEs, focusing on avoidance behaviors among different social groups and coordination behaviors among subgroups that belong to one social group. In our model, psychological repulsion between social groups makes each group avoid others as a whole while its members stick together as much as possible; when a social group is separated into several subgroups, the rear subgroups try to catch up to keep the whole group cohesive. We compare the simulation results of the SGFM with the extended SFM and with phenomena observed in videos. Then we discuss the role of Virtual Reality (VR) in crowd simulation visualization. The results indicate that the SGFM can enhance social group behaviors in crowd dynamics. Full article
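The social force model that the paper extends can be sketched in a few lines. The following is a generic, simplified single-pedestrian update (parameter values are illustrative, and the paper's group-level terms are not reproduced here):

```python
import math

def social_force_step(pos, vel, goal, others, dt=0.1,
                      desired_speed=1.3, tau=0.5, A=2.0, B=0.3):
    """One explicit-Euler update of a pedestrian under a simplified
    social force model: a driving force toward the goal plus
    exponential repulsion from other pedestrians."""
    # Driving force: relax the current velocity toward the desired velocity.
    dx, dy = goal[0] - pos[0], goal[1] - pos[1]
    dist = math.hypot(dx, dy) or 1e-9
    fx = (desired_speed * dx / dist - vel[0]) / tau
    fy = (desired_speed * dy / dist - vel[1]) / tau
    # Repulsive forces from neighbours (A: strength, B: interaction range).
    for ox, oy in others:
        rx, ry = pos[0] - ox, pos[1] - oy
        d = math.hypot(rx, ry) or 1e-9
        mag = A * math.exp(-d / B)
        fx += mag * rx / d
        fy += mag * ry / d
    vel = (vel[0] + fx * dt, vel[1] + fy * dt)
    pos = (pos[0] + vel[0] * dt, pos[1] + vel[1] * dt)
    return pos, vel
```

The SGFM adds further terms on top of forces like these, so that whole groups repel each other while subgroups of one group attract.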

21 pages, 3638 KiB  
Article
A Heterogeneous Distributed Virtual Geographic Environment—Potential Application in Spatiotemporal Behavior Experiments
by Shen Shen, Jianhua Gong, Jianming Liang, Wenhang Li, Dong Zhang, Lin Huang and Guoyong Zhang
ISPRS Int. J. Geo-Inf. 2018, 7(2), 54; https://doi.org/10.3390/ijgi7020054 - 7 Feb 2018
Cited by 7 | Viewed by 5873
Abstract
Due to their strong immersion and real-time interactivity, helmet-mounted virtual reality (VR) devices are becoming increasingly popular. Based on these devices, an immersive virtual geographic environment (VGE) provides a promising method for research into crowd behavior in an emergency. However, the current cheaper helmet-mounted VR devices are not popular enough, and will continue to coexist with personal computer (PC)-based systems for a long time. Therefore, a heterogeneous distributed virtual geographic environment (HDVGE) could be a feasible solution to the heterogeneous problems caused by various types of clients, and support the implementation of spatiotemporal crowd behavior experiments with large numbers of concurrent participants. In this study, we developed an HDVGE framework, and put forward a set of design principles to define the similarities between the real world and the VGE. We discussed the HDVGE architecture, and proposed an abstract interaction layer, a protocol-based interaction algorithm, and an adjusted dead reckoning algorithm to solve the heterogeneous distributed problems. We then implemented an HDVGE prototype system focusing on subway fire evacuation experiments. Two types of clients are considered in the system: PC, and all-in-one VR. Finally, we evaluated the performances of the prototype system and the key algorithms. The results showed that in a low-latency local area network (LAN) environment, the prototype system can smoothly support 90 concurrent users consisting of PC and all-in-one VR clients. HDVGE provides a feasible solution for studying not only spatiotemporal crowd behaviors in normal conditions, but also evacuation behaviors in emergency conditions such as fires and earthquakes. HDVGE could also serve as a new means of obtaining observational data about individual and group behavior in support of human geography research. Full article
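The paper's adjusted dead reckoning algorithm is not given in the abstract; a standard first-order scheme, which such adjustments typically build on, looks like this (the blending factor is an assumption of the sketch):

```python
def extrapolate(last_pos, last_vel, elapsed):
    """Classic first-order dead reckoning: predict a remote avatar's
    position from its last reported position and velocity."""
    return tuple(p + v * elapsed for p, v in zip(last_pos, last_vel))

def blend_correction(predicted, reported, alpha=0.3):
    """When a fresh state update arrives, converge smoothly toward it
    instead of snapping, which hides network jitter across clients."""
    return tuple(p + alpha * (r - p) for p, r in zip(predicted, reported))
```

Between updates each client calls `extrapolate`; on each update it calls `blend_correction`, so PC and all-in-one VR clients with different latencies see consistent motion.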

16 pages, 756 KiB  
Article
Survey on Urban Warfare Augmented Reality
by Xiong You, Weiwei Zhang, Meng Ma, Chen Deng and Jian Yang
ISPRS Int. J. Geo-Inf. 2018, 7(2), 46; https://doi.org/10.3390/ijgi7020046 - 31 Jan 2018
Cited by 22 | Viewed by 6288
Abstract
Urban warfare has become one of the main forms of modern combat in the twenty-first century. The main reason why urban warfare results in hundreds of casualties is that the situational information of the combatant is insufficient. Accessing information via an Augmented Reality system can elevate combatants’ situational awareness to effectively improve the efficiency of decision-making and reduce the injuries. This paper begins with the concept of Urban Warfare Augmented Reality (UWAR) and illuminates the objectives of developing UWAR, i.e., transparent battlefield, intuitional perception and natural interaction. Real-time outdoor registration, information presentation and natural interaction are presented as key technologies of a practical UWAR system. Then, the history and current research state of these technologies are summarized and their future developments are highlighted from three perspectives, i.e., (1) Better integration with Geographic Information System and Virtual Geographic Environment; (2) More intelligent software; (3) More powerful hardware. Full article

14 pages, 4617 KiB  
Article
Traffic Command Gesture Recognition for Virtual Urban Scenes Based on a Spatiotemporal Convolution Neural Network
by Chunyong Ma, Yu Zhang, Anni Wang, Yuan Wang and Ge Chen
ISPRS Int. J. Geo-Inf. 2018, 7(1), 37; https://doi.org/10.3390/ijgi7010037 - 22 Jan 2018
Cited by 30 | Viewed by 5773
Abstract
Intelligent recognition of traffic police command gestures increases authenticity and interactivity in virtual urban scenes. To actualize real-time traffic gesture recognition, a novel spatiotemporal convolution neural network (ST-CNN) model is presented. We utilized Kinect 2.0 to construct a traffic police command gesture skeleton (TPCGS) dataset collected from 10 volunteers. Subsequently, convolution operations on the locational change of each skeletal point were performed to extract temporal features, analyze the relative positions of skeletal points, and extract spatial features. After temporal and spatial features based on the three-dimensional positional information of traffic police skeleton points were extracted, the ST-CNN model classified positional information into eight types of Chinese traffic police gestures. The test accuracy of the ST-CNN model was 96.67%. In addition, a virtual urban traffic scene in which real-time command tests were carried out was set up, and a real-time test accuracy rate of 93.0% was achieved. The proposed ST-CNN model ensured a high level of accuracy and robustness. The ST-CNN model recognized traffic command gestures, and such recognition was found to control vehicles in virtual traffic environments, which enriches the interactive mode of the virtual city scene. Traffic command gesture recognition contributes to smart city construction. Full article
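The temporal feature extraction described, convolving over the positional change of each skeletal point, reduces for a single coordinate track to a 1-D convolution. A minimal sketch (the kernel here is a simple difference filter, not the paper's learned weights):

```python
def temporal_conv(track, kernel):
    """1-D valid cross-correlation over one coordinate track of a
    skeleton joint. Stacking such filters over all joints and the
    x/y/z channels yields temporal feature maps like those the
    ST-CNN learns."""
    k = len(kernel)
    return [sum(track[i + j] * kernel[j] for j in range(k))
            for i in range(len(track) - k + 1)]
```

With the difference kernel `[-1, 1]`, the output is the frame-to-frame displacement of the joint, i.e., a velocity-like temporal feature.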

18 pages, 3473 KiB  
Article
Framework for Virtual Cognitive Experiment in Virtual Geographic Environments
by Fan Zhang, Mingyuan Hu, Weitao Che, Hui Lin and Chaoyang Fang
ISPRS Int. J. Geo-Inf. 2018, 7(1), 36; https://doi.org/10.3390/ijgi7010036 - 22 Jan 2018
Cited by 31 | Viewed by 6180
Abstract
Virtual Geographic Environment Cognition is the attempt to understand the human cognition of surface features, geographic processes, and human behaviour, as well as their relationships in the real world. From the perspective of human cognition behaviour analysis and simulation, previous work in Virtual Geographic Environments (VGEs) has focused mostly on representing and simulating the real world to create an ‘interpretive’ virtual world and improve an individual’s active cognition. In terms of reactive cognition, building a user ‘evaluative’ environment in a complex virtual experiment is a necessary yet challenging task. This paper discusses the outlook of VGEs and proposes a framework for virtual cognitive experiments. The framework not only employs immersive virtual environment technology to create a realistic virtual world but also involves a responsive mechanism to record the user’s cognitive activities during the experiment. Based on the framework, this paper presents two potential implementation methods: first, training a deep learning model with several hundred thousand street view images scored by online volunteers, with further analysis of which visual factors produce a sense of safety for the individual, and second, creating an immersive virtual environment and Electroencephalogram (EEG)-based experimental paradigm to both record and analyse the brain activity of a user and explore what type of virtual environment is more suitable and comfortable. Finally, we present some preliminary findings based on the first method. Full article

17 pages, 5532 KiB  
Article
Real-Time Location-Based Rendering of Urban Underground Pipelines
by Wei Li, Yong Han, Yu Liu, Chenrong Zhu, Yibin Ren, Yanjie Wang and Ge Chen
ISPRS Int. J. Geo-Inf. 2018, 7(1), 32; https://doi.org/10.3390/ijgi7010032 - 21 Jan 2018
Cited by 20 | Viewed by 6105
Abstract
The concealment and complex spatial relationships of urban underground pipelines present challenges in managing them. Recently, augmented reality (AR) has been a hot topic around the world, because it can enhance our perception of reality by overlaying information about the environment and its objects onto the real world. Using AR, underground pipelines can be displayed accurately, intuitively, and in real time. We analyzed the characteristics of AR and their application in underground pipeline management. We mainly focused on the AR pipeline rendering procedure based on the BeiDou Navigation Satellite System (BDS) and simultaneous localization and mapping (SLAM) technology. First, in aiming to improve the spatial accuracy of pipeline rendering, we used differential corrections received from the Ground-Based Augmentation System to compute the precise coordinates of users in real time, which helped us accurately retrieve and draw pipelines near the users, and by scene recognition the accuracy can be further improved. Second, in terms of pipeline rendering, we used Visual-Inertial Odometry (VIO) to track the rendered objects and made some improvements to visual effects, which can provide steady dynamic tracking of pipelines even in relatively markerless environments and outdoors. Finally, we used the occlusion method based on real-time 3D reconstruction to realistically express the immersion effect of underground pipelines. We compared our methods to the existing methods and concluded that the method proposed in this research improves the spatial accuracy of pipeline rendering and the portability of the equipment. Moreover, the updating of our rendering procedure corresponded with the moving of the user’s location, thus we achieved a dynamic rendering of pipelines in the real environment. Full article
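Retrieving and drawing only the pipelines near the user, as the abstract describes, amounts to a proximity query against pipeline geometry. A minimal 2-D sketch (a production system would use projected coordinates and a spatial index rather than a linear scan):

```python
import math

def pipelines_near(user, pipelines, radius=30.0):
    """Return ids of pipeline segments whose nearest point to the user
    lies within `radius` metres, so only local geometry is rendered."""
    def point_seg_dist(p, a, b):
        # Distance from point p to the segment a-b, via projection.
        ax, ay = a; bx, by = b; px, py = p
        vx, vy = bx - ax, by - ay
        seg_len2 = vx * vx + vy * vy or 1e-12
        t = max(0.0, min(1.0, ((px - ax) * vx + (py - ay) * vy) / seg_len2))
        cx, cy = ax + t * vx, ay + t * vy
        return math.hypot(px - cx, py - cy)
    return [pid for pid, a, b in pipelines
            if point_seg_dist(user, a, b) <= radius]
```

The user position fed into such a query would come from the differentially corrected BDS fix, which is why positioning accuracy directly affects which pipelines are drawn.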

27 pages, 7756 KiB  
Article
A Knowledge Base for Automatic Feature Recognition from Point Clouds in an Urban Scene
by Xu-Feng Xing, Mir-Abolfazl Mostafavi and Seyed Hossein Chavoshi
ISPRS Int. J. Geo-Inf. 2018, 7(1), 28; https://doi.org/10.3390/ijgi7010028 - 16 Jan 2018
Cited by 12 | Viewed by 5241
Abstract
LiDAR technology can provide very detailed and highly accurate geospatial information on an urban scene for the creation of Virtual Geographic Environments (VGEs) for different applications. However, automatic 3D modeling and feature recognition from LiDAR point clouds are very complex tasks. This becomes even more complex when the data is incomplete (occlusion problem) or uncertain. In this paper, we propose to build a knowledge base comprising of ontology and semantic rules aiming at automatic feature recognition from point clouds in support of 3D modeling. First, several modules for ontology are defined from different perspectives to describe an urban scene. For instance, the spatial relations module allows the formalized representation of possible topological relations extracted from point clouds. Then, a knowledge base is proposed that contains different concepts, their properties and their relations, together with constraints and semantic rules. Then, instances and their specific relations form an urban scene and are added to the knowledge base as facts. Based on the knowledge and semantic rules, a reasoning process is carried out to extract semantic features of the objects and their components in the urban scene. Finally, several experiments are presented to show the validity of our approach to recognize different semantic features of buildings from LiDAR point clouds. Full article
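The reasoning process described, applying semantic rules to facts until semantic features emerge, is forward chaining. A toy sketch with invented building-recognition predicates (the actual knowledge base uses ontology concepts not shown here):

```python
def forward_chain(facts, rules):
    """Naive forward chaining: keep applying rules whose premises all
    hold until no new fact can be derived."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if conclusion not in facts and all(p in facts for p in premises):
                facts.add(conclusion)
                changed = True
    return facts

# Toy rules in the spirit of feature recognition from point-cloud
# segments (predicates and surface ids are invented for the sketch):
RULES = [
    ((("s1", "planar"), ("s1", "vertical")), ("s1", "wall")),
    ((("s1", "wall"), ("s1", "has_opening")), ("s1", "window_wall")),
]
```

In practice such rules would be expressed in an ontology language (e.g., OWL with SWRL rules) and evaluated by a reasoner rather than hand-rolled Python.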

8840 KiB  
Article
Virtual Geographic Simulation of Light Distribution within Three-Dimensional Plant Canopy Models
by Liyu Tang, Dan Yin, Shuwei Chen, Chongcheng Chen, Hongyu Huang and Ding Lin
ISPRS Int. J. Geo-Inf. 2017, 6(12), 405; https://doi.org/10.3390/ijgi6120405 - 19 Dec 2017
Cited by 5 | Viewed by 4833
Abstract
Virtual geographic environments (VGEs) have been regarded as an important new means of simulating, analyzing, and understanding complex geological processes. Plants and light are major components of the geographic environment. Light is a critical factor that affects ecological systems. In this study, we focused on simulating light transmission and distribution within a three-dimensional plant canopy model. A progressive refinement radiosity algorithm was applied to simulate the transmission and distribution of solar light within a detailed, three-dimensional (3D) loquat (Eriobotrya japonica Lindl.) canopy model. The canopy was described in three dimensions, and each organ surface was represented by a set of triangular facets. The form factors in radiosity were calculated using a hemi-cube algorithm. We developed a module for simulating the instantaneous light distribution within a virtual canopy, which was integrated into ParaTree. We simulated the distribution of photosynthetically active radiation (PAR) within a loquat canopy, and calculated the total PAR intercepted at the whole canopy scale, as well as the mean PAR interception per unit leaf area. The ParaTree-integrated radiosity model simulates the uncollided propagation of direct solar and diffuse sky light and the light-scattering effect of foliage. The PAR captured by the whole canopy based on the radiosity is approximately 9.4% greater than that obtained using ray tracing and TURTLE methods. The latter methods do not account for the scattering among leaves in the canopy in the study, and therefore, the difference might be due to the contribution of light scattering in the foliage. The simulation result is close to Myneni’s findings, in which the light scattering within a canopy is less than 10% of the incident PAR. Our method can be employed for visualizing and analyzing the spatial distribution of light within a canopy, and for estimating the PAR interception at the organ and canopy levels. 
It is useful for designing plant canopy architecture (e.g., fruit trees and plants in urban greening) and planting planning. Full article
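A progressive-refinement radiosity iteration of the kind the paper applies to canopy facets repeatedly "shoots" the patch holding the most unshot energy. A simplified sketch assuming equal patch areas (real form factors would come from the hemi-cube pass; all values here are illustrative):

```python
def progressive_radiosity(emission, reflectance, form_factors, iters=50):
    """Progressive-refinement radiosity on n patches: repeatedly shoot
    the patch with the most unshot radiosity to all other patches.
    Assumes equal patch areas, so the area ratio in the usual update
    drops out."""
    n = len(emission)
    radiosity = list(emission)
    unshot = list(emission)
    for _ in range(iters):
        i = max(range(n), key=lambda k: unshot[k])
        if unshot[i] <= 1e-9:
            break  # converged: no energy left to distribute
        shoot, unshot[i] = unshot[i], 0.0
        for j in range(n):
            if j == i:
                continue
            gain = reflectance[j] * form_factors[i][j] * shoot
            radiosity[j] += gain
            unshot[j] += gain  # received energy is later re-shot (scattering)
    return radiosity
```

The re-shooting of received energy is what models the leaf-to-leaf scattering that, per the abstract, accounts for roughly 9.4% more intercepted PAR than non-scattering methods.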

4763 KiB  
Article
A Virtual Geographic Environment for Debris Flow Risk Analysis in Residential Areas
by Lingzhi Yin, Jun Zhu, Yi Li, Chao Zeng, Qing Zhu, Hua Qi, Mingwei Liu, Weilian Li, Zhenyu Cao, Weijun Yang and Pengcheng Zhang
ISPRS Int. J. Geo-Inf. 2017, 6(11), 377; https://doi.org/10.3390/ijgi6110377 - 22 Nov 2017
Cited by 26 | Viewed by 4721
Abstract
Emergency risk assessment of debris flows in residential areas is of great significance for disaster prevention and reduction, but the assessment has disadvantages, such as a low numerical simulation efficiency and poor capabilities of risk assessment and geographic knowledge sharing. Thus, this paper focuses on the construction of a VGE (virtual geographic environment) system that provides an efficient tool to support the rapid risk analysis of debris flow disasters. The numerical simulation, risk analysis, and 3D (three-dimensional) dynamic visualization of debris flow disasters were tightly integrated into the VGE system. Key technologies, including quantitative risk assessment, multiscale parallel optimization, and visual representation of disaster information, were discussed in detail. The Qipan gully in Wenchuan County, Sichuan Province, China, was selected as the case area, and a prototype system was developed. According to the multiscale parallel optimization experiments, a suitable scale was chosen for the numerical simulation of debris flow disasters. The computational efficiency of one simulation step was 5 ms (milliseconds), and the rendering efficiency was approximately 40 fps (frames per second). Information about the risk area, risk population, and risk roads under different conditions can be quickly obtained. The experimental results show that our approach can support real-time interactive analyses and can be used to share and publish geographic knowledge. Full article
Article
A Representation Method for Complex Road Networks in Virtual Geographic Environments
by Peibei Zheng, Hong Tao, Songshan Yue, Mingguang Wu, Guonian Lv and Chuanlong Zhou
ISPRS Int. J. Geo-Inf. 2017, 6(11), 372; https://doi.org/10.3390/ijgi6110372 - 18 Nov 2017
Cited by 1 | Viewed by 4252
Abstract
Road networks are important for modelling the urban geographic environment. It is necessary to determine the spatial relationships of road intersections when using maps to help researchers conduct virtual urban geographic experiments, because a road intersection might occur as a connected cross or as an unconnected bridge overpass. Based on the concept of using different map layers to organize the rendering order of each road segment, three methods (manual, semi-automatic and mask-based automatic) are available to help map designers arrange the rendering order. However, significant manual effort is still needed, and rendering efficiency remains problematic with these methods. This paper considers the Discrete, Crossing, Overpass, Underpass, Conjunction, Up-overlap and Down-overlap spatial relationships of road intersections. An automatic method is proposed to represent these spatial relationships when drawing road networks on a map. A data-layer organization method (reflecting road grade and elevation-level information) and a symbol-layer decomposition method (reflecting road covering order in the vertical direction) are designed to determine the rendering order of each road element when rendering a map. In addition, an "auxiliary drawing action" (for drawing road segments belonging to different grades and elevations) is proposed to adjust the rendering sequences automatically. Two experiments demonstrate the feasibility and efficiency of the method, and the results show that it can effectively handle the spatial relationships of road networks in map representations. Using the proposed method, the difficulty of rendering complex road networks can be reduced.
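The core ordering idea can be sketched as follows: draw everything at a lower vertical level before anything above it (so an overpass covers the road beneath), and within one level draw all casings before all fills (so same-level roads join seamlessly at intersections). The `RoadSegment` fields and the two-pass casing/fill decomposition below are simplifying assumptions, not the paper's full data-layer and symbol-layer model:

```python
from dataclasses import dataclass

@dataclass
class RoadSegment:
    sid: str
    grade: int   # road classification; higher grades draw later within a level
    level: int   # vertical level: -1 underpass, 0 ground, 1 overpass

def draw_calls(segments):
    """Produce an ordered list of (segment id, symbol layer) draw calls.

    Levels render from lowest to highest; within each level, every
    casing renders before any fill, so segments of the same level
    merge visually where they cross."""
    calls = []
    for lvl in sorted({s.level for s in segments}):
        same = sorted((s for s in segments if s.level == lvl),
                      key=lambda s: s.grade)
        calls += [(s.sid, "casing") for s in same]
        calls += [(s.sid, "fill") for s in same]
    return calls
```

With one ground-level crossing and one bridge, the bridge's casing and fill are emitted last, so it is always rendered on top of the roads it spans.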
Article
Surveillance Video Synopsis in GIS
by Yujia Xie, Meizhen Wang, Xuejun Liu and Yiguang Wu
ISPRS Int. J. Geo-Inf. 2017, 6(11), 333; https://doi.org/10.3390/ijgi6110333 - 31 Oct 2017
Cited by 15 | Viewed by 4713
Abstract
Surveillance videos contain a considerable amount of data in which the information of interest to the user is sparsely distributed. Researchers therefore construct video synopses that contain the key information extracted from a surveillance video for efficient browsing and analysis. The geospatial–temporal information of a surveillance video plays an important role in efficiently describing video content, yet current video synopsis approaches lack the introduction and analysis of geospatial–temporal information. To address these problems, this paper proposes an approach called "surveillance video synopsis in GIS". Based on an integration model of video moving objects and GIS, the virtual visual field and the expression model of the moving object are constructed by spatially locating and clustering the trajectory of the moving object. The subgraphs of the moving object are then reconstructed frame by frame in a virtual scene. Results show that the approach comprehensively analyzes and creates fused expression patterns between dynamic video information and geospatial–temporal information in GIS and reduces the playback time of video content.
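Spatially locating a moving object's image trajectory in the GIS is commonly done by projecting pixel coordinates onto the ground plane. The sketch below assumes a planar ground surface and a known 3x3 homography H estimated from control points; the paper's integration model of video and GIS is richer than this, so treat it only as an illustration of the camera-to-map step:

```python
def apply_homography(H, pts):
    """Project image-plane trajectory points (x, y) into map
    coordinates via a 3x3 homography H (a list of three 3-element
    rows), using homogeneous coordinates and a perspective divide."""
    out = []
    for x, y in pts:
        w = H[2][0] * x + H[2][1] * y + H[2][2]
        out.append(((H[0][0] * x + H[0][1] * y + H[0][2]) / w,
                    (H[1][0] * x + H[1][1] * y + H[1][2]) / w))
    return out
```

In practice H would be estimated once per fixed camera from a few image/map control-point pairs, after which every detected trajectory can be georeferenced frame by frame.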
Article
Overview of the OGC CDB Standard for 3D Synthetic Environment Modeling and Simulation
by Sara Saeedi, Steve Liang, David Graham, Michael F. Lokuta and Mir Abolfazl Mostafavi
ISPRS Int. J. Geo-Inf. 2017, 6(10), 306; https://doi.org/10.3390/ijgi6100306 - 17 Oct 2017
Cited by 9 | Viewed by 7792
Abstract
Recent advances in sensor and platform technologies, such as satellite systems, unmanned aerial vehicles (UAVs), manned aerial platforms, and ground-based sensor networks, have resulted in massive volumes of data being produced and collected about the earth. Processing, managing, and analyzing these data is one of the main challenges in the 3D synthetic representations used in modeling and simulation (M&S) of the natural environment. M&S devices, such as flight simulators, traditionally require a variety of different databases to provide a synthetic representation of the world, and M&S often requires the integration of data from a variety of sources stored in different formats. Thus, for the simulation of a complex synthetic environment, such as a 3D terrain model, tackling interoperability among its components (geospatial data, natural and man-made objects, dynamic and static models) is a critical challenge. Conventional approaches used local proprietary data models and formats; they often lacked interoperability and created silos of content within the simulation community. Open geospatial standards are therefore increasingly perceived as a means to promote interoperability and reusability for 3D M&S. In this paper, the Open Geospatial Consortium (OGC) CDB Standard is introduced. "CDB" originally referred to Common DataBase but is now treated in the OGC community as a name with no expansion. The OGC CDB is an international standard for structuring, modeling, and storing the geospatial information required in high-performance modeling and simulation applications. CDB defines the core conceptual models, use cases, requirements, and specifications for employing geospatial data in 3D M&S. The main features of the OGC CDB Standard include its run-time performance, a fully plug-and-play interoperable geospatial data store, suitability for 3D and dynamic simulation environments, and the ability to integrate proprietary and open-source data formats. Furthermore, compatibility with the OGC standards baseline reduces the complexity of discovering, transforming, and streaming geospatial data into the synthetic environment and makes the standard more widely acceptable to major geospatial data and software producers. This paper includes an overview of OGC CDB version 1.0, which defines a conceptual model and file structure for the storage, access, and modification of a multi-resolution 3D synthetic environment data store. Finally, the paper presents a perspective on future versions of the OGC CDB and the steps for harmonizing the OGC CDB Standard with the rest of the OGC/ISO standards baseline.
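As a rough illustration of what a multi-resolution geospatial data store involves, the sketch below maps a coordinate to a tile index whose size halves at each level of detail and derives a file path from it. The indexing and path scheme here are invented for illustration only; the normative tile sizes, directory layout, and file naming are defined by the OGC CDB specification itself:

```python
import math

def tile_index(lat, lon, lod):
    """Map a geographic coordinate to a (row, col) tile index at a
    given level of detail, where LOD 0 uses 1-degree tiles and each
    further LOD halves the tile size (illustrative, not the CDB
    normative tiling)."""
    tile_deg = 1.0 / (2 ** lod)
    row = math.floor((lat + 90.0) / tile_deg)
    col = math.floor((lon + 180.0) / tile_deg)
    return row, col

def tile_path(dataset, lat, lon, lod):
    """Build a hypothetical on-disk path for a dataset tile, so that a
    simulator can address any resolution level with pure arithmetic,
    without a database lookup."""
    row, col = tile_index(lat, lon, lod)
    return f"{dataset}/L{lod:02d}/R{row}/C{col}"
```

The point of such deterministic tiling is run-time performance: a simulation host can compute exactly which files cover its current view frustum at the resolution it needs.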
Article
Consistent Roof Geometry Encoding for 3D Building Model Retrieval Using Airborne LiDAR Point Clouds
by Yi-Chen Chen, Bo-Yi Lin and Chao-Hung Lin
ISPRS Int. J. Geo-Inf. 2017, 6(9), 269; https://doi.org/10.3390/ijgi6090269 - 28 Aug 2017
Cited by 5 | Viewed by 5029
Abstract
A 3D building model retrieval method using airborne LiDAR point clouds as input queries is introduced. Based on the concept of data reuse, available building models on the Internet that have geometric shapes similar to a user-specified point cloud query are retrieved and reused for the purposes of data extraction and building modeling. To retrieve models efficiently, point cloud queries and building models are consistently and compactly encoded by the proposed method. The encoding focuses on the geometry of building roofs, which are the most informative part of a building in airborne LiDAR acquisitions. Spatial histograms of geometric features that describe the shapes of building roofs are utilized as the shape descriptor, which provides shape distinguishability, encoding compactness, rotation invariance, and noise insensitivity. These properties make the proposed approach feasible for efficient and accurate model retrieval. Analyses of LiDAR data and building model databases, together with the implementation of a web-based retrieval system (available at http://pcretrieval.dgl.xyz), demonstrate the feasibility of the proposed method for retrieving polygon models using point clouds.
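One way a roof descriptor can be both compact and rotation invariant is to histogram a geometric feature that does not change when the building is rotated about the vertical axis. The sketch below bins the inclination of roof surface normals (their angle from vertical); it is a simplified illustration, as the paper's descriptor combines several geometric features:

```python
import math

def inclination_histogram(normals, bins=9):
    """Build a normalized histogram of surface-normal inclination
    angles (angle from the up axis, 0..pi). Inclination is unchanged
    by rotating a roof about the vertical axis, so the descriptor is
    rotation invariant; binning and normalization make it compact and
    comparable across buildings with different point counts."""
    hist = [0] * bins
    for nx, ny, nz in normals:
        norm = math.sqrt(nx * nx + ny * ny + nz * nz)
        angle = math.acos(max(-1.0, min(1.0, nz / norm)))
        b = min(int(angle / math.pi * bins), bins - 1)
        hist[b] += 1
    total = sum(hist) or 1
    return [h / total for h in hist]
```

A flat roof concentrates all mass in the first bin, a gabled roof in an intermediate bin, and the same histogram results no matter how the building is oriented in plan.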
Article
Texture-Cognition-Based 3D Building Model Generalization
by Po Liu, Chengming Li and Fei Li
ISPRS Int. J. Geo-Inf. 2017, 6(9), 260; https://doi.org/10.3390/ijgi6090260 - 23 Aug 2017
Cited by 3 | Viewed by 4436
Abstract
Three-dimensional (3D) building models have been widely used in the fields of urban planning, navigation and virtual geographic environments. These models incorporate many details to address the complexities of urban environments, and level-of-detail (LOD) technology is commonly used for their progressive transmission and visualization. Detailed groups of models can be replaced by a single model through generalization. In this paper, texture features are first introduced into the generalization process, and a self-organizing map (SOM)-based algorithm is used for texture classification. In addition, a new cognition-based hierarchical algorithm is proposed for model-group clustering. First, a constrained Delaunay triangulation (CDT) is constructed from the footprints of building models segmented by the road network, and a preliminary proximity graph is extracted from the CDT by visibility analysis. Second, the graph is further segmented by the texture-feature and landmark models. Third, a minimum spanning tree (MST) is created from the segmented graph, and the final groups are obtained by linear detection and discrete-model conflation. Finally, these groups are conflated using small-triangle removal while preserving the original textures. The experimental results demonstrate the effectiveness of this algorithm.
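The MST-based grouping step can be illustrated with a much-reduced stand-in: build a minimum spanning tree over building footprint centroids and cut edges longer than a threshold, leaving the connected components as candidate groups. This omits the CDT, visibility analysis, and texture/landmark segmentation, so it is a sketch of the clustering idea only:

```python
import math

def mst_clusters(points, cut):
    """Group 2D points by building a minimum spanning tree (Prim's
    algorithm) over pairwise Euclidean distances, cutting edges longer
    than `cut`, and returning the resulting components as sorted index
    lists."""
    n = len(points)
    if n == 0:
        return []
    dist = lambda a, b: math.hypot(a[0] - b[0], a[1] - b[1])
    in_tree, edges = {0}, []
    while len(in_tree) < n:               # grow the MST one vertex at a time
        d, i, j = min((dist(points[i], points[j]), i, j)
                      for i in in_tree for j in range(n) if j not in in_tree)
        in_tree.add(j)
        edges.append((d, i, j))
    parent = list(range(n))               # union-find over the kept edges
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    for d, i, j in edges:
        if d <= cut:                      # cut long edges to split groups
            parent[find(i)] = find(j)
    groups = {}
    for k in range(n):
        groups.setdefault(find(k), []).append(k)
    return sorted(groups.values())
```

Two tight pairs of footprints separated by a wide gap fall into two groups, matching the intuition that nearby buildings generalize into one block.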
Other

Case Report
Three-Dimensional Modeling and Indoor Positioning for Urban Emergency Response
by Xin Zhang, Yongxin Chen, Linjun Yu, Weisheng Wang and Qianyu Wu
ISPRS Int. J. Geo-Inf. 2017, 6(7), 214; https://doi.org/10.3390/ijgi6070214 - 12 Jul 2017
Cited by 8 | Viewed by 5158
Abstract
Three-dimensional modeling of building environments and indoor positioning are essential for emergency response in cities. Traditional ground-based measurement methods, such as geodetic astronomy, total stations, and global positioning system (GPS) receivers, cannot meet the demand for high-precision positioning; it is therefore essential to conduct multiple-angle data acquisition and establish three-dimensional spatial models. In this paper, a rapid modeling technology is introduced, which includes multiple-angle remote sensing image acquisition based on unmanned aerial vehicles (UAVs), an algorithm that removes linear and planar foregrounds before reconstructing the backgrounds, and a three-dimensional modeling (3DM) framework. Additionally, an indoor 3DM technology based on building design drawings is introduced, and an indoor positioning technology is developed using iBeacons. Finally, a prototype indoor and outdoor positioning-service system for an urban firefighting rescue scenario is presented to demonstrate the value of the proposed method.
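iBeacon-style positioning typically works in two steps: convert each beacon's received signal strength (RSSI) to an approximate distance with a log-distance path-loss model, then trilaterate from three or more beacons at known positions. The sketch below uses standard textbook formulas; the calibration constants (RSSI at 1 m, path-loss exponent) are assumed values, not parameters from the paper:

```python
def rssi_to_distance(rssi, tx_power=-59, n=2.0):
    """Estimate distance (m) from an RSSI reading (dBm) using the
    log-distance path-loss model. tx_power is the calibrated RSSI at
    1 m and n the path-loss exponent (~2 in free space, higher indoors)."""
    return 10 ** ((tx_power - rssi) / (10 * n))

def trilaterate(beacons):
    """Solve for a 2D position from >= 3 (x, y, distance) observations
    by subtracting the first circle equation from the others (which
    linearizes the system) and solving the 2x2 least-squares normal
    equations."""
    x0, y0, d0 = beacons[0]
    A, b = [], []
    for x, y, d in beacons[1:]:
        A.append((2 * (x - x0), 2 * (y - y0)))
        b.append(d0 * d0 - d * d + x * x - x0 * x0 + y * y - y0 * y0)
    s11 = sum(a[0] * a[0] for a in A)
    s12 = sum(a[0] * a[1] for a in A)
    s22 = sum(a[1] * a[1] for a in A)
    t1 = sum(a[0] * bi for a, bi in zip(A, b))
    t2 = sum(a[1] * bi for a, bi in zip(A, b))
    det = s11 * s22 - s12 * s12
    return (s22 * t1 - s12 * t2) / det, (s11 * t2 - s12 * t1) / det
```

With noisy indoor RSSI, the least-squares form degrades gracefully as more beacons are added, which is why deployments favor dense beacon placement over per-beacon accuracy.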