Virtual 3D City Models

A special issue of ISPRS International Journal of Geo-Information (ISSN 2220-9964).

Deadline for manuscript submissions: closed (31 March 2021) | Viewed by 35481

Special Issue Editor

Department of Architecture, School of Design and Environment, National University of Singapore, Singapore 117566, Singapore
Interests: 3D city models; digital twins; geoBIM; data integration; city information modeling; generative design; rule-based design; shape recognition

Special Issue Information

Dear Colleagues,

Virtual 3D City Models, in varying forms of extent and detail, are becoming more common, yet their usage might still be limited. While virtual 3D city models have great potential in supporting the planning, simulation, and operation of cities, districts, and neighborhoods, there are still many obstacles to their use as "digital twins". In this Special Issue, we are less concerned with what may strictly define a digital twin, or what technical challenges may exist with respect to the development of such a digital twin. Instead, we would like to share knowledge on innovative uses of virtual 3D city models for planning, simulation, and operation. We are especially interested in use cases that have surpassed the conceptual and hypothetical realm and have seen some accomplishment in practice. We want to learn from both successful and less successful demonstrations of the use of virtual 3D city models to plan, simulate, and operate our urban environments. We are interested in potential best practices, as well as lessons learned, that can inspire others to follow up and explore similar or complementary applications of virtual 3D city models. Obviously, one size does not fit all and, as such, comparing successful practices and lessons learned is the best way forward to ensure a reasonable perspective of the complete, successful, and effective use of the potential of virtual 3D city models. This Special Issue aims to make an important step toward this ultimate objective.

Dr. Rudi Stouffs
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles as well as short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. ISPRS International Journal of Geo-Information is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1700 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • 3D city models
  • digital twins
  • city information modeling
  • urban simulation
  • operational models
  • urban environments

Published Papers (9 papers)

Editorial

3 pages, 174 KiB  
Editorial
Virtual 3D City Models
by Rudi Stouffs
ISPRS Int. J. Geo-Inf. 2022, 11(4), 240; https://0-doi-org.brum.beds.ac.uk/10.3390/ijgi11040240 - 06 Apr 2022
Cited by 1 | Viewed by 1877
Abstract
Virtual 3D city models, in varying forms of extent and detail, are becoming more common, yet their usage might still be limited [...] Full article
(This article belongs to the Special Issue Virtual 3D City Models)

Research

11 pages, 21401 KiB  
Article
Three-Dimensional Measurement and Three-Dimensional Printing of Giant Coastal Rocks
by Zhiyi Gao, Akio Doi, Kenji Sakakibara, Tomonaru Hosokawa and Masahiro Harata
ISPRS Int. J. Geo-Inf. 2021, 10(6), 404; https://0-doi-org.brum.beds.ac.uk/10.3390/ijgi10060404 - 11 Jun 2021
Cited by 2 | Viewed by 1921
Abstract
In recent years, the use of three-dimensional (3D) measurement and printing technologies has become an effective means of analyzing and reproducing both physical and natural objects, regardless of size. However, in some complex environments, such as coastal environments, it is difficult to obtain the required data by conventional measurement methods. In this paper, we describe our efforts to archive and digitally reproduce a giant coastal rock formation known as Sanouiwa, a famous site off the coast of Miyako City, Iwate Prefecture, Japan. We used two different 3D measurement techniques. The first involved taking pictures using a drone-mounted camera, and the second involved the use of global navigation satellite system data. The point cloud data generated from the high-resolution camera images were integrated using 3D shape reconstruction software, and 3D digital models were created for use in tourism promotion and environmental protection awareness initiatives. Finally, we used 3D printers to fabricate physical models of the rocks from the 3D digital models, for use in museum exhibitions, school curriculum materials, and related applications. Full article
(This article belongs to the Special Issue Virtual 3D City Models)
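
As a hedged illustration only (not drawn from the paper, which relies on dedicated 3D shape reconstruction software), the sketch below shows how a merged photogrammetric point cloud could be turned into a watertight, printable mesh using the open-source Open3D library; the file names and parameter values are placeholders.

```python
# Minimal sketch (not the authors' pipeline): turning a merged photogrammetric point
# cloud into a watertight, printable mesh with Open3D. Names and parameters are placeholders.
import open3d as o3d

pcd = o3d.io.read_point_cloud("sanouiwa_points.ply")    # merged drone-image point cloud
pcd = pcd.voxel_down_sample(voxel_size=0.05)            # thin out dense matches (metres)
pcd.estimate_normals(
    search_param=o3d.geometry.KDTreeSearchParamHybrid(radius=0.2, max_nn=30))

# Poisson reconstruction yields a closed surface, which 3D printing requires.
mesh, _densities = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(pcd, depth=9)
mesh = mesh.simplify_quadric_decimation(target_number_of_triangles=200_000)
mesh.compute_triangle_normals()                         # STL export needs face normals
o3d.io.write_triangle_mesh("sanouiwa_print.stl", mesh)  # hand the STL to the slicer
```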

12 pages, 5238 KiB  
Article
Virtual 3D Campus for Universiti Teknologi Malaysia (UTM)
by Syahiirah Salleh, Uznir Ujang and Suhaibah Azri
ISPRS Int. J. Geo-Inf. 2021, 10(6), 356; https://0-doi-org.brum.beds.ac.uk/10.3390/ijgi10060356 - 22 May 2021
Cited by 14 | Viewed by 3808
Abstract
University campuses consist of many buildings within a large area managed by a single organization. As with 3D city modeling, a 3D model of a campus can be utilized to provide a better foundation for planning, navigation, and management of buildings. This study approaches 3D modeling of the UTM campus by utilizing data from aerial photos and site observations. The 3D models of buildings were drawn from building footprints in SketchUp and converted to CityGML using FME software. The CityGML models were imported into a geodatabase using 3DCityDB and visualized in Cesium. The resulting 3D building models were in CityGML format at level of detail 2 (LoD2), consisting of ground, wall, and roof surfaces. The 3D models were positioned with real-world coordinates using the geolocation function in SketchUp. The non-spatial attributes of the 3D models were also stored in a database managed by PostgreSQL. The methodology demonstrated in this study was found to be able to create LoD2 building models; however, issues of accuracy arose in terms of building details and positioning. Therefore, higher-accuracy data, such as point cloud data, should produce higher-LoD models and more accurate positioning. Full article
(This article belongs to the Special Issue Virtual 3D City Models)
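
For readers who want to query such a campus geodatabase, the following is a minimal sketch assuming the default 3DCityDB v4 relational schema (the "citydb" schema with its cityobject and building tables) and placeholder PostgreSQL credentials; it is illustrative and not taken from the paper, and column names should be checked against the actual database.

```python
# Minimal sketch, assuming the default 3DCityDB v4 schema and placeholder credentials:
# list the stored buildings and their heights from the PostgreSQL geodatabase.
import psycopg2

conn = psycopg2.connect(dbname="utm_campus", user="postgres", password="secret")
with conn, conn.cursor() as cur:
    cur.execute("""
        SELECT co.gmlid, b.measured_height
        FROM citydb.building b
        JOIN citydb.cityobject co ON co.id = b.id
        WHERE b.id = b.building_root_id      -- top-level buildings, not building parts
        ORDER BY co.gmlid;
    """)
    for gmlid, height in cur.fetchall():
        print(f"{gmlid}: {height} m")
conn.close()
```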

23 pages, 9574 KiB  
Article
Near Real-Time Semantic View Analysis of 3D City Models in Web Browser
by Juho-Pekka Virtanen, Kaisa Jaalama, Tuulia Puustinen, Arttu Julin, Juha Hyyppä and Hannu Hyyppä
ISPRS Int. J. Geo-Inf. 2021, 10(3), 138; https://0-doi-org.brum.beds.ac.uk/10.3390/ijgi10030138 - 04 Mar 2021
Cited by 16 | Viewed by 4762
Abstract
3D city models and their browser-based applications have become an increasingly applied tool in cities. One of their applications is the analysis of views and visibility, applicable to property valuation and the evaluation of urban green infrastructure. We present a near real-time semantic view analysis relying on a 3D city model, implemented in a web browser. The analysis is tested in two alternative use cases: property valuation and evaluation of the urban green infrastructure. The results describe the elements visible from a given location, and can also be applied to object-type-specific analysis, such as green view index estimation, with the main benefit being the freedom of choosing the point of view offered by the 3D model. Several promising development directions can be identified based on the current implementation and experiment results, including the integration of the semantic view analysis with virtual reality immersive visualization or 3D city model application development platforms. Full article
(This article belongs to the Special Issue Virtual 3D City Models)
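
The per-class view composition underlying metrics such as the green view index can be sketched in a few lines; the snippet below is an illustrative stand-in (not the authors' browser implementation), assuming a rendering in which each pixel stores a semantic class id, with class ids and names chosen arbitrarily.

```python
# Illustrative stand-in: given a rendering of the semantic 3D city model in which every
# pixel stores a class id, the share of each element type in the view is a pixel count.
import numpy as np

CLASSES = {0: "sky", 1: "building", 2: "vegetation", 3: "ground", 4: "water"}

def view_composition(label_image: np.ndarray) -> dict:
    """Fraction of the view occupied by each semantic class."""
    total = label_image.size
    return {name: float(np.count_nonzero(label_image == cid)) / total
            for cid, name in CLASSES.items()}

# Example: a synthetic 4x4 "view"; the green view index is the vegetation share.
view = np.array([[0, 0, 2, 2],
                 [1, 1, 2, 2],
                 [1, 1, 3, 3],
                 [1, 1, 3, 3]])
composition = view_composition(view)
print(composition)                                     # {'sky': 0.125, 'building': 0.375, ...}
print("green view index:", composition["vegetation"])  # 0.25
```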

18 pages, 2515 KiB  
Article
Automatic Workflow for Roof Extraction and Generation of 3D CityGML Models from Low-Cost UAV Image-Derived Point Clouds
by Arnadi Murtiyoso, Mirza Veriandi, Deni Suwardhi, Budhy Soeksmantono and Agung Budi Harto
ISPRS Int. J. Geo-Inf. 2020, 9(12), 743; https://0-doi-org.brum.beds.ac.uk/10.3390/ijgi9120743 - 12 Dec 2020
Cited by 15 | Viewed by 4250
Abstract
Developments in UAV sensors and platforms in recent decades have stimulated an upsurge in their application for 3D mapping. The relatively low-cost nature of UAVs, combined with the use of revolutionary photogrammetric algorithms such as dense image matching, has made them a strong competitor to aerial lidar mapping. However, in the context of 3D city mapping, further 3D modeling is required to generate 3D city models, which is often performed manually using, e.g., photogrammetric stereoplotting. The aim of this paper is to implement an algorithmic approach to building point cloud segmentation, from which an automated workflow for the generation of roof planes is also presented. 3D models of buildings are then created using the roof planes as a base, thereby satisfying the requirements for Level of Detail (LoD) 2 in the CityGML paradigm. Consequently, the paper attempts to create an automated workflow going from UAV-derived point clouds to LoD2-compatible 3D models. Results show that the rule-based segmentation approach presented in this paper works well, with the additional advantage of instance segmentation and automatic semantic attribute annotation, while the 3D modeling algorithm performs well for low- to medium-complexity roofs. The proposed workflow can therefore be implemented for simple roofs with a relatively low number of planar surfaces. Furthermore, the automated approach to the 3D modeling process also helps to maintain the geometric requirements of CityGML, such as 3D polygon coplanarity, vis-à-vis manual stereoplotting. Full article
(This article belongs to the Special Issue Virtual 3D City Models)
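
As an illustrative stand-in for the segmentation step (not the paper's rule-based algorithm), the sketch below extracts candidate roof planes from a UAV-derived building point cloud by iterative RANSAC plane fitting with Open3D; the file name and all thresholds are assumptions.

```python
# Illustrative stand-in: iterative RANSAC plane extraction from a building point cloud.
import open3d as o3d

cloud = o3d.io.read_point_cloud("building_cloud.ply")
planes, remaining = [], cloud
for _ in range(10):                                     # extract at most 10 planes
    if len(remaining.points) < 100:
        break
    model, inliers = remaining.segment_plane(distance_threshold=0.1,
                                             ransac_n=3,
                                             num_iterations=1000)
    if len(inliers) < 200:                              # stop once planes become insignificant
        break
    a, b, c, d = model                                  # plane: ax + by + cz + d = 0
    if abs(c) > 0.3:                                    # keep roof-like faces, skip vertical walls
        planes.append(remaining.select_by_index(inliers))
    remaining = remaining.select_by_index(inliers, invert=True)

print(f"extracted {len(planes)} candidate roof planes")
```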

12 pages, 4127 KiB  
Article
Large Common Plansets-4-Points Congruent Sets for Point Cloud Registration
by Cedrique Fotsing, Nafissetou Nziengam and Christophe Bobda
ISPRS Int. J. Geo-Inf. 2020, 9(11), 647; https://0-doi-org.brum.beds.ac.uk/10.3390/ijgi9110647 - 29 Oct 2020
Cited by 14 | Viewed by 2175
Abstract
Point cloud registration combines multiple point cloud data sets collected from different positions, using the same or different devices, to form a single point cloud within a single coordinate system. Point cloud registration is usually achieved through spatial transformations that align and merge multiple point clouds into a single globally consistent model. In this paper, we present a new segmentation-based approach for point cloud registration. Our method consists of extracting plane structures from point clouds and then estimating, using the 4-Point Congruent Sets (4PCS) technique, transformations that align the plane structures. Instead of a global alignment using all the points in the dataset, our method aligns two point clouds using their local plane structures. This considerably reduces the data size, computational workload, and execution time. Unlike conventional methods that seek to align the largest number of common points between entities, the new method aims to align the largest number of planes. Using partial point clouds of multiple real-world scenes, we demonstrate the superiority of our method compared to raw 4PCS in terms of quality of result (QoS) and execution time. Our method requires about half the execution time of 4PCS in all the tested datasets and produces better alignment of the point clouds. Full article
(This article belongs to the Special Issue Virtual 3D City Models)
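
The data-reduction idea can be sketched as follows: extract the dominant planes from each cloud and register only those subsets. Open3D does not provide 4PCS, so the sketch below substitutes FPFH-feature RANSAC as a generic global alignment step; file names and parameters are placeholders, and this is not the authors' implementation.

```python
# Sketch of the plane-subset reduction idea, with FPFH-feature RANSAC standing in for 4PCS.
import open3d as o3d

def plane_subset(pcd, n_planes=5, dist=0.05):
    """Keep only the points lying on the n largest RANSAC planes."""
    kept, remaining = [], pcd
    for _ in range(n_planes):
        _, inliers = remaining.segment_plane(dist, 3, 1000)
        kept.append(remaining.select_by_index(inliers))
        remaining = remaining.select_by_index(inliers, invert=True)
    out = kept[0]
    for part in kept[1:]:
        out += part
    return out

def fpfh(pcd, radius=0.25):
    pcd.estimate_normals(o3d.geometry.KDTreeSearchParamHybrid(radius=radius, max_nn=30))
    return o3d.pipelines.registration.compute_fpfh_feature(
        pcd, o3d.geometry.KDTreeSearchParamHybrid(radius=radius * 2, max_nn=100))

source = plane_subset(o3d.io.read_point_cloud("scan_a.ply"))
target = plane_subset(o3d.io.read_point_cloud("scan_b.ply"))
result = o3d.pipelines.registration.registration_ransac_based_on_feature_matching(
    source, target, fpfh(source), fpfh(target), True, 0.1,
    o3d.pipelines.registration.TransformationEstimationPointToPoint(False), 3,
    [], o3d.pipelines.registration.RANSACConvergenceCriteria(100000, 0.999))
print(result.transformation)                 # 4x4 rigid transform aligning source to target
```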

12 pages, 2836 KiB  
Article
Solar3D: An Open-Source Tool for Estimating Solar Radiation in Urban Environments
by Jianming Liang, Jianhua Gong, Xiuping Xie and Jun Sun
ISPRS Int. J. Geo-Inf. 2020, 9(9), 524; https://0-doi-org.brum.beds.ac.uk/10.3390/ijgi9090524 - 01 Sep 2020
Cited by 8 | Viewed by 4834
Abstract
Solar3D is an open-source software application designed to interactively calculate solar irradiation on three-dimensional (3D) surfaces in a virtual environment constructed with combinations of 3D city models, digital elevation models (DEMs), digital surface models (DSMs), and feature layers. The GRASS GIS r.sun solar radiation model computes solar irradiation based on two-dimensional (2D) raster maps for a given day, latitude, surface, and atmospheric conditions. With the increasing availability of 3D city models and the demand for solar energy, there is an urgent need for better tools to compute solar radiation directly with 3D city models. Solar3D extends the GRASS GIS r.sun model from 2D to 3D by feeding the model with input, including surface slope, aspect, and time-resolved shading, derived directly from the 3D scene using computer graphics techniques. To summarize, Solar3D offers several new features that, as a whole, distinguish this novel approach from existing 3D solar irradiation tools in the following ways. (1) Solar3D can consume massive heterogeneous 3D city models, including oblique airborne photogrammetry-based 3D city models (OAP3Ds or integrated meshes); (2) Solar3D can perform near real-time pointwise calculation for durations from daily to annual; (3) Solar3D can integrate and interactively explore large-scale heterogeneous geospatial data; (4) Solar3D can calculate solar irradiation at arbitrary surface positions, including on rooftops, facades, and the ground. Full article
(This article belongs to the Special Issue Virtual 3D City Models)
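
The surface-geometry input that a 3D scene supplies to an r.sun-style model can be illustrated with the standard incidence-angle formula; the sketch below is a simplified, hedged example (clear-sky, no shading term) and is not Solar3D code.

```python
# Simplified example: the cosine of the solar incidence angle contributed by slope/aspect.
import math

def incidence_cosine(slope_deg, aspect_deg, sun_altitude_deg, sun_azimuth_deg):
    """cos of the angle between the surface normal and the sun direction,
    clamped to 0 when the sun is behind the surface."""
    beta = math.radians(slope_deg)           # tilt from horizontal (0 = flat roof, 90 = facade)
    h = math.radians(sun_altitude_deg)       # solar elevation above the horizon
    dazi = math.radians(sun_azimuth_deg - aspect_deg)
    cos_theta = (math.cos(beta) * math.sin(h)
                 + math.sin(beta) * math.cos(h) * math.cos(dazi))
    return max(cos_theta, 0.0)

# A south-facing facade (slope 90°, aspect 180°) with the sun due south at 35° altitude:
print(incidence_cosine(90, 180, 35, 180))    # ≈ 0.819, i.e. cos(35°)
# The same sun seen by a flat rooftop:
print(incidence_cosine(0, 0, 35, 180))       # ≈ 0.574, i.e. sin(35°)
```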

25 pages, 7627 KiB  
Article
Building Virtual 3D City Model for Smart Cities Applications: A Case Study on Campus Area of the University of Novi Sad
by Dušan Jovanović, Stevan Milovanov, Igor Ruskovski, Miro Govedarica, Dubravka Sladić, Aleksandra Radulović and Vladimir Pajić
ISPRS Int. J. Geo-Inf. 2020, 9(8), 476; https://0-doi-org.brum.beds.ac.uk/10.3390/ijgi9080476 - 30 Jul 2020
Cited by 52 | Viewed by 7103
Abstract
Smart Cities data and applications need to replicate, as faithfully as possible, the state of the city and to simulate possible alternative futures. In order to do this, the modelling of the city should cover all aspects of the city that are relevant to the problems that require smart solutions. In this context, 2D and 3D spatial data play a key role, in particular 3D city models. One of the methods for collecting data that can be used for developing such 3D city models is Light Detection and Ranging (LiDAR), a technology that has provided opportunities to generate large-scale 3D city models at relatively low cost. The collected data is further processed to obtain fully developed photorealistic virtual 3D city models. The goal of this research is to develop a virtual 3D city model based on airborne LiDAR surveying and to analyze its applicability toward Smart Cities applications. In this paper, we present a workflow that goes from data collection by LiDAR, through extract, transform, load (ETL) transformations and data processing, to developing a 3D virtual city model, and finally discuss its potential future usage scenarios in various fields of application, such as modern ICT-based urban planning and 3D cadaster. The results are presented for a case study of the campus area of the University of Novi Sad. Full article
(This article belongs to the Special Issue Virtual 3D City Models)
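
As a small illustrative step (not the authors' ETL chain), the sketch below reads an ASPRS-classified airborne LiDAR tile with the laspy library and isolates the building-classified returns; the file name is a placeholder.

```python
# Illustrative step: isolate building returns from a classified LAS tile before ETL.
import laspy
import numpy as np

las = laspy.read("novi_sad_campus.las")
building_mask = np.asarray(las.classification) == 6        # ASPRS class 6 = building
building_pts = np.column_stack([las.x, las.y, las.z])[building_mask]

print(f"{building_mask.sum()} of {len(las.points)} returns are classified as building")
if building_mask.any():
    print("bounding box:", building_pts.min(axis=0), building_pts.max(axis=0))
```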

19 pages, 9109 KiB  
Article
Hierarchical Point Matching Method Based on Triangulation Constraint and Propagation
by Jingxue Wang, Ning Zhang, Xiangqian Wu and Weixi Wang
ISPRS Int. J. Geo-Inf. 2020, 9(6), 347; https://0-doi-org.brum.beds.ac.uk/10.3390/ijgi9060347 - 26 May 2020
Cited by 3 | Viewed by 2778
Abstract
Reliable image matching is the basis of image-based three-dimensional (3D) reconstruction. This study presents a quasi-dense matching method based on triangulation constraint and propagation, as applied to different types of close-range image matching, such as images with illumination changes, large viewpoint changes, and scale changes. The method begins from a set of sparse matched points that are used to construct an initial Delaunay triangulation. Edge-to-edge matching propagation is then conducted for the point matching. Two types of matching primitives from the edges of triangles with areas larger than a given threshold in the reference image, that is, the midpoints of the edges and the intersections between the edges and extracted line segments, are used for the matching. A hierarchical matching strategy is adopted for the above-mentioned primitive matching. The points that cannot be matched in the first stage, specifically those that failed a gradient orientation descriptor similarity constraint, are further matched in the second stage. The second stage combines the descriptor and Mahalanobis distance constraints, and the optimal matching subpixel is determined according to an overall similarity score defined for the multiple constraints with different weights. Subsequently, the triangulation is updated using the newly matched points, and the aforementioned matching is repeated iteratively until no new matching points are generated. Twelve sets of close-range images are considered for the experiment. Results reveal that the proposed method has high robustness for different images and can obtain reliable matching results. Full article
(This article belongs to the Special Issue Virtual 3D City Models)
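
The propagation primitives can be illustrated with a short sketch (not the authors' code): triangulate the sparse matches with SciPy and collect the edge midpoints of triangles larger than a threshold as the next matching candidates.

```python
# Illustrative sketch: edge midpoints of large triangles as the next matching candidates.
import numpy as np
from scipy.spatial import Delaunay

def candidate_midpoints(matched_pts: np.ndarray, min_area: float = 50.0) -> np.ndarray:
    """matched_pts: (N, 2) pixel coordinates of already-matched reference-image points."""
    tri = Delaunay(matched_pts)
    midpoints = []
    for simplex in tri.simplices:             # each row holds the indices of one triangle
        a, b, c = matched_pts[simplex]
        area = 0.5 * abs((b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0]))
        if area > min_area:                   # only large triangles are worth densifying
            midpoints.extend([(a + b) / 2, (b + c) / 2, (c + a) / 2])
    if not midpoints:
        return np.empty((0, 2))
    return np.unique(np.round(midpoints, 2), axis=0)

seeds = np.array([[10.0, 10.0], [100.0, 12.0], [55.0, 90.0], [120.0, 95.0]])
print(candidate_midpoints(seeds))             # shared edges yield de-duplicated midpoints
```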
