2nd Edition Instrumenting Smart City Applications with Big Sensing and Earth Observatory Data: Tools, Methods and Techniques

A special issue of Remote Sensing (ISSN 2072-4292). This special issue belongs to the section "Urban Remote Sensing".

Deadline for manuscript submissions: closed (29 May 2020) | Viewed by 26718

Special Issue Editors


Prof. Gabriele Bitelli
Guest Editor
Department of Civil, Chemical, Environmental, and Materials Engineering, Alma Mater Studiorum—University of Bologna, 40136 Bologna, Italy
Interests: geomatics; remote sensing; geospatial data processing; change detection

Dr. Emanuele Mandanici
Guest Editor
Department of Civil, Chemical, Environmental and Materials Engineering (DICAM), University of Bologna, Bologna, Italy
Interests: geomatics; remote sensing; change detection; thermography; radiometric calibration; environmental monitoring

Special Issue Information

Dear Colleagues,

The exponential growth in the volume of remote sensing data and the increasing quality and availability of high-resolution imagery are making more and more applications of RS data possible in urban environments. In particular, RS information, especially when combined with location-specific data collected locally or through connected devices, presents exciting opportunities for smart city applications, such as risk analysis and mitigation, climate prediction, and remote surveillance. On the other hand, the exploitation of this great amount of data poses new challenges for big data analysis models and requires new spatial information frameworks capable of integrating imagery, sensor observations, and social media in geographic information systems (GIS).

This Special Issue aims to collect high-quality contributions toward the development of new algorithms, applications, and interpretative models for the urban environment, in order to fill the gap between the impressive mass of available RS data and their effective usability by stakeholders. Therefore, this issue welcomes papers addressing (but not limited to) the following topics:

  • Long time series analyses;
  • Calibration and pre-processing in a big sensing perspective;
  • VHR image classification;
  • Change detection;
  • Multisensor data integration;
  • Photogrammetric 3D modeling;
  • Risk analyses at city scale;
  • Urban heat island effect;
  • Monitoring and surveillance.

Prof. Gabriele Bitelli
Dr. Emanuele Mandanici
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles as well as short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Remote Sensing is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2700 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • Urban environment
  • Smart cities
  • Big data
  • Big sensing
  • 3D city modeling

Published Papers (6 papers)


Editorial

Jump to: Research

2 pages, 177 KiB  
Editorial
2nd Edition of Instrumenting Smart City Applications with Big Sensing and Earth Observatory Data: Tools, Methods and Techniques
by Gabriele Bitelli and Emanuele Mandanici
Remote Sens. 2021, 13(7), 1310; https://0-doi-org.brum.beds.ac.uk/10.3390/rs13071310 - 30 Mar 2021
Cited by 2 | Viewed by 1236
Abstract
The exponential growth in the volume of Earth observation data and the increasing quality and availability of high-resolution imagery are making more and more applications possible in urban environments [...] Full article

Research

Jump to: Editorial

19 pages, 2505 KiB  
Article
VIIRS Nighttime Light Data for Income Estimation at Local Level
by Kinga Ivan, Iulian-Horia Holobâcă, József Benedek and Ibolya Török
Remote Sens. 2020, 12(18), 2950; https://0-doi-org.brum.beds.ac.uk/10.3390/rs12182950 - 11 Sep 2020
Cited by 16 | Viewed by 5277
Abstract
The aim of this paper is to develop a model for the real-time estimation of local-level income data by combining machine learning, Earth observation, and geographic information systems (GIS). More precisely, we estimated income per capita with the help of a machine learning model for 46 cities with more than 50,000 inhabitants, based on National Polar-orbiting Partnership–Visible Infrared Imaging Radiometer Suite (NPP-VIIRS) nighttime satellite images from 2012–2018. To automate the calculation, a new ModelBuilder-type tool called EO-Incity (Earth Observation–Income city) was developed within the ArcGIS software. The sum of lights (SOL) data extracted by the EO-Incity tool and the observed income data were integrated in an algorithm within the MATLAB software to calculate a transfer equation and the average error. The results were subsequently reintegrated in EO-Incity and used to estimate income values at the local level. The regression analyses highlighted a stable and strong relationship between SOL and income for the analyzed cities. The EO-Incity tool and the machine learning model proved to be efficient for the real-time estimation of income at the local level. When integrated in the information systems specific to smart cities, they can serve as a support for decision-making to fight poverty and reduce social inequalities. Full article
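The transfer-equation step described above amounts to fitting income against the sum of lights and reusing the fitted coefficients for estimation. A minimal sketch of that idea follows; the city values are invented for illustration (they are not the paper's data), and NumPy stands in for the authors' MATLAB workflow:

```python
import numpy as np

# Hypothetical SOL and observed per-capita income values for six cities
# (illustrative numbers only, not the paper's data).
sol = np.array([560.0, 900.0, 1200.0, 2100.0, 3400.0, 4800.0])
income = np.array([180.0, 260.0, 310.0, 500.0, 720.0, 980.0])

# Fit a linear transfer equation: income ~ a * SOL + b
a, b = np.polyfit(sol, income, deg=1)

def estimate_income(sol_value: float) -> float:
    """Estimate per-capita income from a city's sum of lights."""
    return a * sol_value + b

# Average error of the fit on the observed cities
mae = float(np.mean(np.abs(a * sol + b - income)))
```

Once fitted, the coefficients can be applied to SOL values extracted for any new year or city, which is what makes the near-real-time estimation possible.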

19 pages, 4592 KiB  
Article
US EPA EnviroAtlas Meter-Scale Urban Land Cover (MULC): 1-m Pixel Land Cover Class Definitions and Guidance
by Andrew Pilant, Keith Endres, Daniel Rosenbaum and Gillian Gundersen
Remote Sens. 2020, 12(12), 1909; https://0-doi-org.brum.beds.ac.uk/10.3390/rs12121909 - 12 Jun 2020
Cited by 14 | Viewed by 4141
Abstract
This article defines the land cover classes used in Meter-Scale Urban Land Cover (MULC), a unique, high-resolution (one square meter per pixel) land cover dataset developed for 30 US communities for the United States Environmental Protection Agency (US EPA) EnviroAtlas. MULC data categorize the landscape into the following land cover classes: impervious surface, tree, grass-herbaceous, shrub, soil-barren, water, wetland, and agriculture. MULC data are used to calculate approximately 100 EnviroAtlas metrics that serve as indicators of nature’s benefits (ecosystem goods and services). MULC, a dataset for which development is ongoing, is produced by multiple classification methods using aerial photo and LiDAR datasets. The mean overall fuzzy accuracy across the EnviroAtlas communities is 88% and the mean Kappa coefficient is 0.84. MULC is available in EnviroAtlas via web browser, via web map service (WMS) in the user’s geographic information system (GIS), and as downloadable data at the EPA Environmental Data Gateway. Fact sheets and metadata for each MULC community are available through EnviroAtlas. MULC applications include mapping green and grey infrastructure, connecting land cover with socioeconomic/demographic variables, street tree planting, urban heat island analysis, mosquito habitat risk mapping, and bikeway planning. This article provides practical guidance for using MULC effectively and for developing similar high-resolution (HR) land cover data. Full article
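Accuracy figures like the 0.84 mean Kappa quoted above are computed from a per-class confusion matrix comparing reference and predicted labels. A minimal sketch of the standard Cohen's kappa calculation (not the EnviroAtlas tooling itself):

```python
import numpy as np

def kappa(confusion: np.ndarray) -> float:
    """Cohen's kappa from a square confusion matrix
    (rows: reference labels, columns: predicted labels).
    1.0 is perfect agreement; 0 is chance-level agreement."""
    n = confusion.sum()
    p_observed = np.trace(confusion) / n
    # Chance agreement from the row/column marginal totals
    p_chance = (confusion.sum(axis=0) * confusion.sum(axis=1)).sum() / n**2
    return float((p_observed - p_chance) / (1 - p_chance))
```

For example, a perfectly diagonal confusion matrix yields a kappa of 1.0, while a matrix whose agreements match what the marginals alone predict yields 0.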

41 pages, 27941 KiB  
Article
Intensity Thresholding and Deep Learning Based Lane Marking Extraction and Lane Width Estimation from Mobile Light Detection and Ranging (LiDAR) Point Clouds
by Yi-Ting Cheng, Ankit Patel, Chenglu Wen, Darcy Bullock and Ayman Habib
Remote Sens. 2020, 12(9), 1379; https://0-doi-org.brum.beds.ac.uk/10.3390/rs12091379 - 27 Apr 2020
Cited by 34 | Viewed by 6247
Abstract
Lane markings are one of the essential elements of road information, useful for a wide range of transportation applications. Several studies have been conducted to extract lane markings through intensity thresholding of Light Detection and Ranging (LiDAR) point clouds acquired by mobile mapping systems (MMS). This paper proposes an intensity thresholding strategy using unsupervised intensity normalization and a deep learning strategy using automatically labeled training data for lane marking extraction. For comparative evaluation, the original intensity thresholding strategy and a deep learning strategy using manually established labels are also implemented. A pavement surface-based assessment of lane marking extraction by the four strategies is conducted in asphalt and concrete pavement areas covered by an MMS equipped with multiple LiDAR scanners. Additionally, the extracted lane markings are used for lane width estimation and for reporting lane marking gaps along various highways. The normalized intensity thresholding leads to better lane marking extraction, with an F1-score of 78.9%, compared to the original intensity thresholding, with an F1-score of 72.3%. On the other hand, the deep learning model trained with automatically generated labels achieves a higher F1-score (85.9%) than the one trained on manually established labels (75.1%). In the concrete pavement areas, the normalized intensity thresholding and both deep learning strategies obtain better lane marking extraction (i.e., lane markings along longer segments of the highway are extracted) than the original intensity thresholding approach. For the lane width results, more estimates are obtained with the two deep learning models than with the intensity thresholding strategies, especially in areas with poor edge lane marking, due to the higher recall rates of the former. The outcome of the proposed strategies is used to develop a framework for reporting lane marking gap regions, which can subsequently be visualized in RGB imagery to identify their cause. Full article
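The core intensity-thresholding idea can be sketched in a few lines: normalize the raw return intensities, then keep the brightest points, since painted markings are strongly retroreflective compared to bare pavement. The min-max normalization below is a crude stand-in for the paper's unsupervised, scanner-specific normalization, and the percentile cut is an assumed illustrative choice:

```python
import numpy as np

def extract_markings(intensity: np.ndarray, percentile: float = 95.0) -> np.ndarray:
    """Boolean mask of candidate lane-marking points.

    Raw LiDAR intensities are min-max normalized to [0, 1] and points at or
    above a high percentile of the normalized values are kept.
    """
    lo, hi = float(intensity.min()), float(intensity.max())
    if hi == lo:  # degenerate case: uniform intensity, no markings found
        return np.zeros(intensity.shape, dtype=bool)
    norm = (intensity - lo) / (hi - lo)
    return norm >= np.percentile(norm, percentile)
```

In practice the threshold would be set per scanner and per pavement type, which is precisely the calibration problem the paper's normalization strategy addresses.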

19 pages, 7449 KiB  
Article
Quantitative Landscape Assessment Using LiDAR and Rendered 360° Panoramic Images
by Rafał Wróżyński, Krzysztof Pyszny and Mariusz Sojka
Remote Sens. 2020, 12(3), 386; https://0-doi-org.brum.beds.ac.uk/10.3390/rs12030386 - 25 Jan 2020
Cited by 14 | Viewed by 4927
Abstract
The study presents a new method for quantitative landscape assessment. The method uses LiDAR data and combines the potential of GIS (ArcGIS) and 3D graphics software (Blender). The developed method allows one to create Classified Digital Surface Models (CDSM), which are then used to create 360° panoramic images from the point of view of the observer. In order to quantify the landscape, 360° panoramic images were transformed to the Interrupted Sinusoidal Projection using G.Projector software. A quantitative landscape assessment is carried out automatically with the following landscape classes: ground, low, medium, and high vegetation, buildings, water, and sky according to the LiDAR 1.2 standard. The results of the analysis are presented quantitatively—the percentage distribution of landscape classes in the 360° field of view. In order to fully describe the landscape around the observer, graphs of little planets have been proposed to interpret the obtained results. The usefulness of the developed methodology, together with examples of its application and the way of presenting the results, is described. The proposed Quantitative Landscape Assessment method (QLA360) allows quantitative landscape assessment to be performed in the 360° field of view without the need to carry out field surveys. The QLA360 uses LiDAR American Society of Photogrammetry and Remote Sensing (ASPRS) classification standards, which allows one to avoid differences resulting from the use of different algorithms for classifying images in semantic segmentation. The most important advantages of the method are as follows: observer-independent, 360° field of view which simulates human perspective, automatic operation, scalability, and easy presentation and interpretation of results. Full article
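The final quantification step reduces to counting pixels per class in the classified panorama; because the panoramas are first reprojected to the equal-area sinusoidal projection, simple pixel counting is area-faithful. A minimal sketch, with class codes loosely following the ASPRS LAS classification (the sky code 0 is an assumption for illustration):

```python
import numpy as np

# Illustrative class codes, loosely following ASPRS LAS conventions
CLASSES = {0: "sky", 2: "ground", 3: "low vegetation", 4: "medium vegetation",
           5: "high vegetation", 6: "building", 9: "water"}

def class_shares(panorama: np.ndarray) -> dict:
    """Percentage of each landscape class in a classified 360-degree panorama."""
    codes, counts = np.unique(panorama, return_counts=True)
    return {CLASSES.get(int(c), f"class {int(c)}"): 100.0 * int(n) / panorama.size
            for c, n in zip(codes, counts)}
```

The resulting percentages are exactly the kind of observer-centred statistics the QLA360 method reports for each viewpoint.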

20 pages, 6761 KiB  
Article
Maintaining Semantic Information across Generic 3D Model Editing Operations
by Sidan Yao, Xiao Ling, Fiona Nueesch, Gerhard Schrotter, Simon Schubiger, Zheng Fang, Long Ma and Zhen Tian
Remote Sens. 2020, 12(2), 335; https://0-doi-org.brum.beds.ac.uk/10.3390/rs12020335 - 20 Jan 2020
Cited by 8 | Viewed by 4079
Abstract
Many of today’s data models for 3D applications, such as City Geography Markup Language (CityGML) or Industry Foundation Classes (IFC), encode rich semantic information in addition to the traditional geometry and materials representation. However, 3D editing techniques fall short of maintaining the semantic information across edit operations if they are not tailored to a specific data model. While semantic information is often lost during edit operations, geometry, UV mappings, and materials are usually maintained. This article presents a data model synchronization method that preserves semantic information across editing operations, relying only on geometry, UV mappings, and materials. This enables easy integration of existing and future 3D editing techniques with rich data models. The method links the original data model to the edited geometry using point set registration, recovers the existing information based on spatial and UV search methods, and automatically labels the newly created geometry. An implementation of a Level of Detail 3 (LoD3) building editor for the Virtual Singapore project, based on interactive push-pull and procedural generation of façades, verified the method with 30 common editing tasks. The implementation synchronized changes in the 3D geometry with a CityGML data model and was applied to more than 100 test buildings. Full article
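The recovery step can be pictured as a spatial lookup: each vertex of the edited geometry inherits the semantic label of the closest original vertex, and vertices with no nearby match are flagged as newly created geometry. The brute-force nearest-neighbor sketch below is a simplified stand-in for the article's point set registration plus spatial/UV search, and the CityGML-style surface names are illustrative:

```python
import numpy as np

def transfer_labels(orig_pts, orig_labels, edited_pts, max_dist=0.5):
    """Carry semantic labels from original-model vertices to edited geometry
    by nearest-neighbor spatial search. Vertices with no original point
    within max_dist are treated as new geometry and labeled None."""
    orig = np.asarray(orig_pts, dtype=float)
    out = []
    for p in np.asarray(edited_pts, dtype=float):
        d = np.linalg.norm(orig - p, axis=1)  # distances to all original vertices
        i = int(d.argmin())
        out.append(orig_labels[i] if d[i] <= max_dist else None)
    return out
```

A production version would use a spatial index (e.g. a k-d tree) and fall back to UV-space matching where geometry alone is ambiguous, which is the role of the article's combined spatial and UV search.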
