
Remote Sensing Based Building Extraction II

A special issue of Remote Sensing (ISSN 2072-4292). This special issue belongs to the section "Urban Remote Sensing".

Deadline for manuscript submissions: closed (30 June 2022) | Viewed by 44054

Printed Edition Available!
A printed edition of this Special Issue is available here.

Special Issue Editors


Guest Editor
Remote Sensing Technology Institute, German Aerospace Center (DLR), Muenchener Strasse 20, 82234 Wessling, Germany
Interests: forest remote sensing; building extraction; 2D/3D change detection; data fusion; time-series image analysis; semantic 3D point cloud segmentation; computer vision; 3D reconstruction

Guest Editor
Chinese Academy of Surveying and Mapping, No.28 Lianhuachi west road, Haidian District, Beijing 100830, China
Interests: photogrammetry; remote sensing information extraction; natural resource monitoring; building extraction; data fusion

Guest Editor
Institute for Integrated and Intelligent Systems, Griffith University, Nathan, QLD 4111, Australia
Interests: deep learning; remote sensing image processing; point cloud processing; change detection; object recognition; object modelling; remote sensing data registration; remote sensing of environment

Guest Editor
Independent Researcher, Overijssel, The Netherlands
Interests: remote sensing; computer vision; AI/XAI; urban region monitoring; climate adaptation

Guest Editor
Department of Space Science and Technologies, Faculty of Science, Akdeniz University, Antalya, Turkey
Interests: LiDAR; RADAR/SAR; building detection; 3D reconstruction; image analysis; point cloud processing; machine learning; earth observation; capacity building

Special Issue Information

Dear Colleagues,

Building extraction from remote sensing data plays an important role in urban planning, disaster management, navigation, the updating of geographic databases, and several other geospatial applications. The rapid development of image processing techniques and the easy availability of very-high-resolution multispectral, hyperspectral, LiDAR, and SAR remote sensing images have further boosted research on building-extraction-related topics. In particular, in recent years, many research institutes and associations have provided open-source datasets and annotated training data to meet the demands of advanced artificial intelligence models, bringing new opportunities to develop advanced approaches for building extraction and monitoring.

Hence, expectations regarding the efficiency, accuracy, and robustness of building extraction approaches are higher than ever. These approaches should also meet the demands of processing large datasets at city, national, and global scales. Moreover, challenges remain in transfer learning and in dealing with imperfect training data, as well as with unexpected objects in urban scenes such as trees, clouds, and shadows. Beyond building masks, a growing body of research aims to automatically generate LoD2/LoD3 building models from remote sensing data.

The previous Special Issue, ‘Remote Sensing based Building Extraction’, was a great success. This follow-up Special Issue aims to investigate cutting-edge methodology and applications related to one or more of the following topics:

  • Advanced AI models for building detection and extraction;
  • Semantic remote sensing image segmentation;
  • 2D/3D change detection;
  • Disaster monitoring;
  • Rooftop modelling from remotely sensed data;
  • 3D point cloud segmentation;
  • Building boundary extraction and vectorization;
  • Large scale urban growth monitoring;
  • Weakly supervised urban classification;
  • Time-series remote sensing data analysis;
  • Urban object (vehicle, road, etc.) detection;
  • Multi-sensor, multi-resolution, and multi-modality data fusion;
  • Climate adaptation of smart cities;
  • Sustainable development.

Dr. Jiaojiao Tian
Prof. Dr. Qin Yan
Dr. Mohammad Awrangjeb
Dr. Beril Kallfelz-Sirmacek
Dr. Nusret Demir
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Remote Sensing is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2700 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • building extraction
  • 3D urban modelling
  • urban classification
  • roof reconstruction
  • building change detection
  • large scale surveillance
  • disaster monitoring
  • LiDAR
  • optical stereo imagery
  • hyperspectral data
  • SAR
  • data fusion
  • time-series monitoring
  • climate adaptation
  • sustainable development

Published Papers (13 papers)


Editorial


4 pages, 195 KiB  
Editorial
Editorial for Special Issue: “Remote Sensing Based Building Extraction II”
by Jiaojiao Tian, Qin Yan, Mohammad Awrangjeb, Beril Kallfelz (Sirmacek) and Nusret Demir
Remote Sens. 2023, 15(4), 998; https://0-doi-org.brum.beds.ac.uk/10.3390/rs15040998 - 10 Feb 2023
Cited by 1 | Viewed by 1178
Abstract
Accurate building extraction from remotely sensed images is essential for topographic mapping, urban planning, disaster management, navigation, and many other applications [...] Full article
(This article belongs to the Special Issue Remote Sensing Based Building Extraction II)

Research


20 pages, 15251 KiB  
Article
Research on Self-Supervised Building Information Extraction with High-Resolution Remote Sensing Images for Photovoltaic Potential Evaluation
by De-Yue Chen, Ling Peng, Wen-Yue Zhang, Yin-Da Wang and Li-Na Yang
Remote Sens. 2022, 14(21), 5350; https://0-doi-org.brum.beds.ac.uk/10.3390/rs14215350 - 25 Oct 2022
Cited by 4 | Viewed by 3067
Abstract
With the rapid development of the energy industry and the growth of the global energy demand in recent years, the development of the photovoltaic industry has become increasingly significant. However, the development of the PV industry is constrained by high land costs, and land in central cities and industrial areas is often very expensive and unsuitable for the installation of PV equipment in large areas. With this background knowledge, the key to evaluating the PV potential is by counting the rooftop information of buildings, and an ideal solution for extracting building rooftop information is from remote sensing satellite images using the deep learning method; however, the deep learning method often requires large-scale labeled samples, and the labeling of remote sensing images is often time-consuming and expensive. To reduce the burden of data labeling, models trained on large datasets can be used as pre-trained models (e.g., ImageNet) to provide prior knowledge for training. However, most of the existing pre-trained model parameters are not suitable for direct transfer to remote sensing tasks. In this paper, we design a pseudo-label-guided self-supervised learning (PGSSL) semantic segmentation network structure based on high-resolution remote sensing images to extract building information. The pseudo-label-guided learning method allows the feature results extracted by the pretext task to be more applicable to the target task and ultimately improves segmentation accuracy. Our proposed method achieves better results than current contrastive learning methods in most experiments and uses only about 20–50% of the labeled data to achieve comparable performance with random initialization. In addition, a more accurate statistical method for building density distribution is designed based on the semantic segmentation results. 
This method addresses the last step of the extraction results oriented to the PV potential assessment, and this paper is validated in Beijing, China, to demonstrate the effectiveness of the proposed method. Full article

19 pages, 28930 KiB  
Article
Combining Deep Semantic Edge and Object Segmentation for Large-Scale Roof-Part Polygon Extraction from Ultrahigh-Resolution Aerial Imagery
by Wouter A. J. Van den Broeck and Toon Goedemé
Remote Sens. 2022, 14(19), 4722; https://0-doi-org.brum.beds.ac.uk/10.3390/rs14194722 - 21 Sep 2022
Cited by 3 | Viewed by 1807
Abstract
The roofscape plays a vital role in the support of sustainable urban planning and development. However, availability of detailed and up-to-date information on the level of individual roof-part topology remains a bottleneck for reliable assessment of its present status and future potential. Motivated by the need for automation, the current state-of-the-art focuses on applying deep learning techniques for roof-plane segmentation from light-detection-and-ranging (LiDAR) point clouds, but fails to deliver on criteria such as scalability, spatial predictive continuity, and vectorization for use in geographic information systems (GISs). Therefore, this paper proposes a fully automated end-to-end workflow capable of extracting large-scale continuous polygon maps of roof-part instances from ultra-high-resolution (UHR) aerial imagery. In summary, the workflow consists of three main steps: (1) use a multitask fully convolutional network (FCN) to infer semantic roof-part edges and objects, (2) extract distinct closed shapes given the edges and objects, and (3) vectorize to obtain roof-part polygons. The methodology is trained and tested on a challenging dataset comprising of UHR aerial RGB orthoimagery (0.03 m GSD) and LiDAR-derived digital elevation models (DEMs) (0.25 m GSD) of three Belgian urban areas (including the famous touristic city of Bruges). We argue that UHR optical imagery may provide a competing alternative for this task over classically used LiDAR data, and investigate the added value of combining these two data sources. Further, we conduct an ablation study to optimize various components of the workflow, reaching a final panoptic quality of 54.8% (segmentation quality = 87.7%, recognition quality = 62.6%). In combination with human validation, our methodology can provide automated support for the efficient and detailed mapping of roofscapes. Full article

18 pages, 28090 KiB  
Article
City3D: Large-Scale Building Reconstruction from Airborne LiDAR Point Clouds
by Jin Huang, Jantien Stoter, Ravi Peters and Liangliang Nan
Remote Sens. 2022, 14(9), 2254; https://0-doi-org.brum.beds.ac.uk/10.3390/rs14092254 - 07 May 2022
Cited by 35 | Viewed by 6542
Abstract
We present a fully automatic approach for reconstructing compact 3D building models from large-scale airborne point clouds. A major challenge of urban reconstruction from airborne LiDAR point clouds lies in that the vertical walls are typically missing. Based on the observation that urban buildings typically consist of planar roofs connected with vertical walls to the ground, we propose an approach to infer the vertical walls directly from the data. With the planar segments of both roofs and walls, we hypothesize the faces of the building surface, and the final model is obtained by using an extended hypothesis-and-selection-based polygonal surface reconstruction framework. Specifically, we introduce a new energy term to encourage roof preferences and two additional hard constraints into the optimization step to ensure correct topology and enhance detail recovery. Experiments on various large-scale airborne LiDAR point clouds have demonstrated that the method is superior to the state-of-the-art methods in terms of reconstruction accuracy and robustness. In addition, we have generated a new dataset with our method consisting of the point clouds and 3D models of 20k real-world buildings. We believe this dataset can stimulate research in urban reconstruction from airborne LiDAR point clouds and the use of 3D city models in urban applications. Full article

24 pages, 4907 KiB  
Article
GA-Net-Pyramid: An Efficient End-to-End Network for Dense Matching
by Yuanxin Xia, Pablo d’Angelo, Friedrich Fraundorfer, Jiaojiao Tian, Mario Fuentes Reyes and Peter Reinartz
Remote Sens. 2022, 14(8), 1942; https://0-doi-org.brum.beds.ac.uk/10.3390/rs14081942 - 17 Apr 2022
Cited by 1 | Viewed by 2295
Abstract
Dense matching plays a crucial role in computer vision and remote sensing, to rapidly provide stereo products using inexpensive hardware. Along with the development of deep learning, the Guided Aggregation Network (GA-Net) achieves state-of-the-art performance via the proposed Semi-Global Guided Aggregation layers and reduces the use of costly 3D convolutional layers. To solve the problem of GA-Net requiring large GPU memory consumption, we design a pyramid architecture to modify the model. Starting from a downsampled stereo input, the disparity is estimated and continuously refined through the pyramid levels. Thus, the disparity search is only applied for a small size of stereo pair and then confined within a short residual range for minor correction, leading to highly reduced memory usage and runtime. Tests on close-range, aerial, and satellite data demonstrate that the proposed algorithm achieves significantly higher efficiency (around eight times faster consuming only 20–40% GPU memory) and comparable results with GA-Net on remote sensing data. Thanks to this coarse-to-fine estimation, we successfully process remote sensing datasets with very large disparity ranges, which could not be processed with GA-Net due to GPU memory limitations. Full article

24 pages, 11219 KiB  
Article
B-FGC-Net: A Building Extraction Network from High Resolution Remote Sensing Imagery
by Yong Wang, Xiangqiang Zeng, Xiaohan Liao and Dafang Zhuang
Remote Sens. 2022, 14(2), 269; https://0-doi-org.brum.beds.ac.uk/10.3390/rs14020269 - 07 Jan 2022
Cited by 29 | Viewed by 3450
Abstract
Deep learning (DL) shows remarkable performance in extracting buildings from high resolution remote sensing images. However, how to improve the performance of DL based methods, especially the perception of spatial information, is worth further study. For this purpose, we proposed a building extraction network with feature highlighting, global awareness, and cross level information fusion (B-FGC-Net). The residual learning and spatial attention unit are introduced in the encoder of the B-FGC-Net, which simplifies the training of deep convolutional neural networks and highlights the spatial information representation of features. The global feature information awareness module is added to capture multiscale contextual information and integrate the global semantic information. The cross level feature recalibration module is used to bridge the semantic gap between low and high level features to complete the effective fusion of cross level information. The performance of the proposed method was tested on two public building datasets and compared with classical methods, such as UNet, LinkNet, and SegNet. Experimental results demonstrate that B-FGC-Net exhibits improved profitability of accurate extraction and information integration for both small and large scale buildings. The IoU scores of B-FGC-Net on WHU and INRIA Building datasets are 90.04% and 79.31%, respectively. B-FGC-Net is an effective and recommended method for extracting buildings from high resolution remote sensing images. Full article

21 pages, 18893 KiB  
Article
Progress Guidance Representation for Robust Interactive Extraction of Buildings from Remotely Sensed Images
by Zhen Shu, Xiangyun Hu and Hengming Dai
Remote Sens. 2021, 13(24), 5111; https://0-doi-org.brum.beds.ac.uk/10.3390/rs13245111 - 16 Dec 2021
Cited by 2 | Viewed by 1813
Abstract
Accurate building extraction from remotely sensed images is essential for topographic mapping, cadastral surveying and many other applications. Fully automatic segmentation methods still remain a great challenge due to the poor generalization ability and the inaccurate segmentation results. In this work, we are committed to robust click-based interactive building extraction in remote sensing imagery. We argue that stability is vital to an interactive segmentation system, and we observe that the distance of the newly added click to the boundaries of the previous segmentation mask contains progress guidance information of the interactive segmentation process. To promote the robustness of the interactive segmentation, we exploit this information with the previous segmentation mask, positive and negative clicks to form a progress guidance map, and feed it to a convolutional neural network (CNN) with the original RGB image, we name the network as PGR-Net. In addition, an adaptive zoom-in strategy and an iterative training scheme are proposed to further promote the stability of PGR-Net. Compared with the latest methods FCA and f-BRS, the proposed PGR-Net basically requires 1–2 fewer clicks to achieve the same segmentation results. Comprehensive experiments have demonstrated that the PGR-Net outperforms related state-of-the-art methods on five natural image datasets and three building datasets of remote sensing images. Full article

17 pages, 8715 KiB  
Article
Parameter-Free Half-Spaces Based 3D Building Reconstruction Using Ground and Segmented Building Points from Airborne LiDAR Data with 2D Outlines
by Marko Bizjak, Borut Žalik and Niko Lukač
Remote Sens. 2021, 13(21), 4430; https://0-doi-org.brum.beds.ac.uk/10.3390/rs13214430 - 03 Nov 2021
Cited by 4 | Viewed by 1640
Abstract
This paper aims to automatically reconstruct 3D building models on a large scale using a new approach on the basis of half-spaces, while making no assumptions about the building layout and keeping the number of input parameters to a minimum. The proposed algorithm is performed in two stages. First, the airborne LiDAR data and buildings’ outlines are preprocessed to generate buildings’ base models and the corresponding half-spaces. In the second stage, the half-spaces are analysed and used for shaping the final 3D building model using 3D Boolean operations. In experiments, the proposed algorithm was applied on a large scale, and its performance was inspected on a city level and on a single building level. Accurate reconstruction of buildings with various layouts was demonstrated and limitations were identified for large-scale applications. Finally, the proposed algorithm was validated on an ISPRS benchmark dataset, where an RMSE of 1.31 m and completeness of 98.9% were obtained. Full article

15 pages, 8441 KiB  
Article
Attention Enhanced U-Net for Building Extraction from Farmland Based on Google and WorldView-2 Remote Sensing Images
by Chuangnong Li, Lin Fu, Qing Zhu, Jun Zhu, Zheng Fang, Yakun Xie, Yukun Guo and Yuhang Gong
Remote Sens. 2021, 13(21), 4411; https://0-doi-org.brum.beds.ac.uk/10.3390/rs13214411 - 02 Nov 2021
Cited by 21 | Viewed by 2642
Abstract
High-resolution remote sensing images contain abundant building information and provide an important data source for extracting buildings, which is of great significance to farmland preservation. However, the types of ground features in farmland are complex, the buildings are scattered and may be obscured by clouds or vegetation, leading to problems such as a low extraction accuracy in the existing methods. In response to the above problems, this paper proposes a method of attention-enhanced U-Net for building extraction from farmland, based on Google and WorldView-2 remote sensing images. First, a Resnet unit is adopted as the infrastructure of the U-Net network encoding part, then the spatial and channel attention mechanism module is introduced between the Resnet unit and the maximum pool and the multi-scale fusion module is added to improve the U-Net network. Second, the buildings found on WorldView-2 and Google images are extracted through farmland boundary constraints. Third, boundary optimization and fusion processing are carried out for the building extraction results on the WorldView-2 and Google images. Fourth, a case experiment is performed. The method in this paper is compared with semantic segmentation models, such as FCN8, U-Net, Attention_UNet, and DeepLabv3+. The experimental results indicate that this method attains a higher accuracy and better effect in terms of building extraction within farmland; the accuracy is 97.47%, the F1 score is 85.61%, the recall rate (Recall) is 93.02%, and the intersection of union (IoU) value is 74.85%. Hence, buildings within farming areas can be effectively extracted, which is conducive to the preservation of farmland. Full article

21 pages, 5851 KiB  
Article
Building Extraction from Airborne LiDAR Data Based on Multi-Constraints Graph Segmentation
by Zhenyang Hui, Zhuoxuan Li, Penggen Cheng, Yao Yevenyo Ziggah and JunLin Fan
Remote Sens. 2021, 13(18), 3766; https://0-doi-org.brum.beds.ac.uk/10.3390/rs13183766 - 20 Sep 2021
Cited by 9 | Viewed by 3527
Abstract
Building extraction from airborne Light Detection and Ranging (LiDAR) point clouds is a significant step in the process of digital urban construction. Although the existing building extraction methods perform well in simple urban environments, when encountering complicated city environments with irregular building shapes or varying building sizes, these methods cannot achieve satisfactory building extraction results. To address these challenges, a building extraction method from airborne LiDAR data based on multi-constraints graph segmentation was proposed in this paper. The proposed method mainly converted point-based building extraction into object-based building extraction through multi-constraints graph segmentation. The initial extracted building points were derived according to the spatial geometric features of different object primitives. Finally, a multi-scale progressive growth optimization method was proposed to recover some omitted building points and improve the completeness of building extraction. The proposed method was tested and validated using three datasets provided by the International Society for Photogrammetry and Remote Sensing (ISPRS). Experimental results show that the proposed method can achieve the best building extraction results. It was also found that no matter the average quality or the average F1 score, the proposed method outperformed ten other investigated building extraction methods. Full article

25 pages, 6048 KiB  
Article
A Deep Learning-Based Framework for Automated Extraction of Building Footprint Polygons from Very High-Resolution Aerial Imagery
by Ziming Li, Qinchuan Xin, Ying Sun and Mengying Cao
Remote Sens. 2021, 13(18), 3630; https://0-doi-org.brum.beds.ac.uk/10.3390/rs13183630 - 11 Sep 2021
Cited by 22 | Viewed by 6022
Abstract
Accurate building footprint polygons provide essential data for a wide range of urban applications. While deep learning models have been proposed to extract pixel-based building areas from remote sensing imagery, the direct vectorization of pixel-based building maps often leads to building footprint polygons with irregular shapes that are inconsistent with real building boundaries, making it difficult to use them in geospatial analysis. In this study, we proposed a novel deep learning-based framework for automated extraction of building footprint polygons (DLEBFP) from very high-resolution aerial imagery by combining deep learning models for different tasks. Our approach uses the U-Net, Cascade R-CNN, and Cascade CNN deep learning models to obtain building segmentation maps, building bounding boxes, and building corners, respectively, from very high-resolution remote sensing images. We used Delaunay triangulation to construct building footprint polygons based on the detected building corners with the constraints of building bounding boxes and building segmentation maps. Experiments on the Wuhan University building dataset and ISPRS Vaihingen dataset indicate that DLEBFP can perform well in extracting high-quality building footprint polygons. Compared with the other semantic segmentation models and the vector map generalization method, DLEBFP is able to achieve comparable mapping accuracies with semantic segmentation models on a pixel basis and generate building footprint polygons with concise edges and vertices with regular shapes that are close to the reference data. The promising performance indicates that our method has the potential to extract accurate building footprint polygons from remote sensing images for applications in geospatial analysis. Full article

21 pages, 85634 KiB  
Article
Precise Extraction of Buildings from High-Resolution Remote-Sensing Images Based on Semantic Edges and Segmentation
by Liegang Xia, Junxia Zhang, Xiongbo Zhang, Haiping Yang and Meixia Xu
Remote Sens. 2021, 13(16), 3083; https://0-doi-org.brum.beds.ac.uk/10.3390/rs13163083 - 05 Aug 2021
Cited by 12 | Viewed by 3078
Abstract
Building extraction is a basic task in the field of remote sensing, and it has also been a popular research topic in the past decade. However, the shape of the semantic polygon generated by semantic segmentation is irregular and does not match the actual building boundary. The boundary of buildings generated by semantic edge detection has difficulty ensuring continuity and integrity. Due to the aforementioned problems, we cannot directly apply the results in many drawing tasks and engineering applications. In this paper, we propose a novel convolutional neural network (CNN) model based on multitask learning, Dense D-LinkNet (DDLNet), which adopts full-scale skip connections and edge guidance module to ensure the effective combination of low-level information and high-level information. DDLNet has good adaptability to both semantic segmentation tasks and edge detection tasks. Moreover, we propose a universal postprocessing method that integrates semantic edges and semantic polygons. It can solve the aforementioned problems and more accurately locate buildings, especially building boundaries. The experimental results show that DDLNet achieves great improvements compared with other edge detection and semantic segmentation networks. Our postprocessing method is effective and universal. Full article

Review


29 pages, 440 KiB  
Review
Review on Active and Passive Remote Sensing Techniques for Road Extraction
by Jianxin Jia, Haibin Sun, Changhui Jiang, Kirsi Karila, Mika Karjalainen, Eero Ahokas, Ehsan Khoramshahi, Peilun Hu, Chen Chen, Tianru Xue, Tinghuai Wang, Yuwei Chen and Juha Hyyppä
Remote Sens. 2021, 13(21), 4235; https://0-doi-org.brum.beds.ac.uk/10.3390/rs13214235 - 21 Oct 2021
Cited by 18 | Viewed by 4988
Abstract
Digital maps of road networks are a vital part of digital cities and intelligent transportation. In this paper, we provide a comprehensive review on road extraction based on various remote sensing data sources, including high-resolution images, hyperspectral images, synthetic aperture radar images, and light detection and ranging. This review is divided into three parts. Part 1 provides an overview of the existing data acquisition techniques for road extraction, including data acquisition methods, typical sensors, application status, and prospects. Part 2 underlines the main road extraction methods based on four data sources. In this section, road extraction methods based on different data sources are described and analysed in detail. Part 3 presents the combined application of multisource data for road extraction. Evidently, different data acquisition techniques have unique advantages, and the combination of multiple sources can improve the accuracy of road extraction. The main aim of this review is to provide a comprehensive reference for research on existing road extraction technologies. Full article
