Review

Review on Active and Passive Remote Sensing Techniques for Road Extraction

1 Department of Remote Sensing and Photogrammetry, Finnish Geospatial Research Institute, 02430 Kirkkonummi, Finland
2 Key Laboratory of Intelligent Infrared Perception, Shanghai Institute of Technical Physics, Chinese Academy of Sciences, Shanghai 200083, China
3 Department of Forest Science, University of Helsinki, 00100 Helsinki, Finland
4 School of Automation, Nanjing University of Science and Technology, Nanjing 210094, China
5 Huawei Helsinki Research Centre, 00180 Helsinki, Finland
* Author to whom correspondence should be addressed.
Remote Sens. 2021, 13(21), 4235; https://doi.org/10.3390/rs13214235
Submission received: 1 September 2021 / Revised: 14 October 2021 / Accepted: 18 October 2021 / Published: 21 October 2021
(This article belongs to the Special Issue Remote Sensing Based Building Extraction II)

Abstract:
Digital maps of road networks are a vital part of digital cities and intelligent transportation. In this paper, we provide a comprehensive review of road extraction based on various remote sensing data sources, including high-resolution images, hyperspectral images, synthetic aperture radar images, and light detection and ranging. This review is divided into three parts. Part 1 provides an overview of the existing data acquisition techniques for road extraction, including data acquisition methods, typical sensors, application status, and prospects. Part 2 presents the main road extraction methods based on the four data sources; the methods based on each data source are described and analysed in detail. Part 3 presents the combined application of multisource data for road extraction. Evidently, different data acquisition techniques have unique advantages, and the combination of multiple sources can improve the accuracy of road extraction. The main aim of this review is to provide a comprehensive reference for research on existing road extraction technologies.

Graphical Abstract

1. Introduction

Digital mapping of road networks is necessary for various industrial applications such as land use and land cover mapping [1], geographic information system updates [2,3] and natural disaster warning [4]. Moreover, it is a critical requirement for digital cities and intelligent transportation [5]. Traditional cartographic techniques are time-consuming and labour-intensive [6,7]. In comparison, remote sensing techniques have fundamentally changed the mapping community by removing the need to rely entirely on surveyed ground measurements [6]. Remote sensing data used for road extraction include ground moving target indicator (GMTI) tracking, smartphone global positioning system (GPS) data, street view images, synthetic aperture radar (SAR) images, light detection and ranging (LiDAR) data, high-resolution images and hyperspectral images. GMTI radar has been used for extracting road map information owing to its all-weather, real-time and wide-area capabilities [8,9]. Smartphone GPS data have been used for extracting road centrelines and monitoring road and traffic conditions [10,11]. Street view images, collected by companies such as Google (USA) and Baidu (China), have been used to detect, classify and map traffic signs and to extract road crack information [12,13]. In this paper, based on the different data sources, we roughly divide the existing road extraction technology into four categories: high-resolution imaging-based, hyperspectral imaging-based, SAR imaging-based and LiDAR-based methods. SAR and LiDAR are active information acquisition methods, whereas high-resolution and hyperspectral imaging are passive optical imaging approaches. Each road extraction method, based on a different data source, has unique characteristics. For instance, high-resolution imaging can be used to obtain images with centimetre-level accuracy and detailed target information [14]; hyperspectral remote sensing images are used for conventional road extraction and also demonstrate excellent potential for road condition detection owing to their large number of bands (generally more than 100) and continuous spectrum coverage [15]; SAR and LiDAR datasets are not easily affected by environmental factors such as changes in illumination conditions or weather [16].
Road extraction is a popular research topic and has attracted the interest of numerous researchers. More than 2,770,000 results can be found in Google Scholar for the keyword ‘road extraction’, including several state-of-the-art reviews published in recent years. Wang et al. [17] summarised the main road extraction methods from 1984 to 2014 based on high-resolution images. In that study, road information extraction methods were divided into knowledge-based [18,19], classification-based [20,21,22,23,24,25,26], active contour-based [27,28,29], mathematical morphology-based [30,31] and dynamic programming-based grouping methods [32,33]. However, these methods were chiefly heuristic, and deep learning-based approaches were not presented [34]. This scenario has changed drastically in recent years with the rapid development of patch-based convolutional neural network (CNN) [35,36,37,38,39,40], fully convolutional network (FCN)-based [41,42,43,44,45,46], deconvolutional net-based [47,48,49,50,51,52,53,54,55,56,57,58,59,60,61,62,63,64], generative adversarial network (GAN)-based [41,65,66] and graph-based deep learning methods [67,68,69,70] for road extraction. Road extraction methods based on deep learning are collectively referred to as data-driven approaches in [34,71]. Abdollahi et al. [34] and Lian et al. [71] presented and compared the state-of-the-art deep learning-based road extraction methods using publicly available high-resolution image datasets. Sun et al. [72] reviewed SAR image-based road extraction methods; their review first introduces road characteristics and basic strategies and then summarises the main road extraction techniques based on SAR images. Similarly, Sun et al. [73] analysed and summarised SAR image-based road segmentation methods. They introduced the traditional edge detection and deep learning-based road segmentation methods and predicted new deep neural network segmentation methods based on the self-attention mechanism and the capsule paradigm to be future development trends. Wang and Weng [74] summarised road extraction techniques based on LiDAR. In that report, road clusters were defined using the classification framework and algorithms for LiDAR point data-based road identification; furthermore, techniques for generating road networks, including road classification refinement and centreline extraction, were summarised. Several other similar reviews [75,76,77] providing scientific references for road extraction have been reported in recent years.
To the best of our understanding, the existing reviews on road extraction methods are commonly based on a single data source; hence, they fail to provide the comprehensive view that can be derived from different data sources. However, a comprehensive road extraction review covering high-resolution imaging, hyperspectral imaging, SAR imaging and LiDAR technologies is crucial to bridge the gap between potential applications and available technologies of road extraction. Thus, this study combines the road extraction techniques based on diverse data sources, including high-resolution images, hyperspectral images, SAR images and LiDAR data. In Section 2, we provide an overview of the four techniques and summarise the typically used sensors. In Section 3, we introduce and analyse the main road extraction methods using various data sources and summarise the road extraction status and prospects for each data source. Finally, different combinations of the road extraction techniques are presented in Section 4. To the best of our knowledge, this paper presents the first comprehensive review of road extraction covering high-resolution images, hyperspectral images, SAR images and LiDAR data sources.

2. Overview of the Existing Data Acquisition Techniques for Road Extraction

2.1. High-Resolution Imaging Technology

In this review, high-resolution images refer to high-spatial-resolution images (resolution finer than 10 m) that are mainly acquired using airborne or spaceborne sensors. The spatial resolution of an image refers to the ground distance covered by a single pixel. High-resolution images are usually divided into two categories: panchromatic and multispectral images [78].

2.1.1. Data Acquisition Methods and Characteristics

High-resolution images are primarily recorded using spaceborne and airborne sensors. Spaceborne high-resolution imaging techniques have broad area coverage and stable revisit periods; however, the cost of a satellite is high, and the images are easily affected by the atmosphere [79]. Compared to spaceborne high-resolution images, airborne high-resolution images possess higher resolutions and are less affected by the atmosphere. However, the working efficiency of airborne cameras is lower than that of spaceborne instruments because of the lower flight altitude and smaller coverage [80]. Airborne high-resolution images can be obtained using manned aircraft and unmanned aerial vehicles (UAVs). In recent years, several imaging systems with high-resolution cameras mounted on UAVs have been rapidly developed; these systems can achieve centimetre-level spatial resolution [81].

2.1.2. Typical Sensors

Spaceborne high-resolution imaging is still the main technical approach for earth observation. As shown in Table 1, an increasing number of high-resolution satellites have been developed; some of these satellites have been developed in series and are constantly being upgraded [82,83,84,85,86]. It can be seen from Table 1 that most high-resolution satellites were developed and launched by the USA, while the number of high-resolution satellites in China has increased in recent years. With the development of related technologies, the performance of spaceborne cameras continues to improve, and images with a spatial resolution better than 1 m can now be obtained.

2.1.3. Application Status and Prospects

High-resolution images usually contain feature-rich information such as spectral characteristics, geometric features, and texture features, and hence, a significant amount of useful information can be extracted from such images. High-resolution imaging has been widely used in forest management [108], urban mapping [109], farmland management [110], disaster and security mapping, public information service and environmental monitoring. Numerous high-resolution satellites have been developed and launched in recent years. These satellites can form satellite networks to obtain image data with a wide coverage. However, the huge amounts of data also bring new challenges to data transmission and processing [80]. Such high-resolution images have been extensively used for road extraction [34,71]. Moreover, several commercial products such as Google Maps based on high-resolution images have been successfully developed and applied in many fields in recent years.

2.2. Hyperspectral Imaging Technology

Hyperspectral imaging technology is another commonly used technique for obtaining the spectra of a target [111,112]. A hyperspectral image contains two-dimensional (2D) spatial information and one-dimensional (1D) spectral information, forming a three-dimensional (3D) data cube [113]. Notably, different objects exhibit different spectra, which can be used for the identification and detection of such objects. In the 1980s, Goetz et al. [114] began a revolution in remote sensing by developing the airborne visible infrared imaging spectrometer (AVIRIS) [113], which initiated the development of hyperspectral imagers. The number of bands in a multispectral image is usually less than five, whereas a hyperspectral image has more than 100 bands and provides continuous spectral information.
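As a minimal illustration of this cube structure (in Python, with purely illustrative dimensions; AVIRIS, for example, records 224 bands), a pixel spectrum and a single-band image are simply slices along different axes of one 3D array:

```python
import numpy as np

# Illustrative cube: 512 x 512 pixels, 224 bands (AVIRIS-like)
rows, cols, bands = 512, 512, 224
cube = np.zeros((rows, cols, bands), dtype=np.float32)

pixel_spectrum = cube[100, 200, :]   # 1D spectrum of one ground pixel, shape (224,)
band_image = cube[:, :, 30]          # 2D image of one spectral band, shape (512, 512)
```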

2.2.1. Data Acquisition Methods and Characteristics

Hyperspectral images are obtained by an imaging spectrometer, which is a complex and sophisticated optical system comprising several subsystems and components. The main components of the sensor are a scan mirror, fore-optics, spectrometers, detectors, onboard calibrators and electronic units. The fore-optics of the system receive light, which is dispersed by a spectrometer and converted from photons to electrons by the detector to yield an electronic signal. This electronic signal is then amplified, digitised and recorded by the electronic unit. Instrument performance and the quality of data preprocessing are the main factors determining the accuracy of the acquired surface reflectance data. Such an instrument is characterised by its field-of-view (FOV), spectral range, spatial and spectral resolution and sensitivity. Data preprocessing includes geometric rectification, calibration and atmospheric correction [80].
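The calibration step is typically a linear mapping from the recorded digital numbers (DN) to at-sensor radiance. A minimal sketch, assuming hypothetical per-band gain and offset coefficients obtained from laboratory calibration:

```python
import numpy as np

def dn_to_radiance(dn, gain, offset):
    """Linear radiometric calibration: at-sensor radiance = gain * DN + offset,
    with per-band gain/offset coefficients from laboratory calibration."""
    return gain * dn + offset

# Hypothetical coefficients for a single band:
dn = np.array([[512, 640], [700, 480]], dtype=np.float32)
radiance = dn_to_radiance(dn, gain=0.05, offset=1.2)   # e.g., W m-2 sr-1 um-1
```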

2.2.2. Typical Sensors

According to the installed platforms, hyperspectral sensors can be divided into spaceborne, airborne, UAV [115,116,117], car-borne [118] and ground-based [119] imaging systems. Airborne hyperspectral imaging systems were the first hyperspectral imagers to be developed and were used for verifying the design of later spaceborne instruments. Jia et al. [80] presented a comprehensive review of airborne hyperspectral imagers, including key design technologies, preprocessing and new applications. Building on that review, we added spaceborne sensors in this paper and summarised the typical hyperspectral imagers in Table 2. Evidently, most hyperspectral sensors, especially the spaceborne hyperspectral imagers, were developed in the USA. In addition, there are more airborne than spaceborne hyperspectral imagers because of the required hardware investment. Future spaceborne programmes include the HyspIRI hyperspectral satellite of the USA [120] and the EnMAP hyperspectral satellite of Germany [121]. Most hyperspectral imagers acquire data in the visible to near-infrared spectral range owing to the availability of silicon detectors with wide spectral detection ranges. Additionally, shortwave infrared and longwave infrared hyperspectral imagers have emerged in recent years [80].

2.2.3. Application Status and Prospects

Hyperspectral imaging—a quantitative remote sensing approach—has been widely applied in environmental monitoring, vegetation analysis, geologic mapping, atmospheric characterisation, biological detection, camouflage detection and disaster assessment [135,136,137,138,139,140,141]. However, the application requirements for hyperspectral sensors have changed significantly with the development of advanced sensors. First, wide spectral coverage is required to enhance the monitoring and detection capabilities of such systems in various applications. This can be achieved by combining multiple sensors with different wavelength detection ranges [142] or by using an integrated system with a wide spectral range [80]. Second, the system sensitivity and preprocessing accuracy are as important as the spatial and spectral resolutions. For example, the AVIRIS next generation system [143] has been applied to detect methane, owing to its high signal-to-noise ratio and high data preprocessing accuracy. Finally, this technology offers advantages in the characterisation and quantification of targets; for example, hyperspectral imagers have been used for road network extraction.

2.3. SAR Imaging Technology

2.3.1. Data Acquisition Methods and Characteristics

SAR is an active remote sensing technology that uses microwaves with wavelengths of a few centimetres, as opposed to LiDAR, which operates at optical wavelengths (ultraviolet, visible, near-infrared or shortwave infrared light). Both sensors measure the distance between the instrument and the target using the time delay of the echoes.
One of the main benefits of SAR is the acquisition of fine and detailed images through clouds; moreover, the sensor can even work at night. In SAR, short-pulsed microwave radiation is emitted and backscattered by the target, and the backscattered signal from the illuminated area is recorded. A SAR system with a long virtual antenna can generate fine spatial resolution. Among the currently available SAR systems, spaceborne SAR sensors can provide sub-metre spatial resolutions. Notably, the long antenna aperture is realised in the cross-range direction, and the range resolution is determined by the pulse width (in general, the bandwidth of the signal). Another particularity of SAR compared with optical sensors is the low correlation between range and spatial resolution.
SAR uses a side-looking imaging geometry to generate 2D image data of the target area. In target areas with uneven topography, the side-looking imaging geometry distorts the SAR images because of various factors such as foreshortening, layover and radar shadows [144]. These distortions are challenging for mapping applications, especially in urban areas with high-rise buildings.
SAR sensors collect data in the complex domain, and the acquired data can be converted into intensity and phase information. SAR data are usually presented as 2D intensity images that indicate the amount of backscattered signal. Since the backscattered signal is a combination of signals from multiple scatterers, the images exhibit granular noise called speckle. Sensors transmit and receive horizontally and vertically polarised signals and provide single, dual or quad polarisation data [145,146,147]. Phase information is used in SAR polarimetry and interferometry. Interferometric coherence, that is, the complex correlation coefficient between two SAR images, can provide information on changes in the target and can be used in target classification as well [148,149,150,151].
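Interferometric coherence can be estimated directly from two co-registered complex SAR images as the magnitude of their complex correlation coefficient over a local window; the sketch below is a straightforward implementation of this definition (the window size is illustrative):

```python
import numpy as np
from scipy.ndimage import uniform_filter

def coherence(s1, s2, win=5):
    """Interferometric coherence: magnitude of the complex correlation
    coefficient of two co-registered complex SAR images, estimated over
    a local win x win window."""
    def smooth(x):  # uniform_filter handles real arrays, so split re/im
        return uniform_filter(x.real, win) + 1j * uniform_filter(x.imag, win)
    num = np.abs(smooth(s1 * np.conj(s2)))
    den = np.sqrt(uniform_filter(np.abs(s1) ** 2, win) *
                  uniform_filter(np.abs(s2) ** 2, win))
    return num / np.maximum(den, 1e-10)   # values in [0, 1]
```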

2.3.2. Typical Sensors

A list of typical SAR satellite systems was recently presented in [152]. With the spotlight imaging mode, spatial resolutions of less than 1 m can be achieved (e.g., RCM, COSMO-SkyMed, TerraSAR-X, ICEYE); however, the area covered by the image is limited. In general, the swath width varies from 5 to 500 km, and for a wide swath, the resolution is of the order of tens of metres. Spaceborne systems use X-, C-, S-, L- or P-band sensors. X-band sensors provide the highest resolution; however, their penetration into the vegetation canopy is limited [153]. Furthermore, the polarimetric capabilities of satellites vary significantly; for example, ALOS-2, RADARSAT-2 and TerraSAR-X are fully polarimetric, providing HH, VV, VH and HV data (where HH is horizontal transmit, horizontal receive; VV is vertical transmit, vertical receive; VH is vertical transmit, horizontal receive; and HV is horizontal transmit, vertical receive); COSMO-SkyMed and Sentinel-1 provide dual polarimetric HH and VV data; and ICEYE provides single polarisation (VV) data. The incidence angle of most satellites can be adjusted between 10° and 60°.
The data recorded by Sentinel-1 of the European Copernicus system are openly and freely accessible. The Sentinel-1 data are similar to those of the earlier European Envisat SAR; however, data availability has increased because multiple satellites are used.
When satellites fly in a constellation, short revisit times are possible. The COSMO-SkyMed (four satellites) and SAR-Lupe (five satellites) constellations were designed to provide intelligence information. New commercial microsatellite constellations, such as ICEYE (10/18 satellites launched), Spacety (18/56 launched) and Capella XSAR (18/36 launched), can provide good temporal coverage. A particular constellation is also formed by TerraSAR-X and TanDEM-X, allowing single-pass interferometry; based on the data collected by this constellation, a global digital elevation model (DEM) has been developed [154].
Airborne SAR systems, which enable data acquisition at different wavelengths, higher resolutions and single-pass interferometry, are more versatile than spaceborne systems. Most of these airborne systems are used for research purposes; for instance, the German Aerospace Centre (DLR) operates an F-SAR [155] system on a Dornier 228 aircraft, providing fully polarimetric data in the X-, C-, S-, L- and P-bands (a maximum of four bands simultaneously); this system enables single-pass interferometry in the X- and S-bands. In addition, several UAV SAR sensors are available in the market; for example, SAR Aero offers 1.8 kg SAR sensors in the X- or L-band with 0.3 to 3 m resolution for a range of up to 10 km.

2.3.3. Application Status and Prospects

The principal benefit of SAR is its all-day and all-weather imaging capability, enabling rapid mapping [156]. Therefore, the main application areas are related to emergency and security-related services, where the timeliness and availability of data are critical; for example, SAR data are operationally used in sea-ice mapping, where the required information cannot be easily extracted using other remote sensing techniques, especially during cloudy winters. In addition, considerable scientific research has been conducted on agricultural monitoring, forest mapping and topographic mapping. Continuous environmental monitoring is possible using spaceborne SAR datasets. Moreover, some previously reported studies utilised SAR images for road extraction [72,157,158].

2.4. Airborne Laser Scanning (ALS)

2.4.1. Data Acquisition Methods and Characteristics

In airborne laser scanning (ALS), a LiDAR sensor is mounted on an aircraft, along with an inertial measurement unit and a global navigation satellite system (GNSS) receiver. The LiDAR sensor transmits narrow laser pulses towards the ground and generates a scanning pattern over the target area. ALS systems, typically based on an oscillating mirror and scanning patterns, receive the return signal, measure the time of signal travel and associate each return pulse with the GNSS time and scan angle at which the pulse was transmitted. The travel time can be converted to distance and then to height. The ALS technique can produce georeferenced 3D point clouds in the target area [75,159,160].
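The underlying geometric conversion is simple: half of the two-way travel time multiplied by the speed of light gives the range, which, combined with the sensor position and pulse direction derived from the GNSS/inertial data, yields a georeferenced point. A minimal sketch, ignoring atmospheric and calibration corrections:

```python
import numpy as np

C = 299_792_458.0  # speed of light in vacuum, m/s

def pulse_range(travel_time_s):
    """Convert the two-way pulse travel time to a one-way range (m)."""
    return 0.5 * C * travel_time_s

def ground_point(sensor_pos, pulse_dir, travel_time_s):
    """Georeference one return: sensor position (from GNSS) plus range
    times the unit pulse direction (from IMU attitude and scan angle)."""
    d = np.asarray(pulse_dir, dtype=float)
    return (np.asarray(sensor_pos, dtype=float)
            + pulse_range(travel_time_s) * d / np.linalg.norm(d))

# A return arriving ~6.67 microseconds after emission lies about 1 km away:
p = ground_point([0.0, 0.0, 1500.0], [0.0, 0.0, -1.0], 6.67e-6)
```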
The operational ALS systems are mostly based on single-wavelength, single-pulse, linear-mode LiDAR. The newly emerging multispectral ALS systems use a combination of LiDARs at different wavelengths. These sensors provide intensity data that can be used to derive colour images similar to optical imagery. The chief advantage of this technique is that the acquired data are independent of illumination conditions and free of shadows. Therefore, multispectral ALS systems have great potential for increasing the automation level in mapping. Geiger-mode LiDAR and single-photon LiDAR (SPL) are new ALS techniques that are sensitive to a single photon and can provide dense point clouds from higher flight altitudes, owing to their higher system sensitivity.

2.4.2. Typical Sensors

The biggest ALS manufacturers include Leica Geosystems (Switzerland), Teledyne Optech (Canada) and RIEGL (Austria). Examples of current systems are listed in Table 3. It can be seen from Table 3 that ALS can collect one to several million points per second, which secures the usability of the collected data for most surveying and mapping cases. Meanwhile, LiDAR systems can be used on both low-altitude platforms (UAVs and helicopters) and high-altitude ones (fixed-wing aircraft). Most ALS systems do not operate at eye-safe wavelengths: typical operating wavelengths are 532 nm (green), 1064 nm (near-infrared) and 1550 nm (shortwave infrared). The point density and accuracy depend on the flying height, with densities reaching a maximum of 60 points/m2; the accuracy depends on the range measurement accuracy combined with the attitude measurement accuracy. In addition, small sensors are available for UAVs.

2.4.3. Application Status and Prospects

ALS produces accurate 3D models of the target areas. The main application areas of ALS are in topographic mapping, particularly DEM production, city modelling and forestry. Even though the area covered by the ALS in a single scan is limited compared to that covered by the spaceborne sensors, nationwide datasets are available, especially for the Nordic countries. In addition, ALS is currently used to detect human activity in archaeology [167].

3. Road Extraction Based on Different Data Sources

High-resolution images, hyperspectral images, SAR images and LiDAR data are primarily used for road extraction. To date, various road extraction methods have been presented in previously reported studies, and the observed differences are due to the use of different data sources. In this section, we summarise the methods, application status and prospects of road extraction based on four data sources.

3.1. Road Extraction Based on High-Spatial Resolution Images

Extracting road information from high-resolution images requires characterisation of the road features, including radiation features, geometric features, topological features and texture features [71]. First, information on the various elements, such as features, textures and edges, is extracted from the image by analysing the road information. Then, the extracted image information is comprehensively analysed, selected, reorganised and combined with the road features. Finally, this information is fused with the structural relationships, models and road-related rules of the road elements to identify roads.

3.1.1. Main Methods

Numerous road extraction algorithms based on high-resolution images have been developed over the past few decades, making them difficult to classify. Traditional methods include automatic and semiautomatic extraction. In recent years, methods based on deep learning have emerged and attracted considerable attention owing to their high precision. In this paper, we summarise the heuristic and data-driven road extraction methods based on two state-of-the-art reviews [34,71]. A comparison between the different data-driven methods applied to the Massachusetts road dataset is shown in Table 4. It can be seen from Table 4 that most data-driven methods can obtain high precision (better than 0.8), although each method has its own advantages and disadvantages.
The heuristic extraction methods can be subdivided into automatic and semiautomatic methods according to the degree of automation facilitated by the method. The semiautomatic extraction algorithms require initial seeds, and the user needs to check the results frequently. In these methods, the seeds and directions should be provided manually, and the algorithms can recognise and roll back previous results. Several classic semiautomatic methods exist, such as the active contour model, dynamic programming, the geodesic path and template matching. The active contour model, proposed by Kass et al. [171] and also known as the balloon snakes or ribbon snakes model, can extract road information through contour deformations from the labelled lines or points. The geodesic path method produces a road-probability map from the extracted road edges. The dynamic programming method requires road parameters and mainly focuses on solving optimisation problems [172]. The template matching method extracts road features by constructing windows and then matching the extracted points. All these semiautomatic methods include automatic processes.
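To make the template matching idea concrete, the sketch below tracks a road from a manual seed by stepping along the current direction and keeping, among a few laterally shifted candidates, the window that best matches a road template under normalised cross-correlation. This is a simplified illustration (assuming an odd-sized square template), not any specific published tracker:

```python
import numpy as np

def ncc(a, b):
    """Normalised cross-correlation between two equally sized windows."""
    a = (a - a.mean()) / (a.std() + 1e-8)
    b = (b - b.mean()) / (b.std() + 1e-8)
    return float((a * b).mean())

def track_road(image, seed, direction, template, step=5, n_steps=50):
    """Follow a road from a manual seed point: at each step, move along
    the given direction and keep the laterally shifted candidate whose
    local window best matches the road template."""
    h = template.shape[0] // 2
    (r, c), (dr, dc) = seed, direction
    path = [seed]
    for _ in range(n_steps):
        best = None
        for off in (-2, -1, 0, 1, 2):          # small perpendicular search
            rr = int(round(r + step * dr - off * dc))
            cc = int(round(c + step * dc + off * dr))
            if h <= rr < image.shape[0] - h and h <= cc < image.shape[1] - h:
                score = ncc(image[rr - h:rr + h + 1, cc - h:cc + h + 1], template)
                if best is None or score > best[0]:
                    best = (score, rr, cc)
        if best is None:
            break
        _, r, c = best
        path.append((r, c))
    return path
```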
Automatic road extraction uses information such as road topology and context features to extract and recognise roads through methods such as pattern recognition, computer vision, artificial intelligence, and image understanding [173,174,175,176,177]. Even though there are no fully automatic algorithms for all types of high-resolution images, several methods [71] such as segmentation-based, edge analysis, map-based, swarm intelligence-based, object-based and multispectral segmentation methods are much more efficient than the semiautomatic methods. Methods based on segmentation can identify regions through numerous algorithms [71] such as support vector machines (SVMs), artificial neural networks, Bayesian classifiers, mean shifts, watershed algorithms, super pixel segmentation, Gaussian mixture models, graph-based segmentation and conditional random field (CRF) models, which are usually used in combination to improve the classification accuracy. The edge analysis method is realised via edge detection, which is suitable for extracting the main roads. The map-based methods, especially OpenStreetMap, focus on urban roads. Swarm intelligence-based methods such as ant colony optimisation, artificial bee colony and firefly algorithms use discretisation and networking to simulate real biological behaviours [178]. Object-based methods are used to classify objects through object segmentation and feature characterisation. Multispectral segmentation methods usually require the support of multispectral or hyperspectral images [2].
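As a minimal example of the segmentation-based family, a per-pixel SVM classifier can be sketched in a few lines; the feature image and training samples below are random stand-ins for real spectral and texture features:

```python
import numpy as np
from sklearn.svm import SVC

# Stand-ins: a (H, W, D) feature image (spectral bands, texture measures, ...)
# and labelled training samples; real features and labels replace these.
H, W, D = 100, 100, 4
features = np.random.rand(H, W, D)
X_train = np.random.rand(200, D)
y_train = np.random.randint(0, 2, 200)      # 1 = road, 0 = non-road

clf = SVC(kernel="rbf", C=1.0, gamma="scale").fit(X_train, y_train)
road_mask = clf.predict(features.reshape(-1, D)).reshape(H, W)
```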
With the application of deep learning techniques in road extraction, many data-driven methods such as patch-based CNN models, FCNs, deconvolution networks, GAN models and graph-based methods have been proposed [34,71]. The patch-based CNN method can exploit a large image and predict a small patch of labels through CNN models based on structured and refined CNNs; however, this method is time-consuming and computationally inefficient [34]. The FCN method predicts images by replacing connected layers with output labels and classifies images at the pixel level using the FCN-32 and U-shaped FCN models, thereby improving the road extraction accuracy [44,45]. FCN variants such as DenseNets, SegNet, DeepLab, RCFs, Y-net and the U-Net decoder [49,51,52,54] can extract hierarchical features from images, especially high-resolution images. The GAN method includes a generator, which segments images, and a discriminator, which distinguishes between forged and real images [179]. The graph-based method realises the vector representation of road maps through iterative road tracking and polygon detection strategies.
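A pixel-wise road segmentation network of the U-shaped FCN type can be sketched as follows (PyTorch); the network is deliberately tiny, with a single downsampling step and a single skip connection, unlike the much deeper published models:

```python
import torch
import torch.nn as nn

def block(c_in, c_out):
    return nn.Sequential(
        nn.Conv2d(c_in, c_out, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(c_out, c_out, 3, padding=1), nn.ReLU(inplace=True))

class TinyUNet(nn.Module):
    """Minimal U-shaped FCN for binary road segmentation: encoder,
    bottleneck, decoder with one skip connection, pixel-wise output."""
    def __init__(self):
        super().__init__()
        self.enc = block(3, 32)
        self.down = nn.MaxPool2d(2)
        self.mid = block(32, 64)
        self.up = nn.ConvTranspose2d(64, 32, 2, stride=2)
        self.dec = block(64, 32)
        self.head = nn.Conv2d(32, 1, 1)      # road-probability logits

    def forward(self, x):
        e = self.enc(x)
        m = self.mid(self.down(e))
        d = self.dec(torch.cat([self.up(m), e], dim=1))  # skip connection
        return self.head(d)

logits = TinyUNet()(torch.randn(1, 3, 256, 256))   # shape (1, 1, 256, 256)
```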

3.1.2. Status and Prospects

Each algorithm has its advantages and disadvantages. For example, Lian et al. [71] compared the heuristic and data-driven road extraction methods based on four public datasets. Their results indicated that data-driven methods could achieve more than 10% higher accuracy than heuristic methods. Similar conclusions were reported in [34]. However, data-driven methods have limitations such as the requirement of large training samples, long processing times and high-speed computing (most algorithms require graphics processing units). In addition, the training parameters used for one dataset may not achieve high accuracy when used on another dataset. In contrast, some heuristic methods require less time and can meet real-time demands in some applications.
Road extraction using the currently available high-spatial-resolution images still faces challenges. The key step in road extraction from high-resolution images is describing the road features. Most existing methods focus on describing the linear or narrow bright band of a road, which provides good detection results. However, with the improvement in image resolution, images contain both more noise interference (shadows, buildings and road obstructions) and more detailed road features; therefore, road objects can also be described more precisely [34,173]. Furthermore, road objects involve many complex phenomena such as occlusions or shadows, discontinuities, sharp bends and near-parallel boundaries with constant widths. Incorporating all these factors into a single model is almost impossible. Therefore, it is essential to establish a multimode approach to extract roads from images with high spatial resolutions.

3.2. Road Extraction Based on Hyperspectral Images

Multispectral remote sensing images have been used in road extraction because of their high spatial resolution and multiple spectral features. The commonly used data sources are satellite multispectral images, including those from QuickBird, IKONOS [180,181], the WorldView-2 satellite [182], the Landsat satellites [183] and the Gaofen 1 and Gaofen 2 satellites. In addition, because of their large number of bands (generally more than 100) and continuous spectral coverage, hyperspectral images are used for conventional road extraction and also show great potential for road condition detection, road material identification, road pothole detection and crack detection.

3.2.1. Main Methods

Most road extraction methods using multispectral images are based on high-spatial-resolution images, and these methods can also be divided into heuristic and data-driven methods. Similar to the road extraction from high-resolution images, the heuristic methods in this case can also be divided into semiautomatic and automatic categories; some applications of these heuristic methods are reported in [184,185,186,187,188,189]. However, few data-driven methods are exclusively used for road extraction based on multispectral images, owing to the lack of public datasets. There are fewer road extraction methods based on hyperspectral images than on high-resolution images, and some of them are included in hyperspectral classifications. In this paper, we introduce road extraction methods using hyperspectral images based on different platforms.
Spaceborne hyperspectral imagers can realise a larger swath and provide a more stable platform than airborne instruments; hence, they are suitable for large-area work. However, they are mainly used to identify and extract arterial roads such as highways because of their low spatial resolution. The Hyperion hyperspectral imager onboard the US EO-1 satellite is the currently available spaceborne hyperspectral sensor for road extraction. Sun [190] used Hyperion hyperspectral data to extract a road network from an image in three steps: road searching, road tracking and road connecting. In road searching, the spectral information in the image is used to find road features, and several different road feature extraction methods were qualitatively compared; however, no quantitative extraction accuracy results were provided in Sun's report.
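Spectral road searching of this kind usually matches each pixel spectrum against a reference road spectrum; one common choice for such matching is the spectral angle mapper, sketched below (the asphalt reference spectrum and the angle threshold are illustrative assumptions, not details of Sun's method):

```python
import numpy as np

def spectral_angle(cube, reference, eps=1e-10):
    """Spectral angle (radians) between every pixel spectrum of an
    (H, W, B) cube and a length-B reference spectrum; small angles
    indicate spectrally similar pixels."""
    dot = np.tensordot(cube, reference, axes=([2], [0]))
    norms = np.linalg.norm(cube, axis=2) * np.linalg.norm(reference)
    cos = np.clip(dot / np.maximum(norms, eps), -1.0, 1.0)
    return np.arccos(cos)

# Candidate road pixels: angle to an asphalt reference below a threshold
# (asphalt_spectrum and the 0.1 rad threshold are illustrative assumptions)
# road_candidates = spectral_angle(cube, asphalt_spectrum) < 0.1
```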
Airborne hyperspectral imagers can achieve higher spatial and spectral resolutions than the spaceborne platforms; however, their operating efficiency is lower than that of the spaceborne instruments. They are mainly used for the extraction of urban roads and the detection of road conditions. Airborne hyperspectral instruments currently used for road extraction mainly include AVIRIS, CASI, HyMap, HYDICE and AisaFENIX [122]. In 2001, Gardner et al. [191] used the AVIRIS hyperspectral dataset of Santa Barbara, USA, to map the different types of typical urban surface features through multi-endmember spectral analysis. A Q-tree filter was then used to distinguish between roofs and roads constructed using the same materials and exhibiting similar spectra. The visual results showed that the main roads in the image could be preliminarily extracted; however, shortcomings were observed in the extraction of roads blocked by vegetation and in the connectivity of the road network. Noronha et al. [192] used an urban AVIRIS hyperspectral dataset and a spectral database, based on the surface materials of the main urban features collected in the field, to extract road centrelines and observe road surface conditions. Furthermore, based on these results, the optimal parameters for designing a multispectral instrument to extract urban land-use types were proposed. The overall classification accuracy and kappa coefficient were 73.5% and 72.5%, respectively [192]. Based on the airborne hyperspectral image data of HYDICE and HyMap, Huang and Zhang [193] used an adaptive mean-shift method to accurately classify six major urban features in the image, including roads, houses and grass. The overall classification accuracy was above 97%, and the road classification accuracy was above 95%. Resende et al. [194] used CASI-1500 airborne hyperspectral data to study the extraction of asphalt roads in cities. The results, based on ISODATA unsupervised classification and maximum likelihood supervised classification, qualitatively showed that hyperspectral images could be used to extract the main asphalt roads in a city; however, the extraction accuracy was not reported. In 2012, Mohammadi [195] used HyMap airborne hyperspectral image data to study the classification of materials used in urban roads and the state of asphalt road conditions. He mainly distinguished asphalt, cement and gravel roads and, based on this result, reported three asphalt road conditions: good, medium and poor. However, the experimental results were limited by the spatial resolution of the dataset, and a large number of unclassified pixels were not analysed by the method. Therefore, further studies are required to improve the methods used for reducing the number of unclassified pixels.
Currently, some publicly available airborne hyperspectral datasets, such as the Pavia Centre and University area datasets and the Indian Pines dataset [196,197], are also used for road classification and recognition. In 2012, Liao et al. [196] proposed a directional morphology and semi-supervised feature extraction method to classify three hyperspectral datasets. The classification accuracy of the roads was the highest, reaching more than 97%; however, this accuracy was closely related to the number of training samples and extracted features. Miao et al. [197] studied the extraction of road centrelines from high-resolution images based on shape features and multivariate adaptive regression splines. This method combined shape features and spectral information to extract road segments from high-resolution images and then used multivariate adaptive regression spline functions to extract the road centrelines. The method was applied to the Pavia Centre hyperspectral dataset, and an accuracy of 99% was obtained for the road centreline extraction. This method was based on uniform surface properties; hence, it was suitable only for high-resolution images and not for low-resolution images. In addition, its main limitation was that the threshold must be determined manually.
In addition to spaceborne and airborne hyperspectral imagers, UAV hyperspectral imaging systems, which have gradually emerged in recent years, have garnered increasing attention owing to their low cost and high spatial resolutions. These systems are mainly used for road condition detection and road material identification in specific areas [15,198]. However, their operating efficiency is lower than that of the spaceborne and airborne platforms because of their low flight altitude. A summary of road extraction using hyperspectral images is shown in Table 5, which indicates that hyperspectral imaging systems on different platforms have different characteristics for road extraction owing to their different spatial resolutions. Spaceborne hyperspectral imagers are primarily used to extract main roads, while airborne hyperspectral imagers can be used for road quality assessment and road condition monitoring.

3.2.2. Status and Prospects

Only a few reports on extracting road information from spaceborne hyperspectral images are available. This is because the spatial resolution is insufficient to extract road information accurately (e.g., the spatial resolution of Hyperion and Gaofen-5 hyperspectral imagers is 30 m), especially for urban roads and narrow roads. In addition, spaceborne hyperspectral images are difficult to acquire, and public datasets for road extraction are not available. Airborne hyperspectral images are still the main data source for the study of road extraction, because of their higher spatial resolutions and lower costs compared to those of the spaceborne data.
Hyperspectral images have shown considerable potential for road condition detection, road material identification, road pothole detection and crack detection. However, the preprocessing accuracy of the hyperspectral images should be improved to promote their applications. Hyperspectral data with geometric correction and relative radiometric calibration can meet this requirement for qualitative applications such as road detection. However, for quantitative applications such as pavement material recognition, more steps are required, such as high-precision absolute radiometric calibration, spectral calibration and atmospheric correction.

3.3. Road Extraction Based on SAR Images

3.3.1. Main Methods

In general, roads in SAR images appear as dark linear features. However, the orientation of roads relative to the radar antenna influences the ability of SAR to identify roads. In traditional heuristic methods, road segments are often extracted using an edge/line detector. Then, a graph is generated from the segments and optimised, and the segments are connected to develop a coherent road network. Recently, data-driven deep learning-based semantic segmentation methods have been reported. Moreover, road junctions and bridges are important parts of road networks and have been the topic of some previous studies.
Road network extraction from SAR images has been reported in various studies [199,200,201]. In [200], two local line detectors were used, and the results were fused to find candidates for road segments. The road segments were connected using a Markov random field, and an active contour model (snake) was used for postprocessing. In [200], this method was applied to dense urban areas, and very-high-resolution data and different flight directions were combined to improve the results. In [201], constant false alarm rate detection, morphological filtering, segmentation and Hough transformation were integrated to recognise roads in high-resolution polarimetric airborne SAR images. The strong backscattering from fences was used to detect bridges, and the roads were then recognised using a Hough transformation.
The fusion of different SAR datasets and algorithms in early studies improved road extraction accuracy. In [202], different preprocessing algorithms, road extractors and different images of the same area were fused, and a multiscale approach was used for road extraction. For SAR, a combination of different view angles was also proposed. Lisini [203] extracted road networks by combining the results of a line extractor with two classification results. Hedman [204] detected rural and urban areas and then fused a road extractor designed for rural areas with one designed for urban areas.
The latest reported studies used new high-resolution data, usually from the TerraSAR-X or GaoFen-3 satellites. In [205], multiscale geometric analysis was performed on vectorised detector responses for road network grouping using two TerraSAR-X datasets. In [206], a method suitable for analysing SAR images of different resolutions was proposed. A weighted ratio line detector was developed to extract the road ratio and direction information. The road network was constructed using a region-growing method and tested using four SAR datasets obtained from different study areas. Saati [207] extracted road networks based on a network snake model and three TerraSAR-X images. Xu [208] introduced an algorithm in which road segment extraction and network optimisation were performed simultaneously using a Bayesian framework, multiscale feature extractor and CRF; for this analysis, Xu used the TerraSAR-X and airborne SAR data. Xiong [209] proposed a method based on vector Radon transformation, and promising results were presented for six SAR images of different resolutions, bands and polarisations; these images were obtained from airborne SAR, GaoFen-3 and TerraSAR-X. In general, for road extraction from new very-high-spatial-resolution (VHR) data, completeness of 74–93% and correctness of 70–94% have been reported, depending mostly on the study site.
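Many of these detectors build on ratio-of-means line operators, which are robust to the multiplicative speckle in SAR intensity data. A much-simplified, single-orientation sketch of this idea (published detectors scan several orientations, widths and scales):

```python
import numpy as np
from scipy.ndimage import uniform_filter

def ratio_line_response(intensity, eps=1e-10):
    """Single-orientation ratio line detector for dark (low-backscatter)
    vertical lines in a SAR intensity image: the mean of a thin central
    strip is compared, as a ratio, with strips a few pixels to each side."""
    mu = uniform_filter(intensity, size=(9, 3))  # strip means, elongated along the line
    left = np.roll(mu, 4, axis=1)                # np.roll wraps at borders; fine for a sketch
    right = np.roll(mu, -4, axis=1)
    r_left = 1.0 - np.minimum(mu / (left + eps), left / (mu + eps))
    r_right = 1.0 - np.minimum(mu / (right + eps), right / (mu + eps))
    resp = np.minimum(r_left, r_right)           # both sides must contrast
    resp[mu >= np.minimum(left, right)] = 0.0    # keep only dark lines
    return resp
```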
Interferometric information has been used in road extraction by Jiang et al. [210], and the best results were obtained by the fusion of intensity and coherence information. Roads were considered as distributed scatterers and were separated from permanent and temporally variable scatterers. A constant false-alarm-rate line detector based on Wilks’ test statistics has been proposed by Jin et al. [211] for polarimetric SAR images.
Deep learning-based methods have been the topic of a few recent studies; they can be used to extract initial road information of good quality for network construction. Zhang [158] compared U-Net (FCN) and CNN approaches to machine learning methods using dual-polarisation Sentinel-1 data; he reported that VV polarisation was better than VH and that dual-polarisation data were better than single-polarisation data for road extraction. An F1 score of 94% was achieved using the U-Net and dual-polarisation data (the same area was used for training and testing, but with different shuffled samples); this was 5% better than the best machine learning method (random forest). Henry [46] enhanced the sensitivity of fully convolutional neural networks (FCNNs) to thin objects and compared the FCN-8s, U-Net and Deeplabv3+ methods for road segmentation. The sensitivity was increased by addressing class imbalance in training and using spatial tolerance rules. A summary of road extraction using SAR images is shown in Table 6. It can be seen that the precision achieved in diverse situations varies considerably; appropriate methods, or a combination of multiple methods, should be selected according to the scene's characteristics.
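The completeness, correctness and F1 figures quoted above follow the usual definitions computed from matched true positives, false positives and false negatives:

```python
def road_metrics(tp, fp, fn):
    """Completeness (producer's accuracy/recall), correctness (user's
    accuracy/precision) and F1 score from matched road lengths or pixels."""
    completeness = tp / (tp + fn)
    correctness = tp / (tp + fp)
    f1 = 2 * completeness * correctness / (completeness + correctness)
    return completeness, correctness, f1

# For example, 940 correctly extracted units, 60 false, 60 missed:
print(road_metrics(940, 60, 60))   # -> (0.94, 0.94, 0.94)
```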

3.3.2. Status and Prospects

Automatic road extraction from SAR imagery remains largely experimental to date. For operational tasks, semiautomatic methods [172] with human intervention or manual postprocessing are required. Large-scale tests have not been conducted, and for the reported methods, remarkable variations in the results between different test sites have been observed. The studied road types include forest roads, city streets, highways, desert roads, gravel roads, paved roads, icy roads and bridges. However, different or mixed road classes have been evaluated in only a few studies. Different methods are required for different road types and study areas. In general, the developed methods are complex and involve many steps.
Owing to its side-looking imaging geometry [212], SAR produces many linear features, causing many false alarms in road extraction. In addition, road detection varies with the looking direction, especially in urban areas (radar shadows). Speckle and speckle filters also affect the road extraction accuracy. Furthermore, roads may appear bright because of the surrounding structures, instead of appearing as dark lines. The road appearance therefore differs depending on the SAR image resolution: a highly complex image is obtained with high-resolution data, showing strong geometric effects in urban areas. Therefore, the use of multiple datasets is often required. Conversely, road detection from SAR is independent of the road surface material, whereas optical detection methods are more sensitive to it. Thus, the fusion of SAR and optical data can be beneficial.

3.4. Road Extraction Based on LiDAR Data

3.4.1. Main Methods

Ground points and nonground points can be easily distinguished based on pulse and elevation information in airborne LiDAR data. To separate road from nonroad areas among the ground points, intensity information is needed. Roads, like water surfaces, have low intensity. In addition to the surface reflectance, the intensity values are affected by atmospheric attenuation, transmitted power, detection range and incidence angle. Therefore, calibration is required to apply these methods over large areas. Road detection using intensity is usually based on the surface homogeneity and consistency of roads. The LiDAR-based methods for road extraction either use point cloud processing and classification or are based on a digital surface model (DSM), digital terrain model (DTM) and intensity raster produced from the point clouds. In some reported studies, the main focus was on data classification, while in others, a complete road network was extracted.
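A minimal sketch of this two-stage filtering (near-ground selection followed by an intensity gate); all thresholds are illustrative and strongly data-dependent:

```python
import numpy as np

def road_candidates(points, intensity, height_above_dtm,
                    max_height=0.3, int_low=5.0, int_high=40.0):
    """Keep near-ground points (small height above the DTM) whose
    calibrated intensity falls in a low, 'asphalt-like' band.
    All thresholds are illustrative and data-dependent."""
    near_ground = height_above_dtm < max_height
    low_intensity = (intensity > int_low) & (intensity < int_high)
    return points[near_ground & low_intensity]

# points: (N, 3) xyz array; intensity, height_above_dtm: (N,) arrays
```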
Point cloud-based automatic road extraction using LiDAR height and intensity was first proposed by Clode [213]; in this study, a hierarchical classification method was used to classify point clouds progressively into roads and nonroads. Individual points were selected based on the height difference to generate a DTM, and the intensity value was obtained via filtering based on point density and morphological filtering of a binary image. The method was enhanced in [214], where a phase-coded disk algorithm was introduced to vectorise the binary road network image. Hu [215] proposed a method for road centreline extraction using the salient linear features observed in the images of complex urban areas; in this method, tensor voting was used to eliminate the nonroad areas.
Intensity variations in the road network were considered in [216], where road points were selected from ground points based on a local intensity distribution histogram and filtered by roughness and area. Hui [217] extracted road centrelines in three steps: a skewness-balancing algorithm was proposed to obtain the intensity threshold; a rotating neighbourhood algorithm was proposed to extract the main roads (by removing the narrow roads); and a hierarchical fusion and optimisation algorithm was proposed to extract the road network. For the three test sites, correctness values of 43–92% and completeness values of 36–91% were obtained.
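The skewness-balancing step can be sketched as follows: the highest-intensity points are peeled off until the remaining sample is no longer positively skewed, and the largest remaining value is taken as the threshold. This illustrates the principle only (points would be removed in batches for large clouds, and details differ from the published algorithm):

```python
import numpy as np
from scipy.stats import skew

def skewness_balanced_threshold(intensity):
    """Remove the highest-intensity points until the remaining sample is
    no longer positively skewed; the largest remaining value serves as
    the road-intensity threshold."""
    values = np.sort(np.asarray(intensity, dtype=float))
    while values.size > 2 and skew(values) > 0:
        values = values[:-1]                 # peel off the current maximum
    return values[-1]
```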
A raster-based road extraction method for grid-structured urban areas was proposed by Zhao [218]; in this study, the ground objects were classified, road centrelines were extracted using a total least square line fitting approach, and a voting-based road direction was used to evaluate each road segment’s reliability by removing areas such as parking lots from the road segments. For a complete road network extraction, different parts of the road network such as junctions and bridges need to be considered as well. Chen [219] proposed an automatic method to detect and delineate road junctions from rasterised ALS intensity and normalised DSM in three steps: roughness-enhanced Gabor filters for key point extraction, a higher-order tensor voting algorithm to find the junction candidates, and a geometric template matching to identify the road junction positions and road branch directions. A bridge detection algorithm was proposed by Sithole [220]; in this method, DTM cross-sections, that is, profiles, were used to identify bridges. Moreover, forest road detection is possible using detailed ALS DSMs.
Road details and complex structures can be extracted from very dense point clouds as well and modelled. In [221], road points were accurately labelled from a dense point cloud of an urban area, using an approximate 2D road network map as the input. They combined both large-scale (snake smoothness) and small-scale (curb detector) cues to extract roads. The method worked on all types of roads, including tunnels, bridges and multilevel intersections. In [222], UAV data was used to perform fine-scale road mapping. Soilan [223] studied the automatic extraction of road features (sidewalks, pavement areas and road markings) from high-density ALS point clouds.
3D modelling is required for the most complex parts of the road network such as overpasses and multilevel intersections. The use of LiDAR point clouds to derive 3D city models was reviewed in [77]. Cheng [160] studied the detailed 3D reconstruction of multilayer interchange bridges, and satisfactory results were obtained for very complex bridges.
In several studies, road detection has been carried out as a part of land cover mapping. Urban land cover mapping by integrating rasterised LiDAR height and intensity data at the object level was proposed by Zhou [224]; for this method, an accuracy similar to that of multispectral optical imagery accompanied by ALS DSM was obtained. Matkan [225] classified the point cloud into five land cover classes using SVM; during the postprocessing, gaps in the road network were located and filled using a method based on Radon transformation and spline interpolation.
Land cover classification using multispectral ALS (MS-ALS) has been the topic of several recently reported studies. Even though a complete road network was not extracted, the road classification results were promising [164,226]. In [226], the point-based classification completeness for roads was 86% and 92%, while in the raster-based classification, accuracies of 92% and 86% were obtained. Karila [227] used rasterised MS-ALS data to classify typical road surfaces (asphalt and gravel) and road types ranging from highways to cycle ways. Owing to the lack of shadows, more complete roads (80.5%) were retrieved using this method than using optical aerial images (71.6%). Ekhtari [228] classified multispectral point clouds into 10 land cover classes using an SVM; three types of asphalt and concrete classes were included. In general, slightly better results were obtained when the classification was carried out in the point cloud domain; however, the computational costs increased significantly.
Deep learning methods for MS-ALS have been explored in a few recent studies. Pan [229] used deep learning-based high-level feature representation (a deep Boltzmann machine) and machine learning methods for land cover classification. Pan [230] proposed a CNN-based classification approach for MS-ALS data; the classification accuracy and computational performance of the constructed CNN model were superior to those of classical CNN models. Yu [231] proposed a hybrid capsule network using MS-ALS data. The data were rasterised based on the elevation, the number of returns and the intensity of the three channels, and an accuracy of 94% was obtained for the road classification. Dense point clouds from SPL were used for land cover classification in [232]. Due to the rough appearance of the intensity images created from the SPL data, small features such as narrow roads were often difficult to distinguish in these images. A summary of road extraction using LiDAR data is shown in Table 7. Notably, MS-ALS has become the main approach for road extraction in recent years.

3.4.2. Status and Prospects

Airborne LiDAR data are used in the 3D modelling of city road networks [77]. ALS provides direct 3D information for road extraction and is less affected by occlusions and shadows than optical data. In addition, ALS provides 3D road information with elevation, which is especially useful in complex interchange areas. However, the area covered during a flight considerably limits the automatic road extraction process using ALS. Extensive investigations are still required to address the underlying issues, i.e., the development of fully automatic algorithms suitable for various landscape and road types, the application of intensity data over large areas, the reduction of false positives (car parks, squares, playgrounds, etc.) and the identification and connection of road segments shadowed by occlusions (e.g., trees and vehicles). Road markings must also be taken into consideration in VHR data. The national ALS datasets provide a good basis for mapping; however, these datasets are seldom updated. LiDAR sensors mounted on mobile mapping systems, UAVs or VHR satellite imaging instruments can be used for map updating. In addition, the new MS-ALS data [227,228] and the new dense point clouds created by collecting single photons enable the automatic detection of roads with higher accuracy.

4. Combination of Multisource Data for Road Extraction

4.1. Combination of High-Resolution Images with Other Data for Road Extraction

High-resolution images reveal very fine details of the earth’s surface and its geometry; however, high-resolution imaging also increases geometric noise and provides only spectral and spatial information on the surface. LiDAR data, as a special data source, can provide 3D information about objects. Tiwari et al. [233] proposed an automatic road extraction method based on an integrated approach involving ALS altimetry and high-resolution imaging. The method extracted road information without background objects and achieved a road extraction accuracy of 90% when applied to Amsterdam data. Hu et al. [234] proposed a grid-structured urban road network extraction method using LiDAR data and high-resolution imagery; a significant improvement in road extraction accuracy was obtained compared with using high-resolution imagery or LiDAR data alone. Zhang et al. [235] proposed a method to improve the accuracy of road centreline extraction using high-resolution images and LiDAR data. The method adopted a minimum-area-bounding-rectangle-based interference-filling approach, a multistep approach and Harris corner detection. The experimental results based on the Vaihingen, New York and Guangzhou datasets showed that the proposed method was efficient in identifying complex scenes.
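A common way to realise such a combination is to stack a LiDAR-derived normalised DSM (nDSM) with the high-resolution image as an additional channel before classification; the sketch below shows this step, with the input arrays and the 2 m above-ground threshold as illustrative assumptions.

```python
# Minimal sketch of fusing LiDAR height with high-resolution imagery by
# stacking an nDSM channel; arrays and the 2 m threshold are assumptions.
import numpy as np

def stack_rgb_ndsm(rgb: np.ndarray, dsm: np.ndarray, dtm: np.ndarray):
    """rgb: (H, W, 3); dsm/dtm: (H, W) in metres, co-registered to the image."""
    ndsm = np.clip(dsm - dtm, 0.0, None)       # height above ground
    above_ground = ndsm > 2.0                  # e.g., buildings and trees
    fused = np.dstack([rgb, ndsm[..., None]])  # (H, W, 4) classifier input
    return fused, above_ground
```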

4.2. Combination of Hyperspectral Images with Other Data for Road Extraction

Feng et al. [236] fused hyperspectral images and LiDAR data to map urban land use using a state-of-the-art CNN. To speed up the network design, the same structure was used in both the hyperspectral and LiDAR branches, and each branch used residual blocks to extract multiscale, parallel and hierarchical features. The experimental results underlined the efficient road extraction performance of the proposed method: when only hyperspectral images or only LiDAR data were used, the highway classification accuracies were 65.35% and 42.08%, respectively, whereas the accuracy increased to 80.89% with the fused data. Elaksher et al. [237] combined hyperspectral images obtained from AVIRIS with a LiDAR-based DEM. First, a vector layer of polygons was constructed using the DEM data; second, the buildings in the hyperspectral images were removed, and the road and water classes were then extracted using a supervised classifier. The experimental results demonstrated that the classification performance could be improved by using the LiDAR data to remove buildings from the hyperspectral image before classification; the road detection rate was 91.3%, with a false alarm rate of zero. Two examples of multisource remote sensing data fusion are presented in [238]. One is the fusion of hyperspectral images with SAR images, which can improve target detection accuracy. The other is the fusion of hyperspectral images with high-resolution images, in which the spatial–spectral information of the target is fully exploited and the identification accuracy is improved considerably.
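The two-branch fusion idea of [236] can be sketched as follows: identical branch structures for the hyperspectral and LiDAR inputs, feature concatenation and a shared classifier. For brevity, plain convolutions stand in here for the residual blocks used in the original work, and all channel counts and sizes are assumptions.

```python
# Minimal sketch of two-branch hyperspectral-LiDAR fusion in the spirit of
# [236]; plain conv layers replace the paper's residual blocks, and the
# band count, patch size and class count are illustrative assumptions.
import torch
import torch.nn as nn

def branch(in_ch: int) -> nn.Sequential:
    return nn.Sequential(
        nn.Conv2d(in_ch, 32, 3, padding=1), nn.ReLU(),
        nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
        nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    )

class TwoBranchFusion(nn.Module):
    def __init__(self, hsi_bands: int = 144, n_classes: int = 15):
        super().__init__()
        self.hsi = branch(hsi_bands)           # spectral branch
        self.lidar = branch(1)                 # elevation branch
        self.classifier = nn.Linear(64, n_classes)

    def forward(self, hsi_patch, lidar_patch):
        f = torch.cat([self.hsi(hsi_patch), self.lidar(lidar_patch)], dim=1)
        return self.classifier(f)

net = TwoBranchFusion()
out = net(torch.randn(4, 144, 11, 11), torch.randn(4, 1, 11, 11))  # (4, 15)
```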

4.3. Combination of SAR Images with Other Data for Road Extraction

Several studies have been published on the fusion of optical imagery and SAR data. Cao [239] proposed road extraction via the fusion of infrared and SAR images. Lin et al. [240] compared multiple remote sensing datasets (SPOT 5, IKONOS, QuickBird, DMC and airborne SAR) and algorithms; road trackers were designed for five different road types: national highways, interstate highways, railroads, avenues and lanes. Evidently, fused multiple remote sensing data were more efficient than a single data source. Perciano et al. [241] fused TerraSAR-X and QuickBird data for road network extraction at two test sites; the road extraction accuracy for the fused data was 10–30% higher than that obtained for the individual datasets. Multitemporal SAR image stacks (TerraSAR-X and COSMO-SkyMed) were also studied. Bartsch et al. [242] studied arctic settlements using Sentinel-2 optical and Sentinel-1 SAR satellite images, comparing pixel-based classification using a gradient boosting machine with a deep learning approach based on windowed semantic segmentation using the U-Net architecture. Asphalt roads were more easily detected than gravel roads, and for arctic mapping, both methods and both sensors were recommended. Liu et al. [243] studied urban area mapping using Sentinel-1 SAR and Sentinel-2 optical data and proposed integrating object-based postclassification refinement with CNNs for land cover mapping. Notably, for road mapping, SAR backscattering provided physical information on roads (low backscatter) that differs from that provided by optical remote sensing; moreover, roads were identified with higher accuracy by combining the optical data with the SAR data. Lin et al. [244] extracted impervious surfaces using optical, SAR and LiDAR DSM data; the non-shadow and shadow classes were trained using the combined optical–SAR–LiDAR data, and as a result, the shadow effects in the classification results were reduced.
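As a minimal illustration of pixel-based optical–SAR fusion in the spirit of [242,243], the sketch below stacks Sentinel-2 bands with Sentinel-1 backscatter per pixel and trains a gradient boosting classifier; the band counts and the random stand-in data are assumptions.

```python
# Minimal sketch of pixel-based optical-SAR fusion with gradient boosting;
# features and labels are random stand-ins for co-registered Sentinel data.
import numpy as np
from sklearn.ensemble import HistGradientBoostingClassifier

rng = np.random.default_rng(1)
n_pixels = 5000
s2 = rng.random((n_pixels, 10))        # Sentinel-2 reflectance bands
s1 = rng.random((n_pixels, 2))         # Sentinel-1 VV/VH backscatter
X = np.hstack([s2, s1])                # fused per-pixel feature vector
y = rng.integers(0, 2, size=n_pixels)  # 1 = road, 0 = background (stand-in)

clf = HistGradientBoostingClassifier().fit(X, y)
road_prob = clf.predict_proba(X)[:, 1]
```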

4.4. Combination of LiDAR with Other Data for Road Extraction

Aerial imaging cameras are often accompanied by LiDAR sensors in airborne mapping systems; thus, it is common to fuse LiDAR point clouds with optical aerial imagery, and optical satellite data are used as well. The additional colour information is particularly useful for single-channel LiDAR. Segmentation of the input imagery is often performed to enable object-based fusion of the datasets, as sketched below.
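A minimal sketch of such object-based fusion follows: the aerial image is segmented into superpixels, and per-segment LiDAR statistics are attached as object features. The segmentation algorithm (SLIC), segment count and feature choice are illustrative assumptions, not those of any specific study.

```python
# Minimal sketch of object-based image-LiDAR fusion: segment the image,
# then attach per-segment LiDAR statistics as object features.
import numpy as np
from skimage.segmentation import slic

def object_features(rgb, ndsm, lidar_intensity, n_segments=500):
    """rgb: (H, W, 3) in [0, 1]; ndsm, lidar_intensity: (H, W), co-registered."""
    segments = slic(rgb, n_segments=n_segments, start_label=0)
    feats = []
    for s in range(segments.max() + 1):
        m = segments == s
        feats.append([rgb[m].mean(), ndsm[m].mean(), lidar_intensity[m].mean()])
    return segments, np.asarray(feats)   # one fused feature row per object
```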
Kim [245] proposed a method to improve the classification of urban areas by fusing high-resolution satellite images (WorldView-2) and ALS data. Special attention was paid to elevated roads, which were first detected in the LiDAR ground points; buildings were then detected, and supervised SVM classification was performed on the areas without elevated roads or buildings. Liu [246] proposed a road extraction framework based on the fusion of ALS point clouds and aerial imagery; in this framework, pseudo scan lines were created from the fused data, and a rule-based edge-clustering algorithm was used to extract the road segments. Mendes classified road regions using an ANN by integrating aerial RGB (where R is red, G is green and B is blue) images, laser intensity images and height images. Compared with the use of optical data alone, incorporating the laser intensity data helped to overcome road obstructions caused by shadows and trees, and the height information helped to separate aboveground objects from ground objects. Zhang [235] proposed an object-based method for road centreline extraction from aerial images and ALS DSMs; using this method, completeness and correctness of over 90% were obtained for two of the test datasets, and over 80% for a third large site. Further developments were proposed for curved roads.
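The completeness and correctness figures quoted here are commonly computed with a buffer method: a reference pixel counts as matched if extracted road lies within a small tolerance, and vice versa. The sketch below gives a simple raster version; the 2-pixel buffer and the approximate combined quality measure are illustrative assumptions.

```python
# Minimal raster sketch of buffer-based completeness/correctness measures;
# the buffer width and combined quality measure are illustrative.
import numpy as np
from scipy.ndimage import binary_dilation

def completeness_correctness(extracted, reference, buffer_px=2):
    """extracted, reference: binary road rasters of equal shape."""
    ext = extracted.astype(bool)
    ref = reference.astype(bool)
    ext_buf = binary_dilation(ext, iterations=buffer_px)
    ref_buf = binary_dilation(ref, iterations=buffer_px)
    completeness = (ref & ext_buf).sum() / max(ref.sum(), 1)  # matched reference
    correctness = (ext & ref_buf).sum() / max(ext.sum(), 1)   # matched extraction
    # Approximate combined quality derived from the two ratios.
    quality = 1.0 / (1.0 / max(completeness, 1e-9)
                     + 1.0 / max(correctness, 1e-9) - 1.0)
    return completeness, correctness, quality
```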

4.5. Scope for Future Research in Road Extraction

A summary of road extraction based on different data sources is presented in Table 8. Some prospects can be derived from the current status of road extraction from high-resolution images. First, data-driven road extraction methods exhibit excellent performance and high extraction accuracy [34]; therefore, more robust data-driven methods should be developed and verified on different datasets. In addition, it is difficult to obtain high detection accuracy using only one algorithm in some cases; therefore, combinations of multiple road extraction methods must be studied [247], as sketched after this paragraph. Finally, the combination of data sources (e.g., hyperspectral, SAR and LiDAR) should be evaluated further. Spatial resolution is one of the most important factors affecting the performance of road extraction methods. High-spatial-resolution images can describe fine objects in detail; however, this also increases the spectral variability within a class [248]. Wang et al. [249] demonstrated that images with a spatial resolution of 0.5 m yielded higher accuracy than those with a spatial resolution of 0.1 m. Data-driven methods benefit more from improved spatial resolution than traditional heuristic methods do [34,71].
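A simple way to combine several road extraction methods, as suggested above, is per-pixel majority voting over their binary output masks; in the sketch below, the three random masks are stand-ins for the outputs of real methods.

```python
# Minimal sketch of combining several road extraction methods by per-pixel
# majority voting; the three stand-in masks are illustrative assumptions.
import numpy as np

def majority_vote(masks):
    """masks: list of equally shaped binary road masks from different methods."""
    stack = np.stack([m.astype(np.uint8) for m in masks])
    return stack.sum(axis=0) >= (len(masks) // 2 + 1)

rng = np.random.default_rng(2)
masks = [rng.random((64, 64)) > 0.5 for _ in range(3)]
combined = majority_vote(masks)   # pixels at least 2 of 3 methods agree on
```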
Road extraction from airborne hyperspectral images warrants continued study [80], and new road extraction methods based on hyperspectral images should be developed. In particular, the advantages of the large number of bands and the continuous spectrum coverage of hyperspectral images should be further exploited, and new data-driven methods should be proposed [250]. Finally, hyperspectral image datasets with wide coverage and road labels should be produced to promote road extraction applications of hyperspectral data.
Recently, open-access global SAR satellite datasets (Sentinel-1) have enabled the mapping and monitoring of large areas; however, their resolution is limited. Microsatellite constellations can acquire large amounts of very-high-resolution data with high revisit frequency, but this aspect has not yet been studied in detail. For ALS, deep learning methods [251] developed for image classification are rapidly emerging, followed by methods tailored to earth observation. Publicly available open training datasets acquired by earth observation satellites are being used for road extraction, and the highest road classification accuracies have been reported with deep neural network algorithms. However, these studies cover limited areas in which the methods perform relatively well; no large-scale tests have been conducted yet.

5. Conclusions

High-resolution and hyperspectral images have been widely used in digital road network extraction. More satellites with high spatial resolution and short revisit periods are being developed and launched, promoting the development of heuristic and data-driven road extraction methods. However, few hyperspectral satellite data are available for road extraction; therefore, airborne systems are still the main approach for acquiring hyperspectral data [80,252]. Data-driven methods have high accuracy and show significant potential; however, their transfer learning needs to be improved. In addition, the combination of high-resolution image data with other data sources such as LiDAR is a feasible approach for solving challenging issues such as occlusion and shadows.
Large areas can be rapidly mapped using weather-independent spaceborne SAR images, for example, images acquired after sudden changes in the target area. In addition, global datasets enable global mapping. However, roads are challenging to interpret in SAR because of the interaction of the signal with the surrounding areas. ALS provides excellent data for the generation of topographic databases and for the detailed mapping of limited areas. Multispectral ALS may be the best remote sensing data source for road mapping; however, only small areas can be covered at a time. Further studies are required to develop sensors suitable for various landscapes and road types; moreover, fully automated road detection is still in its infancy.
High-resolution imaging, hyperspectral imaging, SAR imaging and LiDAR are currently the primary techniques for road extraction. As shown in Table 8, different data sources have unique characteristics. For example, high-resolution images have high spatial resolution and capture the rich textures, shapes, structures and neighbourhood relations of ground objects. Hyperspectral images have multiple data dimensions and rich spectral features. In addition, SAR imaging and LiDAR are less affected by external environmental factors such as clouds, fog and illumination, and can operate in all weather conditions. Combining different remote sensing data to exploit their respective advantages is a notable direction for developing advanced road extraction methods in the future.

Author Contributions

Bibliographic review, J.J., H.S., C.J., K.K. and M.K.; paper organisation, Y.C. and J.J.; drawing of conclusions, J.J.; writing—original draft preparation, J.J., H.S., C.J., K.K. and M.K.; writing—review and editing, E.A., E.K., P.H., C.C. and T.X.; supervision, Y.C.; project administration, J.H.; funding acquisition, Y.C. and T.W. All authors have read and agreed to the published version of the manuscript.

Funding

This research was financially supported by the Academy of Finland projects “Ultrafast Data Production with Broadband Photodetectors for Active Hyperspectral Space Imaging” (No. 336145) and “Forest-Human-Machine Interplay—Building Resilience, Redefining Value Networks and Enabling Meaningful Experiences (UNITE)” (No. 337656), and by the Strategic Research Council project “Competence-Based Growth Through Integrated Disruptive Technologies of 3D Digitalization, Robotics, Geospatial Information and Image Processing/Computing—Point Cloud Ecosystem” (No. 314312). Additionally, the Chinese Academy of Sciences (No. 181811KYSB20160040, XDA22030202), Shanghai Science and Technology Foundations (No. 18590712600), Jihua Lab (No. X190211TE190) and Huawei (No. 9424877) are acknowledged.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Acknowledgments

The authors would like to thank the editor and reviewers for their contributions.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Huang, B.; Zhao, B.; Song, Y. Urban Land-Use Mapping Using a Deep Convolutional Neural Network with High Spatial Resolution Multispectral Remote Sensing Imagery. Remote Sens. Environ. 2018, 214, 73–86. [Google Scholar] [CrossRef]
  2. Wang, J.; Treitz, P.M.; Howarth, P.J. Road Network Detection from SPOT Imagery for Updating Geographical Information Systems in the Rural–Urban Fringe. Int. J. Geogr. Inf. Syst. 1992, 6, 141–157. [Google Scholar] [CrossRef]
  3. Mena, J.B. State of the Art on Automatic Road Extraction for GIS Update: A Novel Classification. Pattern Recognit. Lett. 2003, 24, 3037–3058. [Google Scholar] [CrossRef]
  4. Coulibaly, I.; Spiric, N.; Sghaier, M.O.; Manzo-Vargas, W.; Lepage, R.; St-Jacques, M. Road Extraction from High Resolution Remote Sensing Image Using Multiresolution in Case of Major Disaster. In Proceedings of the 2014 IEEE Geoscience and Remote Sensing Symposium, Quebec City, QC, Canada, 13–18 July 2014; pp. 2712–2715. [Google Scholar]
  5. Cheng, G.; Zhu, F.; Xiang, S.; Pan, C. Road Centerline Extraction via Semisupervised Segmentation and Multidirection Nonmaximum Suppression. IEEE Geosci. Remote Sens. Lett. 2016, 13, 545–549. [Google Scholar] [CrossRef]
  6. McKeown, D.M. Toward automatic cartographic feature extraction. In Mapping and Spatial Modelling for Navigation; Pau, L.F., Ed.; Springer: Berlin, Heidelberg, 1990; pp. 149–180. [Google Scholar]
  7. Robinson, A.H.; Morrison, J.L.; Muehrcke, P.C. Cartography 1950–2000. Trans. Inst. Br. Geogr. 1977, 2, 3–18. [Google Scholar] [CrossRef]
  8. Ulmke, M.; Koch, W. Road Map Extraction Using GMTI Tracking. In Proceedings of the 2006 9th International Conference on Information Fusion, Florence, Italy, 10–13 July 2006; pp. 1–7. [Google Scholar]
  9. Koch, W.; Koller, J.; Ulmke, M. Ground Target Tracking and Road Map Extraction. ISPRS J. Photogramm. Remote Sens. 2006, 61, 197–208. [Google Scholar] [CrossRef]
  10. Niu, Z.; Li, S.; Pousaeid, N. Road Extraction Using Smart Phones GPS. In Proceedings of the 2nd International Conference on Computing for Geospatial Research & Applications, Washington, DC, USA, 23–25 May 2011; Association for Computing Machinery: New York, NY, USA, 2011; pp. 1–6. [Google Scholar]
  11. Bhoraskar, R.; Vankadhara, N.; Raman, B.; Kulkarni, P. Wolverine: Traffic and Road Condition Estimation Using Smartphone Sensors. In Proceedings of the 2012 Fourth International Conference on Communication Systems and Networks (COMSNETS 2012), Bangalore, India, 3–7 January 2012; pp. 1–6. [Google Scholar]
  12. Balali, V.; Ashouri Rad, A.; Golparvar-Fard, M. Detection, Classification, and Mapping of U.S. Traffic Signs Using Google Street View Images for Roadway Inventory Management. Vis. Eng. 2015, 3, 15. [Google Scholar] [CrossRef] [Green Version]
  13. Zhang, M.; Liu, Y.; Luo, S.; Gao, S. Research on Baidu Street View Road Crack Information Extraction Based on Deep Learning Method. J. Phys. Conf. Ser. 2020, 1616, 012086. [Google Scholar] [CrossRef]
  14. Li, D.; Ke, Y.; Gong, H.; Li, X. Object-Based Urban Tree Species Classification Using Bi-Temporal WorldView-2 and WorldView-3 Images. Remote Sens. 2015, 7, 16917–16937. [Google Scholar] [CrossRef] [Green Version]
  15. Pan, Y.; Zhang, X.; Cervone, G.; Yang, L. Detection of Asphalt Pavement Potholes and Cracks Based on the Unmanned Aerial Vehicle Multispectral Imagery. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2018, 11, 3701–3712. [Google Scholar] [CrossRef]
  16. Irwin, K.; Beaulne, D.; Braun, A.; Fotopoulos, G. Fusion of SAR, Optical Imagery and Airborne LiDAR for Surface Water Detection. Remote Sens. 2017, 9, 890. [Google Scholar] [CrossRef] [Green Version]
  17. Wang, W.; Yang, N.; Zhang, Y.; Wang, F.; Cao, T.; Eklund, P. A Review of Road Extraction from Remote Sensing Images. J. Traffic Transp. Eng. 2016, 3, 271–282. [Google Scholar] [CrossRef] [Green Version]
  18. Hu, J.; Razdan, A.; Femiani, J.C.; Cui, M.; Wonka, P. Road Network Extraction and Intersection Detection From Aerial Images by Tracking Road Footprints. IEEE Trans. Geosci. Remote Sens. 2007, 45, 4144–4157. [Google Scholar] [CrossRef]
  19. Shen, J.; Lin, X.; Shi, Y.; Wong, C. Knowledge-Based Road Extraction from High Resolution Remotely Sensed Imagery. In Proceedings of the 2008 Congress on Image and Signal Processing, Sanya, China, 27–30 May 2008; IEEE: New York, NY, USA, 2008; Volume 4, pp. 608–612. [Google Scholar]
  20. George, J.; Mary, L.; Riyas, K.S. Vehicle Detection and Classification from Acoustic Signal Using ANN and KNN. In Proceedings of the 2013 International Conference on Control Communication and Computing (ICCC), Thiruvananthapuram, India, 13–15 December 2013; IEEE: New York, NY, USA, 2013; pp. 436–439. [Google Scholar]
  21. Li, J.; Chen, M. On-Road Multiple Obstacles Detection in Dynamical Background. In Proceedings of the 2014 Sixth International Conference on Intelligent Human-Machine Systems and Cybernetics, Hangzhou, China, 26–27 August 2014; IEEE: New York, NY, USA, 2014; Volume 1, pp. 102–105. [Google Scholar]
  22. Simler, C. An Improved Road and Building Detector on VHR Images. In Proceedings of the 2011 IEEE International Geoscience and Remote Sensing Symposium, Vancouver, BC, Canada, 24–29 July 2011; IEEE: New York, NY, USA, 2011; pp. 507–510. [Google Scholar]
  23. Zhu, D.-M.; Wen, X.; Ling, C.-L. Road Extraction Based on the Algorithms of MRF and Hybrid Model of SVM and FCM. In Proceedings of the 2011 International Symposium on Image and Data Fusion, Tengchong, China, 9–11 August 2011; IEEE: New York, NY, USA, 2011; pp. 1–4. [Google Scholar]
  24. Zhou, J.; Bischof, W.F.; Caelli, T. Road Tracking in Aerial Images Based on Human–Computer Interaction and Bayesian Filtering. ISPRS J. Photogramm. Remote Sens. 2006, 61, 108–124. [Google Scholar] [CrossRef] [Green Version]
  25. Miao, Z.; Wang, B.; Shi, W.; Zhang, H. A Semi-Automatic Method for Road Centerline Extraction From VHR Images. IEEE Geosci. Remote Sens. Lett. 2014, 11, 1856–1860. [Google Scholar] [CrossRef]
  26. Pawar, V.; Zaveri, M. Graph Based K-Nearest Neighbor Minutiae Clustering for Fingerprint Recognition. In Proceedings of the 2014 10th International Conference on Natural Computation (ICNC), Xiamen, China, 19–21 August 2014; IEEE: New York, NY, USA, 2014; pp. 675–680. [Google Scholar]
  27. Anil, P.N.; Natarajan, S. A Novel Approach Using Active Contour Model for Semi-Automatic Road Extraction from High Resolution Satellite Imagery. In Proceedings of the 2010 Second International Conference on Machine Learning and Computing, Bangalore, India, 9–11 February 2010; IEEE: New York, NY, USA, 2010; pp. 263–266. [Google Scholar]
  28. Abraham, L.; Sasikumar, M. A Fuzzy Based Road Network Extraction from Degraded Satellite Images. In Proceedings of the 2013 International Conference on Advances in Computing, Communications and Informatics (ICACCI), Mysore, India, 22–25 August 2013; IEEE: New York, NY, USA, 2013; pp. 2032–2036. [Google Scholar]
  29. Awrangjeb, M. Road Traffic Island Extraction from High Resolution Aerial Imagery Using Active Contours. In Proceedings of the Australian Remote Sensing & Photogrammetry Conference (ARSPC 2010), Alice Springs, Australia, 13–17 September 2010. [Google Scholar]
  30. Valero, S.; Chanussot, J.; Benediktsson, J.A.; Talbot, H.; Waske, B. Advanced Directional Mathematical Morphology for the Detection of the Road Network in Very High Resolution Remote Sensing Images. Pattern Recognit. Lett. 2010, 31, 1120–1127. [Google Scholar] [CrossRef] [Green Version]
  31. Ma, R.; Wang, W.; Liu, S. Extracting Roads Based on Retinex and Improved Canny Operator with Shape Criteria in Vague and Unevenly Illuminated Aerial Images. J. Appl. Remote Sens. 2012, 6, 063610. [Google Scholar]
  32. Movaghati, S.; Moghaddamjoo, A.; Tavakoli, A. Road Extraction from Satellite Images Using Particle Filtering and Extended Kalman Filtering. IEEE Trans. Geosci. Remote Sens. 2010, 48, 2807–2817. [Google Scholar] [CrossRef]
  33. Barzohar, M.; Cooper, D.B. Automatic Finding of Main Roads in Aerial Images by Using Geometric-Stochastic Models and Estimation. IEEE Trans. Pattern Anal. Mach. Intell. 1996, 18, 707–721. [Google Scholar] [CrossRef]
  34. Abdollahi, A.; Pradhan, B.; Shukla, N.; Chakraborty, S.; Alamri, A. Deep Learning Approaches Applied to Remote Sensing Datasets for Road Extraction: A State-Of-The-Art Review. Remote Sens. 2020, 12, 1444. [Google Scholar] [CrossRef]
  35. He, K.; Zhang, X.; Ren, S.; Sun, J. Identity Mappings in Deep Residual Networks, Proceedings of the Computer Vision—ECCV 2016, Amsterdam, The Netherlands, 8–16 October 2016; Leibe, B., Matas, J., Sebe, N., Welling, M., Eds.; Springer International Publishing: Berlin/Heidelberg, Germany, 2016; pp. 630–645. [Google Scholar]
  36. Zhong, Z.; Li, J.; Cui, W.; Jiang, H. Fully Convolutional Networks for Building and Road Extraction: Preliminary Results. In Proceedings of the 2016 IEEE International Geoscience and Remote Sensing Symposium (IGARSS), Beijing, China, 10–15 July 2016; pp. 1591–1594. [Google Scholar]
  37. Wei, Y.; Wang, Z.; Xu, M. Road Structure Refined CNN for Road Extraction in Aerial Image. IEEE Geosci. Remote Sens. Lett. 2017, 14, 709–713. [Google Scholar] [CrossRef]
  38. Alshehhi, R.; Marpu, P.R.; Woon, W.L.; Dalla Mura, M. Simultaneous Extraction of Roads and Buildings in Remote Sensing Imagery with Convolutional Neural Networks. ISPRS J. Photogramm. Remote Sens. 2017, 130, 139–149. [Google Scholar] [CrossRef]
  39. Liu, R.; Miao, Q.; Song, J.; Quan, Y.; Li, Y.; Xu, P.; Dai, J. Multiscale Road Centerlines Extraction from High-Resolution Aerial Imagery. Neurocomputing 2019, 329, 384–396. [Google Scholar] [CrossRef]
  40. Li, P.; Zang, Y.; Wang, C.; Li, J.; Cheng, M.; Luo, L.; Yu, Y. Road Network Extraction via Deep Learning and Line Integral Convolution. In Proceedings of the 2016 IEEE International Geoscience and Remote Sensing Symposium (IGARSS), Beijing, China, 10–15 July 2016; pp. 1599–1602. [Google Scholar]
  41. Varia, N.; Dokania, A.; Senthilnath, J. DeepExt: A Convolution Neural Network for Road Extraction Using RGB Images Captured by UAV. In Proceedings of the 2018 IEEE Symposium Series on Computational Intelligence (SSCI), Bangalore, India, 18–21 November 2018; pp. 1890–1895. [Google Scholar]
  42. Abdollahi, A.; Pradhan, B.; Shukla, N. Extraction of Road Features from UAV Images Using a Novel Level Set Segmentation Approach. Int. J. Urban Sci. 2019, 23, 391–405. [Google Scholar] [CrossRef]
  43. Moranduzzo, T.; Melgani, F. Detecting Cars in UAV Images With a Catalog-Based Approach. IEEE Trans. Geosci. Remote Sens. 2014, 52, 6356–6367. [Google Scholar] [CrossRef]
  44. Yang, B.; Chen, C. Automatic Registration of UAV-Borne Sequent Images and LiDAR Data. ISPRS J. Photogramm. Remote Sens. 2015, 101, 262–274. [Google Scholar] [CrossRef]
  45. Kestur, R.; Farooq, S.; Abdal, R.; Mehraj, E.; Narasipura, O.S.; Mudigere, M. UFCN: A Fully Convolutional Neural Network for Road Extraction in RGB Imagery Acquired by Remote Sensing from an Unmanned Aerial Vehicle. J. Appl. Remote Sens. 2018, 12, 016020. [Google Scholar] [CrossRef]
  46. Henry, C.; Azimi, S.M.; Merkle, N. Road Segmentation in SAR Satellite Images With Deep Fully Convolutional Neural Networks. IEEE Geosci. Remote Sens. Lett. 2018, 15, 1867–1871. [Google Scholar] [CrossRef] [Green Version]
  47. Panboonyuen, T.; Vateekul, P.; Jitkajornwanich, K.; Lawawirojwong, S. An Enhanced Deep Convolutional Encoder-Decoder Network for Road Segmentation on Aerial Imagery, Proceedings of the Recent Advances in Information and Communication Technology, Bangkok, Thailand, 5–6 July 2017; Meesad, P., Sodsee, S., Unger, H., Eds.; Springer International Publishing: Berlin/Heidelberg, Germany, 2018; pp. 191–201. [Google Scholar]
  48. Wang, J.; Song, J.; Chen, M.; Yang, Z. Road Network Extraction: A Neural-Dynamic Framework Based on Deep Learning and a Finite State Machine. Int. J. Remote Sens. 2015, 36, 3144–3169. [Google Scholar] [CrossRef]
  49. Panboonyuen, T.; Jitkajornwanich, K.; Lawawirojwong, S.; Srestasathiern, P.; Vateekul, P. Road Segmentation of Remotely-Sensed Images Using Deep Convolutional Neural Networks with Landscape Metrics and Conditional Random Fields. Remote Sens. 2017, 9, 680. [Google Scholar] [CrossRef] [Green Version]
  50. Constantin, A.; Ding, J.-J.; Lee, Y.-C. Accurate Road Detection from Satellite Images Using Modified U-Net. In Proceedings of the 2018 IEEE Asia Pacific Conference on Circuits and Systems (APCCAS), Chengdu, China, 26–30 October 2018; pp. 423–426. [Google Scholar]
  51. Zhang, Z.; Liu, Q.; Wang, Y. Road Extraction by Deep Residual U-Net. IEEE Geosci. Remote Sens. Lett. 2018, 15, 749–753. [Google Scholar] [CrossRef] [Green Version]
  52. Hong, Z.; Ming, D.; Zhou, K.; Guo, Y.; Lu, T. Road Extraction From a High Spatial Resolution Remote Sensing Image Based on Richer Convolutional Features. IEEE Access 2018, 6, 46988–47000. [Google Scholar] [CrossRef]
  53. Xin, J.; Zhang, X.; Zhang, Z.; Fang, W. Road Extraction of High-Resolution Remote Sensing Images Derived from DenseUNet. Remote Sens. 2019, 11, 2499. [Google Scholar] [CrossRef] [Green Version]
  54. Li, Y.; Xu, L.; Rao, J.; Guo, L.; Yan, Z.; Jin, S. A Y-Net Deep Learning Method for Road Segmentation Using High-Resolution Visible Remote Sensing Images. Remote Sens. Lett. 2019, 10, 381–390. [Google Scholar] [CrossRef]
  55. Cheng, G.; Wang, Y.; Xu, S.; Wang, H.; Xiang, S.; Pan, C. Automatic Road Detection and Centerline Extraction via Cascaded End-to-End Convolutional Neural Network. IEEE Trans. Geosci. Remote Sens. 2017, 55, 3322–3337. [Google Scholar] [CrossRef]
  56. Xu, Y.; Xie, Z.; Feng, Y.; Chen, Z. Road Extraction from High-Resolution Remote Sensing Imagery Using Deep Learning. Remote Sens. 2018, 10, 1461. [Google Scholar] [CrossRef] [Green Version]
  57. Buslaev, A.; Seferbekov, S.; Iglovikov, V.; Shvets, A. Fully Convolutional Network for Automatic Road Extraction from Satellite Imagery. In Proceedings of the 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), Salt Lake City, UT, USA, 18–22 June 2018; pp. 197–1973. [Google Scholar]
  58. Zhou, L.; Zhang, C.; Wu, M. D-LinkNet: LinkNet with Pretrained Encoder and Dilated Convolution for High Resolution Satellite Imagery Road Extraction. In Proceedings of the 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), Salt Lake City, UT, USA, 18–22 June 2018; pp. 192–1924. [Google Scholar]
  59. Doshi, J. Residual Inception Skip Network for Binary Segmentation. In Proceedings of the 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), Salt Lake City, UT, USA, 18–22 June 2018; pp. 206–2063. [Google Scholar]
  60. Xu, Y.; Feng, Y.; Xie, Z.; Hu, A.; Zhang, X. A Research on Extracting Road Network from High Resolution Remote Sensing Imagery. In Proceedings of the 2018 26th International Conference on Geoinformatics, Kunming, China, 18–30 June 2018; pp. 1–4. [Google Scholar]
  61. He, H.; Yang, D.; Wang, S.; Wang, S.; Liu, X. Road Segmentation of Cross-Modal Remote Sensing Images Using Deep Segmentation Network and Transfer Learning. Ind. Robot Int. J. Robot. Res. Appl. 2019, 46, 384–390. [Google Scholar] [CrossRef]
  62. Xia, W.; Zhang, Y.-Z.; Liu, J.; Luo, L.; Yang, K. Road Extraction from High Resolution Image with Deep Convolution Network—A Case Study of GF-2 Image. Proceedings 2018, 2, 325. [Google Scholar] [CrossRef] [Green Version]
  63. Gao, L.; Song, W.; Dai, J.; Chen, Y. Road Extraction from High-Resolution Remote Sensing Imagery Using Refined Deep Residual Convolutional Neural Network. Remote Sens. 2019, 11, 552. [Google Scholar] [CrossRef] [Green Version]
  64. Xie, Y.; Miao, F.; Zhou, K.; Peng, J. HsgNet: A Road Extraction Network Based on Global Perception of High-Order Spatial Information. ISPRS Int. J. Geo-Inf. 2019, 8, 571. [Google Scholar] [CrossRef] [Green Version]
  65. Costea, D.; Marcu, A.; Slusanschi, E.; Leordeanu, M. Creating Roadmaps in Aerial Images with Generative Adversarial Networks and Smoothing-Based Optimization. In Proceedings of the IEEE International Conference on Computer Vision Workshops, Venice, Italy, 22–29 October 2017; pp. 2100–2109. [Google Scholar]
  66. Shi, Q.; Liu, X.; Li, X. Road Detection from Remote Sensing Images by Generative Adversarial Networks. IEEE Access 2017, 6, 25486–25494. [Google Scholar] [CrossRef]
67. Belli, D.; Kipf, T. Image-Conditioned Graph Generation for Road Network Extraction. arXiv 2019, arXiv:1910.14388. [Google Scholar]
  68. Castrejon, L.; Kundu, K.; Urtasun, R.; Fidler, S. Annotating Object Instances With a Polygon-RNN. arXiv 2017, arXiv:1704.05548, 5230–5238. [Google Scholar]
  69. Acuna, D.; Ling, H.; Kar, A.; Fidler, S. Efficient Interactive Annotation of Segmentation Datasets With Polygon-RNN++. arXiv 2018, arXiv:1803.09693, 859–868. [Google Scholar]
  70. Li, Z.; Wegner, J.D.; Lucchi, A. Topological Map Extraction From Overhead Images. arXiv 2019, arXiv:1812.01497, 1715–1724. [Google Scholar]
  71. Lian, R.; Wang, W.; Mustafa, N.; Huang, L. Road Extraction Methods in High-Resolution Remote Sensing Images: A Comprehensive Review. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2020, 13, 5489–5507. [Google Scholar] [CrossRef]
  72. Sun, N.; Zhang, J.X.; Huang, G.M.; Zhao, Z.; Lu, L.J. Review of Road Extraction Methods from SAR Image. IOP Conf. Ser. Earth Environ. Sci. 2014, 17, 012245. [Google Scholar] [CrossRef] [Green Version]
  73. Sun, Z.; Geng, H.; Lu, Z.; Scherer, R.; Woźniak, M. Review of Road Segmentation for SAR Images. Remote Sens. 2021, 13, 1011. [Google Scholar] [CrossRef]
  74. Wang, G.; Weng, Q. Remote Sensing of Natural Resources; CRC Press: Boca Raton, FL, USA, 2013; ISBN 978-1-4665-5692-8. [Google Scholar]
  75. Gargoum, S.; El-Basyouny, K. Automated Extraction of Road Features Using LiDAR Data: A Review of LiDAR Applications in Transportation. In Proceedings of the 2017 4th International Conference on Transportation Information and Safety (ICTIS), Banff, AB, Canada, 8–10 August 2017; pp. 563–574. [Google Scholar]
  76. Ma, L.; Li, Y.; Li, J.; Wang, C.; Wang, R.; Chapman, M.A. Mobile Laser Scanned Point-Clouds for Road Object Detection and Extraction: A Review. Remote Sens. 2018, 10, 1531. [Google Scholar] [CrossRef] [Green Version]
  77. Wang, R.; Peethambaran, J.; Chen, D. LiDAR Point Clouds to 3-D Urban Models: A Review. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2018, 11, 606–627. [Google Scholar] [CrossRef]
  78. Alparone, L.; Baronti, S.; Garzelli, A.; Nencini, F. A Global Quality Measurement of Pan-Sharpened Multispectral Imagery. IEEE Geosci. Remote Sens. Lett. 2004, 1, 313–317. [Google Scholar] [CrossRef]
  79. French, A.N.; Norman, J.M.; Anderson, M.C. A Simple and Fast Atmospheric Correction for Spaceborne Remote Sensing of Surface Temperature. Remote Sens. Environ. 2003, 87, 326–333. [Google Scholar] [CrossRef]
  80. Jia, J.; Wang, Y.; Chen, J.; Guo, R.; Shu, R.; Wang, J. Status and Application of Advanced Airborne Hyperspectral Imaging Technology: A Review. Infrared Phys. Technol. 2020, 104, 103115. [Google Scholar] [CrossRef]
  81. Turner, D.; Lucieer, A.; Watson, C. An Automated Technique for Generating Georectified Mosaics from Ultra-High Resolution Unmanned Aerial Vehicle (UAV) Imagery, Based on Structure from Motion (SfM) Point Clouds. Remote Sens. 2012, 4, 1392–1410. [Google Scholar] [CrossRef] [Green Version]
82. Ozesmi, S.L.; Bauer, M.E. Satellite Remote Sensing of Wetlands. Wetl. Ecol. Manag. 2002, 10, 381–402. [Google Scholar] [CrossRef]
  83. Sato, H.P.; Hasegawa, H.; Fujiwara, S.; Tobita, M.; Koarai, M.; Une, H.; Iwahashi, J. Interpretation of Landslide Distribution Triggered by the 2005 Northern Pakistan Earthquake Using SPOT 5 Imagery. Landslides 2007, 4, 113–122. [Google Scholar] [CrossRef]
  84. Yadav, S.K.; Singh, S.K.; Gupta, M.; Srivastava, P.K. Morphometric Analysis of Upper Tons Basin from Northern Foreland of Peninsular India Using CARTOSAT Satellite and GIS. Geocarto Int. 2014, 29, 895–914. [Google Scholar] [CrossRef]
  85. Dial, G.; Bowen, H.; Gerlach, F.; Grodecki, J.; Oleszczuk, R. IKONOS Satellite, Imagery, and Products. Remote Sens. Environ. 2003, 88, 23–36. [Google Scholar] [CrossRef]
  86. Li, D.; Wang, M.; Jiang, J. China’s High-Resolution Optical Remote Sensing Satellites and Their Mapping Applications. Geo-Spat. Inf. Sci. 2021, 24, 85–94. [Google Scholar] [CrossRef]
  87. Hao, P.; Wang, L.; Niu, Z. Potential of Multitemporal Gaofen-1 Panchromatic/Multispectral Images for Crop Classification: Case Study in Xinjiang Uygur Autonomous Region, China. J. Appl. Remote Sens. 2015, 9, 096035. [Google Scholar] [CrossRef]
  88. Zheng, Y.; Dai, Q.; Tu, Z.; Wang, L. Guided Image Filtering-Based Pan-Sharpening Method: A Case Study of GaoFen-2 Imagery. ISPRS Int. J. Geo-Inf. 2017, 6, 404. [Google Scholar] [CrossRef] [Green Version]
  89. Yang, A.; Zhong, B.; Hu, L.; Wu, S.; Xu, Z.; Wu, H.; Wu, J.; Gong, X.; Wang, H.; Liu, Q. Radiometric Cross-Calibration of the Wide Field View Camera Onboard GaoFen-6 in Multispectral Bands. Remote Sens. 2020, 12, 1037. [Google Scholar] [CrossRef] [Green Version]
90. Liu, Y.-K.; Ma, L.-L.; Wang, N.; Qian, Y.-G.; Zhao, Y.-G.; Qiu, S.; Gao, C.-X.; et al. On-Orbit Radiometric Calibration of the Optical Sensors on-Board SuperView-1 Satellite Using Three Independent Methods. Opt. Express 2020, 28, 11085–11105. [Google Scholar] [CrossRef]
  91. Aguilar, M.A.; Saldaña, M.M.; Aguilar, F.J. GeoEye-1 and WorldView-2 Pan-Sharpened Imagery for Object-Based Classification in Urban Environments. Int. J. Remote Sens. 2013, 34, 2583–2606. [Google Scholar] [CrossRef]
  92. Shi, Y.; Huang, W.; Ye, H.; Ruan, C.; Xing, N.; Geng, Y.; Dong, Y.; Peng, D. Partial Least Square Discriminant Analysis Based on Normalized Two-Stage Vegetation Indices for Mapping Damage from Rice Diseases Using PlanetScope Datasets. Sensors 2018, 18, 1901. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  93. Meusburger, K.; Bänninger, D.; Alewell, C. Estimating Vegetation Parameter for Soil Erosion Assessment in an Alpine Catchment by Means of QuickBird Imagery. Int. J. Appl. Earth Obs. Geoinf. 2010, 12, 201–207. [Google Scholar] [CrossRef]
  94. Alkan, M.; Buyuksalih, G.; Sefercik, U.G.; Jacobsen, K. Geometric Accuracy and Information Content of WorldView-1 Images. Opt. Eng. 2013, 52, 026201. [Google Scholar] [CrossRef]
  95. Ye, B.; Tian, S.; Ge, J.; Sun, Y. Assessment of WorldView-3 Data for Lithological Mapping. Remote Sens. 2017, 9, 1132. [Google Scholar] [CrossRef]
  96. Akumu, C.E.; Amadi, E.O.; Dennis, S. Application of Drone and WorldView-4 Satellite Data in Mapping and Monitoring Grazing Land Cover and Pasture Quality: Pre- and Post-Flooding. Land 2021, 10, 321. [Google Scholar] [CrossRef]
  97. Mulawa, D. On-Orbit Geometric Calibration of the OrbView-3 High Resolution Imaging Satellite. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2004, 35, 1–6. [Google Scholar]
  98. Tyc, G.; Tulip, J.; Schulten, D.; Krischke, M.; Oxfort, M. The RapidEye Mission Design. Acta Astronaut. 2005, 56, 213–219. [Google Scholar] [CrossRef]
  99. Oh, K.-Y.; Jung, H.-S. Automated Bias-Compensation Approach for Pushbroom Sensor Modeling Using Digital Elevation Model. IEEE Trans. Geosci. Remote Sens. 2016, 54, 3400–3409. [Google Scholar] [CrossRef]
  100. Kim, J.; Jin, C.; Choi, C.; Ahn, H. Radiometric Characterization and Validation for the KOMPSAT-3 Sensor. Remote Sens. Lett. 2015, 6, 529–538. [Google Scholar] [CrossRef]
  101. Seo, D.; Oh, J.; Lee, C.; Lee, D.; Choi, H. Geometric Calibration and Validation of Kompsat-3A AEISS-A Camera. Sensors 2016, 16, 1776. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  102. Kubik, P.; Lebègue, L.; Fourest, S.; Delvit, J.-M.; de Lussy, F.; Greslou, D.; Blanchet, G. First In-Flight Results of Pleiades 1A Innovative Methods for Optical Calibration. In Proceedings of the International Conference on Space Optics—ICSO 2012; International Society for Optics and Photonics, Ajaccio, France, 9–12 October 2012; Volume 10564, p. 1056407. [Google Scholar]
  103. Panagiotakis, E.; Chrysoulakis, N.; Charalampopoulou, V.; Poursanidis, D. Validation of Pleiades Tri-Stereo DSM in Urban Areas. ISPRS Int. J. Geo-Inf. 2018, 7, 118. [Google Scholar] [CrossRef] [Green Version]
  104. Yang, G.D.; Zhu, X. Ortho-Rectification of SPOT 6 Satellite Images Based on RPC Models. Appl. Mech. Mater. 2013, 392, 808–814. [Google Scholar] [CrossRef]
  105. Wilson, K.L.; Skinner, M.A.; Lotze, H.K. Eelgrass (Zostera Marina) and Benthic Habitat Mapping in Atlantic Canada Using High-Resolution SPOT 6/7 Satellite Imagery. Estuar. Coast. Shelf Sci. 2019, 226, 106292. [Google Scholar] [CrossRef]
  106. Rais, A.A.; Suwaidi, A.A.; Ghedira, H. DubaiSat-1: Mission Overview, Development Status and Future Applications. In Proceedings of the 2009 IEEE International Geoscience and Remote Sensing Symposium, Cape Town, South Africa, 12–17 July 2009; Volume 5, pp. V-196–V–199. [Google Scholar]
  107. Suwaidi, A.A. DubaiSat-2 Mission Overview. In Sensors, Systems, and Next-Generation Satellites XVI; International Society for Optics and Photonics: Edinburgh, UK, 2012; Volume 8533, p. 85330W. [Google Scholar]
  108. Immitzer, M.; Böck, S.; Einzmann, K.; Vuolo, F.; Pinnel, N.; Wallner, A.; Atzberger, C. Fractional Cover Mapping of Spruce and Pine at 1ha Resolution Combining Very High and Medium Spatial Resolution Satellite Imagery. Remote Sens. Environ. 2018, 204, 690–703. [Google Scholar] [CrossRef] [Green Version]
  109. Hamedianfar, A.; Shafri, H.Z.M. Detailed Intra-Urban Mapping through Transferable OBIA Rule Sets Using WorldView-2 Very-High-Resolution Satellite Images. Int. J. Remote Sens. 2015, 36, 3380–3396. [Google Scholar] [CrossRef]
  110. Diaz-Varela, R.A.; Zarco-Tejada, P.J.; Angileri, V.; Loudjani, P. Automatic Identification of Agricultural Terraces through Object-Oriented Analysis of Very High Resolution DSMs and Multispectral Imagery Obtained from an Unmanned Aerial Vehicle. J. Environ. Manag. 2014, 134, 117–126. [Google Scholar] [CrossRef]
  111. Goetz, A.F.H.; Vane, G.; Solomon, J.E.; Rock, B.N. Imaging Spectrometry for Earth Remote Sensing. Science 1985, 228, 1147–1153. [Google Scholar] [CrossRef] [PubMed]
  112. Plaza, A.; Benediktsson, J.A.; Boardman, J.W.; Brazile, J.; Bruzzone, L.; Camps-Valls, G.; Chanussot, J.; Fauvel, M.; Gamba, P.; Gualtieri, A.; et al. Recent Advances in Techniques for Hyperspectral Image Processing. Remote Sens. Environ. 2009, 113, S110–S122. [Google Scholar] [CrossRef]
  113. Green, R.O.; Eastwood, M.L.; Sarture, C.M.; Chrien, T.G.; Aronsson, M.; Chippendale, B.J.; Faust, J.A.; Pavri, B.E.; Chovit, C.J.; Solis, M.; et al. Imaging Spectroscopy and the Airborne Visible/Infrared Imaging Spectrometer (AVIRIS). Remote Sens. Environ. 1998, 65, 227–248. [Google Scholar] [CrossRef]
114. Goetz, A.F.H.; Srivastava, V. Mineralogical Mapping in the Cuprite Mining District, Nevada. In Proceedings of the Airborne Imaging Spectrometer Data Analysis Workshop, JPL Publication 85-41, Jet Propulsion Laboratory, Pasadena, CA, USA, 8–10 April 1985; pp. 22–29. [Google Scholar]
  115. Zarco-Tejada, P.J.; Guillén-Climent, M.L.; Hernández-Clemente, R.; Catalina, A.; González, M.R.; Martín, P. Estimating Leaf Carotenoid Content in Vineyards Using High Resolution Hyperspectral Imagery Acquired from an Unmanned Aerial Vehicle (UAV). Agric. For. Meteorol. 2013, 171–172, 281–294. [Google Scholar] [CrossRef] [Green Version]
  116. Hruska, R.; Mitchell, J.; Anderson, M.; Glenn, N.F. Radiometric and Geometric Analysis of Hyperspectral Imagery Acquired from an Unmanned Aerial Vehicle. Remote Sens. 2012, 4, 2736–2752. [Google Scholar] [CrossRef] [Green Version]
  117. Yue, J.; Yang, G.; Li, C.; Li, Z.; Wang, Y.; Feng, H.; Xu, B. Estimation of Winter Wheat Above-Ground Biomass Using Unmanned Aerial Vehicle-Based Snapshot Hyperspectral Sensor and Crop Height Improved Models. Remote Sens. 2017, 9, 708. [Google Scholar] [CrossRef] [Green Version]
  118. Lu, J.; Liu, H.; Yao, Y.; Tao, S.; Tang, Z.; Lu, J. Hsi Road: A Hyper Spectral Image Dataset For Road Segmentation. In Proceedings of the 2020 IEEE International Conference on Multimedia and Expo (ICME), London, UK, 6–10 July 2020; pp. 1–6. [Google Scholar]
  119. Wendel, A.; Underwood, J. Illumination Compensation in Ground Based Hyperspectral Imaging. ISPRS J. Photogramm. Remote Sens. 2017, 129, 162–178. [Google Scholar] [CrossRef]
  120. Van der Meer, F.D.; van der Werff, H.M.A.; van Ruitenbeek, F.J.A.; Hecker, C.A.; Bakker, W.H.; Noomen, M.F.; van der Meijde, M.; Carranza, E.J.M.; de Smeth, J.B.; Woldai, T. Multi- and Hyperspectral Geologic Remote Sensing: A Review. Int. J. Appl. Earth Obs. Geoinf. 2012, 14, 112–128. [Google Scholar] [CrossRef]
  121. Stuffler, T.; Kaufmann, C.; Hofer, S.; Förster, K.P.; Schreier, G.; Mueller, A.; Eckardt, A.; Bach, H.; Penné, B.; Benz, U.; et al. The EnMAP Hyperspectral Imager—An Advanced Optical Payload for Future Applications in Earth Observation Programmes. Acta Astronaut. 2007, 61, 115–120. [Google Scholar] [CrossRef]
  122. Carmon, N.; Ben-Dor, E. Mapping Asphaltic Roads’ Skid Resistance Using Imaging Spectroscopy. Remote Sens. 2018, 10, 430. [Google Scholar] [CrossRef] [Green Version]
  123. Schaepman, M.E.; Jehle, M.; Hueni, A.; D’Odorico, P.; Damm, A.; Weyermann, J.; Schneider, F.D.; Laurent, V.; Popp, C.; Seidel, F.C.; et al. Advanced Radiometry Measurements and Earth Science Applications with the Airborne Prism Experiment (APEX). Remote Sens. Environ. 2015, 158, 207–219. [Google Scholar] [CrossRef] [Green Version]
  124. Edberg, S.J.; Evans, D.L.; Graf, J.E.; Hyon, J.J.; Rosen, P.A.; Waliser, D.E. Studying Earth in the New Millennium: NASA Jet Propulsion Laboratory’s Contributions to Earth Science and Applications Space Agencies. IEEE Geosci. Remote Sens. Mag. 2016, 4, 26–39. [Google Scholar] [CrossRef]
  125. Green, R.O.; Team, C. New Measurements of the Earth’s Spectroscopic Diversity Acquired during the AVIRIS-NG Campaign to India. In Proceedings of the 2017 IEEE International Geoscience and Remote Sensing Symposium (IGARSS), Fort Worth, TX, USA, 23–28 July 2017; pp. 3066–3069. [Google Scholar]
  126. Jie-lin, Z.; Jun-hu, W.; Mi, Z.; Yan-ju, H.; Ding, W. Aerial Visible-Thermal Infrared Hyperspectral Feature Extraction Technology and Its Application to Object Identification. IOP Conf. Ser. Earth Environ. Sci. 2014, 17, 012184. [Google Scholar] [CrossRef] [Green Version]
  127. Jia, J.; Wang, Y.; Cheng, X.; Yuan, L.; Zhao, D.; Ye, Q.; Zhuang, X.; Shu, R.; Wang, J. Destriping Algorithms Based on Statistics and Spatial Filtering for Visible-to-Thermal Infrared Pushbroom Hyperspectral Imagery. IEEE Trans. Geosci. Remote Sens. 2019, 57, 4077–4091. [Google Scholar] [CrossRef]
  128. Jia, J.; Zheng, X.; Guo, S.; Wang, Y.; Chen, J. Removing Stripe Noise Based on Improved Statistics for Hyperspectral Images. IEEE Geosci. Remote Sens. Lett. 2020, 1–5. [Google Scholar] [CrossRef]
  129. Rouvière, L.R.; Sisakoun, I.; Skauli, T.; Coudrain, C.; Ferrec, Y.; Fabre, S.; Poutier, L.; Boucher, Y.; Løke, T.; Blaaberg, S. Sysiphe, an Airborne Hyperspectral System from Visible to Thermal Infrared. In Proceedings of the 2016 IEEE International Geoscience and Remote Sensing Symposium (IGARSS), Beijing, China, 10–15 July 2016; pp. 1947–1949. [Google Scholar]
  130. Marmo, J.; Folkman, M.A.; Kuwahara, C.Y.; Willoughby, C.T. Lewis Hyperspectral Imager Payload Development. Proc. SPIE 1996, 2819, 80–90. [Google Scholar]
  131. Pearlman, J.S.; Barry, P.S.; Segal, C.C.; Shepanski, J.; Beiso, D.; Carman, S.L. Hyperion, a Space-Based Imaging Spectrometer. IEEE Trans. Geosci. Remote Sens. 2003, 41, 1160–1173. [Google Scholar] [CrossRef]
  132. Barnsley, M.J.; Settle, J.J.; Cutter, M.A.; Lobb, D.R.; Teston, F. The PROBA/CHRIS Mission: A Low-Cost Smallsat for Hyperspectral Multiangle Observations of the Earth Surface and Atmosphere. IEEE Trans. Geosci. Remote Sens. 2004, 42, 1512–1520. [Google Scholar] [CrossRef]
  133. Murchie, S.; Arvidson, R.; Bedini, P.; Beisser, K.; Bibring, J.-P.; Bishop, J.; Boldt, J.; Cavender, P.; Choo, T.; Clancy, R.T.; et al. Compact Reconnaissance Imaging Spectrometer for Mars (CRISM) on Mars Reconnaissance Orbiter (MRO). J. Geophys. Res. Planets 2007, 112. [Google Scholar] [CrossRef]
  134. Liu, Y.; Sun, D.; Hu, X.; Ye, X.; Li, Y.; Liu, S.; Cao, K.; Chai, M.; Zhou, W.; Zhang, J.; et al. The Advanced Hyperspectral Imager: Aboard China’s GaoFen-5 Satellite. IEEE Geosci. Remote Sens. Mag. 2019, 7, 23–32. [Google Scholar] [CrossRef]
  135. Kimuli, D.; Wang, W.; Wang, W.; Jiang, H.; Zhao, X.; Chu, X. Application of SWIR Hyperspectral Imaging and Chemometrics for Identification of Aflatoxin B1 Contaminated Maize Kernels. Infrared Phys. Technol. 2018, 89, 351–362. [Google Scholar] [CrossRef]
  136. Ambrose, A.; Kandpal, L.M.; Kim, M.S.; Lee, W.H.; Cho, B.K. High Speed Measurement of Corn Seed Viability Using Hyperspectral Imaging. Infrared Phys. Technol. 2016, 75, 173–179. [Google Scholar] [CrossRef]
  137. He, H.; Sun, D. Hyperspectral Imaging Technology for Rapid Detection of Various Microbial Contaminants in Agricultural and Food Products. Trends Food Sci. Technol. 2015, 46, 99–109. [Google Scholar] [CrossRef]
  138. Randolph, K.; Wilson, J.; Tedesco, L.; Li, L.; Pascual, D.L.; Soyeux, E. Hyperspectral Remote Sensing of Cyanobacteria in Turbid Productive Water Using Optically Active Pigments, Chlorophyll a and Phycocyanin. Remote Sens. Environ. 2008, 112, 4009–4019. [Google Scholar] [CrossRef]
  139. Huang, H.; Liu, L.; Ngadi, M.O. Recent Developments in Hyperspectral Imaging for Assessment of Food Quality and Safety. Sensors 2014, 14, 7248–7276. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  140. Brando, V.E.; Dekker, A.G. Satellite Hyperspectral Remote Sensing for Estimating Estuarine and Coastal Water Quality. IEEE Trans. Geosci. Remote Sens. 2003, 41, 1378–1387. [Google Scholar] [CrossRef]
  141. Shimoni, M.; Haelterman, R.; Perneel, C. Hypersectral Imaging for Military and Security Applications: Combining Myriad Processing and Sensing Techniques. IEEE Geosci. Remote Sens. Mag. 2019, 7, 101–117. [Google Scholar] [CrossRef]
  142. Kruse, F.A. Comparative Analysis of Airborne Visible/Infrared Imaging Spectrometer (AVIRIS), and Hyperspectral Thermal Emission Spectrometer (HyTES) Longwave Infrared (LWIR) Hyperspectral Data for Geologic Mapping. In Proceedings of the Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XXI; International Society for Optics and Photonics, Baltimore, MD, USA, 21 May 2015; Volume 9472, p. 94721F. [Google Scholar]
  143. Duren, R.M.; Thorpe, A.K.; Foster, K.T.; Rafiq, T.; Hopkins, F.M.; Yadav, V.; Bue, B.D.; Thompson, D.R.; Conley, S.; Colombi, N.K.; et al. California’s Methane Super-Emitters. Nature 2019, 575, 180–184. [Google Scholar] [CrossRef] [Green Version]
  144. Gelautz, M.; Frick, H.; Raggam, J.; Burgstaller, J.; Leberl, F. SAR Image Simulation and Analysis of Alpine Terrain. ISPRS J. Photogramm. Remote Sens. 1998, 53, 17–38. [Google Scholar] [CrossRef]
  145. Haldar, D.; Das, A.; Mohan, S.; Pal, O.; Hooda, R.S.; Chakraborty, M. Assessment of L-Band SAR Data at Different Polarization Combinations for Crop and Other Landuse Classification. Prog. Electromagn. Res. B 2012, 36, 303–321. [Google Scholar] [CrossRef] [Green Version]
  146. Raney, R.K. Hybrid-Polarity SAR Architecture. IEEE Trans. Geosci. Remote Sens. 2007, 45, 3397–3404. [Google Scholar] [CrossRef] [Green Version]
  147. McNairn, H.; Brisco, B. The Application of C-Band Polarimetric SAR for Agriculture: A Review. Can. J. Remote Sens. 2004, 30, 525–542. [Google Scholar] [CrossRef]
  148. Jung, J.; Kim, D.; Lavalle, M.; Yun, S.-H. Coherent Change Detection Using InSAR Temporal Decorrelation Model: A Case Study for Volcanic Ash Detection. IEEE Trans. Geosci. Remote Sens. 2016, 54, 5765–5775. [Google Scholar] [CrossRef]
  149. Liu, J.G.; Black, A.; Lee, H.; Hanaizumi, H.; Moore, J.M.c.M. Land Surface Change Detection in a Desert Area in Algeria Using Multi-Temporal ERS SAR Coherence Images. Int. J. Remote Sens. 2001, 22, 2463–2477. [Google Scholar] [CrossRef]
  150. Monti-Guarnieri, A.V.; Brovelli, M.A.; Manzoni, M.; Mariotti d’Alessandro, M.; Molinari, M.E.; Oxoli, D. Coherent Change Detection for Multipass SAR. IEEE Trans. Geosci. Remote Sens. 2018, 56, 6811–6822. [Google Scholar] [CrossRef]
  151. Wahl, D.E.; Yocky, D.A.; Jakowatz, C.V.; Simonson, K.M. A New Maximum-Likelihood Change Estimator for Two-Pass SAR Coherent Change Detection. IEEE Trans. Geosci. Remote Sens. 2016, 54, 2460–2469. [Google Scholar] [CrossRef]
  152. Vosselman, G.; Maas, H.G. Airborne and Terrestrial Laser Scanning; CRC Press: Boca Raton, FL, USA, 2010; ISBN 978-1-904445-87-6. [Google Scholar]
  153. Garestier, F.; Dubois-Fernandez, P.C.; Papathanassiou, K.P. Pine Forest Height Inversion Using Single-Pass X-Band PolInSAR Data. IEEE Trans. Geosci. Remote Sens. 2007, 46, 59–68. [Google Scholar] [CrossRef]
  154. Rizzoli, P.; Martone, M.; Gonzalez, C.; Wecklich, C.; Borla Tridon, D.; Bräutigam, B.; Bachmann, M.; Schulze, D.; Fritz, T.; Huber, M.; et al. Generation and Performance Assessment of the Global TanDEM-X Digital Elevation Model. ISPRS J. Photogramm. Remote Sens. 2017, 132, 119–139. [Google Scholar] [CrossRef] [Green Version]
  155. Horn, R.; Nottensteiner, A.; Reigber, A.; Fischer, J.; Scheiber, R. F-SAR—DLR’s New Multifrequency Polarimetric Airborne SAR. In Proceedings of the 2009 IEEE International Geoscience and Remote Sensing Symposium, Cape Town, South Africa, 12–17 July 2009; Volume 2, pp. II-902–II-905. [Google Scholar]
  156. Dell’Acqua, F.; Gamba, P. Rapid Mapping Using Airborne and Satellite SAR Images. In Radar Remote Sensing of Urban Areas; Soergel, U., Ed.; Remote Sensing and Digital Image Processing; Springer Netherlands: Dordrecht, The Netherlands, 2010; pp. 49–68. ISBN 978-90-481-3751-0. [Google Scholar]
  157. Xiao, F.; Tong, L.; Luo, S. A Method for Road Network Extraction from High-Resolution SAR Imagery Using Direction Grouping and Curve Fitting. Remote Sens. 2019, 11, 2733. [Google Scholar] [CrossRef] [Green Version]
  158. Zhang, Q.; Kong, Q.; Zhang, C.; You, S.; Wei, H.; Sun, R.; Li, L. A New Road Extraction Method Using Sentinel-1 SAR Images Based on the Deep Fully Convolutional Neural Network. Eur. J. Remote Sens. 2019, 52, 572–582. [Google Scholar] [CrossRef] [Green Version]
  159. Harvey, W.A.; McKeown, D.M., Jr. Automatic Compilation of 3D Road Features Using LIDAR and Multi-Spectral Source Data. In Proceedings of the ASPRS Annual Conference, Portland, OR, USA, 28 April–2 May 2008; p. 11. [Google Scholar]
  160. Cheng, L.; Wu, Y.; Wang, Y.; Zhong, L.; Chen, Y.; Li, M. Three-Dimensional Reconstruction of Large Multilayer Interchange Bridge Using Airborne LiDAR Data. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2015, 8, 691–708. [Google Scholar] [CrossRef]
  161. How to Plan for a Leica CityMapper-2 Project. Available online: https://blog.hexagongeosystems.com/how-to-plan-for-a-leica-citymapper-2-project/ (accessed on 21 July 2021).
  162. Leica SPL100 Single Photon LiDAR Sensor. Available online: https://leica-geosystems.com/products/airborne-systems/topographic-lidar-sensors/leica-spl100 (accessed on 21 July 2021).
  163. Communicatie, F.M. ALTM Galaxy PRIME. Available online: https://geo-matching.com/airborne-laser-scanning/altm-galaxy-prime (accessed on 21 July 2021).
  164. Wichmann, V.; Bremer, M.; Lindenberger, J.; Rutzinger, M.; Georges, C.; Petrini-Monteferri, F. Evaluating the Potential of Multispectral Airborne LiDAR for Topographic Mapping and Land Cover Classification. ISPRS Ann. Photogramm. Remote Sens. Spat. Inf. Sci. 2015, 2. [Google Scholar] [CrossRef] [Green Version]
165. Pilarska, M.; Ostrowski, W. Evaluating the Possibility of Tree Species Classification with Dual-Wavelength ALS Data. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2019. [Google Scholar]
  166. RIEGL—RIEGL VUX-240. Available online: http://www.riegl.com/products/unmanned-scanning/riegl-vux-240/ (accessed on 21 July 2021).
  167. Magnoni, A.; Stanton, T.W.; Barth, N.; Fernandez-Diaz, J.C.; León, J.F.O.; Ruíz, F.P.; Wheeler, J.A. Detection Thresholds of Archaeological Features in Airborne LiDAR Data from Central Yucatán. Adv. Archaeol. Pract. 2016, 4, 232–248. [Google Scholar] [CrossRef]
  168. Saito, S.; Yamashita, T.; Aoki, Y. Multiple Object Extraction from Aerial Imagery with Convolutional Neural Networks. Electron. Imaging 2016, 2016, 1–9. [Google Scholar] [CrossRef]
  169. Ventura, C.; Pont-Tuset, J.; Caelles, S.; Maninis, K.-K.; Van Gool, L. Iterative Deep Learning for Road Topology Extraction. arXiv 2018, arXiv:1808.09814. [Google Scholar]
  170. Lian, R.; Huang, L. DeepWindow: Sliding Window Based on Deep Learning for Road Extraction from Remote Sensing Images. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2020, 13, 1905–1916. [Google Scholar] [CrossRef]
  171. Kass, M.; Witkin, A.; Terzopoulos, D. Snakes: Active Contour Models. Int. J. Comput. Vis. 1988, 1, 321–331. [Google Scholar] [CrossRef]
  172. Gruen, A.; Li, H. Semi-Automatic Linear Feature Extraction by Dynamic Programming and LSB-Snakes. Photogramm. Eng. Remote Sens. 1997, 63, 985–994. [Google Scholar]
173. Jagalingam, P.; Hegde, A.V. Review of Quality Metrics for Fused Image. Aquat. Procedia 2015.
174. Song, M.; Civco, D. Road Extraction Using SVM and Image Segmentation. Photogramm. Eng. Remote Sens. 2004, 70, 1365–1371.
175. Mayer, H. Object Extraction in Photogrammetric Computer Vision. ISPRS J. Photogramm. Remote Sens. 2008, 63, 213–222.
176. Kirthika, A.; Mookambiga, A. Automated Road Network Extraction Using Artificial Neural Network. In Proceedings of the 2011 International Conference on Recent Trends in Information Technology (ICRTIT), Chennai, India, 3–5 June 2011; pp. 1061–1065.
177. Li, M.; Zang, S.; Zhang, B.; Li, S.; Wu, C. A Review of Remote Sensing Image Classification Techniques: The Role of Spatio-Contextual Information. Eur. J. Remote Sens. 2014, 47, 389–411.
178. Yang, X.-S.; Cui, Z.; Xiao, R.; Gandomi, A.H.; Karamanoglu, M. Swarm Intelligence and Bio-Inspired Computation: Theory and Applications; Elsevier: Waltham, MA, USA, 2013; ISBN 0-12-405177-4.
179. Goodfellow, I.J.; Pouget-Abadie, J.; Mirza, M.; Xu, B.; Warde-Farley, D.; Ozair, S.; Courville, A.; Bengio, Y. Generative Adversarial Networks. arXiv 2014, arXiv:1406.2661.
180. Zhang, Q.; Couloigner, I. Benefit of the Angular Texture Signature for the Separation of Parking Lots and Roads on High Resolution Multi-Spectral Imagery. Pattern Recognit. Lett. 2006, 27, 937–946.
181. Zhang, Q.; Couloigner, I. Automated Road Network Extraction from High Resolution Multi-Spectral Imagery. In Proceedings of the ASPRS 2006 Annual Conference, Reno, NV, USA, 1–5 May 2006.
182. Manandhar, P.; Marpu, P.R.; Aung, Z. Segmentation Based Traversing-Agent Approach for Road Width Extraction from Satellite Images Using Volunteered Geographic Information. Appl. Comput. Inform. 2018, 17, 131–152.
183. Boggess, J.E. Identification of Roads in Satellite Imagery Using Artificial Neural Networks: A Contextual Approach; Mississippi State University: Starkville, MS, USA, 1993.
184. Doucette, P.; Agouris, P.; Stefanidis, A.; Musavi, M. Self-Organised Clustering for Road Extraction in Classified Imagery. ISPRS J. Photogramm. Remote Sens. 2001, 55, 347–358.
185. Shackelford, A.K.; Davis, C.H. A Hierarchical Fuzzy Classification Approach for High-Resolution Multispectral Data over Urban Areas. IEEE Trans. Geosci. Remote Sens. 2003, 41, 1920–1932.
186. Doucette, P.; Agouris, P.; Stefanidis, A. Automated Road Extraction from High Resolution Multispectral Imagery. Photogramm. Eng. Remote Sens. 2004, 70, 1405–1416.
187. Jin, X.; Davis, C.H. An Integrated System for Automatic Road Mapping from High-Resolution Multi-Spectral Satellite Imagery by Information Fusion. Inf. Fusion 2005, 6, 257–273.
188. Shi, W.; Miao, Z.; Debayle, J. An Integrated Method for Urban Main-Road Centerline Extraction From Optical Remotely Sensed Imagery. IEEE Trans. Geosci. Remote Sens. 2014, 52, 3359–3372.
189. Liu, W.; Zhang, Z.; Chen, X.; Li, S.; Zhou, Y. Dictionary Learning-Based Hough Transform for Road Detection in Multispectral Image. IEEE Geosci. Remote Sens. Lett. 2017, 14, 2330–2334.
190. Sun, T.-L. A Detection Algorithm for Road Feature Extraction Using EO-1 Hyperspectral Images. In Proceedings of the IEEE 37th Annual 2003 International Carnahan Conference on Security Technology, Taipei, Taiwan, 14–16 October 2003; pp. 87–95.
191. Gardner, M.E.; Roberts, D.A.; Funk, C. Road Extraction from AVIRIS Using Spectral Mixture and Q-Tree Filter Techniques. In Proceedings of the AVIRIS Airborne Geoscience Workshop, Santa Barbara, CA, USA, 1 December 2001; Volume 27, p. 6.
192. Noronha, V.; Herold, M.; Roberts, D.; Gardner, M. Spectrometry and Hyperspectral Remote Sensing for Road Centerline Extraction and Evaluation of Pavement Condition. In Proceedings of the Pecora Conference, San Diego, CA, USA, 11–13 March 2002.
193. Huang, X.; Zhang, L. An Adaptive Mean-Shift Analysis Approach for Object Extraction and Classification From Urban Hyperspectral Imagery. IEEE Trans. Geosci. Remote Sens. 2008, 46, 4173–4185.
194. Resende, M.; Jorge, S.; Longhitano, G.; Quintanilha, J.A. Use of Hyperspectral and High Spatial Resolution Image Data in an Asphalted Urban Road Extraction. In Proceedings of the IGARSS 2008—2008 IEEE International Geoscience and Remote Sensing Symposium, Boston, MA, USA, 8–11 July 2008; IEEE: New York, NY, USA, 2008; pp. III-1323–III-1325.
195. Mohammadi, M. Road Classification and Condition Determination Using Hyperspectral Imagery. ISPRS—Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2012, XXXIX-B7, 141–146.
196. Liao, W.; Bellens, R.; Pizurica, A.; Philips, W.; Pi, Y. Classification of Hyperspectral Data Over Urban Areas Using Directional Morphological Profiles and Semi-Supervised Feature Extraction. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2012, 5, 1177–1190.
197. Miao, Z.; Shi, W.; Zhang, H.; Wang, X. Road Centerline Extraction From High-Resolution Imagery Based on Shape Features and Multivariate Adaptive Regression Splines. IEEE Geosci. Remote Sens. Lett. 2013, 10, 583–587.
198. Abdellatif, M.; Peel, H.; Cohn, A.G.; Fuentes, R. Hyperspectral Imaging for Autonomous Inspection of Road Pavement Defects. In Proceedings of the 36th International Symposium on Automation and Robotics in Construction (ISARC), Banff, AB, Canada, 24 May 2019.
199. Tupin, F.; Maitre, H.; Mangin, J.-F.; Nicolas, J.-M.; Pechersky, E. Detection of Linear Features in SAR Images: Application to Road Network Extraction. IEEE Trans. Geosci. Remote Sens. 1998, 36, 434–453.
200. Tupin, F.; Houshmand, B.; Datcu, M. Road Detection in Dense Urban Areas Using SAR Imagery and the Usefulness of Multiple Views. IEEE Trans. Geosci. Remote Sens. 2002, 40, 2405–2414.
201. Wang, Y.; Zheng, Q. Recognition of Roads and Bridges in SAR Images. Pattern Recognit. 1998, 31, 953–962.
202. Dell’Acqua, F.; Gamba, P.; Lisini, G. Road Map Extraction by Multiple Detectors in Fine Spatial Resolution SAR Data. Can. J. Remote Sens. 2003, 29, 481–490.
203. Lisini, G.; Tison, C.; Tupin, F.; Gamba, P. Feature Fusion to Improve Road Network Extraction in High-Resolution SAR Images. IEEE Geosci. Remote Sens. Lett. 2006, 3, 217–221.
204. Hedman, K.; Stilla, U.; Lisini, G.; Gamba, P. Road Network Extraction in VHR SAR Images of Urban and Suburban Areas by Means of Class-Aided Feature-Level Fusion. IEEE Trans. Geosci. Remote Sens. 2010, 48, 1294–1296.
205. He, C.; Liao, Z.; Yang, F.; Deng, X.; Liao, M. Road Extraction From SAR Imagery Based on Multiscale Geometric Analysis of Detector Responses. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2012, 5, 1373–1382.
206. Lu, P.; Du, K.; Yu, W.; Wang, R.; Deng, Y.; Balz, T. A New Region Growing-Based Method for Road Network Extraction and Its Application on Different Resolution SAR Images. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2014, 7, 4772–4783.
207. Saati, M.; Amini, J. Road Network Extraction from High-Resolution SAR Imagery Based on the Network Snake Model. Photogramm. Eng. Remote Sens. 2017, 83, 207–215.
208. Xu, R.; He, C.; Liu, X.; Chen, D.; Qin, Q. Bayesian Fusion of Multi-Scale Detectors for Road Extraction from SAR Images. ISPRS Int. J. Geo-Inf. 2017, 6, 26.
209. Xiong, X.; Jin, G.; Xu, Q.; Zhang, H.; Xu, J. Robust Line Detection of Synthetic Aperture Radar Images Based on Vector Radon Transformation. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2019, 12, 5310–5320.
210. Jiang, M.; Miao, Z.; Gamba, P.; Yong, B. Application of Multitemporal InSAR Covariance and Information Fusion to Robust Road Extraction. IEEE Trans. Geosci. Remote Sens. 2017, 55, 3611–3622.
211. Jin, R.; Zhou, W.; Yin, J.; Yang, J. CFAR Line Detector for Polarimetric SAR Images Using Wilks’ Test Statistic. IEEE Geosci. Remote Sens. Lett. 2016, 13, 711–715.
212. Scharf, D.P. Analytic Yaw–Pitch Steering for Side-Looking SAR With Numerical Roll Algorithm for Incidence Angle. IEEE Trans. Geosci. Remote Sens. 2012, 50, 3587–3594.
213. Clode, S.; Kootsookos, P.J.; Rottensteiner, F. The Automatic Extraction of Roads from LIDAR Data; ISPRS: Istanbul, Turkey, 2004.
214. Clode, S.; Rottensteiner, F.; Kootsookos, P.; Zelniker, E. Detection and Vectorization of Roads from Lidar Data. Photogramm. Eng. Remote Sens. 2007, 73, 517–535.
215. Hu, X.; Li, Y.; Shan, J.; Zhang, J.; Zhang, Y. Road Centerline Extraction in Complex Urban Scenes From LiDAR Data Based on Multiple Features. IEEE Trans. Geosci. Remote Sens. 2014, 52, 7448–7456.
216. Li, Y.; Yong, B.; Wu, H.; An, R.; Xu, H. Road Detection from Airborne LiDAR Point Clouds Adaptive for Variability of Intensity Data. Optik 2015, 126, 4292–4298.
217. Hui, Z.; Hu, Y.; Jin, S.; Yevenyo, Y.Z. Road Centerline Extraction from Airborne LiDAR Point Cloud Based on Hierarchical Fusion and Optimization. ISPRS J. Photogramm. Remote Sens. 2016, 118, 22–36.
218. Zhao, J.; You, S.; Huang, J. Rapid Extraction and Updating of Road Network from Airborne LiDAR Data. In Proceedings of the 2011 IEEE Applied Imagery Pattern Recognition Workshop (AIPR), Washington, DC, USA, 11–13 October 2011; pp. 1–7.
219. Chen, Z.; Liu, C.; Wu, H. A Higher-Order Tensor Voting-Based Approach for Road Junction Detection and Delineation from Airborne LiDAR Data. ISPRS J. Photogramm. Remote Sens. 2019, 150, 91–114.
220. Sithole, G.; Vosselman, G. Bridge Detection in Airborne Laser Scanner Data. ISPRS J. Photogramm. Remote Sens. 2006, 61, 33–46.
221. Boyko, A.; Funkhouser, T. Extracting Roads from Dense Point Clouds in Large Scale Urban Environment. ISPRS J. Photogramm. Remote Sens. 2011, 66, S2–S12.
222. Lin, Y.; Hyyppä, J.; Jaakkola, A. Mini-UAV-Borne LIDAR for Fine-Scale Mapping. IEEE Geosci. Remote Sens. Lett. 2011, 8, 426–430.
223. Soilán, M.; Truong-Hong, L.; Riveiro, B.; Laefer, D. Automatic Extraction of Road Features in Urban Environments Using Dense ALS Data. Int. J. Appl. Earth Obs. Geoinformation 2018, 64, 226–236.
224. Zhou, W. An Object-Based Approach for Urban Land Cover Classification: Integrating LiDAR Height and Intensity Data. IEEE Geosci. Remote Sens. Lett. 2013, 10, 928–931.
225. Matkan, A.A.; Hajeb, M.; Sadeghian, S. Road Extraction from Lidar Data Using Support Vector Machine Classification. Photogramm. Eng. Remote Sens. 2014, 80, 409–422.
226. Morsy, S.; Shaker, A.; El-Rabbany, A. Multispectral LiDAR Data for Land Cover Classification of Urban Areas. Sensors 2017, 17, 958.
227. Karila, K.; Matikainen, L.; Puttonen, E.; Hyyppä, J. Feasibility of Multispectral Airborne Laser Scanning Data for Road Mapping. IEEE Geosci. Remote Sens. Lett. 2017, 14, 294–298.
228. Ekhtari, N.; Glennie, C.; Fernandez-Diaz, J.C. Classification of Airborne Multispectral Lidar Point Clouds for Land Cover Mapping. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2018, 11, 2068–2078.
229. Pan, S.; Guan, H.; Yu, Y.; Li, J.; Peng, D. A Comparative Land-Cover Classification Feature Study of Learning Algorithms: DBM, PCA, and RF Using Multispectral LiDAR Data. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2019, 12, 1314–1326.
230. Pan, S.; Guan, H.; Chen, Y.; Yu, Y.; Nunes Gonçalves, W.; Marcato Junior, J.; Li, J. Land-Cover Classification of Multispectral LiDAR Data Using CNN with Optimized Hyper-Parameters. ISPRS J. Photogramm. Remote Sens. 2020, 166, 241–254.
231. Yu, Y.; Guan, H.; Li, D.; Gu, T.; Wang, L.; Ma, L.; Li, J. A Hybrid Capsule Network for Land Cover Classification Using Multispectral LiDAR Data. IEEE Geosci. Remote Sens. Lett. 2020, 17, 1263–1267.
232. Matikainen, L.; Karila, K.; Litkey, P.; Ahokas, E.; Hyyppä, J. Combining Single Photon and Multispectral Airborne Laser Scanning for Land Cover Classification. ISPRS J. Photogramm. Remote Sens. 2020, 164, 200–216.
233. Tiwari, P.S.; Pande, H.; Pandey, A.K. Automatic Urban Road Extraction Using Airborne Laser Scanning/Altimetry and High Resolution Satellite Data. J. Indian Soc. Remote Sens. 2009, 37, 223.
234. Hu, X.; Tao, C.V.; Hu, Y. Automatic Road Extraction from Dense Urban Area by Integrated Processing of High Resolution Imagery and Lidar Data. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2004, 35, 288–292.
235. Zhang, Z.; Zhang, X.; Sun, Y.; Zhang, P. Road Centerline Extraction from Very-High-Resolution Aerial Image and LiDAR Data Based on Road Connectivity. Remote Sens. 2018, 10, 1284.
236. Feng, Q.; Zhu, D.; Yang, J.; Li, B. Multisource Hyperspectral and LiDAR Data Fusion for Urban Land-Use Mapping Based on a Modified Two-Branch Convolutional Neural Network. ISPRS Int. J. Geo-Inf. 2019, 8, 28.
237. Elaksher, A.F. Fusion of Hyperspectral Images and Lidar-Based Dems for Coastal Mapping. Opt. Lasers Eng. 2008, 46, 493–498.
238. Hsu, S.M.; Burke, H. Multisensor Fusion with Hyperspectral Imaging Data: Detection and Classification. In Handbook of Pattern Recognition and Computer Vision; World Scientific: Singapore, 2005; pp. 347–364. ISBN 978-981-256-105-3.
239. Cao, G.; Jin, Y.Q. A Hybrid Algorithm of the BP-ANN/GA for Classification of Urban Terrain Surfaces with Fused Data of Landsat ETM+ and ERS-2 SAR. Int. J. Remote Sens. 2007, 28, 293–305.
240. Lin, X.; Liu, Z.; Zhang, J.; Shen, J. Combining Multiple Algorithms for Road Network Tracking from Multiple Source Remotely Sensed Imagery: A Practical System and Performance Evaluation. Sensors 2009, 9, 1237–1258.
241. Perciano, T.; Tupin, F.; Hirata, R., Jr.; Cesar, R.M., Jr. A Two-Level Markov Random Field for Road Network Extraction and Its Application with Optical, SAR, and Multitemporal Data. Int. J. Remote Sens. 2016, 37, 3584–3610.
242. Bartsch, A.; Pointner, G.; Ingeman-Nielsen, T.; Lu, W. Towards Circumpolar Mapping of Arctic Settlements and Infrastructure Based on Sentinel-1 and Sentinel-2. Remote Sens. 2020, 12, 2368.
243. Liu, S.; Qi, Z.; Li, X.; Yeh, A.G.-O. Integration of Convolutional Neural Networks and Object-Based Post-Classification Refinement for Land Use and Land Cover Mapping with Optical and SAR Data. Remote Sens. 2019, 11, 690.
244. Lin, Y.; Zhang, H.; Li, G.; Wang, T.; Wan, L.; Lin, H. Improving Impervious Surface Extraction With Shadow-Based Sparse Representation From Optical, SAR, and LiDAR Data. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2019, 12, 2417–2428.
245. Kim, Y.; Kim, Y. Improved Classification Accuracy Based on the Output-Level Fusion of High-Resolution Satellite Images and Airborne LiDAR Data in Urban Area. IEEE Geosci. Remote Sens. Lett. 2014, 11, 636–640.
246. Liu, L.; Lim, S. A Framework of Road Extraction from Airborne Lidar Data and Aerial Imagery. J. Spat. Sci. 2016, 61, 263–281.
247. Chen, Z.; Fan, W.; Zhong, B.; Li, J.; Du, J.; Wang, C. Corse-to-Fine Road Extraction Based on Local Dirichlet Mixture Models and Multiscale-High-Order Deep Learning. IEEE Trans. Intell. Transp. Syst. 2019, 21, 4283–4293.
248. Bruzzone, L.; Carlin, L. A Multilevel Context-Based System for Classification of Very High Spatial Resolution Images. IEEE Trans. Geosci. Remote Sens. 2006, 44, 2587–2600.
249. Wang, J.; Qin, Q.; Yang, X.; Wang, J.; Ye, X.; Qin, X. Automated Road Extraction from Multi-Resolution Images Using Spectral Information and Texture. In Proceedings of the 2014 IEEE Geoscience and Remote Sensing Symposium, Quebec City, QC, Canada, 13–18 July 2014; pp. 533–536.
250. Zhao, W.; Du, S. Spectral–Spatial Feature Extraction for Hyperspectral Image Classification: A Dimension Reduction and Deep Learning Approach. IEEE Trans. Geosci. Remote Sens. 2016, 54, 4544–4554.
251. Hamraz, H.; Jacobs, N.B.; Contreras, M.A.; Clark, C.H. Deep Learning for Conifer/Deciduous Classification of Airborne LiDAR 3D Point Clouds Representing Individual Trees. ISPRS J. Photogramm. Remote Sens. 2019, 158, 219–230.
252. Jia, J.; Chen, J.; Zheng, X.; Wang, Y.; Guo, S.; Sun, H.; Jiang, C.; Karjalainen, M.; Karila, K.; Duan, Z.; et al. Tradeoffs in the Spatial and Spectral Resolution of Airborne Hyperspectral Imaging Systems: A Crop Identification Case Study. IEEE Trans. Geosci. Remote Sens. 2021, 1–18.
Table 1. Main parameters of typical high-resolution satellites.

| Satellite | Launch (Year) | Swath (km) | PAN (m) | R (m) | G (m) | B (m) | NIR (m) |
|---|---|---|---|---|---|---|---|
| Gaofen 1 (CN) | 2013 [87] | 70 | 2 | 8 | 8 | 8 | 8 |
| Gaofen 2 (CN) | 2014 [88] | | 0.8 | 3.2 | 3.2 | 3.2 | 3.2 |
| Gaofen 6 (CN) | 2015 [89] | | 2 | 8 | 8 | 8 | 8 |
| SuperView (CN) | 2016 [90] | 12 | 0.5 | 2 | 2 | 2 | 2 |
| GeoEye 1 (US) | 2008 [91] | 15.2 | 0.41 | 1.65 | 1.65 | 1.65 | 1.65 |
| IKONOS (US) | 1999 [85] | 11.3 | 1 | 4 | 4 | 4 | 4 |
| PlanetScope (US) | 2018 [92] | 24.6 | / | 3 | 3 | 3 | 3 |
| QuickBird (US) | 2001 [93] | 16.5 | 0.6 | 2.4 | 2.4 | 2.4 | 2.4 |
| WorldView 1 (US) | 2007 [94] | 17 | 0.5 | / | / | / | / |
| WorldView 2 (US) | 2009 [91] | 17 | 0.5 | 2 | 2 | 2 | 2 |
| WorldView 3 (US) | 2014 [95] | 13.1 | 0.31 | 1.24 | 1.24 | 1.24 | 1.24 |
| WorldView 4 (US) | 2016 [96] | 13.1 | 0.31 | 1.24 | 1.24 | 1.24 | 1.24 |
| OrbView 3 (US) | 2003 [97] | 8 | 1 | 4 | 4 | 4 | 4 |
| RapidEye (DE) | 2008 [98] | 77 | / | 6.5 | 6.5 | 6.5 | 6.5 |
| KOMPSAT 2 (KR) | 2006 [99] | 15 | 1 | 4 | 4 | 4 | 4 |
| KOMPSAT 3 (KR) | 2012 [100] | 16 | 0.7 | 2.8 | 2.8 | 2.8 | 2.8 |
| KOMPSAT 3A (KR) | 2015 [101] | 12 | 0.55 | 2.2 | 2.2 | 2.2 | 2.2 |
| Pléiades 1A (FR) | 2011 [102] | 20 | 0.7 | 2.8 | 2.8 | 2.8 | 2.8 |
| Pléiades 1B (FR) | 2012 [103] | 20 | 0.7 | 2.8 | 2.8 | 2.8 | 2.8 |
| SPOT 6 (FR) | 2012 [104] | 60 | 1.5 | 6 | 6 | 6 | 6 |
| SPOT 7 (FR) | 2014 [105] | 60 | 1.5 | 6 | 6 | 6 | 6 |
| DubaiSat 1 (AE) | 2009 [106] | 12 | 2.5 | 5 | 5 | 5 | 5 |
| DubaiSat 2 (AE) | 2013 [107] | 12 | 1 | 4 | 4 | 4 | 4 |
PAN: panchromatic. R: red. G: green. B: blue. NIR: near-infrared. WorldView 2 [91] and WorldView 3 [95] have four additional multispectral bands (coastal, yellow, red edge, and NIR-2), and RapidEye has one additional multispectral band (red edge) [98].
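As a rough illustration of what these resolutions mean for road extraction, the sketch below (ours, not from the review or the cited sensor documents) counts how many pixels a road would span at a few of the GSDs in Table 1; the 7.5 m two-lane road width is an assumed example value.

```python
# Illustrative only: how many pixels span a road of a given width at the
# panchromatic (PAN) and multispectral (MS) GSDs listed in Table 1.
# The 7.5 m two-lane road width is an assumption, not a value from the review.

sensors = {
    "WorldView 3": {"pan": 0.31, "ms": 1.24},
    "SuperView":   {"pan": 0.50, "ms": 2.00},
    "SPOT 6":      {"pan": 1.50, "ms": 6.00},
}

ROAD_WIDTH_M = 7.5  # assumed two-lane road width in metres

for name, gsd in sensors.items():
    pan_px = ROAD_WIDTH_M / gsd["pan"]
    ms_px = ROAD_WIDTH_M / gsd["ms"]
    print(f"{name}: ~{pan_px:.0f} PAN px, ~{ms_px:.1f} MS px across the road")
```

At SPOT 6's 6 m multispectral GSD, such a road is barely one pixel wide, which helps explain why sub-metre sensors dominate road extraction work.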
Table 2. Typical airborne and spaceborne hyperspectral sensors.

| Name | References | Platform | Wavelength Range (μm) | Channels | Spectral Resolution (nm) | IFOV (mrad) | FOV/Swath |
|---|---|---|---|---|---|---|---|
| AISA-FENIX 1K | [122], 2018 | Airborne | 0.38–0.97, 0.97–2.5 | 348, 246 | ≤4.5, ≤12 | 0.68 | 40° |
| APEX | [123], 2015 | Airborne | 0.372–1.015, 0.94–2.54 | 114, 198 | 0.45–0.75, 5–10 | 0.489 | 28.1° |
| AVIRIS-NG | [124,125], 2016, 2017 | Airborne | 0.38–2.52 | 430 | 5 | 1 | 34° |
| CASI-1500 / SASI-1000A / TASI-600A | [126], 2014 | Airborne | 0.38–1.05, 0.95–2.45, 8–11.5 | 288, 100, 32 | 2.3, 15, 110 | 0.49, 1.22, 1.19 | 40° |
| AMMIS | [127,128], 2019, 2020 | Airborne | 0.4–0.95, 0.95–2.5, 8–12.5 | 256, 512, 128 | 2.34, 3, 32 | 0.25, 0.5, 1 | 40° |
| SYSIPHE | [129], 2016 | Airborne | 0.4–1, 0.95–2.5, 3–5.4, 8.1–11.8 | 560 (total) | 5, 6.1, 11 cm⁻¹, 5 cm⁻¹ | 0.25 | 15° |
| HSI | [130], 1996 | LEWIS satellite | 0.4–1, 1–2.5 | 128, 256 | 5, 5.8 | 0.057 | 7.68 km |
| Hyperion | [131], 2003 | EO-1 satellite | 0.4–1, 0.9–2.5 | 242 (total) | 10 | 0.043 | 7.7 km |
| CHRIS | [132], 2004 | PROBA-1 satellite | 0.4–1.05 | 18/62 | 1.25–11 | 0.03 | 18.6 km |
| CRISM | [133], 2007 | MRO satellite | 0.362–1.053, 1.002–3.92 | 544 (total) | 6.55 | 0.061 | >7.5 km |
| AHSI | [134], 2019 | Gaofen-5 satellite | 0.39–2.51 | 330 (total) | 5, 10 | 0.043 | 60 km |
IFOV: instantaneous field of view.
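The IFOV and FOV/Swath columns of Table 2 are linked by simple nadir-viewing geometry: ground pixel size is roughly altitude x IFOV, and swath is roughly 2 x altitude x tan(FOV/2). The sketch below works through that arithmetic for two airborne rows; the 2000 m flight altitude is an assumed example value, not taken from the review.

```python
import math

# Nadir-viewing geometry for the airborne sensors in Table 2. The flight
# altitude used here is an assumed example value.

def ground_pixel_m(altitude_m: float, ifov_mrad: float) -> float:
    """Approximate ground pixel size at nadir: altitude * IFOV (small-angle)."""
    return altitude_m * ifov_mrad / 1000.0

def swath_m(altitude_m: float, fov_deg: float) -> float:
    """Swath width of a flat-terrain nadir scan: 2 * h * tan(FOV / 2)."""
    return 2.0 * altitude_m * math.tan(math.radians(fov_deg) / 2.0)

ALTITUDE_M = 2000.0  # assumed flight altitude
for name, ifov_mrad, fov_deg in [("AMMIS (VNIR)", 0.25, 40.0), ("APEX", 0.489, 28.1)]:
    print(f"{name}: ~{ground_pixel_m(ALTITUDE_M, ifov_mrad):.2f} m pixel, "
          f"~{swath_m(ALTITUDE_M, fov_deg):.0f} m swath at {ALTITUDE_M:.0f} m AGL")
```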
Table 3. Examples of currently available commercial ALS systems.

| System | Special Characteristics | Wavelength | Horizontal and Elevation Accuracy | Altitude | Pulse Repetition Frequency | Point Density |
|---|---|---|---|---|---|---|
| Leica Hyperion2+ [161], 2021 | Multiple pulses in the air measured | 1064 nm | <13 cm, <5 cm | 300–5500 m | up to 2000 kHz | 2 pts/m² (4000 m AGL), 40 pts/m² (600 m AGL) |
| Leica SPL [162], 2021 | Single photon | 532 nm | <15 cm, <10 cm | 2000–4500 m | 20–60 kHz | 6 million pts/s, 20 pts/m² (4000 m AGL) |
| Optech Galaxy Prime [163], 2020 | Wide-area mapping | 1064 nm | 1/10,000 × altitude, <0.03–0.25 m | 150–6000 m | 10–1000 kHz | 1 million pts/s, 60 pts/m² (500 m AGL), 2 pts/m² (3000 m AGL) |
| Optech Titan [164], 2015 | Three wavelengths | 1550 nm, 1064 nm, 532 nm | 1/7500 × altitude, <5–10 cm | 300–2000 m | 3 × 50–300 kHz | 45 pts/m² (400 m AGL) |
| Riegl VQ-1560i-DW [165], 2019 | Dual wavelength, multiple pulses in the air measured | 532 nm, 1064 nm | / | 900–2500 m | 2 × 700–1000 kHz | 2 × 666,000 pts/s, 20 pts/m² (1000 m AGL) |
| Riegl VUX-240 [166], 2021 | UAV | 1550 nm | <0.05 m, <0.1 m | 250–1400 m | 150–1800 kHz | 60 pts/m² (300 m AGL) |

AGL: above ground level.
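The point densities quoted in Table 3 follow directly from pulse repetition frequency (PRF), flying height, scan FOV, and ground speed: for a single pass, mean density is roughly PRF / (swath width x ground speed). The sketch below reproduces the order of magnitude of the Optech Titan figure; the 40° FOV and 65 m/s aircraft speed are our assumptions, not manufacturer values.

```python
import math

# Back-of-the-envelope check of the point densities in Table 3:
# mean density ~ PRF / (swath width * ground speed) for a line scanner.

def mean_point_density(prf_hz: float, altitude_m: float,
                       fov_deg: float, speed_m_s: float) -> float:
    """Points per square metre for a single-pass, flat-terrain scan."""
    swath = 2.0 * altitude_m * math.tan(math.radians(fov_deg) / 2.0)
    return prf_hz / (swath * speed_m_s)

# Loosely modelled on the Optech Titan row (3 channels * 300 kHz, 400 m AGL);
# the 40 degree FOV and 65 m/s ground speed are assumed values.
density = mean_point_density(prf_hz=3 * 300e3, altitude_m=400.0,
                             fov_deg=40.0, speed_m_s=65.0)
print(f"~{density:.0f} pts/m^2")  # same order as the 45 pts/m^2 in the table
```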
Table 4. Comparison of different data-driven methods on the Massachusetts road dataset.

| Method | Advantages | Disadvantages | References | Precision |
|---|---|---|---|---|
| Patch-based DCNN | Weight sharing, fewer parameters | Inefficient; requires large-scale training samples | [168], 2016; [38], 2017 | 0.905; 0.917 |
| FCN-based | Arbitrary image size, end-to-end training | Low fitness, low position accuracy, lack of spatial consistency | [36], 2016 | 0.710 |
| DeconvNet-based | Arbitrary image size, end-to-end training, better fitness | High computing and storage cost | [49], 2017; [51], 2018 | 0.858; 0.919 |
| GAN-based | More consistent results | Non-convergence, gradient vanishing, and model collapse | [65], 2017; [66], 2017 | 0.841; 0.883 |
| Graph-based | High connectivity | Complex graph reconstruction and optimisation | [169], 2018; [170], 2020 | 0.835; 0.823 |
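To make the "FCN-based" row concrete, the following is a minimal fully convolutional road segmenter in PyTorch. It is an illustrative toy under our own assumptions (the name, layer counts, and channel widths are arbitrary), not the architecture of any method cited in Table 4.

```python
import torch
import torch.nn as nn

# Minimal FCN-style binary road segmenter: a strided-convolution encoder and
# a transposed-convolution decoder that restores the input resolution, so the
# network accepts arbitrary image sizes (here, any size divisible by 4).

class TinyRoadFCN(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(                      # downsample 4x
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(                      # upsample 4x
            nn.ConvTranspose2d(32, 16, 2, stride=2), nn.ReLU(),
            nn.ConvTranspose2d(16, 1, 2, stride=2),        # 1-channel road logits
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))               # sigmoid gives P(road)

model = TinyRoadFCN()
image = torch.randn(1, 3, 256, 256)                        # dummy RGB tile
print(model(image).shape)                                  # torch.Size([1, 1, 256, 256])
```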
Table 5. Summary of road extraction using hyperspectral images.

| Method | Platform | Characteristic | References |
|---|---|---|---|
| Traditional processing of spectral information | Spaceborne | Extracts main roads | [190], 2003 |
| Spectral mixture and Q-tree filter | Airborne | Assesses road quality | [191], 2001 |
| Pixel-to-pixel classification | Airborne | Extracts asphalted urban roads | [194], 2008 |
| Spectral angle mapper | Airborne | Road classification and condition determination | [195], 2012 |
| Computing the angle from spectral responses | UAV | Detects pavement defects | [198], 2019 |
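The last two rows of Table 5 both reduce to one formula: the spectral angle between a pixel spectrum and a reference spectrum. A minimal sketch is given below; the random cube, the asphalt reference, and the 0.1 rad threshold are illustrative assumptions rather than values from the cited studies.

```python
import numpy as np

def spectral_angle(cube: np.ndarray, reference: np.ndarray) -> np.ndarray:
    """Angle (radians) between each pixel of an (H, W, B) cube and a (B,) reference."""
    dots = cube @ reference
    norms = np.linalg.norm(cube, axis=-1) * np.linalg.norm(reference)
    return np.arccos(np.clip(dots / norms, -1.0, 1.0))

cube = np.random.rand(100, 100, 128)   # stand-in hyperspectral cube
asphalt_ref = np.random.rand(128)      # would come from a spectral library
angles = spectral_angle(cube, asphalt_ref)
road_mask = angles < 0.1               # small angle: spectrally close to asphalt
print(road_mask.sum(), "asphalt-like pixels")
```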
Table 6. Summary of road extraction using SAR images.

| Method | Category | Characteristic | References | Precision |
|---|---|---|---|---|
| Multiple detectors | Heuristic | Fusion of different pre-processing algorithms and road extractors | [202], 2003 | 0.580 correctness |
| Line detection based on the vector Radon transform | Heuristic | Suitable for SAR images from different platforms | [209], 2019 | 0.700–0.940 correctness |
| Multitemporal InSAR covariance and information fusion | Heuristic | Uses interferometric information | [210], 2017 | 0.816 correctness |
| FCN-based | Data-driven | Automatic road extraction | [158], 2019 | 0.921 |
| FCN-8s | Data-driven | Lacks efficiency | [46], 2018 | 0.717 |
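As a plain scalar stand-in for the vector Radon transform row of Table 6 (the cited variant is more elaborate), the sketch below locates a synthetic bright stripe as a peak in a Radon sinogram; it assumes scikit-image is available.

```python
import numpy as np
from skimage.transform import radon

# A linear feature integrates to a strong peak in the Radon sinogram; the
# peak's coordinates encode the line's orientation and offset.

image = np.zeros((128, 128))
image[:, 60:63] = 1.0                          # synthetic vertical "road" stripe

theta = np.linspace(0.0, 180.0, 180, endpoint=False)
sinogram = radon(image, theta=theta)           # line integrals over (offset, angle)

offset_idx, angle_idx = np.unravel_index(np.argmax(sinogram), sinogram.shape)
print(f"strongest line near theta = {theta[angle_idx]:.0f} deg")  # ~0 deg here
```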
Table 7. Summary of road extraction using LiDAR data.

| Method | Category | Characteristic | References | Correctness |
|---|---|---|---|---|
| Hierarchical fusion and optimisation | ALS | Extracts road centrelines | [217], 2016 | 0.914 |
| Point-based classification; raster-based classification | MS-ALS | Land cover classification | [226], 2017 | 0.920; 0.860 |
| Object-based image analysis and random forest | MS-ALS | Road detection and road surface classification | [227], 2017 | 0.805 |
| Support vector machine | MS-ALS | Three asphalt types and a concrete class | [228], 2018 | 0.947 (overall accuracy) |
| Hybrid capsule network | MS-ALS | Land cover classification | [231], 2020 | 0.979 (overall accuracy) |
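Most of the ALS pipelines summarised in Table 7 start from the same primitive: keep near-ground points whose return intensity falls in an asphalt-like band, then refine. The toy sketch below shows only that first step; the height tolerance and intensity band are our assumptions, not thresholds from the cited studies.

```python
import numpy as np

# Toy first stage of ALS road extraction: filter a point cloud by height
# above ground and by an asphalt-like intensity band.

rng = np.random.default_rng(0)
n = 10_000
points = np.column_stack([
    rng.uniform(0, 100, n),     # x (m)
    rng.uniform(0, 100, n),     # y (m)
    rng.normal(0.0, 2.0, n),    # height above ground (m)
    rng.uniform(0, 255, n),     # return intensity (8-bit scale)
])

HEIGHT_TOL_M = 0.3              # roads lie on the ground surface
INT_LO, INT_HI = 20, 70         # assumed asphalt intensity band

is_ground = np.abs(points[:, 2]) < HEIGHT_TOL_M
is_asphalt_like = (points[:, 3] > INT_LO) & (points[:, 3] < INT_HI)
road_candidates = points[is_ground & is_asphalt_like]
print(f"{len(road_candidates)} road candidate points out of {n}")
```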
Table 8. Summary of road extraction based on different data sources.

| Data | Resolution/Mapping Unit | Extent | Advantages | Roads Extracted Mostly by |
|---|---|---|---|---|
| High spatial resolution [71], 2020 | 0.5–10 m | Local/regional/global | Most tools available, “basic” software | Colour, texture |
| Hyperspectral [198], 2019 | 0.25–30 m (>100 channels) | Local/regional | Spectral information | Colour, texture, and spectral features |
| SAR [72], 2014 | 1–10 m | Local/regional/global | Sees through clouds, rapid mapping | Linear features/edges |
| ALS [75], 2017 | 0.25–2 m | Local (nationwide) | Height information | 3D geometry (intensity) |
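Because the sources compared in Table 8 have complementary strengths, even the simplest decision-level fusion can be useful. Below is a minimal majority-vote sketch; the random masks merely stand in for per-source road extraction results and do not come from any cited method.

```python
import numpy as np

# Decision-level fusion: call a pixel "road" where at least two of three
# independent extractors agree. The random masks are placeholders for real
# optical, SAR, and ALS road masks.

rng = np.random.default_rng(42)
optical_mask = rng.random((256, 256)) > 0.5
sar_mask = rng.random((256, 256)) > 0.5
als_mask = rng.random((256, 256)) > 0.5

votes = optical_mask.astype(int) + sar_mask.astype(int) + als_mask.astype(int)
fused_road_mask = votes >= 2
print(f"fused road fraction: {fused_road_mask.mean():.2f}")
```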