Article

UAV-Based Oblique Photogrammetry for Outdoor Data Acquisition and Offsite Visual Inspection of Transmission Line

State Key Laboratory of Information Engineering in Surveying, Mapping and Remote Sensing, Wuhan University, Wuhan 430072, China
* Author to whom correspondence should be addressed.
Submission received: 12 September 2016 / Revised: 7 March 2017 / Accepted: 14 March 2017 / Published: 16 March 2017
(This article belongs to the Special Issue Recent Trends in UAV Remote Sensing)

Abstract

Regular inspection of transmission lines is essential but has traditionally been carried out by approaches that are either labor intensive or very expensive. 3D reconstruction offers an alternative that can satisfy the need for accurate, low-cost inspection. This paper exploits an unmanned aerial vehicle (UAV) for outdoor data acquisition and conducts accuracy assessment tests to explore its potential for offsite inspection of transmission lines. Firstly, an oblique photogrammetric system, integrating a low-cost dual-camera imaging system, an onboard dual-frequency GNSS (Global Navigation Satellite System) receiver and a fixed ground master GNSS station, is designed to acquire images with ground resolution better than 3 cm. Secondly, an image orientation method that accounts for the oblique imaging geometry of the dual-camera system is applied to detect enough tie-points to build stable image connections in both the along-track and across-track directions. To achieve the best geo-referencing accuracy and evaluate model measurement precision, signalized ground control points (GCPs) and model key points were surveyed. Finally, accuracy assessment tests, covering both absolute orientation precision and relative model precision, were conducted with different GCP configurations. Experiments show that the images captured by the designed photogrammetric system contain sufficient information about power pylons from different viewpoints. Quantitative assessment demonstrates that, even with few GCPs for image orientation, the absolute and relative accuracies of image orientation and model measurement are better than 0.3 and 0.2 m, respectively. For regular inspection of transmission lines, the proposed solution is therefore a viable alternative with competitive accuracy, lower operational complexity and considerable savings in economic cost.

1. Introduction

Modern society is increasingly dependent on the reliable supply and distribution of electric power. To minimize power outages, transmission lines are regularly inspected by electricity companies to detect potential risks such as vegetation encroachment, conductor corrosion and pylon overload [1,2]. Vegetation encroachment causes power failures most frequently and accounts for the majority of the power line inspection workload [3].
The most common approaches for power line inspection are foot patrol and helicopter-assisted inspection. During foot patrol, two inspectors typically walk toward each other along the power line corridor to visually inspect the lines. In helicopter-assisted inspection, the helicopter flies over the power lines while inspectors use instruments such as binoculars or video cameras to record images for onboard or laboratory checking. Because power lines are usually located in mountainous areas and visual inspection is subjective, both methods are time consuming or very expensive as well as error-prone [1]. Furthermore, as power grids expand and voltage levels rise, these traditional manual operation modes increasingly fail to satisfy the demand for fast, safe and low-cost inspection.
3D models of power pylons and lines could support transmission line management in many aspects, such as pylon geometry calculation for electrical parameter prediction and distance measurement between power lines and vegetation for encroachment inspection. Compared to traditional manual operation, this strategy provides accurate measurement capability for obtaining the essential information for transmission line inspection in a 3D measurable environment. The most commonly used data sources for 3D reconstruction are laser scanning point clouds and optical images. Until recently, LiDAR (Light Detection and Ranging) has been the dominant approach for acquiring dense 3D point clouds with high accuracy [4,5], which can be used to extract semantic information of transmission lines [6], especially for 3D reconstruction. Much research has focused on 3D reconstruction of power pylons and lines from point clouds acquired by laser scanning. McLaughlin et al. [7] presented an algorithm to aid monitoring of high-voltage transmission lines, consisting of initial classification of the point cloud and segmentation of the extracted point data. Considering the environmental complexity of lower-voltage transmission lines, Zhu et al. [8] utilized statistical analysis and 2D image-based processing to extract transmission line point clouds in forest environments. Guo et al. [9,10] proposed methods for 3D reconstruction of power pylons and lines from airborne LiDAR data. Point cloud classification was performed in both methods; for power line reconstruction, distribution properties and contextual information were considered, and a predefined model library was used for power pylon reconstruction. Because of the different conditions in urban environments, Cheng et al. [11] preferred vehicle-borne laser data for urban power line reconstruction, based on a voxel-based hierarchical method for power line point extraction.
However, almost all these methods rely on point cloud classification results for model reconstruction. Since transmission lines are usually located in mountainous regions and surrounded by complex environments, the classification procedure is not easily addressed, especially for power pylon point extraction.
Although LiDAR systems benefit from direct geo-referencing, they are much more expensive than cameras. Optical images, including satellite remote sensing images and aerial photogrammetric images, are the other frequently used data source for visual interpretation in many applications. Thus, it is rational and cost-effective to use only images for power line inspection [12]. Some experiments have been reported in the literature. Mills et al. [13] presented a three-stage method for vegetation management in power line corridors, including tree detection, relative positioning with respect to the power line and vegetation height estimation. Ahmad et al. [14] suggested monitoring dangerous vegetation by installing a camera on each transmission pole to obtain images of the power line corridor; the usefulness of this method was limited because it cannot image a whole corridor with a long span. Sun et al. [15] designed an airborne system with two cameras for automatic measurement of the distance between vegetation and power lines based on stereo vision techniques. In their method, the vegetation surface was first recovered by image matching, and power lines between successive pylons were modeled as catenaries after successful pylon detection. Measurement accuracy was not guaranteed because ray intersection angles were relatively small for image pairs with short baselines. Line fitting methods are not applicable to the complex steel structure of power pylons, which makes pylon detection and hanging point identification much more difficult. For 3D model reconstruction of power pylons and lines from optical images, two factors are important. The first is that a flexible data acquisition strategy should be adopted because of the elongated and complex geo-location structure of transmission lines. The second is that images should be captured at high spatial resolution in order to extract the thin structure of power lines. However, platforms such as satellites and aerial planes cannot satisfy the need for flexible data acquisition; in addition, the spatial resolution of images captured from these platforms is usually at the sub-meter level, which does not facilitate power line extraction. Hence, other platforms and techniques should be developed to meet these new demands.
In recent years, the unmanned aerial vehicle (UAV) has emerged as a new platform for data acquisition with a growing number of applications [16,17,18,19]. UAV platforms feature rapid data acquisition, low cost and ease of use [20,21,22]. Due to their fast and flexible data acquisition ability, UAV-based systems can be used in agricultural and environmental management, emergency response, engineering monitoring and other fields. Aicardi et al. [23] proposed a technique to automate the co-registration of multi-temporal UAV images without GCPs for dynamic scene monitoring. Bendig et al. [24] designed a mini-UAV integrated with either a thermal imaging system or a multi-spectral imaging system for computation of the NDVI index. In the study of [25], a rotary-wing UAV equipped with two cameras and a laser scanner was promoted for emergency response, which demands rapid data acquisition. Matsuoka et al. [26] reported an experiment investigating the feasibility of deformation measurement of a large-scale solar power plant using images acquired by a non-metric digital camera onboard a micro UAV platform. Based on the stereo vision principle of photogrammetry, UAV systems integrated with digital cameras, GNSS, IMU or other sensors can acquire low-altitude, high-resolution images and achieve 3D reconstruction of natural or man-made scenes [27,28,29,30,31]. Several tests have verified the validity of the UAV-based photogrammetric solution. Gonçalves et al. [32] demonstrated the use of UAV photogrammetry for topographic monitoring of coastal areas, with average accuracy matching conventional aerial photogrammetry. Diaz-Varela et al. [33] tested the performance of UAVs for olive crown parameter estimation through image data acquisition, ortho-mosaicking and digital surface generation. Rinaudo et al. [34] described a low-cost UAV system for image data acquisition of an archaeological site; successful orientation and ortho-photo generation demonstrated its usefulness for excavation management. Compared with previous solutions, a lightweight UAV system can dramatically reduce labor intensity and offers a more efficient approach with lower operational complexity and considerable gains in economic cost. In addition, flexible flight control and lower flight altitude satisfy the basic needs of transmission line data acquisition. While diverse applications of UAV-based photogrammetry across different fields of science have been reported, little is known about 3D reconstruction of power pylons and lines for the purpose of transmission line inspection.
This paper exploits a UAV-based photogrammetric system for outdoor data acquisition and conducts model accuracy assessment tests to explore its potential for offsite inspection of transmission lines. Firstly, a UAV-based oblique photogrammetric system is designed [35], which integrates a dual-camera system for image acquisition, an onboard GNSS receiver and a fixed ground master GNSS station for DGNSS (Differential Global Navigation Satellite System). Secondly, a data processing solution for image orientation and model accuracy evaluation of transmission lines is presented after an analysis of the requirements of oblique photogrammetry. The technical framework for outdoor data acquisition and model accuracy assessment of transmission lines is shown in Figure 1.
This paper is organized as follows. Section 2 introduces the design and integration of the UAV-based oblique photogrammetric system. Materials and methods for image orientation and model accuracy assessment of transmission lines are presented in Section 3, followed by analysis of image orientation results and model measurement accuracy in Section 4. Several aspects are discussed in Section 5, including the dual-camera system, model accuracy and the potential usage of the presented solution for transmission line inspection. Finally, Section 6 presents the conclusion and future studies.

2. UAV Photogrammetric System

2.1. Imaging Geometry for Transmission Line

Figure 2 illustrates the imaging geometry of the UAV oblique photogrammetry system for a transmission line in both the along-corridor and across-corridor directions, where h and H stand for the heights of the power pylon and flight trajectory, respectively, and w is the distance between the ground projection of the flight trajectory and the base center of the power pylon, which is restricted by the minimum operation distance. Given the camera's mounting angles α and β and its FOV (Field Of View) θ, the ground coverage width is W, where α and β represent the roll and pitch angles of the camera.
To ensure that the ground extent represented by w is completely covered during one campaign, the roll angle α should be less than θ/2 when the pitch angle β equals zero. The relationship between H, α and W, illustrated by Figure 2a, is represented by Formulas (1) and (2), which indicate that the roll angle α is the most important parameter of the integrated photogrammetric system once the safety distance w and camera FOV θ are determined. Meanwhile, the height of the flight trajectory directly influences the value of α to some extent; the same holds for the pitch angle β. The relationship expressed by these two formulas should be carefully considered when designing the photogrammetric system and configuring the camera mounting angles.
tan α = w / H    (1)
W = H × (tan(θ/2 + α) + tan(θ/2 − α))    (2)
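As a quick sanity check on these relations, Formulas (1) and (2) can be evaluated in a few lines of Python (the function names and the example values are illustrative):

```python
import math

def roll_for_offset(w, H):
    """Roll angle alpha (degrees from nadir) that points the optical axis
    at a ground offset w from the flight track, per Formula (1)."""
    return math.degrees(math.atan2(w, H))

def coverage_width(H, theta_deg, alpha_deg):
    """Across-track ground coverage W for FOV theta and roll alpha at
    height H, per Formula (2)."""
    half = math.radians(theta_deg / 2.0)
    a = math.radians(alpha_deg)
    return H * (math.tan(half + a) + math.tan(half - a))

# Example with the values used later in the paper: w = 35 m, H = 120 m
alpha = roll_for_offset(35.0, 120.0)       # about 16.3 degrees
W = coverage_width(120.0, 54.16, alpha)    # coverage width in meters
```

Note that W grows with α (the tilted footprint is wider than the nadir one), which is why the oblique configuration still covers the corridor with a modest roll angle.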

2.2. Camera Selection Strategy

Considering the most relevant factors, including the payload limitation of the UAV platform and the desired GSD (Ground Sample Distance), the Sony RX1R digital camera was selected as the image recording instrument for the integrated photogrammetric system. The camera has a 24 Mpixel (6000 by 4000 pixels) sensor with physical dimensions of 35.8 mm by 23.9 mm, FOV angles of 54.16° and 37.70° for the long and short sensor sides, respectively, and a focal length of 35 mm. Thus, the maximum roll angle α is 27.08° or 18.85° when the flight is performed along the short or long sensor side, respectively (the roll angle α should be less than θ/2). The camera weighs about 482 g including battery and memory card, which reduces the overload hazard.
According to the basic parameters of the RX1R camera, the ground coverage and GSD under different flight heights are calculated and presented in Table 1. For example, at a flight height of 100 m, an image covers 102.28 m by 68.28 m on the long and short sides, and the GSD is about 1.70 cm. The flight height usually varies from 50 to 150 m for transmission line data acquisition; thus, the camera can acquire images with a ground resolution better than 3 cm.
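The Table 1 figures follow directly from the camera parameters; a minimal Python sketch of the calculation (a simple pinhole model, with lens distortion and tilt ignored, and the function name an assumption):

```python
def gsd_and_coverage(H, focal_mm=35.0, sensor_mm=(35.8, 23.9), px_long=6000):
    """GSD (cm) and nadir ground coverage (m, long by short side) at flight
    height H (m), from the camera parameters given in the text."""
    scale = H / (focal_mm / 1000.0)                  # image-to-ground scale
    cover = tuple(s / 1000.0 * scale for s in sensor_mm)
    gsd_cm = cover[0] / px_long * 100.0              # meters per pixel -> cm
    return gsd_cm, cover

gsd, (long_side, short_side) = gsd_and_coverage(100.0)
# Table 1 at 100 m: GSD about 1.70 cm, coverage about 102.28 m by 68.28 m
```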

2.3. Design and Integration of Oblique Photogrammetric System

For data acquisition of transmission lines, two factors are important. The first is that images of the power pylons should be captured in both the along-corridor and across-corridor directions, which dramatically facilitates key point measurement from stereo image pairs and increases model measurement accuracy. The second is that transmission lines are elongated, so obtaining side images through additional flights in the across-corridor direction would be labor intensive.
The designed and manufactured photogrammetric system is shown in Figure 3a. It mainly consists of a dual-camera imaging instrument with adjustable roll and pitch angles, a mobile dual-frequency GNSS receiver, a fixed ground master GNSS station and a central control unit that maintains GNSS signal reception and camera triggering (the mobile GNSS receiver and ground master GNSS station are not shown). The dual-camera imaging system simultaneously records images of power pylons in the along-track and across-track directions to obtain as much information as possible in a single flight. Because of its small size and low power consumption, the NovAtel SPAN on OEM615 dual-frequency GNSS receiver board was adopted as the onboard receiver [36]. The onboard and ground master receivers together enable differential GNSS for more precise image positioning.
A multi-rotor UAV is used in this work. The UAV comprises six rotors, a positioning sensor providing a rough location, an autopilot circuit board and a flight controller with display and control systems. Energy is supplied by two lithium batteries lasting approximately 30 min; considering the power consumed by winds, takeoff and landing, the endurance of the UAV is no more than 20 min. The UAV provides three operation modes: manual, semi-automatic and automatic. The semi-automatic mode was used in this study, in which takeoff and landing are assisted by operators and inspection of the transmission line is conducted automatically along the pre-planned trajectory.
The oblique photogrammetric system integrated with the UAV platform is illustrated in Figure 3b, where the dual-camera image acquisition instrument is mounted at the bottom of the UAV platform. For photogrammetric measurement, the cameras were calibrated in the laboratory. A calibration model with 10 parameters was adopted in this study, including one for focal length, two for the principal point, two for the distortion center and five for the coefficients of radial distortion, as described in the MicMac documentation [37]. After integration with the UAV platform, system calibration was conducted to estimate the lever-arm vector between the phase center of the GNSS antenna and the optical center of each camera. An initial value of the lever-arm vector was first measured and then refined in bundle adjustment using image observations and GNSS measurements. Both camera calibration and system calibration were performed using the MicMac software (details in Section 3.2.2).
At a flight height of 100 m, the ground coverage and image overlap achieved by the integrated photogrammetric system are listed in Table 2. (The velocity of the UAV platform is about 10 m/s and the camera capture interval is 3 s. The front camera's long sensor side is in the along-track direction and the back camera's long sensor side is in the across-track direction. Image overlap is calculated along the long sensor side for both cameras.) The oblique angle varies from 5° to 25°, which is within the permitted range (less than θ/2). The minimum ground coverage is 103.24 m for the long sensor side, and the minimum along-track image overlap is 70.94%. The largest across-track image overlap is 92.04%. Triple overlap is thus achieved in both the along-track and across-track directions.
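The overlap figures follow from the coverage, flight speed and capture interval; for instance, the 70.94% minimum along-track overlap can be reproduced with a one-line relation (the function name is illustrative):

```python
def forward_overlap(coverage_m, speed_mps, interval_s):
    """Along-track overlap fraction: 1 - (baseline between exposures) /
    (ground coverage in the flight direction)."""
    return 1.0 - (speed_mps * interval_s) / coverage_m

# Minimum long-side coverage of 103.24 m at 10 m/s with a 3 s interval:
ovl = forward_overlap(103.24, 10.0, 3.0)   # about 0.7094, i.e. 70.94%
```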

3. Materials and Methods

3.1. Study Area and Data Acquisition

3.1.1. UAV Image Acquisition

The study area is located in Longgang District of Shenzhen, China. The 500 kV transmission line named Lingkun was selected for the data acquisition test. One experimental site was considered in this study, as shown in Figure 4. The test site is covered by farmland crossed by several roads. Four power pylons approximately 65 m high, labeled 54 to 57, lie within the test region.
The data acquisition procedure mainly contains three steps. In the first step, information about the test site is carefully collected, including its location, the topographical variation of the terrain, and the voltage and height of the power pylons; trajectory planning benefits from this information. Flight configuration is the second step, involving flight height, flying speed, mounting angles of the dual-camera imaging instrument, UAV control mode and so on. These settings significantly affect the overlap and GSD of the captured images, so much attention should be paid to this step. These two steps are usually completed off site. The third step is to install the instruments and acquire images with the integrated photogrammetric system.
Based on the basic information about the test site, the mission configuration can be made according to the imaging geometry of oblique photogrammetry. Usually, the safe flight distance perpendicular to the transmission line, represented by w in Figure 2a, is 35 m, and the height offset between the flight trajectory and the top of the power pylon is in the range of 35 to 60 m. In this study, the safe flight distance was set to 35 m. The flight height, represented by H in Figure 2a, is about 120 m, determined as the sum of the power pylon height and a trajectory height offset of 55 m. As indicated by Formula (1), the back camera's roll angle α can be determined once the safe flight distance w and the flight height H are given. Two situations were considered, with the camera's optical axis targeting the base and the top of the power pylon, giving roll angles of 17° and 33°, respectively. To ensure that the power pylon and line lie near the center of the recorded images, the average value of 25° was selected as the back camera's roll angle. Configuration of the front camera's pitch angle β is more flexible because the UAV platform flies in the along-corridor direction and can easily capture photos of the transmission line. Additionally, the camera capture interval is another important item; in this study, it was set to 3 s as a compromise between image overlap and data volume. These are the most important settings for the oblique photogrammetric system.
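The two candidate roll angles and their average can be checked numerically; a Python sketch of this reasoning (the function name is illustrative, and the values reproduce the text's rounded 17°/33°/25° choice approximately):

```python
import math

def candidate_roll_angles(w, flight_h, pylon_h):
    """Back-camera roll angles (degrees from nadir) aiming the optical axis
    at the base and the top of a pylon, and their average, following the
    geometry of Formula (1) with w = 35 m, flight height 120 m, pylon 65 m."""
    to_base = math.degrees(math.atan2(w, flight_h))
    to_top = math.degrees(math.atan2(w, flight_h - pylon_h))
    return to_base, to_top, (to_base + to_top) / 2.0

base, top, avg = candidate_roll_angles(35.0, 120.0, 65.0)
# base about 16.3 deg (17 in the text), top about 32.5 deg (33 in the text);
# their average of about 24.4 deg motivates the 25 deg choice
```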
The detailed flight configuration is listed in Table 3. The velocity of the UAV is about 8 m/s and the flight was operated in semi-automatic mode, with manual control used for takeoff and landing. In this study, the dual-strip flight mode was chosen, which constructs a closed-loop trajectory along the transmission line as presented in Figure 4; the trajectory length is approximately 3.26 km. The flight height listed below is relative to the takeoff position. The most important configuration parameters for data acquisition are the pitch and roll angles of the dual-camera system. For this test site, the pitch and roll angles of the front camera are 25° and −15°, respectively, ensuring that the images taken by the front camera contain as much along-track information about the pylons and lines as possible. Similarly, the back camera's roll angle of −25° facilitates obtaining sufficient information in the across-track direction.
In total, 542 images were acquired during a flight of 10 min, with a GSD of about 3.67 cm. Figure 5 shows some photos taken at the test site. The images in Figure 5a,b were taken in the along-track direction and those in Figure 5c,d in the across-track direction. All these photos contain rich information about the power pylons from different viewpoints, which dramatically facilitates 3D reconstruction of pylons and lines. With the aid of the fixed ground master GNSS station, the geo-referencing accuracy of the UAV trajectory can be increased by differential GNSS. In this study, differential GNSS processing was conducted with the GrafNav GNSS post-processing software [36]. The final geo-location precision of differential GNSS is at the sub-meter level because the ground master station is about 64 km away from the test site and only a float solution was achieved. Since relative orientation accuracy is more important for transmission line inspection, higher-precision geo-location techniques, such as PPK-GNSS (Post Processed Kinematic GNSS), were not used in this test.

3.1.2. GCP and Key Point Survey

Ground control points make it possible to achieve the best geo-referencing accuracy for image orientation, and comparison between ground-truth coordinates and measured model coordinates can be used to assess the accuracy of the reconstructed models. Usually, ground control points are placed on obvious feature points that can be easily identified and measured in photographs, such as road intersections and building corners. Considering that four roads cross the whole test site and many water pools with nearly right-angled corners exist, 43 ground control points were selected in these two types of regions, uniformly distributed over the test site as shown in Figure 6. The accurate coordinate measurements were performed with a Trimble R8 GNSS receiver, whose horizontal and vertical accuracies in RTK-GNSS mode are 1 and 2 cm, respectively. Meanwhile, the CORS system named GDCORS was used, which operates five CORS base stations in the city of Shenzhen and provides geo-referencing services with centimeter-level precision.
To assess the accuracy of the reconstructed models, some model key points were also selected and surveyed, as shown in Figure 7a,b, where red rectangles mark the desired model key points. For each power line between two successive pylons, the hanging points and three sample points were recorded as key points, which are used to assess the accuracy of the reconstructed power line model. In this study, the two power pylons labeled 54 and 55 and the two power lines connecting them, labeled phase A and phase C, were selected for model accuracy assessment. All model key points were measured with a reflectorless Topcon GPT-3102N total station, with an accuracy of 5 mm for distance measurement and 2″ (arcseconds) for angle measurement.

3.2. GNSS and Camera Installation Angle Assisted Image Orientation

3.2.1. Tie-Point Extraction

Tie-point extraction is the most time-consuming step of image orientation. There are four tie-point extraction modes:
(1)
Exhaustive mode: each image is matched against all the other images. This mode should be avoided because it is the most time consuming.
(2)
Multi-scale mode: all images are first down-sampled and exhaustive matching is performed on the down-sampled images. If the tie-point count exceeds a predefined threshold, such as two, the original images are labeled as an image pair.
(3)
Linear mode: this mode assumes that each image overlaps only its N nearest neighbors; all other images are ignored during matching.
(4)
GNSS-assisted mode: image pairs are determined from image positions.
The first two tie-point extraction modes can be used in any data acquisition situation but are the most time consuming. The linear matching mode can only be used if image acquisition follows a regular pattern, such as all images being captured sequentially. The last mode calculates the ground extent of each image from an approximate imaging model and determines whether two images overlap. Figure 8a shows the purely GNSS-assisted matching mode, which indicates that image 1 and image 2 do not overlap. For oblique photogrammetry, extra attention should be paid to the cameras' mounting angles: Figure 8b shows the overlapped region (drawn as a yellow line) when images 1 and 2 are taken obliquely. This image pair would not be found if only GNSS information were used for searching. In this study, GNSS information and the mounting angles (pitch and roll) of the dual-camera system are both considered, to accelerate tie-point extraction and to find as many image pairs as possible.
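The effect shown in Figure 8 can be illustrated with a one-dimensional (across-track) footprint-overlap sketch in Python; the numbers and function names are illustrative, not from the actual pipeline:

```python
import math

def footprint_1d(x, H, theta_deg, alpha_deg):
    """Across-track ground footprint [left, right] of an image taken at
    horizontal position x and height H with FOV theta and roll alpha."""
    half = math.radians(theta_deg / 2.0)
    a = math.radians(alpha_deg)
    return x + H * math.tan(a - half), x + H * math.tan(a + half)

def is_candidate_pair(x1, a1, x2, a2, H=100.0, theta=54.16):
    """Two images form a candidate pair if their footprints overlap."""
    l1, r1 = footprint_1d(x1, H, theta, a1)
    l2, r2 = footprint_1d(x2, H, theta, a2)
    return min(r1, r2) > max(l1, l2)

# Two nadir images 150 m apart do not overlap, but tilting them toward
# each other (as in Figure 8b) creates an overlap:
print(is_candidate_pair(0.0, 0.0, 150.0, 0.0))     # False
print(is_candidate_pair(0.0, 25.0, 150.0, -25.0))  # True
```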
The SIFT algorithm was used for feature detection and matching in this study. The algorithm is invariant to rotation and translation, and to some extent to scale, illumination change and affine transformation [38,39], which makes it suitable for feature detection and matching of oblique images. Tie-point extraction consists of feature detection and feature matching. In feature detection, a local extremum in the DoG scale space is regarded as a candidate key-point and its sub-pixel location is calculated by a quadratic approximation. Key-points with low contrast are removed, and each remaining key-point is assigned a 128-dimensional descriptor. Feature matching is then implemented by comparing two sets of descriptors with the nearest-neighbor method, in which a correspondence is accepted when the ratio between the shortest and second-shortest distances is smaller than a given threshold. Finally, wrong matches are discarded by RANSAC (RANdom SAmple Consensus), with the fundamental matrix selected as the geometric constraint model in this study.
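The nearest-neighbor ratio test described above can be sketched in NumPy (a brute-force illustration with hypothetical 2-D descriptors; real pipelines use 128-D SIFT descriptors, k-d trees or GPU matchers, and a subsequent RANSAC step):

```python
import numpy as np

def ratio_test_matches(desc1, desc2, ratio=0.8):
    """Accept a match (i, j) when d(nearest)/d(second nearest) < ratio."""
    matches = []
    for i, d in enumerate(np.asarray(desc1, float)):
        dists = np.linalg.norm(np.asarray(desc2, float) - d, axis=1)
        order = np.argsort(dists)
        nearest, second = dists[order[0]], dists[order[1]]
        if second > 0 and nearest / second < ratio:
            matches.append((i, int(order[0])))
    return matches

# An ambiguous feature (equally close to two candidates) is rejected:
d2 = [[0.0, 0.0], [10.0, 10.0], [20.0, 20.0]]
d1 = [[0.1, 0.0], [9.0, 9.0], [5.0, 5.0]]
print(ratio_test_matches(d1, d2))  # [(0, 0), (1, 1)]
```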

3.2.2. Image Orientation

Several free and open-source packages have been released, such as Bundler [40], CMVS [41,42], and Apero/MicMac [43]. These packages can be flexibly configured to output intermediate results and to add project-dependent modules. Considering the needs for flexible image orientation configuration, such as using camera mounting angles for image pair prediction and exporting orientation results in PMVS format for digitizing discrete model key points of pylons and lines, the free software MicMac was selected for image orientation; it provides a complete framework for image processing based on the principles of photogrammetry and computer vision. Image orientation is thus performed on the matched tie-points, and the oriented images are converted to final results in PMVS format, which provides a projection matrix for each image.
The MicMac commands for tie-point extraction, camera calibration, lever-arm estimation and image orientation are listed in Table 4. Among these tools, GrapheHom searches for image pairs for corresponding point matching using GNSS positions and camera mounting angles, which is the key to successful image orientation. Campari estimates the lever-arm vector for each camera, i.e., the offset vector between the phase center of the GNSS antenna and the optical center of the camera. For detailed usage of the MicMac software, refer to the project website [44] and the most relevant papers [45,46,47].

3.3. Methods for Accuracy Assessment

3.3.1. Absolute Accuracy Assessment

To evaluate the geo-referencing accuracy of the image orientation result, absolute accuracy assessment was conducted using GCPs. The basic principle for model point measurement is the collinearity equation presented in Formula (3). After successful orientation, the exterior orientation parameters of each image are available; combined with the camera's calibration parameters, 3D coordinates in object space can be calculated from at least two image correspondences.
x = −f · [a1(X − XS) + b1(Y − YS) + c1(Z − ZS)] / [a3(X − XS) + b3(Y − YS) + c3(Z − ZS)]
y = −f · [a2(X − XS) + b2(Y − YS) + c2(Z − ZS)] / [a3(X − XS) + b3(Y − YS) + c3(Z − ZS)]    (3)
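Once exterior orientation is known, a model point can be computed by intersecting the viewing rays implied by Formula (3). A generic least-squares ray intersection in Python might look as follows (a sketch, not the authors' implementation; camera centers and directions are hypothetical):

```python
import numpy as np

def intersect_rays(centers, directions):
    """Least-squares intersection of two or more viewing rays, each given
    by a camera center C and a direction d in object space."""
    A = np.zeros((3, 3))
    b = np.zeros(3)
    for C, d in zip(centers, directions):
        d = np.asarray(d, float)
        d /= np.linalg.norm(d)
        P = np.eye(3) - np.outer(d, d)   # projects onto plane normal to d
        A += P
        b += P @ np.asarray(C, float)
    return np.linalg.solve(A, b)

# Two rays meeting exactly at (1, 2, 3):
X = intersect_rays([(0.0, 0.0, 0.0), (10.0, 0.0, 0.0)],
                   [(1.0, 2.0, 3.0), (-9.0, 2.0, 3.0)])
```

With noisy observations the same normal equations return the point minimizing the sum of squared perpendicular distances to all rays, which is why larger intersection angles improve precision.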
Appropriate image selection is the first step of accuracy assessment. The main principle is that each model point corresponding to a GCP should be measured from images in four directions: in this experiment, two images from the front camera in opposite directions and two from the back camera in opposite directions are selected. This configuration improves measurement precision because of the relatively large ray intersection angles for stereo measurement.
Considering that transmission lines are usually located in mountainous regions, it is very hard to survey GCPs in the field. Thus, three tests are configured to verify geo-referencing accuracy without or with few GCPs, as presented in Table 5. The first test assesses geo-location accuracy under the condition that no GCP can be surveyed at all. The second assumes that only one GCP is utilized in orientation; a translation is applied to the whole orientation model to decrease the offset between the reconstructed model and its actual geo-location. The last test is the ideal condition in which enough GCPs exist, which provides the most accurate orientation result. GCP distribution follows the principles of traditional photogrammetry, which ensure that GCPs are evenly distributed over the test site.

3.3.2. Relative Accuracy Assessment

Relative accuracy indicates the measurement precision of models derived from oriented images. Two kinds of relative accuracy assessment are performed: one for power pylon measurement and one for power line measurement. Based on the stereo measurement principle, ten model points of a power pylon, illustrated by Figure 7a, can be calculated. Then, three width values and two height values can be deduced from these measured points and the in-field surveyed key points. The width value is calculated from the two end points of each cross arm as presented by Formula (4), where $H_{i1}$ and $H_{i2}$ are the 3D coordinates of the two end points, $dis$ is the Euclidean distance function, and $i = 1, 2, 3$. The height value is deduced from one vertex at the top of the pylon and another at its base as presented by Formula (5), where $T_j$ is the top vertex, $B_j$ is the base vertex, and $j = 1, 2$. Thus, the relative accuracy for power pylons can be assessed by these five values.
$$\mathrm{Width}_i = dis(H_{i1}, H_{i2}) \quad (4)$$
$$\mathrm{Height}_j = dis(T_j, B_j) \quad (5)$$
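Formulas (4) and (5) reduce to Euclidean distances between digitized 3D model points; a minimal sketch follows, in which the coordinates are hypothetical and only illustrate the computation:

```python
import numpy as np

def dis(p, q):
    """Euclidean distance between two 3D model points (Formulas (4) and (5))."""
    return float(np.linalg.norm(np.asarray(p, float) - np.asarray(q, float)))

# Hypothetical cross-arm endpoints and pylon top/base vertices, for illustration only.
width_1 = dis((10.0, 5.0, 30.0), (18.4, 5.1, 30.0))   # two end points of one cross arm
height_1 = dis((14.0, 5.0, 45.0), (14.0, 5.0, 0.0))    # top vertex to base vertex
```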
In order to assess the relative accuracy of power line measurement, two or three sample points have been measured near each endpoint of a power line, usually close to the hanging points. Thus, at least six points are digitized for each power line, including the two hanging points. Based on all these measured points, a power line model can be fitted by a catenary curve, and a straight line model can be deduced from the two digitized hanging points. Three evenly distributed target points are selected on the straight line model, and the vertical distances from these target points to the power line model are calculated, represented by $V_i$, $i = 1, 2, 3$.
The same model fitting operations have been conducted for the in-field surveyed key points. Firstly, a power line model is fitted by five in-field surveyed key points, corresponding to points $G_1$, $G_2$, $L_1$, $L_2$ and $L_3$ as illustrated by Figure 7b. Secondly, a straight line model is fitted by two in-field surveyed key points, which correspond to points $G_1$ and $G_2$. Finally, three vertical distances with the same distribution as described above are calculated, represented by $V'_i$, $i = 1, 2, 3$.
In this study, these three vertical distances have been used to assess the measurement accuracy of the power line model as presented by Formula (6), where $abs$ is the absolute value function.
$$DV_i = abs(V_i - V'_i) \quad (6)$$
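The power line comparison can be sketched as follows. As a simplifying assumption, this sketch fits the sag profile with a parabola (a common small-sag approximation of the catenary) over horizontal position along the span; the function names are illustrative, not from the paper:

```python
import numpy as np

def vertical_sag_distances(s, z, n_targets=3):
    """Fit a sag curve to sampled power-line points and return the vertical
    distances from the chord (straight line between the two hanging points)
    to the fitted curve at evenly spaced target positions.

    s : horizontal positions along the span (first and last are hanging points)
    z : corresponding heights
    NOTE: uses a small-sag parabolic approximation of the catenary.
    """
    s, z = np.asarray(s, float), np.asarray(z, float)
    sag = np.polyfit(s, z, 2)                             # fitted power line model
    chord = np.polyfit([s[0], s[-1]], [z[0], z[-1]], 1)   # straight line model
    targets = np.linspace(s[0], s[-1], n_targets + 2)[1:-1]
    return np.polyval(chord, targets) - np.polyval(sag, targets)

def dv(V, V_prime):
    """Formula (6): compare distances from the photogrammetric model (V)
    with those from the in-field surveyed model (V_prime)."""
    return np.abs(np.asarray(V, float) - np.asarray(V_prime, float))
```

For the in-field surveyed key points, the same fitting function is applied to the surveyed samples, and Formula (6) compares the two resulting sets of distances.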

4. Experiment and Results

4.1. Tie-Point Extraction and Stability Analysis of Image Connection Network

4.1.1. Tie-Point Extraction

Original images have been down-sampled to half size to accelerate tie-point extraction. Firstly, SIFT features are detected in each image. Secondly, all candidate image pairs are matched. Then, outliers are removed using a classical RANSAC algorithm based on fundamental matrix estimation. Figure 9 shows a portion of the tie-point extraction result for two successive images, in which tie-points are labeled with identical numbers. By visual inspection, almost all tie-points are matched correctly, even in vegetation regions, where erroneous matches tend to occur because of very similar local texture.
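The outlier removal step can be sketched as a RANSAC loop around the normalized eight-point algorithm for fundamental matrix estimation. SIFT detection and descriptor matching are assumed to have already produced candidate correspondences; this is an illustrative sketch, not the exact implementation used here:

```python
import numpy as np

def _normalize(pts):
    """Translate to the centroid and scale so the mean distance is sqrt(2)."""
    c = pts.mean(axis=0)
    scale = np.sqrt(2) / np.mean(np.linalg.norm(pts - c, axis=1))
    T = np.array([[scale, 0, -scale * c[0]],
                  [0, scale, -scale * c[1]],
                  [0, 0, 1.0]])
    ph = np.column_stack([pts, np.ones(len(pts))])
    return (T @ ph.T).T, T

def eight_point(x1, x2):
    """Normalized eight-point estimate of the fundamental matrix."""
    p1, T1 = _normalize(x1)
    p2, T2 = _normalize(x2)
    # Each correspondence contributes one row to the linear system A f = 0.
    A = np.column_stack([p2[:, 0:1] * p1, p2[:, 1:2] * p1, p1])
    _, _, Vt = np.linalg.svd(A)
    F = Vt[-1].reshape(3, 3)
    # Enforce the rank-2 constraint on F.
    U, S, Vt = np.linalg.svd(F)
    F = U @ np.diag([S[0], S[1], 0.0]) @ Vt
    return T2.T @ F @ T1  # undo the normalization

def sampson_dist(F, x1, x2):
    """First-order (Sampson) approximation of the epipolar error."""
    p1 = np.column_stack([x1, np.ones(len(x1))])
    p2 = np.column_stack([x2, np.ones(len(x2))])
    Fx1 = (F @ p1.T).T
    Ftx2 = (F.T @ p2.T).T
    num = np.sum(p2 * Fx1, axis=1) ** 2
    den = Fx1[:, 0]**2 + Fx1[:, 1]**2 + Ftx2[:, 0]**2 + Ftx2[:, 1]**2
    return num / (den + 1e-12)

def ransac_fundamental(x1, x2, thresh=1.0, iters=500, seed=0):
    """Keep the largest set of correspondences consistent with one F."""
    rng = np.random.default_rng(seed)
    best = np.zeros(len(x1), dtype=bool)
    for _ in range(iters):
        idx = rng.choice(len(x1), 8, replace=False)
        F = eight_point(x1[idx], x2[idx])
        inliers = sampson_dist(F, x1, x2) < thresh
        if inliers.sum() > best.sum():
            best = inliers
    return best
```

The inlier mask returned by `ransac_fundamental` corresponds to the correct matches kept for bundle adjustment; the threshold is in pixels of Sampson error.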
Statistical results of tie-point extraction for two configurations, the GNSS-only assisted mode and the GNSS-angle assisted mode, are presented in Table 6. In this test, 402 images have been used. The number of images used is smaller than the number of images taken because images captured during takeoff and landing of the UAV platform do not have regular overlap and are removed from the rest of the processing workflow. With the assistance of GNSS and camera mounting angles, approximately 23% more image pairs are found, which generates more matched tie-points.

4.1.2. Stability Analysis of Image Connection Network

Further checking of the tie-point extraction results leads to two findings: (1) the extra image pairs mainly consist of combinations of front–front cameras in the opposite direction, front–back cameras in the same direction and front–back cameras in the opposite direction; and (2) among these extra image pairs, those from combinations of front–back cameras in the same direction contain more tie-points than the other combinations, while the fewest tie-points are detected from combinations of front–front cameras in the opposite direction.
The first finding can be explained by the mounting angles of the cameras, shown in Figure 8. The second finding is illustrated in Figure 10, where six combinations between front and back cameras are represented. The yellow quadrangle stands for the overlap region of the two corresponding images. Figure 10c,d contain the maximum number of tie-points for image pairs in the same and opposite directions, respectively. Image pairs in the same direction, including Figure 10a,c,e, yield more tie-points than combinations in the opposite direction. The main reason is that perspective transformation has less influence on SIFT feature matching for image pairs in the same direction. Additionally, the combinations illustrated by Figure 10b,e,f constitute the dominant extra image pairs compared to the GNSS-only assisted situation; among these extra pairs, the combination represented by Figure 10e extracts the most tie-points.
In order to visualize and analyze image connection and network stability for all image pairs, match matrix graphs are generated based on inlier matched feature points. Figure 11a,b shows the match matrix graphs for the two test configurations. Before analysis, several points should be noted: (1) Graph color indicates the matched point number of each image pair relative to the maximum matched number; red stands for more extracted tie-points, while yellow indicates fewer matched tie-points. (2) Each matrix graph has been split into four blocks, where block 1 indicates the match sub-matrix for front–front cameras, block 2 for back–back cameras, block 3 for front–back cameras and block 4 for back–front cameras. (3) For each target image, corresponding images near the principal diagonal of each block are captured in the same direction as the target image; otherwise, the related images are in the opposite direction.
From visual inspection of these two match matrices, we can see that many more tie-points are extracted in the GNSS and camera mounting angle assisted mode, because most elements are rendered in red. Meanwhile, the connection between front and back cameras is stably established, which can be deduced from the four match sub-matrices: for each sub-matrix, image pairs in both the same and opposite directions have almost equal numbers of tie-points. Thus, a stable connection can be established for the whole test site. However, when only GNSS information is used in image orientation, weaker connections are constructed not only for image pairs in the opposite direction but also for image pairs captured by different cameras in the same direction. Additionally, far fewer image pairs are matched among images captured by the front camera, as indicated by the sparse elements in the direction orthogonal to the principal diagonal in sub-matrix 1. The main reason is that large pitch and roll angles have been configured for the front camera.
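The match matrix graphs of Figure 11 can be reproduced from the per-pair inlier counts. A minimal sketch, assuming images are indexed so that one camera's images precede the other's (so the four blocks are the quadrants of the matrix):

```python
import numpy as np

def match_matrix(pairs, n_images):
    """Build a symmetric match matrix from (i, j, n_tie_points) records,
    normalized by the maximum count (the color scale of Figure 11)."""
    M = np.zeros((n_images, n_images))
    for i, j, n in pairs:
        M[i, j] = M[j, i] = n
    return M / M.max()
```

With the n front-camera images listed first, `M[:n, :n]` corresponds to block 1 (front–front), `M[n:, n:]` to block 2 (back–back), and the two off-diagonal quadrants to blocks 3 and 4.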
With the cooperation of the front and back cameras, extra image pairs can be found, especially for combinations in the opposite direction, which strengthens image connection in the across-track direction. Thus, more tie-points can be detected for image pairs in both the along-track and across-track directions by considering GNSS and camera mounting angles, which ensures a stable connection network for image orientation.

4.2. Image Orientation and Accuracy Assessment

4.2.1. Image Orientation

Image orientation for this test site is performed fully automatically, as presented in Table 7. With consideration of GNSS and camera mounting angles for tie-point extraction, image orientation has been achieved with a root mean square error (RMSE) near 0.77 pixels, and all images are connected. However, image orientation fails when only GNSS information is used for the image pair search. The reason is that the pitch angle variation between the front camera and back camera is about 25°, which leads to distinctly different ground coverage. Thus, most image pairs predicted only by GNSS information are incorrectly matched, which leads to a weak image connection network. Conversely, with consideration of GNSS and camera mounting angles, the dual-camera system can construct stable image connections using image pairs in the same and opposite directions and ensure high precision and a high success rate for image orientation. Figure 12 shows the image orientation result for the test considering GNSS and angle information.

4.2.2. Absolute Orientation Accuracy Analysis

Three tests have been configured for orientation accuracy analysis. In order to verify direct geo-referencing accuracy based on differential GNSS, no ground control points are involved in the first image orientation test. The spatial distribution of orientation residuals, calculated by subtracting GCP coordinates from model measurements, is presented in Figure 13. We can see that: (1) the accuracy of image orientation without GCPs is better than 0.6 m in the horizontal direction and 1.2 m in the vertical direction; (2) the residual vectors in both horizontal and vertical directions indicate that systematic errors exist in the initial orientation result, which can be deduced from the nearly uniform offsets; and (3) the Y components of the residuals of the three points labeled in red have the opposite direction compared with the others. This can be explained by weaker constraints: a water region exists near these three points, which leads to fewer tie-points for model connection.
Geo-referencing accuracy can be improved with the aid of GCPs [48,49]. In this study, another two experiments have been conducted. Only one GCP located in the central region of the test site, numbered 22, is used in the second experiment to apply a translation to the orientation model. For the third test, four GCPs, numbered 7, 9, 35 and 37, have been utilized in image orientation. A comparison of the orientation accuracy of all three scenarios is listed in Table 8, where test 1 is without GCPs, test 2 is with one GCP and test 3 is with four GCPs. Figure 14 shows the orientation residuals for these three tests. For test 2, considerable improvement of orientation accuracy, especially in the X and Z directions, can be noticed: maximum residuals decrease to 0.172 m and 0.592 m in the X and Z directions, respectively. However, the maximum residual in the Y direction becomes larger than in test 1. As shown by Figure 14b, which presents the residual trend in the Y direction, this is caused by the three GCPs near the water region. For test 3, average residuals have been reduced dramatically, and are better than 0.05 m and 0.15 m in the horizontal and vertical directions, respectively.
Through analysis of the orientation residual trends, some conclusions can be made: (1) There is an almost consistent offset in the across-track direction, indicated by the nearly horizontal plot of the residual trend in the X direction as presented by Figure 14a; residuals caused by pitch and roll angles in the along-track and across-track directions can be observed from the residual trends in the Y and Z directions as presented by Figure 14b,c. (2) Translation based on one GCP almost eliminates the residual in the X direction, whose geo-referencing accuracy reaches 0.2 m; however, systematic residuals still exist in the Y and Z directions. (3) The orientation residual is nearly removed with four GCPs, where average residuals are better than 0.05 m and 0.15 m in the horizontal and vertical directions, respectively. (4) Thus, four GCPs should be surveyed to satisfy high-accuracy orientation requirements, while orientation with one GCP is sufficient for applications with low geo-referencing requirements.
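The residual computation and the single-GCP translation of test 2 can be sketched as follows; the function names are illustrative, not from the paper:

```python
import numpy as np

def residual_stats(model_pts, gcp_pts):
    """Per-point residuals (model measurement minus surveyed GCP) and the
    mean absolute residual per axis (X, Y, Z)."""
    r = np.asarray(model_pts, float) - np.asarray(gcp_pts, float)
    return r, np.abs(r).mean(axis=0)

def apply_one_gcp_translation(model_pts, model_gcp, surveyed_gcp):
    """Test 2: shift the whole orientation model by the offset observed at a
    single control point (GCP 22 in the paper's configuration)."""
    shift = np.asarray(surveyed_gcp, float) - np.asarray(model_gcp, float)
    return np.asarray(model_pts, float) + shift
```

After the shift, residuals remaining at the check points expose exactly the systematic Y and Z errors discussed above, since a pure translation cannot correct attitude-induced trends.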

4.2.3. Relative Model Accuracy Analysis

Table 9 presents a quantitative assessment of model measurement accuracy for power pylons. Dimensions of the power pylons, including width and height, have been analyzed. The maximum width differences for the two power pylons are 0.206 and 0.181 m, respectively. Almost all height differences are below 0.04 m, except for one height difference of the power pylon numbered 55, which is about 0.243 m. Considering that the base of a power pylon has a complex steel structure, this relatively large difference is likely caused by a surveying error. Thus, the relative model accuracy for power pylons is about 0.2 m in the horizontal direction and 0.04 m in the vertical direction.
Vertical distances between points on the fitted power line model and the straight line connecting the two hanging points are compared, as shown in Table 10. In this study, three points uniformly distributed along the power lines connecting pylons 54 and 55 are compared. The results demonstrate that the relative model accuracy of power lines is better than 0.2 m, which ensures high precision for applications based on these models.

5. Discussion

5.1. UAV Photogrammetry with Dual-Camera System

In this research, a UAV-based oblique photogrammetry system with two cameras has been used for data acquisition of transmission lines (Section 2.3). In the literature, relevant strategies have also been proposed by other researchers [14,15,50], which include single-camera, dual-camera and triple-camera designs. The single-camera solution is only used to capture images in the direction along the transmission corridor, and no 3D geometry information is extracted [14]. Compared to the single-camera system, the oblique photogrammetric system in this research has some advantages. Firstly, it facilitates obtaining sufficient images in the along-track and across-track directions, which provides the information necessary for 3D reconstruction of power pylons. Secondly, the dual-camera system makes it possible to construct a stable image connection network, especially for image pairs captured in opposite directions.
The dual-camera system [15] and triple-camera system [50] are designed to provide 3D stereo measurement ability. However, these photogrammetric platforms fly over the transmission corridor in single-strip mode. Thus, these strategies are limited by a weaker image connection network in the across-track direction and smaller intersection angles of corresponding points. On the contrary, the dual-camera system cooperating with the dual-strip flight mode is utilized in this study, which constructs a stable image connection network because more image pairs are found in both the along-track and across-track directions by considering GNSS and camera mounting angles.

5.2. Absolute Orientation Accuracy and Relative Model Accuracy

Through the analysis of absolute orientation accuracy and relative model accuracy in Section 4.2.2 and Section 4.2.3, some conclusions could be made:
(a)
Without ground control points, the mean residuals of absolute image orientation in horizontal and vertical directions are 0.4 and 0.9 m, respectively.
(b)
With one ground control point, the mean residuals of absolute image orientation in horizontal and vertical directions decrease to 0.2 and 0.3 m, respectively.
(c)
With four evenly distributed ground control points, the mean residuals of absolute image orientation in horizontal and vertical directions decrease to 0.1 and 0.2 m, respectively.
(d)
For model measurement of power pylons, the relative accuracy is about 0.2 m in horizontal direction and 0.04 m in vertical direction.
(e)
For model measurement of power lines, the relative accuracy is better than 0.2 m.
For image orientation without GCPs, mean geo-location errors are less than 0.5 m and 1.0 m in the horizontal and vertical directions, respectively. This would dramatically facilitate UAV-based photogrammetric applications for transmission line inspection. When one GCP participates in image orientation, mean errors decrease to 0.2 m and 0.3 m in the horizontal and vertical directions, respectively. This is the preferred configuration for geo-based applications aiming at higher accuracy.
For transmission line inspection, the relative accuracy of 3D models is more important than absolute orientation accuracy, because the majority of the inspection workload concerns the spatial relationship between power lines and surrounding objects, such as vegetation encroachment monitoring. In this research, a relative accuracy of 0.2 m for model measurement has been achieved, which satisfies the requirement for vegetation encroachment monitoring.
Compared with the solution in this study, some approaches have focused on 3D reconstruction of power pylons and lines using vehicle-borne and airborne LiDAR systems [8,9,10,11]. These studies emphasize the correctness and completeness of reconstruction, except that [9] reports an RMSE of 0.12 m for power pylon reconstruction and a maximum distance of 0.32 m from laser points to models. Thus, the proposed solution achieves model measurement accuracy competitive with laser-based methods.

5.3. Practicality and Scalability of the Proposed Solution

The UAV-based oblique photogrammetric system is characterized by lower operational complexity and lower economic cost compared to helicopter-based or LiDAR systems, especially for short-range corridor inspection. For one outdoor campaign, instrument configuration and data acquisition can be completed within half an hour. This rapid and convenient data acquisition ability is crucial for transmission line inspection.
Some other experiments have been conducted to explore the potential usage for 3D reconstruction. The first is terrain reconstruction. Conventional procedures, including dense matching, triangular meshing and texture mapping, are conducted to generate 3D terrain models of the transmission corridor using the Smart3DCapture software. Terrain reconstruction models are presented in Figure A1; in the reconstructed terrain model, farmlands, roads and some water pools can be observed in detail. Additionally, semi-automatic digitization of power pylons and lines is another attempt at model reconstruction of electric power facilities. The digitization procedure is very similar to the model point measurement described in Section 3.3.2, except for some useful tools for rapid processing. Reconstructed models of power pylons are illustrated in Figure A2. Successful 3D model reconstruction demonstrates the practical potential of the technical framework for transmission line inspection. Some test sites for 3D reconstruction are shown in Figure S1.
In addition to traditional optical imagery, UAV-based laser scanning is an emerging survey technique. By combining structural and spectral information from different data sources, the automatic extraction and reconstruction ability of the proposed framework can be enhanced in cooperation with a laser scanning system.

6. Conclusions

This research introduces a UAV-based oblique photogrammetric system for image data acquisition and conducts the accuracy assessment experiments for image orientation and model measurement necessary to explore its potential usage for offsite inspection of transmission lines. Compared to a single-camera system, the oblique photogrammetric system facilitates obtaining sufficient transmission line images in both the along-track and across-track directions. Meanwhile, the front and back cameras make it possible to construct a stable image connection network, especially for image pairs captured in opposite directions.
Combining open-source and commercial software, fully automatic tie-point extraction and image orientation have been achieved in this paper. Accuracy assessment, including absolute orientation precision and relative model measurement precision, has been conducted. Experiments demonstrate that the absolute accuracy of image orientation without GCPs is better than 0.4 m and 0.9 m in the horizontal and vertical directions, which improves to 0.2 m and 0.3 m with only one GCP for high-precision geo-based applications. Additionally, the relative accuracy of model measurement for power pylons and lines is about 0.2 m. The designed dual-camera photogrammetric system shows potential for 3D model reconstruction and offsite inspection of transmission lines. Compared to the traditional LiDAR technique for 3D reconstruction of transmission lines, the integrated instrument is low cost and easy to operate.
Although the mounting angles of the cameras provide useful information for tie-point extraction, the pitch and roll angles of the captured images are not stable, because the stability of the UAV platform in outdoor flights is frequently and dramatically affected by operational and natural factors, mainly inertia and wind. In future work, improvements can come from more accurate geo-referencing hardware, such as a lightweight inertial navigation system or a gimbal that could provide better attitude information or stabilize the imaging instruments. Additionally, fixed-wing UAVs with higher flight heights and speeds and longer endurance should be exploited to increase data acquisition efficiency and the degree of image overlap, which may decrease the difficulty of image orientation. Moreover, more 3D reconstruction tests and automatic extraction methods should be explored, especially automatic power line extraction and reconstruction, because major inspection tasks involve spatial relationship analysis between power lines and the surrounding environment.

Supplementary Materials

The following are available online at www.mdpi.com/2072-4292/9/3/278/s1, Figure S1: Image orientation and 3D reconstruction of power pylons and lines of some test sites.

Acknowledgments

The authors would like to thank the authors who released SIFT and MicMac as free and open-source software packages, which greatly facilitated the research in this paper.

Author Contributions

San Jiang and Wanshou Jiang conceived and designed the experiments; San Jiang and Wanshou Jiang performed the experiments; Wanshou Jiang contributed semi-automated modeling tools; Wei Huang collected the data and reconstructed power pylons; Liang Yang contributed power line fitting tools; and San Jiang wrote the manuscript.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A

Figure A1. Terrain model reconstruction of the test site: (a) left part of the whole test site; and (b,c,d) detailed reconstruction models corresponding to 1, 2 and 3 labeled in (a).
Figure A2. Detailed reconstruction models of power pylons and lines of test site.

References

  1. Katrasnik, J.; Pernus, F.; Likar, B. A survey of mobile robots for distribution power line inspection. IEEE Trans. Power Deliv. 2010, 25, 485–493. [Google Scholar] [CrossRef]
  2. Aggarwal, R.K.; Johns, A.T.; Jayasinghe, J.; Su, W. An overview of the condition monitoring of overhead lines. Electr. Power Syst. Res. 2000, 53, 15–22. [Google Scholar] [CrossRef]
  3. Ahmad, J.; Malik, A.S.; Xia, L.K.; Ashikin, N. Vegetation encroachment monitoring for transmission lines right-of-ways: A survey. Electr. Power Syst. Res. 2013, 95, 339–352. [Google Scholar] [CrossRef]
  4. Ackermann, F. Airborne laser scanning—Present status and future expectations. ISPRS J. Photogramm. Remote Sens. 1999, 54, 64–67. [Google Scholar] [CrossRef]
  5. Baltsavias, E.P. Airborne laser scanning: Existing systems and firms and other resources. ISPRS J. Photogramm. Remote Sens. 1999, 54, 164–198. [Google Scholar] [CrossRef]
  6. Jwa, Y.; Sohn, G.; Kim, H. Automatic 3D powerline reconstruction using airborne lidar data. Int. Arch. Photogramm. Remote Sens. 2009, 38, 105–110. [Google Scholar]
  7. McLaughlin, R.A. Extracting transmission lines from airborne LIDAR data. IEEE Geosci. Remote Sens. Lett. 2006, 3, 222–226. [Google Scholar] [CrossRef]
  8. Zhu, L.; Hyyppä, J. Fully-automated power line extraction from airborne laser scanning point clouds in forest areas. Remote Sens. 2014, 6, 11267. [Google Scholar] [CrossRef]
  9. Guo, B.; Huang, X.; Li, Q.; Zhang, F.; Zhu, J.; Wang, C. A stochastic geometry method for pylon reconstruction from airborne LIDAR data. Remote Sens. 2016, 8, 243. [Google Scholar] [CrossRef]
  10. Guo, B.; Li, Q.; Huang, X.; Wang, C. An improved method for power-line reconstruction from point cloud data. Remote Sens. 2016, 8, 36. [Google Scholar] [CrossRef]
  11. Cheng, L.; Tong, L.; Wang, Y.; Li, M. Extraction of urban power lines from vehicle-borne LIDAR data. Remote Sens. 2014, 6, 3302. [Google Scholar] [CrossRef]
  12. Li, Z.R.; Bruggemann, T.S.; Ford, J.J.; Mejias, L.; Liu, Y. Toward automated power line corridor monitoring using advanced aircraft control and multisource feature fusion. J. Field Robot. 2012, 29, 4–24. [Google Scholar] [CrossRef] [Green Version]
  13. Mills, S.J.; Castro, M.P.G.; Li, Z.R.; Cai, J.H.; Hayward, R.; Mejias, L.; Walker, R.A. Evaluation of aerial remote sensing techniques for vegetation management in power-line corridors. IEEE Trans. Geosci. Remote Sens. 2010, 48, 3379–3390. [Google Scholar] [CrossRef] [Green Version]
  14. Ahmad, J.; Malik, A.S.; Abdullah, M.F.; Kamel, N.; Xia, L.K. A novel method for vegetation encroachment monitoring of transmission lines using a single 2D camera. Pattern Anal. Appl. 2015, 18, 419–440. [Google Scholar] [CrossRef]
  15. Sun, C.M.; Jones, R.; Talbot, H.; Wu, X.L.; Cheong, K.; Beare, R.; Buckley, M.; Berman, M. Measuring the distance of vegetation from powerlines using stereo vision. ISPRS J. Photogramm. Remote Sens. 2006, 60, 269–283. [Google Scholar] [CrossRef]
  16. Remondino, F.; Barazzetti, L.; Nex, F.; Scaioni, M.; Sarazzi, D. UAV photogrammetry for mapping and 3D modeling—Current status and future perspectives. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2011, 38, C22. [Google Scholar] [CrossRef]
  17. Colomina, I.; Molina, P. Unmanned aerial systems for photogrammetry and remote sensing: A review. ISPRS J. Photogramm. Remote Sens. 2014, 92, 79–97. [Google Scholar] [CrossRef]
  18. Qin, R. An object-based hierarchical method for change detection using unmanned aerial vehicle images. Remote Sens. 2014, 6, 7911. [Google Scholar] [CrossRef]
  19. Wu, X.; Kumar, V.; Ross Quinlan, J.; Ghosh, J.; Yang, Q.; Motoda, H.; McLachlan, G.J.; Ng, A.; Liu, B.; Yu, P.S.; et al. Top 10 algorithms in data mining. Knowl. Inf. Syst. 2008, 14, 1–37. [Google Scholar] [CrossRef]
  20. Chiang, K.-W.; Tsai, M.-L.; Chu, C.-H. The development of an UAV borne direct georeferenced photogrammetric platform for ground control point free applications. Sensors 2012, 12, 9161–9180. [Google Scholar] [CrossRef] [PubMed]
  21. Harwin, S.; Lucieer, A. Assessing the accuracy of georeferenced point clouds produced via multi-view stereopsis from unmanned aerial vehicle (UAV) imagery. Remote Sens. 2012, 4, 1573–1599. [Google Scholar] [CrossRef]
  22. Boccardo, P.; Chiabrando, F.; Dutto, F.; Tonolo, F.G.; Lingua, A. UAV deployment exercise for mapping purposes: Evaluation of emergency response applications. Sensors 2015, 15, 15717–15737. [Google Scholar] [CrossRef] [PubMed]
  23. Aicardi, I.; Nex, F.; Gerke, M.; Lingua, A.M. An image-based approach for the co-registration of multi-temporal UAV image datasets. Remote Sens. 2016, 8, 779. [Google Scholar] [CrossRef]
  24. Bendig, J.; Bolten, A.; Bareth, G. Introducing a low-cost mini-UAV for thermal-and multispectral-imaging. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2012, 39, 345–349. [Google Scholar] [CrossRef]
  25. Choi, K.; Lee, I. A UAV-based close-range rapid aerial monitoring system for emergency responses. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2011, 38, 247–252. [Google Scholar] [CrossRef]
  26. Matsuoka, R.; Nagusa, I.; Yasuhara, H.; Mori, M.; Katayama, T.; Yachi, N.; Hasui, A.; Katakuse, M.; Atagi, T. Measurement of large-scale solar power plant by using images acquired by non-metric digital camera on board UAV. ISPRS Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2012, 39, 435–440. [Google Scholar] [CrossRef]
  27. Xu, Z.; Wu, L.; Shen, Y.; Li, F.; Wang, Q.; Wang, R. Tridimensional reconstruction applied to cultural heritage with the use of camera-equipped UAV and terrestrial laser scanner. Remote Sens. 2014, 6, 10413–10434. [Google Scholar] [CrossRef]
  28. Sona, G.; Pinto, L.; Pagliari, D.; Passoni, D.; Gini, R. Experimental analysis of different software packages for orientation and digital surface modelling from UAV images. Earth Sci. Inform. 2014, 7, 97–107. [Google Scholar] [CrossRef]
29. Mancini, F.; Dubbini, M.; Gattelli, M.; Stecchi, F.; Fabbri, S.; Gabbianelli, G. Using unmanned aerial vehicles (UAV) for high-resolution reconstruction of topography: The structure from motion approach on coastal environments. Remote Sens. 2013, 5, 6880–6898.
30. Irschara, A.; Kaufmann, V.; Klopschitz, M.; Bischof, H.; Leberl, F. Towards fully automatic photogrammetric reconstruction using digital images taken from UAVs. In Proceedings of the Symposium of Commission VII of the ISPRS—100 Years ISPRS, Vienna, Austria, 5–7 July 2010.
31. Eltner, A.; Schneider, D. Analysis of different methods for 3D reconstruction of natural surfaces from parallel-axes UAV images. Photogramm. Rec. 2015, 30, 279–299.
32. Gonçalves, J.A.; Henriques, R. UAV photogrammetry for topographic monitoring of coastal areas. ISPRS J. Photogramm. Remote Sens. 2015, 104, 101–111.
33. Diaz-Varela, R.A.; de la Rosa, R.; Leon, L.; Zarco-Tejada, P.J. High-resolution airborne UAV imagery to assess olive tree crown parameters using 3D photo reconstruction: Application in breeding trials. Remote Sens. 2015, 7, 4213–4232.
34. Rinaudo, F.; Chiabrando, F.; Lingua, A.; Spanò, A. Archaeological site monitoring: UAV photogrammetry can be an answer. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2012, 39, 583–588.
35. Aicardi, I.; Chiabrando, F.; Grasso, N.; Lingua, A.M.; Noardo, F.; Spanò, A. UAV photogrammetry with oblique images: First analysis on data acquisition and processing. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2016, 41, 835–842.
36. Novatel. Available online: http://www.novatel.com (accessed on 30 November 2016).
37. MicMac Documentation. Available online: http://logiciels.ign.fr/IMG/pdf/docmicmac-2.pdf (accessed on 2 March 2016).
38. Lowe, D.G. Object recognition from local scale-invariant features. In Proceedings of the 7th IEEE International Conference on Computer Vision, Kerkyra, Greece, 20–25 September 1999; pp. 1150–1157.
39. Lowe, D.G. Distinctive image features from scale-invariant keypoints. Int. J. Comput. Vis. 2004, 60, 91–110.
40. Snavely, N.; Seitz, S.M.; Szeliski, R. Photo tourism: Exploring photo collections in 3D. ACM Trans. Graph. 2006, 25, 835–846.
41. Furukawa, Y. Clustering Views for Multi-View Stereo (CMVS). Available online: http://grail.cs.washington.edu/software/cmvs2012 (accessed on 15 March 2017).
42. Furukawa, Y.; Ponce, J. Accurate, dense, and robust multiview stereopsis. IEEE Trans. Pattern Anal. Mach. Intell. 2010, 32, 1362–1376.
43. Deseilligny, M.P.; Clery, I. Apero, an open source bundle adjusment software for automatic calibration and orientation of set of images. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2011, 38, 5.
44. MicMac. Available online: http://www.tapenade.gamsau.archi.fr (accessed on 12 October 2016).
45. Cléry, I.; Pierrot-Desseilligny, M. An ergonomic interface to compute 3D models using photogrammetry. In Proceedings of the XXIIIe Symposium de la CIPA, Prague, Czech Republic, 12–16 September 2011.
46. Friedt, J.-M. Photogrammetric 3D Structure Reconstruction Using Micmac. 2014. Available online: http://jmfriedt.free.fr/lm_sfm_eng.pdf (accessed on 15 March 2017).
47. Mouget, A.; Lucet, G. Photogrammetric archaeological survey with UAV. ISPRS Ann. Photogramm. Remote Sens. Spat. Inf. Sci. 2014, 1, 251–258.
48. Rupnik, E.; Nex, F.; Toschi, I.; Remondino, F. Aerial multi-camera systems: Accuracy and block triangulation issues. ISPRS J. Photogramm. Remote Sens. 2015, 101, 233–246.
49. Ostrowski, W.; Bakuła, K. Towards efficiency of oblique images orientation. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2016, 91–96.
50. Yan, G.J.; Wang, J.F.; Liu, Q.; Su, L.; Wang, P.X.; Liu, J.M.; Zhang, W.M.; Mao, Z.Q. An airborne multi-angle power line inspection system. In Proceedings of the 2007 IEEE International Geoscience and Remote Sensing Symposium, Barcelona, Spain, 23–28 July 2007; pp. 2913–2915.
Figure 1. Technical framework for outdoor data acquisition and model accuracy assessment of a transmission line based on UAV oblique photogrammetry.
Figure 2. Imaging geometry for UAV-based oblique photogrammetry: (a) imaging geometry for the roll angle; and (b) imaging geometry for the pitch angle.
Figure 3. Design and integration of photogrammetric system: (a) design of oblique photogrammetric system; and (b) integration of oblique photogrammetric system with UAV platform.
Figure 4. Location of the test site. The red polygon indicates the region of the test site. Image positions calculated by differential GNSS are rendered as red circles. Yellow pentagons stand for pylons.
Figure 5. Images taken in the test site: (a,b) taken by the front camera in the along-track direction; and (c,d) taken by the back camera in the across-track direction.
Figure 6. GCPs (Ground Control Points) distribution. The red polygon indicates the approximate test region. GCPs are rendered as blue triangles.
Figure 7. Model key points survey: (a) key points of power pylon; and (b) key points of power line.
Figure 8. Image pair search: (a) only GNSS used for image pair search; and (b) combination of GNSS and mounting angles used for image pair search.
Figure 9. Tie-point extraction of two images. Correspondences are linked by identical numbers: (a) tie-points of the first image and (b) tie-points of the second image.
Figure 10. Tie-point extraction for image pairs. F and B stand for front and back cameras, respectively. The yellow quadrangle stands for the overlap region. (a) Front–front cameras in the same direction with 3497 tie-points; (b) front–front cameras in the opposite direction with 496 tie-points; (c) back–back cameras in the same direction with 5001 tie-points; (d) back–back cameras in the opposite direction with 1162 tie-points; (e) front–back cameras in the same direction with 1848 tie-points; and (f) front–back cameras in the opposite direction with 852 tie-points.
Figure 11. Match matrix graphs: (a) graph for tie-point extraction using GNSS and camera angles; and (b) graph for tie-point extraction using only GNSS information.
Figure 12. Image orientation of the test site using GNSS and camera mounting angles.
Figure 13. Spatial distribution of orientation residuals without GCPs: (a) horizontal residual distribution; and (b) vertical residual distribution.
Figure 14. Orientation residuals for the three test scenarios: (a) residuals in the X direction; (b) residuals in the Y direction; and (c) residuals in the Z direction.
Table 1. Ground coverage and GSD (Ground Sample Distance) under different flight heights.

| Flight Height (m) | Ground Coverage: Long Side (m) | Ground Coverage: Short Side (m) | GSD (cm) |
|---|---|---|---|
| 50 | 51.14 | 34.14 | 0.85 |
| 75 | 76.71 | 51.21 | 1.28 |
| 100 | 102.28 | 68.28 | 1.70 |
| 125 | 127.86 | 85.36 | 2.13 |
| 150 | 153.43 | 102.43 | 2.56 |
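The footprint and GSD values above follow the standard pinhole relation GSD = H·p/f (flight height times pixel pitch over focal length). The camera parameters are not stated in this excerpt; the sketch below assumes a 6000 × 4000 px sensor with a 4.15 µm pixel pitch and a 24.4 mm focal length, values chosen only because they approximately reproduce the table (to within a few decimetres of coverage).

```python
def nadir_footprint(height_m, focal_mm=24.4, pixel_um=4.15, img_w=6000, img_h=4000):
    """Ground footprint and GSD of a nadir-looking frame camera (pinhole model).

    Camera parameters are assumptions, not taken from the paper.
    Returns (long side in m, short side in m, GSD in cm).
    """
    gsd_m = height_m * (pixel_um * 1e-6) / (focal_mm * 1e-3)  # metres per pixel
    return img_w * gsd_m, img_h * gsd_m, gsd_m * 100.0

long_side, short_side, gsd_cm = nadir_footprint(100)
# Close to the 100 m row of Table 1: 102.28 m, 68.28 m, 1.70 cm
```

Because GSD is linear in flight height, each row of the table is simply a scaled version of any other, which is why the 50 m GSD (0.85 cm) is exactly half the 100 m GSD (1.70 cm).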
Table 2. Ground coverage and degree of overlap of images acquired by the integrated photogrammetric system under a flight height of 100 m.

| Oblique Angle (°) | Long Side (m) | Short Side (m) | Along-Track Overlap (%) | Across-Track Overlap (%) |
|---|---|---|---|---|
| 5 | 103.24 | 68.86 | 70.94 | 63.31 |
| 10 | 106.30 | 70.66 | 71.78 | 85.75 |
| 15 | 111.69 | 73.80 | 73.14 | 92.04 |
| 20 | 119.96 | 78.54 | 74.99 | 70.72 |
| 25 | 131.99 | 85.29 | 77.27 | 50.96 |
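The growth of ground coverage with oblique angle can be checked against the usual trapezoidal-footprint geometry: a camera tilted by θ with half field of view φ covers a ground extent H·(tan(θ + φ) − tan(θ − φ)) in the tilt plane. The half-FOVs below are not given in this excerpt; they are inferred from the nadir footprint at 100 m in Table 1 (an assumption), and with them the formula reproduces Table 2 to within a few centimetres.

```python
import math

def oblique_extent(height_m, tilt_deg, half_fov_deg):
    """Ground extent in the tilt plane of a camera tilted by tilt_deg,
    spanning from the near view ray to the far view ray."""
    t = math.radians(tilt_deg)
    h = math.radians(half_fov_deg)
    return height_m * (math.tan(t + h) - math.tan(t - h))

# Half-FOVs inferred from the 100 m nadir footprint in Table 1 (assumption):
HALF_FOV_LONG = math.degrees(math.atan(51.14 / 100.0))   # half of the long side
HALF_FOV_SHORT = math.degrees(math.atan(34.14 / 100.0))  # half of the short side

long_5 = oblique_extent(100, 5, HALF_FOV_LONG)      # Table 2 lists 103.24 m
short_25 = oblique_extent(100, 25, HALF_FOV_SHORT)  # Table 2 lists 85.29 m
```

The tan(θ + φ) term explains why coverage grows so quickly at large tilt angles: the far edge of the footprint stretches away much faster than the near edge recedes.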
Table 3. Configuration of the UAV photogrammetric system.

| Item | Value |
|---|---|
| Flight height (m) | 120 |
| Flight speed (m/s) | 8.15 |
| Camera capture interval (s) | 3 |
| Control mode | Semi-automatic |
| Flight mode | Dual-strip |
| Flight duration (min) | 10 |
| Front camera mount angle (°) | pitch: 25; roll: −15 |
| Back camera mount angle (°) | pitch: 0; roll: −25 |
| GSD (cm) | 3.67 |
Table 4. Commands for tie-point extraction and image orientation based on MicMac.

| Command | Description |
|---|---|
| ImageOriCmd | Gathers GNSS positions and camera mounting angles |
| OriConvert | Converts external orientation data to the internal format |
| GrapheHom | Searches image pairs based on the transformed position of each image |
| Tapioca | Tie-point detection and matching |
| Apero | Camera calibration, relative and absolute orientation |
| Campari | Lever-arm vector estimation |
| Apero2PMVS | Converts image orientation results to PMVS format |
Table 5. Test configurations without or with GCPs (“-” stands for no GCP).

| Test | No. GCPs Used | GCP Labels |
|---|---|---|
| 1 | 0 | - |
| 2 | 1 | 26 |
| 3 | 4 | 11, 13, 39 and 41 |
Table 6. Statistics of tie-point extraction.

| Test | No. Images Used | No. Image Pairs | No. Tie-Points |
|---|---|---|---|
| GNSS | 402 | 11,003 | 6,119,023 |
| GNSS and angle | 402 | 13,491 | 6,831,144 |
Table 7. Image orientation result for the test site (“-” stands for failure).

| Test | No. Images Connected | RMSE (pixel) |
|---|---|---|
| GNSS | - | - |
| GNSS and angle | 402 | 0.7684 |
Table 8. Comparison of orientation residuals of the three experiments. Max and Average are taken over absolute residuals.

| Test | Max X (m) | Max Y (m) | Max Z (m) | Avg X (m) | Avg Y (m) | Avg Z (m) | Stdev X (m) | Stdev Y (m) | Stdev Z (m) |
|---|---|---|---|---|---|---|---|---|---|
| 1 | 0.509 | 0.388 | 1.163 | 0.346 | 0.199 | 0.817 | 0.079 | 0.133 | 0.244 |
| 2 | 0.172 | 0.481 | 0.592 | 0.062 | 0.105 | 0.206 | 0.079 | 0.133 | 0.244 |
| 3 | 0.151 | 0.094 | 0.295 | 0.046 | 0.041 | 0.123 | 0.056 | 0.051 | 0.150 |
Table 9. Comparison of relative model accuracy of power pylons.

| Pylon ID | Survey Width (m) | Survey Height (m) | Model Width (m) | Model Height (m) | Diff. Width (m) | Diff. Height (m) |
|---|---|---|---|---|---|---|
| 54 | 16.659 | 45.633 | 16.582 | 45.597 | −0.077 | −0.036 |
| 54 | 17.658 | 45.589 | 17.452 | 45.597 | −0.206 | 0.008 |
| 54 | 17.649 | - | 17.479 | - | −0.170 | - |
| 55 | 17.280 | 45.549 | 17.099 | 45.574 | −0.181 | 0.025 |
| 55 | 18.298 | 45.774 | 18.179 | 45.531 | −0.119 | −0.243 |
| 55 | 18.739 | - | 18.800 | - | 0.061 | - |
Table 10. Comparison of relative model accuracy of power lines.

| Phase | Survey Height (m) | Model Height (m) | Difference (m) |
|---|---|---|---|
| A | 7.475 | 7.674 | 0.200 |
| A | 7.551 | 7.701 | 0.150 |
| A | 6.076 | 6.221 | 0.146 |
| C | 7.465 | 7.662 | 0.197 |
| C | 7.506 | 7.687 | 0.181 |
| C | 6.087 | 6.227 | 0.139 |
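The difference column of Table 10 can be recomputed directly from the survey and model heights. A minimal check, using only the values printed above; a few recomputed entries differ from the printed column by 0.001 m, presumably because the published table was rounded from unrounded measurements (an assumption).

```python
# Survey vs. model heights of power-line points from Table 10 (phases A and C).
survey = [7.475, 7.551, 6.076, 7.465, 7.506, 6.087]
model  = [7.674, 7.701, 6.221, 7.662, 7.687, 6.227]

# Recomputed model-minus-survey differences, rounded to millimetres.
diffs = [round(m - s, 3) for s, m in zip(survey, model)]
max_diff = max(abs(d) for d in diffs)
mean_diff = sum(diffs) / len(diffs)
```

All recomputed differences are positive and of similar magnitude (roughly 0.14–0.20 m), consistent with the relative accuracy figure of about 0.2 m reported in the abstract.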

Share and Cite

Jiang, S.; Jiang, W.; Huang, W.; Yang, L. UAV-Based Oblique Photogrammetry for Outdoor Data Acquisition and Offsite Visual Inspection of Transmission Line. Remote Sens. 2017, 9, 278. https://0-doi-org.brum.beds.ac.uk/10.3390/rs9030278