Article

Analysis and Evaluation of the Image Preprocessing Process of a Six-Band Multispectral Camera Mounted on an Unmanned Aerial Vehicle for Winter Wheat Monitoring

Jiale Jiang, Hengbiao Zheng, Xusheng Ji, Tao Cheng, Yongchao Tian, Yan Zhu, Weixing Cao, Reza Ehsani and Xia Yao *
1 National Engineering and Technology Center for Information Agriculture, Nanjing Agricultural University, Nanjing 210095, China
2 Key Laboratory for Crop System Analysis and Decision Making, Ministry of Agriculture, Nanjing Agricultural University, Nanjing 210095, China
3 Jiangsu Key Laboratory for Information Agriculture, Nanjing Agricultural University, Nanjing 210095, China
4 Jiangsu Collaborative Innovation Center for Modern Crop Production, Nanjing Agricultural University, Nanjing 210095, China
5 Mechanical Engineering Department, University of California-Merced, Merced, CA 95343, USA
* Author to whom correspondence should be addressed.
Submission received: 14 November 2018 / Revised: 30 January 2019 / Accepted: 3 February 2019 / Published: 12 February 2019
(This article belongs to the Section Remote Sensors)

Abstract:
Unmanned aerial vehicle (UAV)-based multispectral sensors have great potential in crop monitoring due to their high flexibility, high spatial resolution, and ease of operation. Image preprocessing, however, is a prerequisite to making full use of the acquired high-quality data in practical applications. Most crop monitoring studies have focused on specific procedures or applications, and there has been little attempt to examine the accuracy of the data preprocessing steps. This study focuses on the preprocessing of a six-band multispectral camera (Mini-MCA6) mounted on UAVs. First, we quantified and analyzed the components of sensor error, including noise, vignetting, and lens distortion. Next, different methods of spectral band registration and radiometric correction were evaluated. Then, an appropriate image preprocessing process was proposed. Finally, the applicability and potential for crop monitoring were assessed in terms of the accuracy of leaf area index (LAI) and leaf biomass estimation under variable growth conditions during five critical growth stages of winter wheat. The results show that noise and vignetting could be effectively removed via correction coefficients in image processing. The widely used Brown model was suitable for lens distortion correction of the Mini-MCA6. Band registration based on ground control points (GCPs) (root-mean-square error, RMSE = 1.02 pixels) was superior to that using the PixelWrench2 (PW2) software (RMSE = 1.82 pixels). For radiometric correction, the accuracy of the empirical linear correction (ELC) method was significantly higher than that of the light intensity sensor correction (ILSC) method. The multispectral images processed using the optimal correction methods were demonstrated to be reliable for estimating LAI and leaf biomass. This study provides a feasible and semi-automatic image preprocessing process for the UAV-based Mini-MCA6, which also serves as a reference for other array-type multispectral sensors. Moreover, the high-quality data generated in this study may stimulate increased interest in remote high-efficiency monitoring of crop growth status.

1. Introduction

Efficient monitoring of crops is the basis of precision agriculture and helps to identify, analyze, and manage crop variability within farmland [1,2]. The advantages of using unmanned aerial vehicles (UAVs) are high flexibility, high spatial resolution, and ease of operation, and their use in crop monitoring has grown rapidly in recent years, especially in precision agriculture [3,4,5].
In the past, different types of sensors mounted on UAVs have been used for monitoring crop growth status [6]. For example, hyperspectral and thermal infrared sensors have been used to estimate leaf area index (LAI), nitrogen content, and biomass [7,8,9]. However, such sensors are often too heavy and bulky to be carried by small UAVs, and their application is also limited by high cost and low computing efficiency [6,10]. Although consumer-grade digital cameras are lightweight and affordable, they are inadequate for more in-depth and extensive research because they lack red edge and near-infrared (NIR) bands, which are more sensitive for crop monitoring than visible bands [11]. In contrast, multispectral sensors can provide multiple spectral bands (from visible to NIR) with centimeter-level spatial resolution [6,11]. In addition, owing to their low cost, compact dimensions, and fast frame imaging, multispectral sensors provide an acceptable balance between affordability and usability [11,12,13].
Image preprocessing of UAV-based multispectral sensors is a basic and vital factor that influences spectral accuracy and quantitative analysis [14]. Sensor corrections are the key initial steps required to extract geometrically consistent at-sensor data from the raw data; they encompass noise correction, vignetting correction, and lens distortion correction [15]. Image noise refers to any undesirable signal produced by the sensor [16], and the purpose of noise correction is to eliminate the systematic errors of the multispectral sensors. Vignetting is a spatially dependent light intensity falloff, which leads to a reduction in radiance toward the periphery compared to the image center [17,18,19]. Lens distortion is mainly caused by differences in magnification across the lens surface and misalignment between the lens and the detector plane, and is represented by radial distortion and tangential distortion [20]. Generally, correction coefficients are calculated to correct for noise and vignetting in multispectral images [15], and the Brown model is commonly adopted for lens distortion correction [15,21,22]. In addition to these preliminary corrections, band registration is required to improve the spatial consistency between bands. Although commercially available software packages (e.g., PhotoScan, Agisoft LLC, Russia, and Tetracam PixelWrench2) can perform alignment correction, low accuracies have been reported in many studies [14,23]. To reduce band misalignment, several studies have proposed algorithms for the correction of misalignment errors. Laliberte et al. [14] employed a Local Weighted Mean Transform (LWMT) method [24] to detect edges; however, only a local translation effect was considered. Radhadevi et al. [25] used a photogrammetric technique to remove unaccounted-for misregistration residuals; however, it was only suitable for small, flat areas. Turner et al. [23] developed new algorithms based on the Scale Invariant Feature Transform (SIFT); however, an error assessment was not undertaken. After band registration, the multispectral images still contain digital number (DN) values. To convert the DN values into spectral reflectance, radiometric correction is required. Two radiometric correction methods, vicarious correction and preflight correction, are often employed for UAV-based multispectral sensors [23]. As one of the most commonly used vicarious correction methods, the empirical linear correction (ELC) method depends on a regression analysis of spectral data from images against real measured values [23], and it has been applied in a variety of crop-monitoring studies [10,26,27]. Preflight calibration uses laboratory-calibrated parameters (e.g., calibration coefficients) to characterize the sensor [28]. Both types of radiometric calibration method have been used with different UAV-based sensors; however, there has been a lack of comparative studies to assess their accuracy and applicability.
Image preprocessing of UAV-based multispectral sensors basically involves five steps, namely noise correction, vignetting correction, lens distortion correction, band registration, and radiometric correction. Some studies have described procedures for sensor correction [15], and some have attempted to adopt different methods for band registration [14,23,29] or radiometric calibration [30]. However, a quantitative study on the accuracy of the complete preprocessing process has not been performed. Moreover, the performance of calibration methods varies for different UAV onboard sensors. There is, therefore, a timely need in crop monitoring research to analyze each correction procedure for a specific sensor and evaluate the applicability of different calibration methods in crop monitoring.
In this study, a typical six-band multispectral sensor (Mini-MCA6), which has been widely used in crop monitoring [31,32,33,34,35], was adopted to permit an evaluation of the image preprocessing steps. The objectives of the study were as follows: (1) to analyze the components of sensor error or data modification within the UAV-based multispectral sensor; (2) to compare the accuracy of intrinsic (software-based) and extrinsic (ground control point-based) methods for band registration; (3) to assess the performance of preflight calibration and vicarious calibration, i.e., ELC methods, for radiometric correction of a narrowband multispectral sensor; and (4) to evaluate the applicability and potential of image preprocessing for crop monitoring purposes. The anticipated results would provide guidance on how to select a robust, fit-for-purpose method for multispectral imagery preprocessing, and the suggested methods, integrated on the ENVI/IDL platform, provide a semi-automated image preprocessing process for a six-band multispectral camera.

2. Materials and Methods

2.1. Imaging Sensor and UAV System

A six-band multispectral camera (Mini-MCA6; Tetracam, Inc., Chatsworth, CA, USA) was used. The Mini-MCA6 is a miniature array camera with five band channels and an incident light sensor (ILS), which contains a band pass filter and an optical fiber (Figure 1). The basic performance parameters of the Mini-MCA6 are summarized in Table 1.
The multispectral camera was mounted on an ARF-MikroKopter UAV (Mikrokopter Inc., Moormerland, Germany), which had eight rotors. The specifications of the UAV are listed in Table 2. The UAV system was equipped with a MC-32 remote control module and a ThinkPad laptop, as shown in Figure 2.

2.2. Image-Preprocessing Methods

2.2.1. Noise Correction

Noise correction was performed as the first step of multispectral image preprocessing to correct the systematic error of the multispectral sensors. Given that the raw digital number (DNraw) of each pixel is the sum of a noise component (DNnoise) and a radiance component (DNrad), characterizing the noise component is the key to extracting the radiance component [15]:
$$DN_{rad} = DN_{raw} - DN_{noise} \quad (1)$$
To appropriately assess the contribution of the image noise, the multispectral camera was kept in a fully enclosed black box, which removed the radiance component as a result of the physical isolation of the sensor. This setup generated dark offset imagery that characterizes the distribution of image noise on a per-pixel basis. For each band channel of the Mini-MCA6, a sensor-specific database of dark offset imagery was compiled at the same three exposure levels (1.0, 1.5, and 2.0 ms) as the field test. Based on a trial-and-error method, the number of dark offset images for each exposure time was set to 100 to balance correction accuracy and calculation efficiency. The noise component of each pixel was calculated as the average of the 100 images and stored as a separate image [15]. Then, the image noise could be corrected based on Equation (1).
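The authors implemented their procedures on the ENVI/IDL platform; as a minimal illustrative sketch of the same per-pixel computation (Equation (1)) in Python, assuming the dark offset frames are stored as single-band TIFF files (the file paths and names below are hypothetical):

```python
import glob

import numpy as np
import tifffile  # third-party: pip install tifffile


def build_noise_image(dark_frame_paths):
    """Average a stack of dark offset frames into a per-pixel noise image."""
    stack = np.stack([tifffile.imread(p).astype(np.float64)
                      for p in dark_frame_paths])
    return stack.mean(axis=0)  # DN_noise for every pixel


def correct_noise(raw_image, noise_image):
    """Equation (1): DN_rad = DN_raw - DN_noise, clipped at zero."""
    return np.clip(raw_image.astype(np.float64) - noise_image, 0.0, None)


# Hypothetical usage: 100 dark frames of MCA-0 taken at a 1.5 ms exposure.
dark_paths = sorted(glob.glob("dark_offsets/mca0_1.5ms_*.tif"))
noise_mca0 = build_noise_image(dark_paths)
field_image = tifffile.imread("field/mca0_plot01.tif")
radiance_dn = correct_noise(field_image, noise_mca0)
```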

2.2.2. Vignetting Correction

Due to the non-uniformity of the optical lens, the brightness or saturation of an image decreases toward the periphery compared to the image center; this is known as vignetting. We used the uniform source system (CSTM-USS-1200C; Labsphere, Inc., North Sutton, NH, USA), as shown in Figure 3, to create vignetting images at three exposure levels (1.0, 1.5, and 2.0 ms). To maximize the noise reduction potential, 100 vignetting images per exposure level were averaged as vignetting samples (Va) for each band channel [15]. Due to the decreasing signal-to-noise ratio (SNR) from the center to the edge of the images, the periphery of the vignetting samples had high uncertainty [15]. To ensure data quality, the top 5% of DN values in Va were averaged as Vb. The correction coefficient of vignetting (Vc) for each channel was obtained from:
$$V_c = V_b / V_a \quad (2)$$
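A corresponding sketch of the vignetting workflow, assuming the uniform-source frames have already been noise-corrected and loaded as a NumPy stack; the factor is computed here so that it is ≥ 1 and is multiplied into the image, which matches the 1–1.5 range of Vc reported in Table 7 (function names are illustrative):

```python
import numpy as np


def vignetting_factor(flat_stack):
    """Compute the per-pixel vignetting correction factor V_c.

    flat_stack: (n, rows, cols) array of noise-corrected frames taken
    against the uniform source at a single exposure level.
    """
    va = flat_stack.mean(axis=0)              # V_a: averaged vignetting sample
    n_top = max(1, int(0.05 * va.size))
    vb = np.sort(va.ravel())[-n_top:].mean()  # V_b: mean of the top 5% of DNs
    return vb / va                            # V_c (Equation (2)), >= 1 off-center


def correct_vignetting(image, vc):
    """Multiply by V_c so DN values are evenly distributed across the frame."""
    return image * vc
```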

2.2.3. Lens Distortion Correction

Lens distortion was corrected using the Brown model [21]. The lens distortion correction parameters for each channel were measured using a program supplied by the Beijing Spatial Information Technology Co., Ltd., and the coefficients are given in Table 3. Because each channel exhibits a different distortion, the five channels were corrected separately using their respective coefficients:
$$\Delta x = (x - x_0)(k_1 r^2 + k_2 r^4) + p_1 \left[ r^2 + 2(x - x_0)^2 \right] + 2 p_2 (x - x_0)(y - y_0) + \alpha (x - x_0) + \beta (y - y_0), \quad (3)$$
$$\Delta y = (y - y_0)(k_1 r^2 + k_2 r^4) + p_2 \left[ r^2 + 2(y - y_0)^2 \right] + 2 p_1 (x - x_0)(y - y_0), \quad (4)$$
$$r = \sqrt{(x - x_0)^2 + (y - y_0)^2}, \quad (5)$$
where ∆x and ∆y are the image correction values along the horizontal and vertical coordinates, respectively, x and y are the coordinates of the image point, x0 and y0 represent the main image point, ki and pi (i = 1, 2) are the coefficients of radial distortion and decentering distortion, respectively, α is the non-square scaling factor, and β is the non-orthogonal distortion factor.
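A sketch of Equations (3)–(5) applied to one band, using the Channel 1 coefficients from Table 3; the sign convention for applying ∆x and ∆y and the choice of bilinear resampling are assumptions, as the paper does not specify them:

```python
import numpy as np
from scipy.ndimage import map_coordinates

# Channel 1 coefficients from Table 3.
x0, y0 = 459.2210, 553.2878
k1, k2 = 5.943e-8, -7.420e-15
p1, p2 = 9.080e-6, 3.876e-7
alpha, beta = 9.220e-3, 9.554e-3


def brown_correct(image):
    """Resample an image so that Brown-model lens distortion is removed."""
    rows, cols = image.shape
    y, x = np.mgrid[0:rows, 0:cols].astype(np.float64)
    dx0 = x - x0
    dy0 = y - y0
    r2 = dx0 ** 2 + dy0 ** 2                 # r^2 from Equation (5)
    radial = k1 * r2 + k2 * r2 ** 2          # k1*r^2 + k2*r^4
    dx = (dx0 * radial + p1 * (r2 + 2 * dx0 ** 2)
          + 2 * p2 * dx0 * dy0 + alpha * dx0 + beta * dy0)  # Equation (3)
    dy = (dy0 * radial + p2 * (r2 + 2 * dy0 ** 2)
          + 2 * p1 * dx0 * dy0)                             # Equation (4)
    # Sample the raw image at the distorted positions to build the
    # corrected image; the interpolation order is an implementation choice.
    return map_coordinates(image, [y + dy, x + dx], order=1, mode="nearest")
```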

2.2.4. Band Registration

The lenses of the multispectral camera were arranged in two rows, as depicted in Figure 1. The misalignment of these lenses leads to displacement between the images from each channel. To eliminate this image displacement, band registration was performed using two methods. One used the Tetracam PixelWrench2 (PW2) software supplied with the multispectral camera [35]. The other was the ground control point-based (GCP-based) method. To make the GCPs easy to distinguish, they were painted on the road as black annuluses with inner and outer diameters of 10 cm and 50 cm, respectively (Figure 4). The geographic coordinates of the GCPs were measured by RTK-GPS (X900 GNSS, Huace, Beijing, China). The image of MCA-0 was used as the reference image, and the images of the other channels were registered to the reference image via the image registration tool (i.e., "select GCPs: image to image") in ENVI 5.2.
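The registration itself was performed interactively in ENVI 5.2; to illustrate the underlying computation, the sketch below fits an affine transform from matched GCP pixel coordinates and warps a slave band into the MCA-0 geometry (the point coordinates and file path are hypothetical):

```python
import numpy as np
import tifffile
from skimage.transform import AffineTransform, warp

# Hypothetical: a lens- and vignetting-corrected slave band (e.g., MCA-4).
slave_image = tifffile.imread("corrected/mca4_plot01.tif").astype(np.float64)

# Hypothetical matched GCP pixel coordinates (x, y): reference (MCA-0)
# image on the left, slave image on the right.
ref_pts = np.array([[100.0, 305.0], [842.5, 290.1],
                    [118.3, 875.6], [858.0, 898.4]])
slave_pts = np.array([[102.4, 310.7], [845.1, 295.3],
                      [120.9, 880.2], [860.6, 902.8]])

# Least-squares affine fit mapping reference coordinates to slave
# coordinates, then resampling of the slave band into MCA-0 geometry.
tform = AffineTransform()
tform.estimate(src=ref_pts, dst=slave_pts)
registered = warp(slave_image, tform, preserve_range=True)

# Residual misregistration at the GCPs, analogous to the RMSE in Figure 10.
rmse = np.sqrt(((tform(ref_pts) - slave_pts) ** 2).sum(axis=1).mean())
```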

2.2.5. Radiometric Correction

Radiometric correction was necessary to remove radiometric distortions from the radiance data. Two methods were employed: the ELC method [36] and the light intensity sensor calibration (ILSC) method.
The ELC is a typical vicarious correction method based on a regression analysis of spectral data from images against real measured values. The standard data were measured by an analytical spectral device (ASD) FieldSpec Pro spectrometer (Analytical Spectral Devices, Boulder, CO, USA) from four pieces of calibration canvas with reflectance values of 3%, 22%, 48%, and 82% (Figure 5). As shown in Figure 5b, the spectral signatures of the calibration canvas were stable across the range of 490–800 nm, which covers all the wavelengths of the band channels (see Table 1).
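A minimal sketch of the per-band ELC fit, assuming the mean DN over each canvas panel has been extracted from the image; the DN values below are placeholders, while the reflectances are the nominal canvas values:

```python
import numpy as np

# Placeholder values for one band: mean DN over each canvas panel in the
# image, and the ASD-measured reflectance of the same panel.
dn_canvas = np.array([210.0, 890.0, 1650.0, 2600.0])
asd_reflectance = np.array([0.03, 0.22, 0.48, 0.82])

# Empirical linear correction: reflectance = gain * DN + offset.
gain, offset = np.polyfit(dn_canvas, asd_reflectance, deg=1)


def elc(dn_image):
    """Convert a DN image of the same band and flight to reflectance."""
    return gain * dn_image + offset
```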
The principle of the ILSC method is to multiply the correction factors by the ratio of the downwelling light intensity to the reflected light intensity. The downwelling intensity was captured through the optical fiber (see Figure 1) and recorded by the ILS. The corresponding correction file derived from the light intensity sensor can be uploaded into the PW2 software, as depicted in Figure 6.

2.3. Winter Wheat Case Study

The study area was located in the town of Baipu, in the city of Rugao, Jiangsu Province, China (120°45′E, 32°16′N). Two winter wheat (Triticum aestivum L.) field experiments were conducted involving different nitrogen (N) application rates, planting densities (D), and varieties (V) in two growing seasons. Details of the field experiments are given in Table 4. To avoid the uncertainty introduced by image mosaicking, the multispectral images were captured at an altitude of 150 m, so that all 36 field plots were covered within a single image at a spatial resolution of 8.125 cm. All flights were carried out between 11:00 and 13:00 under stable ambient light conditions, and the LAI and leaf biomass were measured in the field on the same day.
The LAI was measured as follows. For each plot, 30 wheat stems were taken as one sample, and the number of stems per meter of row (B) was counted manually. The green leaf area was scanned using an LI-3000 (LI-COR Inc., Lincoln, NE, USA). The LAI of the population was calculated by:
$$LAI = \frac{1}{D} \times B \times \frac{A}{C} \times 10^{-4}, \quad (6)$$
where D is the distance between two rows of wheat, and A and C are the leaf area and the number of stems of the sample, respectively.
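As a hypothetical worked example of Equation (6), assuming D is expressed in meters and A in cm², so that the $10^{-4}$ factor converts cm² to m² (these units and numbers are illustrative, not reported in the study): with D = 0.25 m, B = 120 stems per meter of row, and A = 900 cm² scanned from a C = 30-stem sample,

$$LAI = \frac{1}{0.25} \times 120 \times \frac{900}{30} \times 10^{-4} = 1.44.$$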
The leaf biomass was measured as follows: For each plot, 30 hills of plants were cut above the ground surface. All green leaves and panicles were separated from the stems. All components were oven-dried at 105 °C for 30 min and then at 80 °C for about 24 h until a constant weight was obtained.
To analyze the performance of the different image-preprocessing methods, the data from Experiment 1 were analyzed. Based on the results, an appropriate image preprocessing process was proposed for the UAV-based multispectral sensor. To evaluate the feasibility and stability of the optimal image-preprocessing methods for crop monitoring, the estimation models of LAI and leaf biomass for winter wheat were validated with the data from Experiment 2. Eight commonly used vegetation indices (VIs) were selected for the estimation of LAI and leaf biomass, as indicated in Table 5.

3. Results

3.1. Sensor Error Analysis

3.1.1. Noise Correction Factor for Each Band

The average noise images for the five channels of the Mini-MCA6 and the variation of the noise values (DNnoise) with exposure time are presented in Table 6. For each band, the noise images were not identical under different exposure times, and the values did not vary regularly as the exposure time increased. For the same exposure time, DNnoise varied from band to band within the range 0–15. Compared to the other four bands, MCA-2 had the lowest noise value. Using the DNnoise for each band at the specific exposure times in Table 6, the systematic error of the Mini-MCA6 caused by noise could be eliminated via Equation (1).

3.1.2. Vignetting Correction for Each Band

As shown in Table 7, the images of the vignetting correction factor for each band were not symmetric circular distributions. For the same exposure time, the factor images and the offsets of the circle center differed among the five bands. For the same band, the vignetting correction factors differed among exposure times. The distributions of Vc along the middle row of the images for each channel are also shown in Table 7. The range of Vc was from 1 to 1.5 for MCA-0, MCA-1, MCA-3, and MCA-4, while the value of Vc for MCA-2 was between 1 and 2.4. Notably, the peak of Vc was not in the middle for any channel.
Based on the images of correction factors, the vignetting could be eliminated. Taking the correction result of MCA-0 as an example, the DN values of the corrected image were distributed evenly after correction (Figure 7). The correction results for other band channels of the Mini-MCA6 were similar to that of MCA-0.

3.1.3. Brown Model for Lens Distortion Correction

Taking the MCA-4 images at the booting stage as an example, Figure 8 shows the results before and after correction of lens distortion via the Brown model. The correction results for other images of the Mini-MCA6 were similar. In a visual assessment, the lens distortion was effectively eliminated by processing with the Brown model.

3.2. Comparison of PW2-Based and GCP-Based Methods for Band Registration

Figure 9 shows one example of (a) the original, (b) the PW2-based, and (c) the GCP-based results. A visual inspection revealed that the pixel displacement between bands was reduced by both the PW2-based and GCP-based methods after band registration between the reference channel (MCA-0) and the other four bands. Compared to the multispectral image from the PW2-based method (see Figure 9b), the boundaries of features in the corrected image from the GCP-based method (see Figure 9c) matched better between bands. The statistical results also indicated that the GCP-based method was superior to the PW2-based method (Figure 10). Although the root-mean-square errors (RMSEs) of mismatched pixels for MCA-1, MCA-2, and MCA-3 decreased after PW2-based registration, the match between MCA-4 and the master channel was not improved. By contrast, the RMSEs for all slave channels were reduced to about one pixel by the GCP-based method.

3.3. Comparison of the ILSC and ELC Methods for Radiometric Correction

The reflectances of the four calibration canvases measured by the ASD spectrometer were used to evaluate the performance of the ILSC and ELC methods. As shown in Figure 11, the ELC method performed significantly better than the ILSC method. For the ILSC method, the results at the heading stage were generally underestimated. Compared to the ILSC method, the RMSEs of the ELC method were reduced by an average of 65.4% across all bands, with the largest improvement in MCA-1, whose RMSE decreased from 0.18 to 0.04.

3.4. Estimation of LAI and Leaf Biomass

To evaluate the feasibility and stability of the optimal image-preprocessing methods (noise correction, vignetting correction, lens distortion correction with the Brown model, GCP-based band registration, and radiometric correction with the ELC method), the LAI and the leaf biomass of winter wheat were estimated for the 2014–2015 season. As shown in Table 8, the VIs extracted from the corrected multispectral images performed satisfactorily for the estimation of both LAI (R² > 0.76 and RMSE < 1.13) and leaf biomass (R² > 0.73 and RMSE ≤ 0.051). The highest accuracies for the LAI and leaf biomass estimations were produced by MTVI2 (modified triangular vegetation index 2). Based on the MTVI2 images at different growth stages, maps of LAI and leaf biomass were generated, as shown in Figure 12 and Figure 13, respectively. At the same growth stage, the LAI and leaf biomass of winter wheat increased with rising plant density and nitrogen level. Under the same treatment regime, the LAI and the leaf biomass increased in the early stages and then decreased. These results were reasonable and consistent with visual inspection.
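To illustrate how the models in Table 8 are obtained, the sketch below computes MTVI2 from the corrected reflectance bands and fits the exponential form y = a·e^(bx) by linear regression on ln(y); the per-plot values are placeholders, not the study's data:

```python
import numpy as np


def mtvi2(r800, r700, r550):
    """Modified Triangular Vegetation Index 2 (Table 5), per pixel."""
    num = 1.5 * (1.2 * (r800 - r550) - 2.5 * (r700 - r550))
    den = np.sqrt((2 * r800 + 1) ** 2 - (6 * r800 - 5 * np.sqrt(r700)) - 0.5)
    return num / den


# Hypothetical per-plot means: MTVI2 from the imagery and measured LAI.
vi = np.array([0.42, 0.51, 0.58, 0.66, 0.73])
lai = np.array([2.1, 3.4, 4.2, 5.9, 7.1])

# Fit y = a * exp(b * x), the model form used in Table 8, by regressing
# ln(y) on x with ordinary least squares.
b, ln_a = np.polyfit(vi, np.log(lai), 1)
a = np.exp(ln_a)
lai_pred = a * np.exp(b * vi)
```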

4. Discussion

4.1. Correction of Noise, Vignetting, and Lens Distortion

Noise, vignetting, and lens distortion generated by sensors are important factors that dictate the quality of multispectral images from the Mini-MCA6. In this study, we identified and quantified those components of sensor error for each spectral band at the different growth stages of wheat.
To acquire uncontaminated information, noise images were generated for the noise correction in this study. The results indicated that the noise components of the different spectral bands at the different exposure times should be corrected separately and specifically, which is consistent with the results of an earlier study [15]. Additionally, due to the different observation conditions, real-time corrections were required for multispectral images acquired at the different growth stages of winter wheat.
For the vignetting correction, Laliberte et al. [14] used the correction files of the multispectral camera in PW2. However, the vignetting correction factors were not fixed, but gradually changed in a circular ring pattern (Table 7). Therefore, it would be more appropriate to determine the vignetting correction factors of each spectral band based on the exposure time setting of the UAV-based sensors.
The Brown model has been frequently used for lens distortion correction in digital photography and multispectral cameras [15]. In this study, the robustness of the Brown model was verified, and the results showed that the model effectively reduced the lens distortion of the Mini-MCA6. Since the performance of the Brown model was evaluated by a visual assessment, a quantitative comparison will be performed in future work for the purpose of in-depth discussion.

4.2. Selection of Appropriate Methods for Spectral Band Registration and Radiometric Correction

Given that UAV-based multispectral images have significant misregistration errors between spectral bands, various methods have been applied to spectral band registration; however, their accuracy has rarely been assessed. In this study, both intrinsic (PW2-based) and extrinsic (GCP-based) methods were evaluated. Although the PW2-based method was automated, our results showed that misregistration errors could not be effectively eliminated (an average RMSE of 1.82 pixels was obtained), which is consistent with earlier work [14]. Another highly automated method was proposed by Turner et al. [23], who used the SIFT algorithm to find feature points in the image for band registration and obtained an average accuracy of 1.78 pixels. However, the feature points of crops might not be precisely detected by such feature-based methods, especially for cropland with indistinct boundaries. In contrast, the GCP-based method was performed manually for band registration and produced better results, with an average RMSE of 1.02 pixels. It should be noted that it was difficult to distinguish GCPs when the flight height exceeded 400 m [29]. Given the above, the GCP-based band registration method is recommended for the acquisition of high-resolution Mini-MCA6 images in crop monitoring.
The purpose of radiometric correction is to convert the DN values of each pixel to a reflectance value in the image. Two methods (ILSC and ELC) were used for preprocessing the Mini-MCA6 images in this study. Compared with the reflectance spectra acquired by the ASD spectrometer, the performance of the ELC method was better than that of the ILSC method. This might be because the correction files of the ILSC method were produced directly at a height of 150 meters and the light and solar radiation intensities would have been different when the height of the UAV was changed. In the case of the ELC method, the correction coefficients were determined by the use of real measured values. Therefore, the ELC method is considered to be appropriate and fit-for-purpose for radiometric correction of the Mini-MCA6.

4.3. Application of Image Preprocessing in Crop Monitoring

Based on the validation and evaluation of the above methods, image preprocessing of UAV-based Mini-MCA6 data should consist of the following: (1) noise correction; (2) vignetting correction; (3) lens distortion correction via the Brown model; (4) GCP-based band registration; and (5) radiometric correction with the ELC method. To evaluate the feasibility and stability of this image preprocessing process, the LAI and leaf biomass of winter wheat were estimated from the preprocessed Mini-MCA6 images. The results clearly demonstrated the applicability and potential of the image preprocessing for winter wheat monitoring. Moreover, the utility and practicality of these methods have been increased by implementing all of the procedures on the ENVI/IDL platform.
The present study was based on field plot experiments of winter wheat. Although the field experiments were conducted over two years with specific wheat varieties, planting densities, and N application rates, the spatial heterogeneity and spectral differences of plots had some regularity. Therefore, other crops grown under different conditions should be used to test the applicability of the proposed image preprocessing process. The proposed image preprocessing process can also serve as a reference for other array-type multispectral sensors.

5. Conclusions

In this study, an analysis and evaluation of the image preprocessing of a UAV-based multispectral sensor, including noise correction, vignetting correction, lens distortion correction, spectral band registration, and radiometric correction, has been performed. The results showed that correcting the raw Mini-MCA6 images with the per-pixel noise images and vignetting correction factors could effectively reduce the errors caused by noise and vignetting. The Brown model proved to be highly versatile and robust for lens distortion correction. For spectral band registration, the GCP-based method is recommended when the GCPs can be clearly observed in high-resolution images. Regarding radiometric correction, the ELC method performed better than the ILSC method; thus, the ELC method is appropriate for narrowband multispectral images.
In practical use, the proposed procedures can be executed on the ENVI/IDL platform. Based on the field plot experiments of winter wheat, the LAI and leaf biomass at five critical growth stages were estimated to evaluate the feasibility of the image-preprocessing process. The results demonstrated that the image-preprocessing process for the UAV-based Mini-MCA6 was reliable for winter wheat monitoring, and it also serves as a reference for other array-type multispectral sensors. The proposed image-preprocessing process should be extended to diverse farmlands for other crop monitoring investigations.

Author Contributions

J.J., X.Y., and T.C. developed the concepts; J.J., X.Y., and H.Z. designed the algorithms; H.Z. and X.J. performed the experiments; J.J. and X.Y. wrote the paper; H.Z., X.J., T.C., Y.T., Y.Z., W.C., and R.E. revised the paper.

Funding

This work was supported by grants from the National Key Research and Development Program of China (2016YFD0300601), the National Natural Science Foundation of China (31671582), the Jiangsu Qinglan Project, the 111 project (B16026), the Jiangsu Collaborative Innovation Center for Modern Crop Production (JCICMCP), the Priority Academic Program Development of Jiangsu Higher Education Institutions (PAPD), the Qinghai Project of Transformation of Scientific and Technological Achievements (2018-NK-126), and the Jiangsu Province Key Technologies R&D Program (BE2016375).

Acknowledgments

We would like to thank the former graduate students Ni Wang and Yong Liu for their help with field data collection. We are grateful to the reviewers for their suggestions and comments, which significantly improved the quality of this paper.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Banu, S. Precision agriculture: Tomorrow’s technology for today’s farmer. J. Food Preprocess. Technol. 2015, 6. [Google Scholar] [CrossRef]
  2. Zhang, Y.; Su, Z.; Shen, W.; Jia, R.; Luan, J. Remote monitoring of heading rice growing and nitrogen content based on UAV Images. Int. J. Smart Home 2016, 10, 103–114. [Google Scholar] [CrossRef]
  3. Von Bueren, S.; Burkart, A.; Hueni, A.; Rascher, U.; Tuohy, M.; Yule, I. Comparative validation of UAV based sensors for the use in vegetation monitoring. Biogeosci. Discuss. 2014, 11, 3837–3864. [Google Scholar] [CrossRef]
  4. Lu, D.; Chen, Q.; Wang, G.; Moran, E.; Batistella, M.; Zhang, M.; Vaglio Laurin, G.; Saah, D. Aboveground forest biomass estimation with Landsat and LiDAR data and uncertainty analysis of the estimates. Int. J. For. Res. 2012, 2012, 436537. [Google Scholar] [CrossRef]
  5. Calderón, R.; Navas-Cortés, J.A.; Lucena, C.; Zarco-Tejada, P.J. High-resolution airborne hyperspectral and thermal imagery for early detection of Verticillium wilt of olive using fluorescence, temperature and narrow-band spectral indices. Remote Sens. Environ. 2013, 139, 231–245. [Google Scholar] [CrossRef]
  6. Yang, G.; Liu, J.; Zhao, C.; Li, Z.; Huang, Y.; Yu, H.; Xu, B.; Yang, X.; Zhu, D.; Zhang, X. Unmanned aerial vehicle remote sensing for field-based crop phenotyping: Current status and perspectives. Front. Plant Sci. 2017, 8, 1111. [Google Scholar] [CrossRef] [PubMed]
  7. Baresel, J.P.; Rischbeck, P.; Hu, Y.; Kipp, S.; Hu, Y.; Barmeier, G.; Mistele, B.; Schmidhalter, U. Use of a digital camera as alternative method for non-destructive detection of the leaf chlorophyll content and the nitrogen nutrition status in wheat. Comput. Electron. Agric. 2017, 140, 25–33. [Google Scholar] [CrossRef]
  8. Liu, S.; Li, L.; Gao, W.; Zhang, Y.; Liu, Y.; Wang, S.; Lu, J. Diagnosis of nitrogen status in winter oilseed rape (Brassica napus L.) using in-situ hyperspectral data and unmanned aerial vehicle (UAV) multispectral images. Comput. Electron. Agric. 2018, 151, 185–195. [Google Scholar] [CrossRef]
  9. Liang, L.; Di, L.; Zhang, L.; Deng, M.; Qin, Z.; Zhao, S.; Lin, H. Estimation of crop LAI using hyperspectral vegetation indices and a hybrid inversion method. Remote Sens. Environ. 2015, 165, 123–134. [Google Scholar] [CrossRef]
  10. Deng, L.; Mao, Z.; Li, X.; Hu, Z.; Duan, F.; Yan, Y. UAV-based multispectral remote sensing for precision agriculture: A comparison between different cameras. ISPRS J. Photogramm. Remote Sens. 2018, 146, 124–136. [Google Scholar] [CrossRef]
  11. Zheng, H.; Cheng, T.; Li, D.; Zhou, X.; Yao, X.; Tian, Y.; Cao, W.; Zhu, Y. Evaluation of RGB, color-infrared and multispectral images acquired from unmanned aerial systems for the estimation of nitrogen accumulation in rice. Remote Sens. 2018, 10, 824. [Google Scholar] [CrossRef]
  12. Zheng, H.; Li, W.; Jiang, J.; Liu, Y.; Cheng, T.; Tian, Y.; Zhu, Y.; Cao, W.; Zhang, Y.; Yao, X. A comparative assessment of different modeling algorithms for estimating leaf nitrogen content in winter wheat using multispectral images from an unmanned aerial vehicle. Remote Sens. 2018, 10, 2026. [Google Scholar] [CrossRef]
  13. Candiago, S.; Remondino, F.; De Giglio, M.; Dubbini, M.; Gattelli, M. Evaluating multispectral images and vegetation indices for precision farming applications from UAV images. Remote Sens. 2015, 7, 4026–4047. [Google Scholar] [CrossRef]
  14. Laliberte, A.S.; Goforth, M.A.; Steele, C.M.; Rango, A. Multispectral remote sensing from unmanned aircraft: Image preprocessing workflows and applications for rangeland environments. Remote Sens. 2011, 3, 2529. [Google Scholar] [CrossRef]
  15. Kelcey, J.; Lucieer, A. Sensor correction of a 6-band multispectral imaging sensor for UAV remote sensing. Remote Sens. 2012, 4, 1462–1493. [Google Scholar] [CrossRef]
  16. Al-amri, S.S.; Kalyankar, N.V.; Khamitkar, S.D. A comparative study of noise removal from high resolution remote sensing images. Int. J. IT Eng. 2010, 7, 5. [Google Scholar]
  17. Zheng, Y.; Lin, S.; Kang, S.B. Single-image vignetting correction. IEEE Trans. Pattern Anal. Mach. Intell. 2009, 31, 2243–2256. [Google Scholar] [CrossRef] [PubMed]
  18. Kim, S.J.; Pollefeys, M. Robust radiometric calibration and vignetting correction. IEEE Trans. Pattern Anal. Mach. Intell. 2008, 30, 562–576. [Google Scholar] [CrossRef]
  19. Goldman, D.B.; Chen, J.H. Vignette and exposure calibration and compensation. In Proceedings of the Tenth IEEE International Conference on Computer Vision, Beijing, China, 17–20 October 2005; pp. 899–906. [Google Scholar]
  20. Wang, A.; Qiu, T.; Shao, L. A simple method of radial distortion correction with centre of distortion estimation. J. Math. Imaging Vis. 2009, 35, 165–172. [Google Scholar] [CrossRef]
  21. Villiers, J.P.D. Modeling of radial asymmetry in lens distortion facilitated by modern optimization techniques. Proc. SPIE 2010, 7539, 75390–75398. [Google Scholar]
  22. Wang, J.; Shi, F.; Zhang, J.; Liu, Y. A new calibration model and method of camera lens distortion. In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems, Beijing, China, 9–15 October 2006; pp. 5713–5718. [Google Scholar]
  23. Turner, D.; Lucieer, A.; Malenovský, Z.; King, D.; Robinson, S. Spatial co-registration of ultra-high resolution visible, multispectral and thermal images acquired with a micro-UAV over antarctic moss beds. Remote Sens. 2014, 6, 4003–4024. [Google Scholar] [CrossRef]
  24. Goforth, M.A. Sub-pixel registration assessment of multispectral imagery. Proc. SPIE 2006, 6302. [Google Scholar] [CrossRef]
  25. Radhadevi, P.V.; Solanki, S.S.; Jyothi, M.V.; Nagasubramanian, V.; Varadan, G. Automated co-registration of images from multiple bands of Liss-4 camera. ISPRS J. Photogramm. Remote Sens. 2009, 64, 17–26. [Google Scholar] [CrossRef]
  26. Berni, J.A.J.; Zarco-Tejada, P.J.; Suarez, L.; Fereres, E. Thermal and narrowband multispectral remote sensing for vegetation monitoring from an unmanned aerial vehicle. IEEE Trans. Geosci. Remote Sens. 2009, 47, 722–738. [Google Scholar] [CrossRef]
  27. Suárez, L.; Zarco-Tejada, P.J.; González-Dugo, V.; Berni, J.A.J.; Sagardoy, R.; Morales, F.; Fereres, E. Detecting water stress effects on fruit quality in orchards with time-series PRI airborne imagery. Remote Sens. Environ. 2010, 114, 286–298. [Google Scholar] [CrossRef]
  28. Dinguirard, M.; Slater, P.N. Calibration of space-multispectral imaging sensors: A review. Remote Sens. Environ. 1999, 68, 194–205. [Google Scholar] [CrossRef]
  29. Jhan, J.-P.; Rau, J.-Y.; Huang, C.-Y. Band-to-band registration and ortho-rectification of multilens/multispectral imagery: A case study of MiniMCA-12 acquired by a fixed-wing UAS. ISPRS J. Photogramm. Remote Sens. 2016, 114, 66–77. [Google Scholar] [CrossRef]
  30. Del Pozo, S.; Rodríguez-Gonzálvez, P.; Hernández-López, D.; Felipe-García, B. Vicarious radiometric calibration of a multispectral camera on board an unmanned aerial system. Remote Sens. 2014, 6, 1918–1937. [Google Scholar] [CrossRef]
  31. Yao, X.; Wang, N.; Liu, Y.; Cheng, T.; Tian, Y.; Chen, Q.; Zhu, Y. Estimation of wheat LAI at middle to high levels using unmanned aerial vehicle narrowband multispectral imagery. Remote Sens. 2017, 9, 1304. [Google Scholar] [CrossRef]
  32. Zheng, H.; Cheng, T.; Li, D.; Yao, X.; Tian, Y.; Cao, W.; Zhu, Y. Combining unmanned aerial vehicle (UAV)-based multispectral imagery and ground-based hyperspectral data for plant nitrogen concentration estimation in rice. Front. Plant Sci. 2018, 9. [Google Scholar] [CrossRef]
  33. Aldana-Jague, E.; Heckrath, G.; Macdonald, A.; Van Wesemael, B.; Van Oost, K. UAS-based soil carbon mapping using VIS-NIR (480–1000 nm) multi-spectral imaging: Potential and limitations. Geoderma 2016, 275, 55–66. [Google Scholar] [CrossRef]
  34. Jeong, S.; Ko, J.; Kim, M.; Kim, J. Construction of an unmanned aerial vehicle remote sensing system for crop monitoring. J. Appl. Remote Sens. 2016, 10, 026027. [Google Scholar] [CrossRef]
  35. Torres-Sánchez, J.; López-Granados, F.; De Castro, A.I. Configuration and specifications of an unmanned aerial vehicle (UAV) for early site specific weed management. PLoS ONE 2013, 8, e58210. [Google Scholar]
  36. Smith, G.; Milton, E. The use of the empirical line method to calibrate remotely sensed data to reflectance. Int. J. Remote Sens. 1999, 20, 2653–2662. [Google Scholar] [CrossRef]
  37. Jordan, C.F. Derivation of leaf-area index from quality of light on the forest floor. Ecology 1969, 50, 663–666. [Google Scholar] [CrossRef]
  38. Rouse, J.W. Monitoring the Vernal Advancement and Retrogradation (Green Wave Effect) of Natural Vegetation; NASA/GSFCT Type III Final Report; National Aeronautics and Space Administration (NASA): Washington, DC, USA, 1974. [Google Scholar]
  39. Gitelson, A.A.; Kaufman, Y.J.; Merzlyak, M.N. Use of a green channel in remote sensing of global vegetation from EOS-MODIS. Remote Sens. Environ. 1996, 58, 289–298. [Google Scholar] [CrossRef]
  40. Birth, G.S.; McVey, G.R. Measuring the color of growing turf with a reflectance spectrophotometer. Agron. J. 1968, 60, 640–643. [Google Scholar] [CrossRef]
  41. Huete, A.R. A soil-adjusted vegetation index (SAVI). Remote Sens. Environ. 1988, 25, 295–309. [Google Scholar] [CrossRef]
  42. Haboudane, D.; Miller, J.R.; Pattey, E.; Zarco-Tejada, P.J.; Strachan, I.B. Hyperspectral vegetation indices and novel algorithms for predicting green LAI of crop canopies: Modeling and validation in the context of precision agriculture. Remote Sens. Environ. 2004, 90, 337–352. [Google Scholar] [CrossRef]
  43. Huete, A.; Didan, K.; Miura, T.; Rodriguez, E.P.; Gao, X.; Ferreira, L.G. Overview of the radiometric and biophysical performance of the MODIS vegetation indices. Remote Sens. Environ. 2002, 83, 195–213. [Google Scholar] [CrossRef]
  44. Wang, F.M.; Huang, J.F.; Tang, Y.L.; Wang, X.Z. New vegetation index and its application in estimating leaf area index of rice. Rice Sci. 2007, 14, 195–203. [Google Scholar] [CrossRef]
Figure 1. A photo of the Tetracam Mini-MCA6. ILS, incident light sensor.
Figure 2. A photo of the UAV system equipped with (a) a ThinkPad laptop; (b) a Graupner MC-32 control module; and (c) an ARF-MikroKopter UAV.
Figure 3. A photo of the uniform source system (CSTM-USS-1200C; Labsphere, Inc., North Sutton, NH, USA).
Figure 4. The distribution and the size of the ground control points (GCPs).
Figure 5. The calibration canvas with different reflectance values (a) and the corresponding spectral signatures (b).
Figure 6. The correction file from the light intensity sensor.
Figure 7. A comparison of the MCA-0 images (a) before and (c) after vignetting correction based on the correction factor image (b).
Figure 8. A comparison of the MCA-4 images (a) before and (b) after lens distortion correction at the booting stage.
Figure 9. Mini-MCA6 multispectral images from different band registration methods. (a) Original image stacking; (b) PW2-based method; and (c) GCP-based method.
Figure 10. The root-mean-square error (RMSE) of mismatched pixels between each slave and the main channel for different band registration methods.
Figure 11. Comparison of the analytical spectral device (ASD) reflectance with the calibrated values for five bands at four growth stages with the light intensity sensor correction (ILSC) and empirical linear correction (ELC) radiometric correction methods. Panels (a,b), (c,d), (e,f), (g,h), and (i,j) are the plots for MCA-1, MCA-0, MCA-2, MCA-3, and MCA-4, respectively. The two columns from left to right correspond to the ILSC and ELC radiometric correction methods, respectively.
Figure 12. Leaf area index (LAI) mappings using the relationship between MTVI2 and LAI at different growth stages: (a) jointing stage; (b) booting stage; (c) heading stage; (d) anthesis stage; and (e) filling stage. N is nitrogen rate (N0 = 0, N1 = 150, N2 = 300 kg/ha). D is row spacing (D1 = 25, D2 = 40 cm). V represents the wheat varieties Shengxuan 6 (V1) and Yangmai 18 (V2).
Figure 13. Leaf biomass mappings using the relationship between MTVI2 and leaf biomass at different growth stages: (a) jointing stage; (b) booting stage; (c) heading stage; (d) anthesis stage; and (e) filling stage. N is nitrogen rate (N0 = 0, N1 = 150, N2 = 300 kg/ha). D is row spacing (D1 = 25, D2 = 40 cm). V represents the wheat varieties Shengxuan 6 (V1) and Yangmai 18 (V2).
Table 1. The performance parameters of the Tetracam Mini-MCA6.

| Channel | Central Wavelength (nm) | Bandwidth (nm) | Exposure Proportion (%) |
|---|---|---|---|
| Master (MCA-0) | 550 | 10 | 70 |
| Slave 1 (MCA-1) | 490 | 10 | 120 |
| Slave 2 (MCA-2) | 671 | 10 | 100 |
| Slave 3 (MCA-3) | 700 | 10 | 100 |
| Slave 4 (MCA-4) | 800 | 10 | 80 |
| Incident light sensor (ILS) | – | – | 130 |
Table 2. The specifications of the ARF-MikroKopter unmanned aerial vehicle (UAV).

| Parameter | Value |
|---|---|
| Weight | 2050 g |
| Size | 73 (width) × 73 (length) × 36 (height) cm |
| Battery (4s/5000) | 520 g |
| Maximum payload | 2500 g |
| Flight duration | 8–41 min |
| Temperature range | −5–35 °C |
Table 3. The lens distortion correction coefficients for the Mini-MCA6.

| Correction Coefficient | Symbol | Channel 1 | Channel 2 | Channel 3 | Channel 4 | Channel 5 |
|---|---|---|---|---|---|---|
| Main image point | x0 | 459.2210 | 457.1416 | 459.5102 | 458.7621 | 447.4720 |
| | y0 | 553.2878 | 553.6993 | 553.4641 | 558.4854 | 554.4125 |
| Radial distortion | k1 | 5.943 × 10−8 | 5.396 × 10−8 | 5.836 × 10−8 | 4.322 × 10−8 | 5.635 × 10−8 |
| | k2 | −7.420 × 10−15 | 3.817 × 10−15 | 1.007 × 10−14 | 7.428 × 10−15 | 5.563 × 10−16 |
| Decentering distortion | p1 | 9.080 × 10−6 | 9.836 × 10−6 | 9.064 × 10−6 | 1.015 × 10−5 | 9.353 × 10−6 |
| | p2 | 3.876 × 10−7 | 2.110 × 10−7 | 5.372 × 10−7 | 1.763 × 10−8 | 1.307 × 10−7 |
| Non-square scaling | α | 9.220 × 10−3 | 8.82 × 10−3 | 1.012 × 10−2 | 9.240 × 10−3 | 1.028 × 10−2 |
| Non-orthogonal distortion | β | 9.554 × 10−3 | 8.941 × 10−3 | 9.234 × 10−3 | 8.603 × 10−3 | 9.920 × 10−3 |
Table 4. Experimental design and sampling dates for the two growing seasons.

| | Experiment 1 | Experiment 2 |
|---|---|---|
| Year | 2013–2014 | 2014–2015 |
| Wheat cultivar | Ningmai 13, Xumai 30 | Shengxuan 6, Yangmai 18 |
| Row spacing (cm) | 25 | 25, 40 |
| N application rates (kg/ha) | 0, 75, 150, 225, 300 | 0, 150, 300 |
| Sampling dates | April 9/15/23, 2014; May 6, 2014 | March 13, 2015; April 9/17/24, 2015; May 9, 2015 |
Table 5. The formulas for the vegetation indices (VIs).

| VI 1 | Algorithm | Reference |
|---|---|---|
| DVI | R800 − R700 | [37] |
| NDVI | (R800 − R700)/(R800 + R700) | [38] |
| GNDVI | (R800 − R550)/(R800 + R550) | [39] |
| RVI | R800/R700 | [40] |
| SAVI | 1.5 × (R800 − R700)/(R800 + R700 + 0.5) | [41] |
| MTVI2 | 1.5 × (1.2 × (R800 − R550) − 2.5 × (R700 − R550))/√((2 × R800 + 1)² − (6 × R800 − 5 × √R700) − 0.5) | [42] |
| EVI | 2.5 × (R800 − R700)/(1 + R800 + 6 × R700 − 7.5 × R490) | [43] |
| GBNDVI | (R800 − (R550 + R490))/(R800 + R550 + R490) | [44] |

1 DVI = Difference Vegetation Index, NDVI = Normalized Difference Vegetation Index, GNDVI = Green Normalized Difference Vegetation Index, RVI = Ratio Vegetation Index, SAVI = Soil Adjusted Vegetation Index, MTVI2 = Modified Triangular Vegetation Index 2, EVI = Enhanced Vegetation Index, GBNDVI = Green-Blue Normalized Difference Vegetation Index.
Table 6. The average noise images for the five channels of the Mini-MCA6 at different exposure times. [Image matrix not reproduced here: one averaged noise image per band (MCA-0, 550 nm; MCA-1, 490 nm; MCA-2, 671 nm; MCA-3, 700 nm; MCA-4, 800 nm) at each exposure time (1.0, 1.5, and 2.0 ms).]
Table 7. The images for the vignetting correction factor (Vc) for the five channels of the Mini-MCA6 at different exposure times, together with a plot of Vc along the middle row of each band's image. [Image matrix not reproduced here: one correction factor image per band (MCA-0, 550 nm; MCA-1, 490 nm; MCA-2, 671 nm; MCA-3, 700 nm; MCA-4, 800 nm) at each exposure time (1.0, 1.5, and 2.0 ms).] 1 The horizontal and vertical coordinates of the plots are column number and Vc, respectively.
Table 8. Statistical assessments for the estimation of leaf area index (LAI) and leaf biomass with vegetation indices (VIs) for the Mini-MCA6 images. R² is the determination coefficient, and RMSE is the root-mean-square error. Values in bold indicate the best accuracy (column basis).

| VI | LAI: Equation | R² | RMSE | Leaf Biomass: Equation | R² | RMSE |
|---|---|---|---|---|---|---|
| DVI | y = 0.4862e^(5.6717x) | 0.808 | 1.131 | y = 0.029e^(4.8811x) | 0.757 | 0.050 |
| NDVI | y = 0.0818e^(4.9699x) | 0.790 | 1.086 | y = 0.0062e^(4.2863x) | 0.744 | 0.045 |
| GNDVI | y = 0.0248e^(6.3034x) | 0.778 | 1.195 | y = 0.0021e^(5.4789x) | 0.744 | 0.048 |
| RVI | y = 0.707e^(0.1917x) | 0.833 | 1.130 | y = 0.0393e^(0.1671x) | 0.801 | 0.044 |
| SAVI | y = 0.1188e^(5.3964x) | 0.839 | 0.955 | y = 0.0086e^(4.6535x) | 0.790 | 0.044 |
| MTVI2 | y = 0.5147e^(3.837x) | **0.855** | **0.870** | y = 0.0303e^(3.3102x) | **0.806** | **0.041** |
| EVI | y = 0.2409e^(3.8655x) | 0.783 | 1.127 | y = 0.0159e^(3.3185x) | 0.731 | 0.050 |
| GBNDVI | y = 0.1047e^(4.7458x) | 0.765 | 1.294 | y = 0.0074e^(4.1367x) | 0.736 | 0.051 |
