Article

Video Distance Measurement Technique Using Least Squares Based Sharpness Cost Function

1 Faculty of Electrical Engineering, “Gheorghe Asachi” Technical University of Iași, 700050 Iași, Romania
2 Faculty of Industrial Design and Business Management, “Gheorghe Asachi” Technical University of Iași, 700050 Iași, Romania
* Author to whom correspondence should be addressed.
Submission received: 26 July 2022 / Revised: 30 August 2022 / Accepted: 7 September 2022 / Published: 9 September 2022

Abstract

A wide range of precision applications requires video measuring systems that perform a large number of successive measurements and deliver fast results. Their efficiency is essentially determined by the technical performance of the equipment used and by the measurement technique on which they operate. In order to enhance the reliability of such systems, this paper presents a new method of measuring, with a single video camera, the distance at which the object of interest is located from the camera. The technique makes use of a least squares-based sharpness cost function and determines the distance between the camera and the object of interest by minimizing the least squares deviation of the current sharpness values from the sharpness values obtained by calibration. It involves a current sharpness calculation phase, a normalization phase, a phase of calculating the deviations of the current sharpness from the dependencies obtained by calibration and a phase of determining the minimum deviation index.

1. Introduction

Optical distance determination can be performed by active and passive methods [1]. Active emitter–receiver methods are based on computing the time of flight of a light, ultrasound or radio beam emitted toward the object of interest, which, after reflection, is received by the video camera [2]. The beam contains a known pattern, and the distance is determined by measuring the deformation of that pattern [3,4]. Several products based on this active approach have been developed, each with its own complexity and hardware limits: ARCore, a Google platform for building augmented reality [5], Occipital’s Structure Sensor for 3D scanning [6], Kinovea analysis software for sport and medical professionals [7] and Microsoft’s Kinect, a motion controller for gaming peripherals [8]. Research and advances in optical systems [9,10,11] concerning optical transmitter–receiver applications in video communication have also been reported [12,13,14], but they remain susceptible to confusion between echoes from previous or subsequent beam pulses or from other systems. Moreover, these applications require expensive and specialized equipment, depend on calibration accuracy [15,16] and are limited to measured distances of about 4 m [17].
Passive methods for distance measurement are based on cameras and computer vision techniques, generally using binocular systems of two cameras whose separation is accurately known [18]. These methods rely on information about the object’s position, thus requiring a precise alignment of the images captured by the two cameras in order to identify the correspondence between the images of the object of interest [19,20,21]. The issue of disparity between corresponding pixels in the two images has been addressed through the adoption of deep convolutional neural networks, whose gain depends on the image resolution and on the number of iterations the application requires [22,23]. Another approach for image stitching is the feature-based technique, which extracts point-by-point characteristics from two or more images using different algorithms to compose a panoramic image. Such methods either compare all pixel intensities with each other or determine a correspondence between images through distinct features extracted during processing [24,25]. One popular consumer device built on this principle is the Dell Venue 8 7000 Series, whose stereoscopic cameras allow measuring length, width, and height and calculating particular areas from photos. A derived variant of the binocular method uses a single camera moving over a known distance, obtaining the binocular effect. This concept finds most of its applications in the optical 3D shape reconstruction field [26,27,28] or in obstacle avoidance [29], with the downside of the large amount of data that must be captured and processed. The precision and test–retest reliability of image reconstruction have been improved by distance measurement methods based on the correspondence between the position of the focus lens and the distance of the object from the camera, known as shape from focus or depth from focus [30,31,32]. To accurately generate a map of the scene, the distance from the camera must be calculated for every point by measuring its relative degree of focus in the images where the point appears [33], so the performance of using focus to determine distance depends on the number of positions that the focus lens can take. In shape-from-focus applications, Gaussian interpolation, which creates new data points from a given discrete set of data, is the most widely used; it relies on three focus values (the maximum and two adjacent values) and was proposed as early as 1994 [34]. Linear and quadratic interpolation are also adopted to some extent [35], with the advantage of speed and simplicity.
A new passive method for visual distance measurement with a single camera is proposed in this paper. It requires a fixed camera and involves the processing of all the images of the object of interest (for each position of the focus lens). In order to overcome the high computational demand of the shape from focus and derived methods, the quality of the image is evaluated globally and not on the pixel granularity level. Since sharpness and contrast define the quality of a perceived image [36,37], a least squares-based sharpness cost function is minimized, yielding the distance to the object of interest.

2. Materials and Methods

2.1. Method Description

The proposed distance measurement method, making use of a least squares-based sharpness cost function, comprises two steps: a calibration step and a measurement one. Calibration is performed only once for a camera type and involves:
  • establishing the dependence of the sharpness, for each position of the lens, on the position of the object of interest. An (n, m) mapping matrix is obtained, with n and m being the number of focus-lens positions and the number of object positions, respectively;
  • approximating each dependency by a polynomial function Si and identifying the function coefficients.
Measuring the distance to the object of interest involves (a minimal numerical sketch follows the list):
  • taking a set of images of the object (stack) and calculating the sharpness for each position of the lens, Si_measured;
  • calculating the cost function as the sum of squared deviations between the measured sharpness and the sharpness obtained by calibration, for each of the m possible positions xj of the object:
    CFj = Σ_{i=0}^{n−1} [Si(xj) − Si_measured]^2;
  • establishing the index of the minimum of the squared deviations, which yields the calculated distance:
    estimated index = arg min_{j=1…m} CFj.
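The following Python fragment is a minimal numerical sketch of this measurement step, not the authors' LabView implementation; the names coeffs, positions and s_measured are assumptions standing for the Table 1 coefficient matrix, the m calibrated object positions xj and the normalized sharpness vector of the current image stack.

```python
import numpy as np

def estimate_position_index(coeffs, positions, s_measured):
    """Return the index j minimizing CFj = sum_i [Si(xj) - Si_measured]^2."""
    coeffs = np.asarray(coeffs, dtype=float)            # (n, 7): a0..a6 for each lens position
    positions = np.asarray(positions, dtype=float)      # (m,): calibrated object positions xj
    s_measured = np.asarray(s_measured, dtype=float)    # (n,): normalized sharpness of the stack

    # Evaluate every calibration polynomial Si at every candidate position xj.
    powers = positions[None, :] ** np.arange(coeffs.shape[1])[:, None]   # (7, m)
    s_calibrated = coeffs @ powers                                       # (n, m)

    # Least squares-based sharpness cost function CFj for each candidate position.
    cf = np.sum((s_calibrated - s_measured[:, None]) ** 2, axis=0)       # (m,)
    return int(np.argmin(cf)), cf
```

The measured distance is then positions[index], i.e., the calibrated object position whose sharpness profile deviates least from the current one.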

2.2. Experimental Setup

For testing the proposed measurement technique, a Logitech Pro C920 webcam equipped with Carl Zeiss® optics was used, with a 4-bit focus command, equivalent to 16 distinct lens positions. The autofocusing process is completed within a fraction of a second (0.38 s in good light; 0.89 s in low light conditions) and is set by default to the responsive mode, suitable for short or medium distances (this mode is useful when capturing a quick-moving visual target or when switching the focus continually between multiple targets). The webcam is integrated on an experimental linear displacement stand that allows object movement relative to the camera with high resolution. Structurally, the stand (Figure 1) is composed of an axis for linear motion with a slider (1), a camera test chart/array of 140 colors (2), a single-ended optical shaft encoder with 0.0235 mm resolution (4096 counts per revolution in quadrature mode/1024 lines per revolution) used for position feedback (3), a DC brushed micromotor with 0.7 Nm precision planetary gearheads (4), a single-channel linear voltage-controlled power amplifier (5), a data acquisition board (6) with closed-loop control, connected with (3), and the webcam (7). In order to drive the system toward a target position, a feedback loop with a PID controller is implemented. LabView 2021 software is used for command, control, data acquisition and image processing, with the National Instruments Vision Development Module 2021 toolkit.

3. Results

3.1. Calibration Stage

First, to determine how the sharpness evolves with displacement, the calibration phase is performed, as rendered in Figure 2. Two cycles are involved: the cycle of the object of interest (1) and the cycle of the focus lens (2). The lens cycle is performed only once for each position (3) of the object cycle and produces the image set (stack) over which the focus is evaluated, acquiring the image with the best sharpness. The object cycle involves moving the object of interest (3) along the axis of symmetry of the camera, with a resolution equal to the resolution desired when measuring. For each position of the object, the sharpness values (4) obtained from the lens cycle are normalized (5). From the matrix of the obtained data, the dependence of the sharpness on the position of the object (6) is extracted for each position of the lens, and each dependence is polynomially approximated (7).
From step (6), the dependencies of the sharpness on the position of the object were approximated for each of the 16 positions of the focus lens by a polynomial function:
Si = a0 + a1x + a2x^2 + a3x^3 + a4x^4 + a5x^5 + a6x^6
where i represents the lens position, i = 0…15, x is the object’s position and Si is the sharpness value. The coefficients of the 16 associated sixth-degree polynomial functions are given in Table 1, and the graphs of the approximation functions against the data obtained by calibration are rendered in Figure 3. For the method to be applied accurately, all decimals must be used.
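As an illustration of this calibration fit, the sketch below uses NumPy's polyfit to obtain the coefficients a0…a6 for each lens position; it is not the authors' LabView implementation, and the names positions and sharpness_matrix are assumed placeholders for the calibrated object positions and the normalized (16 × m) sharpness matrix.

```python
import numpy as np

def fit_calibration_polynomials(positions, sharpness_matrix, degree=6):
    """Fit Si(x) = a0 + a1*x + ... + a6*x^6 for every focus-lens position i."""
    positions = np.asarray(positions, dtype=float)                 # (m,) object positions [mm]
    sharpness_matrix = np.asarray(sharpness_matrix, dtype=float)   # (16, m) normalized sharpness
    coeffs = []
    for s_i in sharpness_matrix:
        # np.polyfit returns the highest power first, so the result is reversed
        # to match the a0..a6 ordering used in Table 1.
        coeffs.append(np.polyfit(positions, s_i, degree)[::-1])
    return np.array(coeffs)                                        # (16, degree + 1)
```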

3.2. Measurement Stage

The measurement phase, performed following the operations from Figure 4, involves taking the image set (stack) through the lens cycle (2), calculating the sharpness corresponding to each position of the lens (4) and normalizing the sharpness values (5). Then, the deviations of the current sharpness values from the dependencies resulting from the calibration are calculated for each position of the object (8), and the index of the minimum deviation is determined (9), which corresponds to the measured distance.
For an object placed, for example, at a distance of 104 mm, the set of 16 images is taken and the normalized sharpness values are calculated (Figure 5). For each possible position of the object, the deviation between the current sharpness values and the ones obtained by calibration is calculated (Figure 6). The minimum deviation is identified, which corresponds to a measured distance of 103 mm (Figure 7).
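For completeness, the fragment below sketches how a global sharpness score could be computed and normalized for the 16-image stack (steps (4) and (5) in Figure 4). The paper evaluates sharpness globally, but the exact operator belongs to the NI Vision toolkit, so the variance-of-Laplacian measure, the OpenCV dependency and the normalization by the stack maximum used here are assumptions.

```python
import cv2
import numpy as np

def normalized_stack_sharpness(image_paths):
    """Return one normalized global sharpness value per focus-lens position."""
    scores = []
    for path in image_paths:
        gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
        # Global (whole-image) sharpness: variance of the Laplacian response.
        scores.append(cv2.Laplacian(gray, cv2.CV_64F).var())
    scores = np.asarray(scores)
    return scores / scores.max()   # comparable with the normalized calibration data
```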

4. Discussion

The object was displaced in the range of [25 mm; 620 mm], passing through 341 positions. For each position, the absolute error was calculated as the difference between the prescribed distance (d) and the distance calculated by the proposed method (D); a graphical rendering of these values is shown in Figure 8. In addition, the relative error with respect to the prescribed distance was calculated and is displayed in Figure 9.
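A brief sketch of how these two error figures can be obtained is given below; prescribed and measured are assumed names for arrays holding the 341 prescribed distances d and the distances D returned by the method, both in millimetres.

```python
import numpy as np

def measurement_errors(prescribed, measured):
    """Absolute error (Figure 8) and error relative to the prescribed distance (Figure 9)."""
    prescribed = np.asarray(prescribed, dtype=float)
    measured = np.asarray(measured, dtype=float)
    absolute = prescribed - measured      # d - D, in mm
    relative = absolute / prescribed      # error relative to d
    return absolute, relative
```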
A qualitative analysis of the obtained results reveals that the relative error falls within a narrow range of values, both for small and large distance values. This is a great advantage when dealing with industrial applications where the measurement range is wide.
A further quantitative evaluation of the reported errors is somewhat difficult, since there are no established benchmarks for video distance measurement. Moreover, the results are strongly determined by the performance and properties of the optical system, as highlighted in [38].
A comparative analysis with the best results reported in mainstream publications is presented in Table 2.
By using the proposed method, improved accuracy is obtained compared to methods that use dedicated and expensive equipment, such as plenoptic (microlens-array) cameras [40,41] and stereo vision cameras [42,44], or that make use of a known dimensional reference in the optical path [43].

5. Conclusions

In this study, a new distance measurement method is proposed, with a single video camera for assessing the distance of an object from the camera. The method comprises a calibration step and a measurement step.
The first step contains the cycle of the object of interest (involving the movement of the object of interest along the axis of symmetry of the camera) and the cycle of the focus lens (performed only once for each position in the object cycle). For each position of the object, the sharpness values obtained from the lens cycle are normalized. The dependence of the sharpness on the position of the object is extracted from the two-dimensional data matrix for each lens position and polynomially approximated.
In the measurement step, for a certain position of the object, a set of images is taken characteristic of the lens cycle, and the sharpness corresponding to each position of the lens is determined and normalized. A least squares-based sharpness cost function is then computed and minimized, yielding the distance to the object of interest.
In terms of accuracy, the proposed method gives similar or better results when compared with reported methods making use of dedicated and expensive equipment such as plenoptic or stereo setups.

Author Contributions

Conceptualization, C.D. and M.C.T.; methodology, M.C.T.; software, C.D.; validation, M.C.T., M.P., C.D. and E.S.; formal analysis, C.D. and M.C.T.; investigation, M.P., M.C.T. and C.D.; resources, C.D. and M.P.; data curation, C.D. and E.S.; writing—original draft preparation, C.D. and M.P.; writing—review and editing, E.S.; visualization, C.D. and E.S.; supervision, E.S.; project administration, M.C.T.; funding acquisition, M.C.T. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by European Regional Development Fund, grant number 7386/27.12.2021 (SMIS code 137825)—“Video system for evaluating the standardized dimensions for customers of online clothing stores”. The APC was funded by “Gheorghe Asachi” Technical University of Iași, România.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable here.

Conflicts of Interest

The authors declare no conflict of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.

References

  1. Hamzah, R.A.; Ibrahim, H. Literature survey on stereo vision disparity map algorithms. J. Sens. 2016, 2016, 8742920. [Google Scholar] [CrossRef]
  2. Zaarane, A.; Slimani, I.; Al Okaishi, W.; Atouf, I.; Hamdoun, A. Distance measurement system for autonomous vehicles using stereo camera. Array 2020, 5, 100016. [Google Scholar] [CrossRef]
  3. Li, Q.; Chen, G. Fault distance location method of transmission line based on binocular vision technology. In Proceedings of the 7th International Conference on Intelligent Computing and Signal Processing, Xi’an, China, 15–17 April 2022; pp. 1807–1811. [Google Scholar]
  4. Wu, Y.; Shao, S.; Li, Y.; Chen, X.; Che, D.; Chen, J.; Du, K.; Jiang, R.; Huang, X.; Kan, D. Multi-beam optical phase array for long-range LiDAR and free-space data communication. Opt. Laser Technol. 2022, 151, 108027. [Google Scholar] [CrossRef]
  5. Overview of ARCore and Supported Development Environments. Available online: https://developers.google.com/ar/develop/ (accessed on 9 August 2022).
  6. How Does Structure Sensor Work? Available online: https://support.canvas.io/article/7-how-does-structure-sensor-work (accessed on 9 August 2022).
  7. Kinovea Features. Available online: https://www.kinovea.org/features.html (accessed on 9 August 2022).
  8. Kinect for Windows. Available online: https://docs.microsoft.com/en-us/windows/apps/design/devices/kinect-for-windows (accessed on 9 August 2022).
  9. de Arruda Mello, D.A.; Barbosa, F.A. The Optical Transmitter. In Digital Coherent Optical Systems, Optical Networks, 1st ed.; Mukherjee, B., Tomkos, I., Eds.; Springer: Cham, Switzerland, 2021; pp. 19–46. [Google Scholar]
  10. Jahid, A.; Alsharif, M.H.; Hall, T.J. A contemporary survey on free space optical communication: Potentials, technical challenges, recent advances and research direction. J. Netw. Comput. Appl. 2022, 200, 103311. [Google Scholar] [CrossRef]
  11. Tran, T.; Zhang, X. Process monitoring and inspection. In Digital Manufacturing; Trombaco, R.G., Ed.; Elsevier: Amsterdam, The Netherlands, 2022; pp. 387–442. [Google Scholar]
  12. Jo, K.; Gupta, M.; Nayar, S.K. SpeDo: 6 DOF ego-motion sensor using speckle defocus imaging. In Proceedings of the IEEE International Conference on Computer Vision, Santiago, Chile, 13–16 December 2015; pp. 4319–4327. [Google Scholar]
  13. Lee, J.; Gupta, M. Blocks-World Cameras. In Proceedings of the IEEE International Conference on Computer Vision and Pattern Recognition, Nashville, TN, USA, 20–25 June 2021; pp. 11412–11422. [Google Scholar]
  14. Zhang, X.G.; Sun, Y.L.; Zhu, B.; Jiang, W.X.; Yu, Q.; Tian, H.W.; Qiu, C.-W.; Zhang, Z.; Cui, T.J. A metasurface-based light-to-microwave transmitter for hybrid wireless communications. Light Sci. Appl. 2022, 11, 126. [Google Scholar] [CrossRef]
  15. Kirrbach, R.; Faulwaßer, M.; Schneider, T.; Meißner, P.; Noack, A.; Deicke, F. Monolitic Hybrid Transmitter-Receiver Lens for Rotary On-Axis Communications. Appl. Sci. 2020, 10, 1540. [Google Scholar] [CrossRef]
  16. Nagashima, K.; Ishikawa, Y.; Izawa, A.; Nishita, M.; Matsubara, N.; Ishii, H.; Saeyang, T.; Ogiso, Y.; Ueda, Y.; Kohtoku, M. Transmitter-receiver optical sub assembly using ultra-compact tunable DBR/ring laser. In Proceedings of the IEEE Optical Fiber Communications Conference and Exhibition, Washington, DC, USA, 6–11 June 2021; pp. 1–3. [Google Scholar]
  17. Yang, C.; Huang, X.; Zheng, Y.; Xie, Y.; Duan, X. Non-contact Breathing Rate Detection Based on Time of Flight Sensor. In Proceedings of the 43rd Annual International Conference of the IEEE Engineering in Medicine & Biology Society (EMBC), Guadalajara, Mexico, 1–5 November 2021; pp. 7284–7287. [Google Scholar]
  18. Zhang, X.; Shao, W.; Zhou, M.; Tan, Q.; Li, J. A scene comprehensive safety evaluation method based on binocular camera. Rob. Auton. Syst. 2020, 128, 103503. [Google Scholar] [CrossRef]
  19. Wu, C.; Yang, L.; Luo, Z.; Jiang, W. Linear Laser Scanning Measurement Method Tracking by a Binocular Vision. Sensors 2022, 22, 3572. [Google Scholar] [CrossRef]
  20. Xie, Q.; Hu, X.; Ren, L.; Qi, L.; Sun, Z. A Binocular Vision Application in IoT: Realtime Trustworthy Road Condition Detection System in Passable Area. IEEE Trans. Ind. Inform. 2022. [Google Scholar] [CrossRef]
  21. Xiang, H.; Cheng, L.; Wu, H.; Chen, Y.; Gao, Y. Mobile Robot Automatic Aiming Method Based on Binocular Vision. In Proceedings of the 40th Chinese Control Conference, Shanghai, China, 26–28 July 2021; pp. 4150–4156. [Google Scholar]
  22. Wang, Y.; Lai, Z.; Huang, G.; Wang, B.H.; van der Maaten, L.; Campbell, M.; Weinberger, K.Q. Anytime stereo image depth estimation on mobile devices. In Proceedings of the International Conference on Robotics and Automation (ICRA), Montreal, QC, Canada, 20–24 May 2019; pp. 5893–5900. [Google Scholar]
  23. Dai, H.; Zhang, X.; Zhao, Y.; Sun, H.; Zheng, N. Adaptive disparity candidates prediction network for efficient real-time stereo matching. IEEE Trans. Circuits Syst. Video Technol. 2021, 32, 3099–3110. [Google Scholar] [CrossRef]
  24. Huang, H.; Chen, F.; Cheng, H.; Li, L.; Wang, M. Semantic segmentation guided feature point classification and seam fusion for image stitching. J. Algorithm Comput. Technol. 2021, 15, 17483026211065399. [Google Scholar] [CrossRef]
  25. Zhou, C.; Yu, H.; Yuan, B.; Wang, L.; Yang, Q. Three-Dimensional Stitching of Binocular Endoscopic Images Based on Feature Points. Photonics 2021, 8, 330. [Google Scholar] [CrossRef]
  26. Yu, F.; Gallup, D. 3D reconstruction from accidental motion. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Columbus, OH, USA, 24–27 June 2014; pp. 3986–3993. [Google Scholar]
  27. Im, S.; Ha, H.; Choe, G.; Jeon, H.-G.; Joo, K.; Kweon, I.S. Accurate 3D reconstruction from small motion clip for rolling shutter cameras. IEEE Trans. Pattern Anal. Mach. Intell. 2018, 41, 775–787. [Google Scholar] [CrossRef] [PubMed]
  28. Im, S.; Ha, H.; Jeon, H.-G.; Lin, S.; Kweon, I.S. Deep Depth from Uncalibrated Small Motion Clip. IEEE Trans. Pattern Anal. Mach. Intell. 2021, 43, 1225–1238. [Google Scholar] [CrossRef] [PubMed]
  29. Aswini, N.; Uma, S.V. Obstacle avoidance and distance measurement for unmanned aerial vehicles using monocular vision. Int. J. Electr. Comput. 2019, 9, 3504. [Google Scholar]
  30. Ali, U.; Mahmood, M.T. Robust focus volume regularization in shape from focus. IEEE Trans. Image Process. 2021, 30, 7215–7227. [Google Scholar] [CrossRef]
  31. Li, Y.; Fu, Y.; Zhong, K.; Ma, B.; Yan, Z. A virtual binocular line-structured light measurement method based on a plane mirror. Opt. Commun. 2022, 510, 127974. [Google Scholar] [CrossRef]
  32. Gladines, J.; Sels, S.; Blom, J.; Vanlanduit, S. A Fast Shape-from-Focus-Based Surface Topography Measurement Method. Sensors 2021, 21, 2574. [Google Scholar] [CrossRef]
  33. Vignesh, S.M.; Senthilnathan, R. A Focus-Measurement Based 3D Surface Reconstruction System for Dimensional Metrology. IOP Conf. Ser. Mater. Sci. Eng. 2021, 1012, 012038. [Google Scholar] [CrossRef]
  34. Nayar, S.K.; Nakagawa, Y. Shape from focus. IEEE Trans. Pattern Anal. Mach. Intell. 1994, 16, 824–831. [Google Scholar]
  35. Subbarao, M.; Choi, T. Accurate recovery of three-dimensional shape from image focus. IEEE Trans. Pattern Anal. Mach. Intell. 1995, 17, 266–274. [Google Scholar] [CrossRef]
  36. Asokan, A.; Anitha, J.; Ciobanu, M.; Gabor, A.; Naaji, A.; Hemanth, D.J. Image Processing Techniques for Analysis of Satellite Images for Historical Maps Classification—An Overview. Appl. Sci. 2020, 10, 4207. [Google Scholar] [CrossRef]
  37. Prajapati, P.; Narmawala, Z.; Darji, N.P.; Moorthi, S.M.; Ramakrishnan, R. Evaluation of perceptual contrast and sharpness measures for meteorological satellite images. Procedia Comput. Sci. 2015, 57, 17–24. [Google Scholar] [CrossRef]
  38. Skibicki, J.; Golijanek-Jędrzejczyk, A.; Dzwonkowski, A. The Influence of Camera and Optical System Parameters on the Uncertainty of Object Location Measurement in Vision Systems. Sensors 2020, 20, 5433. [Google Scholar] [CrossRef]
  39. Zuckerman, M.; Kolberg, E. Distance Estimation to Image Objects Using Adapted Scale. Int. J. Eng. Sci. 2017, 6, 39–50. [Google Scholar] [CrossRef]
  40. Chen, Y.; Jin, X.; Dai, Q. Distance measurement based on light field geometry and ray tracing. Opt. Express 2017, 25, 59–76. [Google Scholar] [CrossRef]
  41. Hahne, C.; Aggoun, A.; Haxha, S.; Velisavljevic, V.; Fernández, J.C.J. Light field geometry of a standard plenoptic camera. Opt. Express 2014, 22, 26659–26673. [Google Scholar] [CrossRef]
  42. Setyawan, R.A.; Soenoko, R.; Mudjirahardjo, P.; Choiron, M.A. Measurement accuracy analysis of distance between cameras in stereo vision. In Proceedings of the IEEE Electrical Power, Electronics, Communications, Controls and Informatics Seminar (EECCIS), Batu, Indonesia, 9–11 October 2018; pp. 169–172. [Google Scholar]
  43. Megalingam, R.K.; Shriram, V.; Likhith, B.; Rajesh, G.; Ghanta, S. Monocular distance estimation using pinhole camera approximation to avoid vehicle crash and back-over accidents. In Proceedings of the IEEE 10th International Conference on Intelligent Systems and Control (ISCO), Coimbatore, India, 7–8 January 2016; pp. 1–5. [Google Scholar]
  44. Dragne, C.; Todiriţe, I.; Iliescu, M.; Pandelea, M. Distance Assessment by Object Detection—For Visually Impaired Assistive Mechatronic System. Appl. Sci. 2022, 12, 6342. [Google Scholar] [CrossRef]
Figure 1. Experimental stand for video distance measurement testing.
Figure 2. Block diagram of the calibration phase.
Figure 3. Polynomial approximations of clarities to the position of the object, for each of the 16 focus lens’ positions (f0–f15).
Figure 4. Block diagram of the measurement phase.
Figure 5. The clarities of a set of images (stack) for an object at 104 mm distance from the camera.
Figure 6. The dependence of the sharpness on the position of the object, for each of the focus lens positions.
Figure 7. Sharpness cost function for the current measurement (d = 104 mm).
Figure 8. The graphical variation of absolute errors on distance.
Figure 9. The graphical variation of relative errors on distance.
Table 1. Polynomial coefficients determined for each lens position.
Lens Position (i) | a0 | a1 | a2 | a3 | a4 | a5 | a6
0 | 1.00242 | −3.41879 × 10^−3 | −2.92321 × 10^−4 | 4.05166 × 10^−6 | −2.16663 × 10^−8 | 5.22761 × 10^−11 | −4.74343 × 10^−14
1 | 0.898721 | 6.36388 × 10^−3 | −4.76928 × 10^−4 | 5.53824 × 10^−6 | −2.7626 × 10^−8 | 6.40214 × 10^−11 | −5.65194 × 10^−14
2 | 0.802894 | 1.44673 × 10^−2 | −5.99068 × 10^−4 | 6.25795 × 10^−6 | −2.94182 × 10^−8 | 6.53407 × 10^−11 | −5.57859 × 10^−14
3 | 0.704034 | 1.85123 × 10^−2 | −5.59871 × 10^−4 | 5.0166 × 10^−6 | −2.07422 × 10^−8 | 4.09188 × 10^−11 | −3.11744 × 10^−14
4 | 0.672847 | 1.47299 × 10^−2 | −3.34911 × 10^−4 | 2.04935 × 10^−6 | −4.44685 × 10^−8 | 3.8416 × 10^−11 | 6.60522 × 10^−15
5 | 0.684477 | 6.57673 × 10^−3 | −2.49564 × 10^−5 | −1.41218 × 10^−6 | 1.24868 × 10^−8 | −3.79673 × 10^−11 | 3.95845 × 10^−14
6 | 0.707126 | −1.54146 × 10^−3 | 2.0778 × 10^−4 | −3.24671 × 10^−6 | 1.81618 × 10^−8 | −4.42197 × 10^−11 | 3.99154 × 10^−14
7 | 0.745179 | −9.34635 × 10^−3 | 3.99719 × 10^−4 | −4.82945 × 10^−6 | 2.47073 × 10^−8 | −5.8496 × 10^−11 | 5.28991 × 10^−14
8 | 0.831732 | −2.12925 × 10^−2 | 6.80613 × 10^−4 | −7.49486 × 10^−6 | 3.79759 × 10^−8 | −9.16576 × 10^−11 | 8.50388 × 10^−14
9 | 0.706765 | −4.9561 × 10^−3 | 1.48598 × 10^−4 | −1.44829 × 10^−6 | 7.39629 × 10^−9 | −1.89172 × 10^−11 | 1.82406 × 10^−14
10 | 0.704984 | −6.61971 × 10^−3 | 1.71546 × 10^−4 | −1.6138 × 10^−6 | 7.1569 × 10^−9 | −1.44787 × 10^−11 | 1.04764 × 10^−14
11 | 0.690874 | −4.97126 × 10^−3 | 1.07935 × 10^−4 | −8.17178 × 10^−7 | 2.66015 × 10^−9 | −2.95716 × 10^−12 | −3.20389 × 10^−16
12 | 0.687389 | −4.74209 × 10^−3 | 9.63782 × 10^−5 | −6.6829 × 10^−7 | 1.82623 × 10^−9 | −8.70554 × 10^−13 | −2.20633 × 10^−15
13 | 0.682394 | −4.54304 × 10^−3 | 8.85567 × 10^−5 | −5.72535 × 10^−7 | 1.29942 × 10^−9 | 4.56257 × 10^−13 | −3.44156 × 10^−15
14 | 0.677695 | −4.47291 × 10^−3 | 8.77463 × 10^−5 | −5.72904 × 10^−7 | 1.33118 × 10^−9 | 3.31304 × 10^−13 | −3.29169 × 10^−15
15 | 0.674817 | −4.50933 × 10^−3 | 8.79664 × 10^−5 | −5.6798 × 10^−7 | 1.27092 × 10^−9 | 5.60098 × 10^−13 | −3.57643 × 10^−15
Table 2. The relative error in video distance measurement. Compared methods: Zuckerman M., et al. 2017 [39]; Yankin C., et al. 2017 [40]; Hahne C., et al. 2014 [41]; Setyawan R.A., et al. 2018 [42]; Megalingam R.K., et al. 2016 [43]; Dragne C., et al. 2022 [44]; and the proposed method (not every method reports a value at every distance).
50 mm: 6%, 2%
100 mm: 23%, 6.3%, 1%
150 mm: 0.2%, 2.7%, 10.6%, 0.7%
200 mm: 7%, 1.5%, 4.7%, 3.5%, 2.72%, 2.5%
250 mm: 7.2%, 5.3%, 1.2%
300 mm: 11%, 1.7%, 5.9%, 6.22%, 0.4%
350 mm: 7%, 1.5%
400 mm: 2%, 3.7%
450 mm: 1.3%, 2%, 6.7%, 0.3%
500 mm: 2%, 2.5%, 4%
550 mm: 6.1%, 0.3%
600 mm: 9%, 2.4%, 7%, 2.1%, 1.7%
650 mm: 5.3%