Article

Re-Calibration and Lens Array Area Detection for Accurate Extraction of Elemental Image Array in Three-Dimensional Integral Imaging

1 Department of Computer Science, Sangmyung University, Seoul 110-743, Korea
2 Department of Intelligent IOT, Sangmyung University, Seoul 110-743, Korea
*
Author to whom correspondence should be addressed.
Submission received: 16 July 2022 / Revised: 9 September 2022 / Accepted: 13 September 2022 / Published: 15 September 2022
(This article belongs to the Special Issue Computational Sensing and Imaging)

Abstract

This paper presents a new method for extracting an elemental image array in three-dimensional (3D) integral imaging. To reconstruct 3D images in integral imaging, a method is first required to accurately extract an elemental image array from a raw captured image, and several methods have been proposed for this task. However, their accuracy is sometimes degraded by inaccurate edge detection, image distortions, optical misalignment, and so on. In particular, small pixel errors can deteriorate the performance of an integral imaging system with a lens array. To overcome this problem, we propose a postprocessing method for the accurate extraction of an elemental image array. Our method unifies an existing extraction method with the proposed postprocessing techniques, which consist of re-calibration and lens array area detection. The method reuses the results of an existing method and then improves them via the proposed postprocessing. To evaluate the proposed method, we perform optical experiments on 3D objects and provide the resulting images. The experimental results indicate that the proposed postprocessing techniques improve an existing method for extracting an elemental image array in integral imaging. Therefore, we expect the proposed techniques to be applied in various integral imaging systems.

1. Introduction

Integral imaging, initially introduced by G. Lippmann in 1908, is a technique for recording the information of 3D objects and displaying 3D images [1]. The technique offers full-color 3D images without eyeglasses, horizontal and vertical parallax, and continuous viewpoints. It not only displays 3D images in physical space but also reconstructs 3D images in computer space through computational integral imaging reconstruction (CIIR). These merits enable integral imaging to be employed in various applications using real 3D images, such as 3D imaging, depth extraction, medical imaging, 3D games, and automation systems. Thus, many studies on integral imaging have been actively conducted [2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23].
Integral imaging consists of a pickup process and a reconstruction process, as depicted in Figure 1. The pickup process captures light rays from 3D objects and records them as an image array called the elemental image array (EIA). Optical devices for picking up an elemental image array are classified into camera arrays, moving cameras, and lens arrays. A camera array requires large space and high cost. A moving camera requires scanning time and a motorized device, so it is difficult to pick up moving objects. A lens array with a camera requires less cost and space than the other two devices, and it can capture moving objects without time delay. Accordingly, a lens array is commonly employed as a pickup method in integral imaging [17,18,19,20,21,22,23].
In integral imaging, the setup of the pickup process consists of a lens array and a capturing device such as a camera, as shown in Figure 1a. The process converts a raw image from the capturing device into an elemental image array (EIA). To reconstruct 3D images effectively and accurately, the process requires a method that extracts elemental images without distortion or loss of data. Thus, various methods for extracting an elemental image array in 3D integral imaging have been discussed [24,25,26,27,28,29,30]. Among them, a technique using projection profiles was introduced to extract the two-dimensional (2D) lattice line structure, where the boundaries of the elemental images are estimated through projection profiles [24,25]. A projective transformation was applied to reduce errors by correcting perspective distortion [26]. A calibration pattern image was also used to extract an elemental image array [27]. To increase the robustness of detecting the lens array lattice, a line segment detection algorithm was studied [28]. In addition, some methods using markers attached to the surface of a lens array were discussed for detecting an elemental image array [29,30]. These existing methods can extract an elemental image array from a raw image of a lens array, but they can suffer from inaccurate edge detection, image distortions, and optical misalignment, which degrade the extraction performance. In particular, lens distortion and inaccurate edge detection tend to occur at the boundaries of each lens within the EIA, making the size and location of each elemental image inaccurate. Therefore, further study is required to improve the accuracy of extracting elemental images.
In this paper, we propose a postprocessing method to accurately extract an elemental image array in 3D integral imaging. The proposed method improves the accuracy of an existing method by re-calibrating its results. The proposed re-calibration algorithm calibrates the size and location of each elemental image by accurately estimating the number of lenses. In addition, we propose an algorithm to detect the lens array area by reusing intermediate data of the existing method. Thus, the extraction performance of an existing method can be improved by adding the proposed postprocessing techniques. To evaluate the proposed method, optical experiments are conducted. The results show that our method improves the accuracy of extracting an elemental image array.

2. Existing Method of Extracting Elemental Image Array

Figure 2 shows an existing method of extracting an elemental image array using Canny edge detection and edge map projection [24]. This method is chosen as our reference because it is one of the most accurate techniques in the literature. Like other methods, it employs Canny edge detection [31] to find a candidate set of lens boundaries. The edge map is then separated into horizontal and vertical edges by horizontal and vertical one-dimensional median filters, as shown in Figure 2. From these edge images, projection profiles are calculated in each direction, as depicted in the middle of Figure 2. The profiles have many periodic peaks, which are very likely to be the locations of lens boundaries. Peaks are selected by thresholding at 20% of the maximum value of the profile. From the selected peaks, candidate locations for the lens lattice lines are obtained in the horizontal direction, {lh} = <lh1, lh2, …, lhN>, and in the vertical direction, {lv} = <lv1, lv2, …, lvN>. Next, the positions of neighboring candidate lattice lines are subtracted to obtain d_i,h and d_i,v, the horizontal and vertical distances between the expected lattice lines. Since the lenses are square, one joint histogram can be built by combining d_i,h and d_i,v from both directions. The highest peak of the histogram is then most likely the size of an elemental image. With the elemental image size w0, the locations of the lattice lines can be expressed as
l_i = p_0 + i w_0,   i = 0, 1, 2, …   (1)
Here, p0 is an integer in [0, w0). For each of the horizontal and vertical directions, {lbest,h} and {lbest,v} are obtained by choosing the p0 that minimizes the matching error between the lattice lines l_i and the projection profile.
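The profile-and-offset search described above can be sketched as follows, assuming SciPy is available; the function names, the median filter length, and the scoring rule (maximal summed profile value on the lattice, equivalent to minimal matching error) are our illustrative choices, not the paper's implementation:

```python
import numpy as np
from scipy.ndimage import median_filter
from scipy.signal import find_peaks

def lattice_profiles(edge_map, filt_len=15, thresh_ratio=0.2):
    """Split a binary edge map (e.g. Canny output) into horizontal and
    vertical edge images with 1-D median filters, project each onto its
    axis, and keep peaks above 20% of the profile maximum as candidate
    lattice-line locations."""
    horiz = median_filter(edge_map, size=(1, filt_len))  # horizontal lines survive
    vert = median_filter(edge_map, size=(filt_len, 1))   # vertical lines survive
    prof_h = horiz.sum(axis=1).astype(float)  # row profile -> horizontal lines
    prof_v = vert.sum(axis=0).astype(float)   # column profile -> vertical lines
    lh, _ = find_peaks(prof_h, height=thresh_ratio * prof_h.max())
    lv, _ = find_peaks(prof_v, height=thresh_ratio * prof_v.max())
    return lh, lv, prof_h, prof_v

def best_offset(profile, w0):
    """Try every integer offset p0 in [0, w0) and score the lattice
    l_i = p0 + i*w0 by the summed profile value at its line positions;
    the best-matching offset wins."""
    n = len(profile)
    num_lines = int((n - 1) // w0) + 1
    best_p0, best_score = 0, -np.inf
    for p0 in range(int(np.floor(w0))):
        idx = np.round(p0 + w0 * np.arange(num_lines)).astype(int)
        idx = idx[idx < n]                     # drop lines falling off the image
        score = float(np.sum(profile[idx]))
        if score > best_score:
            best_p0, best_score = p0, score
    return best_p0
```

On a clean synthetic lattice, the returned peaks and offset coincide with the drawn grid lines; on a real raw image the peaks are only candidates, which is why the re-calibration step of Section 3 is needed.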

3. Proposed Method of Extracting Elemental Image Array

The proposed method consists of an existing method and two postprocessing techniques for extracting an EIA; it can utilize any existing method and improve its accuracy. Normally, existing methods effectively detect the size of an elemental image and the lattice structure, or the locations of the lens boundaries in the EIA [24,25,26,27,28]. However, the accuracy is sometimes degraded by lens distortion and inaccurate edge detection at the boundary of each lens. Even a small misalignment of 1 or 2 pixels negatively affects the image quality in computational reconstruction. Thus, the size and location of each elemental image can be contaminated by errors from these problems. To minimize such errors, postprocessing is very useful for improving an existing EIA extraction method, since any state-of-the-art method can be integrated with it.
Here, we propose two postprocessing techniques, re-calibration and lens array area detection, as shown in Figure 3. Our re-calibration technique runs independently, using the size and locations of elemental images produced by an existing EIA extraction method. An existing method provides the size and location of each elemental image, which should be real numbers to be accurate. The problems mentioned above can make the size of each elemental image inaccurate, and the locations of elemental images can suffer from the same problems. However, the number of elemental images is quite robust since it must be an integer. In our re-calibration, the number is calculated by quantizing an intermediate real value, and this quantization can eliminate the error in that value. Thus, the number is more accurate than the size and the locations; this property is exploited in our re-calibration technique. In our lens array area (LAA) detection, the intermediate edge map of the existing method is reused. Some existing methods employ markers to determine the lens array area [29,30], while existing methods without markers use a cropped raw image so that the lens array area is the whole input image; this limits the optical configuration of the pickup process. In contrast, our LAA detection technique provides the lens array area without markers, and it offers an efficient optical configuration.

3.1. Re-Calibration

The proposed re-calibration algorithm is shown in Figure 3. First, the reference lattice lines are sorted in descending order of their projection profile values, because the greater the profile value, the higher the probability that the line is accurately detected. Let the reference lines be R_i and their number be N. Then, the possible distances D_k between two chosen reference lines are defined as
D_k = R_m − R_n,   k = 1, 2, …, N(N − 1)/2,   (2)
where the maximum of k is the number of line pairs, N choose 2, and R_m > R_n.
The locations of reference lattice lines obtained from larger projection profiles are relatively robust, and there should be an integer number of lenses between the lattice lines. The number of lenses between the reference lines can be easily estimated. The distance Dk is divided by the initial value of the elemental image size, w0, obtained by the existing method. By rounding this value, the number of lenses is estimated. The estimated number of lenses is very accurate since some errors in w0 can be eliminated by the rounding or quantizing process. Thus, a new size of the elemental image can be calculated by dividing the distance Dk by the number of lenses. An updated size of the elemental image wk can be written as
w_k = D_k / Round(D_k / w_0)   (3)
Here, it can be said that wk is more accurate than w0, since the lens count is exact and the distances Dk are chosen more reliably than in the existing method. Note that the ideal distance Dk is a multiple of the ideal size wk. However, wk can still carry an error due to an inaccurate Dk. Thus, we introduce a method to extract a robust size from the set of wk.
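The per-pair update w_k = D_k / Round(D_k / w_0) can be sketched in a few lines; the helper name and the example line positions below are ours, chosen only to illustrate how the rounding absorbs the error in w_0:

```python
from itertools import combinations

def recalibrated_sizes(ref_lines, w0):
    """For every pair of reference lattice lines (R_m > R_n), estimate the
    integer lens count between them by rounding D_k / w0, then re-derive
    the elemental image size as w_k = D_k / Round(D_k / w0)."""
    sizes = []
    for r_n, r_m in combinations(sorted(ref_lines), 2):
        d_k = r_m - r_n              # distance between the two reference lines
        n_lenses = round(d_k / w0)   # integer lens count; the rounding
                                     # absorbs the error carried by w0
        if n_lenses > 0:
            sizes.append(d_k / n_lenses)
    return sizes
```

For instance, with reference lines at 0, 218, and 435 and an initial size w0 = 216, the three pairs yield candidate sizes 218.0, 217.5, and 217.0, all closer to the true pitch than w0 itself.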
To determine the final version of the elemental image size wf, we employ a continuous probability density function along the size space. The data wk in the size space are discrete and sparse, thus a continuous window is required to prepare a continuous probability density function [32]. Here, we use a Gaussian window which can be written by
G(w; w_k, σ) = (1 / √(2πσ²)) exp(−(w − w_k)² / (2σ²))   (4)
where σ is the width parameter of the Gaussian window; we fix σ = 2 in our method. A linear combination of the Gaussian windows for all wk provides the continuous probability (likelihood) function for the size. Taking the argmax of this function yields the final elemental image size:
w_f = arg max_w Σ_{k=1}^{M} G(w; w_k, σ)   (5)
For example, a set of Gaussian density functions for wk = {216, 216.5, 217.625, 219.875, 228.25, 240} using Equation (4) and their linear combination are depicted in Figure 4. The argmax of the likelihood function is wf = 217.03 by Equation (5). The group of wk forming the cluster of accurate size candidates determines the final wf, whereas the wk considered to be inaccurate sizes are discarded. Therefore, our re-calibration technique effectively suppresses noise in the estimated elemental image size.
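The argmax of Equation (5) can be reproduced numerically on a dense grid; in this sketch the function name and grid resolution are our choices, and the candidate set is the one shown in Figure 4:

```python
import numpy as np

def final_size(w_candidates, sigma=2.0):
    """Sum one Gaussian window per candidate size w_k (Equation (4)) and
    return the argmax of the combined density (Equation (5)), evaluated
    on a dense grid of w."""
    wk = np.asarray(w_candidates, dtype=float)
    grid = np.linspace(wk.min() - 3 * sigma, wk.max() + 3 * sigma, 200001)
    # (2*pi*sigma^2)^(-1/2) is a common factor and does not move the argmax
    density = np.exp(-(grid[:, None] - wk[None, :]) ** 2
                     / (2.0 * sigma ** 2)).sum(axis=1)
    return grid[np.argmax(density)]

# candidate sizes of Figure 4; the paper reports w_f = 217.03 for this set
wf = final_size([216, 216.5, 217.625, 219.875, 228.25, 240])
```

The grid argmax lands close to the reported 217.03: the cluster {216, 216.5, 217.625, 219.875} dominates the density, while the outliers 228.25 and 240 contribute essentially nothing near the peak.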

3.2. Lens Array Area Detection

The second postprocessing technique is an algorithm for detecting the boundary of the lens array. As shown in Figure 3, a raw image obtained from a capture device can include regions outside the elemental image array. In this case, false elemental images affect the reconstructed images; thus, a method for detecting the lens array area is required for 3D applications in integral imaging.
To localize the lens array area, we reuse the horizontal and vertical edge images, which are intermediate images of the existing method, as shown in Figure 2. For the vertical edge image, a horizontal scan is performed on each row to find the position indices of the first and last edges. Among those indices, the most frequently occurring index is very likely to be a boundary line of the lens array. To find this index, the histogram of the indices is useful, as shown in Figure 5. The index of the peak in the histogram is set as the initial value of the start and end boundaries of the lens array. Similarly, for the horizontal edge image, the process is repeated with a vertical scan to set the initial values of the top and bottom boundaries.
To improve the accuracy of the initial value of the boundary obtained from the histogram, we reuse the lattice line locations detected previously. The location of the lattice line closest to the initial value of the boundary index is determined to be the boundary of the lens array. Thus, the vertical and horizontal processing can finally determine the area of a lens array in the raw image.
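Both steps, the first/last-edge histogram and the snap to the nearest detected lattice line, can be sketched for one axis as follows (assuming a binary vertical edge image; the function and variable names are ours):

```python
import numpy as np

def lens_array_bounds(vert_edges, lattice_x):
    """Scan each row of the vertical edge image for its first and last
    edge columns, take the histogram peaks of those indices as initial
    left/right boundaries, then snap each to the nearest previously
    detected lattice line."""
    firsts, lasts = [], []
    for row in vert_edges:
        cols = np.flatnonzero(row)
        if cols.size:
            firsts.append(cols[0])   # first edge met by the horizontal scan
            lasts.append(cols[-1])   # last edge met by the horizontal scan
    left0 = np.bincount(firsts).argmax()   # peak of the first-edge histogram
    right0 = np.bincount(lasts).argmax()   # peak of the last-edge histogram
    lattice_x = np.asarray(lattice_x)
    snap = lambda v: int(lattice_x[np.abs(lattice_x - v).argmin()])
    return snap(left0), snap(right0)
```

Running the same routine on the transposed horizontal edge image with the horizontal lattice lines yields the top and bottom boundaries, completing the lens array area.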

4. Experimental Results and Discussion

An experiment was conducted to evaluate the performance of the proposed method. The lens array used in the experiment is a 17 × 20 array of square lenses with a focal length of 73 mm and a size of 7.47 mm × 7.47 mm. As shown in Figure 6, three test objects (a diagonal pattern, car and bear toys, and a ruler) were used for the experiment. A DSLR camera with a resolution of 4000 × 3000 pixels was employed to capture the raw images, which are also shown in Figure 6. Typical images seen through the lens array serve as the inputs of the proposed and existing methods. For objective evaluation, we determined the ground truth sizes and positions of the elemental images by careful manual operations and compared the results of each method against them. For the ground truth size, we manually counted the number of lenses in the picked-up image and determined the four corner points by careful visual inspection of the raw image; dividing the corner-to-corner distances by the lens count yields the elemental image size. The ground truth positions were then determined from the positions of the counted segment lines between the corner points.
Table 1 and Figure 7 show the experimental results in terms of accuracy. Table 1 shows that the existing method can produce poor results, especially for the diagonal pattern object: its size error is 6.43 pixels compared with the ground truth, because the strong edges inside the elemental images of the diagonal pattern contaminate the projection profiles. With our postprocessing added, the size error improves to 0.29 pixels. Thus, the poor results of the existing method are converted to highly accurate results, as shown in Figure 7a. Here, the ground truth is highlighted in green, the grid extracted by the existing method is in blue, and the grid of the proposed method is in red.
Similarly, for the car-and-bear and ruler images, the elemental image size errors of the existing method are 1.38 and 1.20 pixels, respectively, and the left images of Figure 7b,c show its moderate results. However, the position errors are not acceptable, since their averages are 7.37 and 10.10 pixels, respectively. The proposed method reduces the average position errors to 1.87 and 2.86 pixels, respectively, much smaller than those of the existing method. Figure 7b,c depict this improvement visually.
Figure 8 shows the detected lens array area bounding the EIA. In the experimental images, some regions lie outside the lens array and should be discarded from the detected EIA. According to Figure 8, which shows the three resulting images of lens array area detection, the proposed method provides acceptable results. The detected EIA area is highlighted with a red box, and the zoomed regions clearly show the performance of the detection. These results also indicate that our method is effective at retaining the elemental images maximally.
Based on the experimental results, we verified that the proposed method can be applied to the extraction of an elemental image array, and it improved the existing method significantly by suppressing errors in the results of the existing method. Moreover, it is confirmed that the proposed lens array area detection works well.
As shown in Figure 9, we reconstructed 3D images for integral imaging display to show the effect of a misaligned elemental image size. The standard computational integral imaging reconstruction (CIIR) used in [9,16,23] is employed; the method reconstructs 2D images at each depth based on back projection. Figure 9 shows the reconstructed images with different elemental image sizes, focused on the bear object. The bear is well focused with the correct elemental image size, as shown in Figure 9a, whereas imaging of the object fails with a misaligned elemental image size, as shown in Figure 9b.
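As a rough illustration of back-projection-based reconstruction (a minimal shift-and-sum sketch, not the exact CIIR implementation of [9,16,23]): each elemental image is shifted in proportion to its lens index and the overlapped pixels are averaged, with the per-lens shift standing in for the reconstruction depth. The function name and parameters are ours:

```python
import numpy as np

def ciir_plane(eia, ei_size, shift):
    """Back-project every elemental image onto one depth plane by
    shifting it by `shift` pixels per lens index and averaging the
    overlapped pixels; varying `shift` refocuses to different depths."""
    rows, cols = eia.shape[0] // ei_size, eia.shape[1] // ei_size
    out = np.zeros((ei_size + shift * (rows - 1),
                    ei_size + shift * (cols - 1)))
    cnt = np.zeros_like(out)
    for r in range(rows):
        for c in range(cols):
            ei = eia[r * ei_size:(r + 1) * ei_size,
                     c * ei_size:(c + 1) * ei_size]
            y, x = r * shift, c * shift
            out[y:y + ei_size, x:x + ei_size] += ei
            cnt[y:y + ei_size, x:x + ei_size] += 1
    return out / np.maximum(cnt, 1)   # average where elemental images overlap
```

An elemental image size that is off by even a pixel makes the tile boundaries drift across the array, which is exactly the blur seen in Figure 9b.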

5. Conclusions

In this paper, we have proposed an accurate extraction method for an elemental image array. The proposed method consists of an existing method and two postprocessing techniques, re-calibration and lens array area detection. The re-calibration technique updates the results of an existing method, so the size and locations of elemental images become more accurate than before. In addition, the lens array area detection technique detects the lens array boundaries in a raw image captured through a lens array. The experimental results indicated that the proposed method improves an existing method for extracting an elemental image array. The proposed postprocessing techniques can be integrated with an existing method to improve the accuracy of an integral imaging system, which has the advantage of upgrading an existing system at minimal cost. Therefore, it is expected that the proposed method can be applied to various integral imaging systems.

Author Contributions

Conceptualization, H.Y.; methodology, H.J. and H.Y.; software, H.J. and H.Y.; validation, H.J. and H.Y.; formal analysis, H.Y.; investigation, H.J., E.L., and H.Y.; resources, H.J.; data curation, H.J., E.L. and H.Y.; writing—original draft preparation, H.Y.; writing—review and editing, E.L. and H.Y.; visualization, H.J. and E.L.; supervision, H.Y.; project administration, H.Y.; funding acquisition, H.Y. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by a 2021 research Grant from Sangmyung University.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Data sharing not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Lippmann, G. Epreuves reversibles donnant la sensation du relief. J. Phys. Theor. Appl. 1908, 7, 821–825. [Google Scholar] [CrossRef]
  2. Javidi, B.; Carnicer, A.; Arai, J.; Fujii, T.; Hua, H.; Liao, H.; Martinez-Corral, M.; Pla, F.; Stern, A.; Waller, L.; et al. Roadmap on 3D integral imaging: Sensing, processing, and display. Opt. Express 2020, 28, 32266–32293. [Google Scholar] [CrossRef] [PubMed]
  3. Joshi, R.; O’Connor, T.; Shen, X.; Wardlaw, M.; Javidi, B. Optical 4D signal detection in turbid water by multi-dimensional integral imaging using spatially distributed and temporally encoded multiple light sources. Opt. Express 2020, 28, 10477–10490. [Google Scholar] [CrossRef] [PubMed]
  4. Hotaka, H.; O’Connor, T.; Ohsuka, S.; Javidi, B. Photon-counting 3D integral imaging with less than a single photon per pixel on average using a statistical model of the EM-CCD camera. Opt. Lett. 2020, 45, 2327–2330. [Google Scholar] [CrossRef]
  5. Martínez-Corral, M.; Javidi, B. Fundamentals of 3D imaging and displays: A tutorial on integral imaging, light-field, and plenoptic systems. Adv. Opt. Photonics 2018, 10, 512–566. [Google Scholar] [CrossRef]
  6. Arai, J.; Nakasu, E.; Yamashita, T.; Hiura, H.; Miura, M.; Nakamura, T.; Funatsu, R. Progress overview of capturing method for integral 3-D imaging displays. Proc. IEEE 2017, 105, 837–849. [Google Scholar] [CrossRef]
  7. Yoo, H.; Jang, J.Y. Intermediate elemental image reconstruction for refocused three-dimensional images in integral imaging by convolution with δ-function sequences. Opt. Lasers Eng. 2017, 97, 93–99. [Google Scholar] [CrossRef]
  8. Jang, J.-Y.; Yoo, H. Computational Three-Dimensional Imaging System via Diffraction Grating Imaging with Multiple Wavelengths. Sensors 2021, 21, 6928. [Google Scholar] [CrossRef]
  9. Yoo, H.; Shin, D.H.; Cho, M. Improved depth extraction method of 3D objects using computational integral imaging reconstruction based on multiple windowing techniques. Opt. Lasers Eng. 2015, 66, 105–111. [Google Scholar] [CrossRef]
  10. Park, S.G.; Yeom, J.W.; Jeong, Y.M.; Chen, N.; Hong, J.Y.; Lee, B.H. Recent issues on integral imaging and its applications. J. Inf. Disp. 2014, 15, 37–46. [Google Scholar] [CrossRef]
  11. Xiao, X.; Javidi, B.; Martinez-Corral, M.; Stern, A. Advances in three-dimensional integral imaging: Sensing, display, and applications. Appl. Opt. 2013, 52, 546–560. [Google Scholar] [CrossRef] [PubMed]
  12. Geng, J. Three-dimensional display technologies. Adv. Opt. Photonics 2013, 5, 456–535. [Google Scholar] [CrossRef] [PubMed]
  13. Cho, M.; Daneshpanah, M.; Moon, I.; Javidi, B. Three-dimensional optical sensing and visualization using integral imaging. Proc. IEEE 2011, 99, 556–575. [Google Scholar] [CrossRef]
  14. Piao, Y.; Shin, D.-H.; Kim, E.-S. Robust image encryption by combined use of integral imaging and pixel scrambling techniques. Opt. Lasers Eng. 2009, 47, 1273–1281. [Google Scholar] [CrossRef]
  15. Martínez-Cuenca, R.; Saavedra, G.; Pons, A.; Javidi, B.; Martínez-Corral, M. Facet braiding: A fundamental problem in integral imaging. Opt. Lett. 2007, 32, 1078–1080. [Google Scholar] [CrossRef]
  16. Shin, D.H.; Yoo, H. Image quality enhancement in 3D computational integral imaging by use of interpolation methods. Opt. Express 2007, 15, 12039–12049. [Google Scholar] [CrossRef]
  17. Fan, Z.; Qiu, H.Y.; Zhang, H.L.; Pang, X.N.; Zhou, L.D.; Liu, L.; Ren, H.; Wang, Q.H.; Dong, J.W. A broadband achromatic metalens array for integral imaging in the visible. Light Sci. Appl. 2019, 8, 67. [Google Scholar] [CrossRef]
  18. Baranski, M.; Rehman, S.; Muttikulangara, S.S.; Barbastathis, G.; Miao, J. Computational integral field spectroscopy with diverse imaging. J. Opt. Soc. Am. A 2017, 34, 1711–1719. [Google Scholar] [CrossRef]
  19. Lee, Y.; Yoo, H. Three-dimensional visualization of objects in scattering medium using integral imaging and spectral analysis. Opt. Lasers Eng. 2016, 77, 31–38. [Google Scholar] [CrossRef]
  20. Yoo, H. Axially moving a lenslet array for high-resolution 3D images in computational integral imaging. Opt. Express 2013, 21, 8876–8887. [Google Scholar] [CrossRef]
  21. Yoo, H. Depth extraction for 3D objects via windowing technique in computational integral imaging with a lenslet array. Opt. Lasers Eng. 2013, 51, 912–915. [Google Scholar] [CrossRef]
  22. Shin, D.H.; Kim, E.S.; Lee, B. Computational reconstruction of three-dimensional objects in integral imaging using lenslet array. Jpn. J. Appl. Phys. 2005, 44, 8016–8018. [Google Scholar] [CrossRef]
  23. Bae, J.; Yoo, H. Image Enhancement for Computational Integral Imaging Reconstruction via Four-Dimensional Image Structure. Sensors 2020, 20, 4795. [Google Scholar] [CrossRef] [PubMed]
  24. Sgouros, N.P.; Athineos, S.S.; Sangriotis, M.S.; Papageorgas, P.G.; Theofanous, N.G. Accurate lattice extraction in integral images. Opt. Express 2006, 14, 10403–10409. [Google Scholar] [CrossRef] [PubMed]
  25. Canada, B.A.; Thomas, G.K.; Wang, K.C.; Liu, Y. Automatic Lattice Detection in Near-Regular Histology Array Images. In Proceedings of the IEEE International Conference on Image Processing, San Diego, CA, USA, 12–15 October 2008; pp. 1452–1455. [Google Scholar] [CrossRef]
  26. Hong, K.; Hong, J.; Jung, J.H.; Park, J.H.; Lee, B. Rectification of elemental image set and extraction of lens lattice by projective image transformation in integral imaging. Opt. Express 2010, 18, 12002–12016. [Google Scholar] [CrossRef]
  27. Jeong, H.; Yoo, H. A fast and accurate method of extracting lens array lattice in integral imaging. J. Korea Inst. Inf. Commun. Eng. 2017, 21, 1711–1717. [Google Scholar] [CrossRef]
  28. Koufogiannis, E.T.; Sgouros, N.P.; Sangriotis, M.S. Robust integral image rectification framework using perspective transformation supported by statistical line segment clustering. Appl. Opt. 2011, 50, 265–277. [Google Scholar] [CrossRef]
  29. Lee, J.J.; Shin, D.H.; Lee, B.G. Simple correction method of distorted elemental images using surface markers on lenslet array for computational integral imaging reconstruction. Opt. Express 2009, 17, 18026–18037. [Google Scholar] [CrossRef]
  30. Jeong, H.; Yoo, H. Rectification of distorted elemental image array using four markers in three-dimensional integral imaging. Int. J. Appl. Eng. Res. 2017, 12, 15659–15663. [Google Scholar]
  31. Canny, J. A Computational Approach to Edge Detection. IEEE Trans. Pattern Anal. Mach. Intell. 1986, 6, 679–698. [Google Scholar] [CrossRef]
  32. Parzen, E. On estimation of a probability density function and mode. Ann. Math. Stat. 1962, 33, 1065–1076. [Google Scholar] [CrossRef]
Figure 1. Pickup and display/reconstruction of integral imaging.
Figure 2. The existing method of EIA extraction.
Figure 3. Proposed enhancement method of EIA extraction.
Figure 4. Estimated likelihood function using wk and Gaussian windows.
Figure 5. Histograms of horizontal scanning results.
Figure 6. Experimental images: (a) diagonal pattern; (b) car and bear; (c) ruler, raw EIA images of: (d) diagonal pattern; (e) car and bear; (f) ruler.
Figure 7. Resulting images through the existing method and proposed method of (a) diagonal pattern; (b) car and bear; (c) ruler.
Figure 8. Resulting images of lens array area detection by the proposed method of (a) diagonal pattern; (b) car and bear; (c) ruler.
Figure 9. Reconstructed image for a 3D object at 120 mm using the standard CIIR method with: (a) correct EI size; (b) misaligned EI size.
Table 1. Experimental results in terms of accuracy.
Input Image      | Elemental Image Size (Pixel)       | Size Error (Pixel)  | Avg. Position Error (Pixel)
                 | Ground Truth | Existing | Proposed | Existing | Proposed | Existing | Proposed
Diagonal pattern | 187.43       | 181      | 187.72   | −6.43    | +0.29    | 35.87    | 1.62
Car and bear     | 217.38       | 216      | 217.03   | −1.38    | −0.35    | 7.37     | 1.87
Ruler            | 223.2        | 222      | 223.54   | −1.20    | +0.34    | 10.10    | 2.86

Jeong, H.; Lee, E.; Yoo, H. Re-Calibration and Lens Array Area Detection for Accurate Extraction of Elemental Image Array in Three-Dimensional Integral Imaging. Appl. Sci. 2022, 12, 9252. https://0-doi-org.brum.beds.ac.uk/10.3390/app12189252
