Review

Mathematical Principles of Object 3D Reconstruction by Shape-from-Focus Methods

by Dalibor Martišek * and Karel Mikulášek
Institute of Mathematics, Faculty of Mechanical Engineering, Brno University of Technology, 61669 Brno, Czech Republic
* Author to whom correspondence should be addressed.
Submission received: 26 July 2021 / Revised: 31 August 2021 / Accepted: 7 September 2021 / Published: 14 September 2021
(This article belongs to the Special Issue Computer Graphics, Image Processing and Artificial Intelligence)

Abstract: Shape-from-Focus (SFF) methods have been developed for about twenty years. They are able to obtain the shape of 3D objects from a series of partially focused images. The plane on which the microscope or camera is focused intersects the 3D object in a contour line. Due to the wave properties of light and the finite resolution of the output device, the image can be considered sharp not only on this contour line but also within a certain interval of height—the zone of sharpness. SFF methods are able to identify these focused parts, compose a fully focused 2D image, and reconstruct a 3D profile of the observed surface.

1. Introduction

Three-dimensional reconstruction of general surfaces has an important role in a number of fields: the morphological analysis of fracture surfaces, for example, reveals information on the mechanical properties of natural or construction materials.
There are several techniques capable of producing digital three-dimensional (3D) replicas of solid surfaces. In mechanical engineering, contacting electronic profilometers can be used to determine digital two-dimensional (2D) profiles to be combined into 3D surface profiles—see [1,2], for example. The contacting mode of atomic force microscopes actually belongs to this mechanical category [3]. In addition to the mechanical tools, optical devices exist in diverse modifications [2]: light section microscopy [4,5], coherence scanning interferometry [6], speckle metrology [7], stereo projection [8], photogrammetry [9], and various types of light measurement of profiles [10], to mention some of them.
3D laser scanning techniques are among other ways of obtaining 3D data. They have also been tested in some rock engineering projects, such as 3D digital fracture mapping [11,12,13].
These devices are, however, not of universal use, with each of them having its own technical limitations [14,15]. Very rough surfaces, for instance, can hardly be measured by atomic force microscopes working in the nano-regions. On the other hand, it is possible to measure plane surfaces with microscopically small irregularities using a microscopic sectional technique with confocal microscopes [16,17,18,19]. Confocal microscopes, however, are not always suitable for technical purposes due to the small size of their visual fields (maximal visual field is about 2 cm [5,20,21,22]).
The present paper summarizes the existing methods of 3D reconstruction of objects by the Shape-from-Focus (SFF) method. This is a method for recovering depth from an image series of the same object taken with different focus settings, referred to as a multifocal image.
It consists of these steps:
(a)
Data acquisition (confocal microscope in the standard mode, CCD camera, or standard camera)—Section 2.
(b)
Image registration (if necessary)—Section 3.
(c)
Choice of the focusing criteria—Section 4.
(d)
The 2D and 3D reconstructions—Section 5.
See [23] for one of the first papers on this subject.

2. Data Acquisition

2.1. Parallel Projection

In technical practice, the confocal microscope serves as a standard instrument for imaging microscopic three-dimensional surfaces. It has a very small depth of field, and its advanced hardware is capable of removing non-sharp points from the images. The points of the object situated close to the focal plane are seen as sharp points. The parts lying further above or beneath the focal plane (out of the sharpness zone) are invisible, being represented as black regions when the confocal mode is on. In this way, a so-called optical cut is obtained. In the non-confocal (standard) mode, the areas lying outside the sharpness zone are displayed as blurred, as they would be with a standard camera. With a confocal microscope or CCD camera, one can assume that the field of view is small and the projection used is parallel. In this case, all images cover a field of view of identical size, with the corresponding pixels having the same coordinates in the separate partially focused images (see Figure 1).
However, the confocal microscope with its small visual angle is hardly suitable for technical purposes due to the small size of its visual field (maximal visual field is about 2 cm [5,20,21,24]).

2.2. Central Projection

The same output (Figure 1b) can be obtained by a classical microscope or (in a wider field) a common camera. The difference between the confocal microscope or CCD camera and the standard camera is given by the central projection, which varies the scaling of the partial images in a series, and, further, by the non-sharp regions, which are displayed by the classic camera but missing when taken by a confocal microscope in the confocal mode (see [25] for more information). The different image scalings, however, require subsequent corrections (including shifts and rotations).

2.3. Multifocal Image

To create a 2D or 3D reconstruction, it is necessary to obtain a series of images of an identical object, each of them focused differently, so that (in the ideal case) each object point is focused in one of the images; such a series is referred to as a multifocal image. For the acquisition of a large multifocal image, the camera must be mounted on a stand so that it can be moved in the direction approximately orthogonal to the surface with a controlled step. Different transformations and different sharp parts must be identified and composed into a 2D or 3D model.

3. Image Registration

With a confocal microscope or CCD camera, we can assume that the field of view is small and the projection used is parallel. In this case, all images cover a field of view of identical size, with the corresponding pixels having identical coordinates in the separate partially focused images. However, this assumption does not hold for larger samples; then, the angle of the projection lines is not negligible, and the fields of view (and the coordinates of the corresponding pixels) are clearly different in each image (see Figure 2 and Figure 3).
Before reconstruction, all geometric transformations in the image series must be identified and eliminated. The images are assumed to be geometrically similar. Generally, a similarity is composed of rotation, scale change, shift, and axial symmetry; axial symmetry, however, is not possible in this particular case.
If we consider scale change only, assuming that the image size is proportional to the camera shift (see Figure 4a), the different image scalings can be computed using elementary mathematics. This approach was used in [25]. A 3D reconstruction after such an elementary registration is shown in Figure 5. The huge artefacts caused by the inaccurate registration had to be blurred by aggressive low-pass filters, and much useful high-frequency information was lost.
In practice, the situation may be more complicated. The images may differ not just in the scale used but in the content displayed as well (different parts being focused in different images). Due to mechanical inaccuracies, the step along the z-axis may not be fully constant, and the images can also be mutually shifted along the x- or y-axis or rotated. Image registration is further complicated by the non-planarity of samples (see Figure 4 on the right). Therefore, sophisticated pre-processing of the image series may be necessary. Suitable tools for this are the Fourier transform and phase correlation.

3.1. Continuous Two-Dimensional Fourier Transform and Inverse Transform

A continuous standard Fourier transform of a function $f(x)\colon \mathbb{R} \to \mathbb{R}$ is the function
$$\mathcal{F}[f(x)] = F(\xi) = \int_{-\infty}^{\infty} f(x)\, e^{-ix\xi}\, \mathrm{d}x$$
(provided that this integral exists and is finite).
A continuous standard Fourier transform of a function $f(x;y)\colon \mathbb{R}^2 \to \mathbb{R}$ is the function
$$\mathcal{F}[f(x;y)] = F(\xi;\eta) = \iint_{\mathbb{R}^2} f(x;y)\, e^{-i(x\xi + y\eta)}\, \mathrm{d}x\, \mathrm{d}y$$
(provided that this integral exists and is finite).
Function $F$ is also referred to as the Fourier spectrum of function $f$. It is possible to obtain the function $f$ from its Fourier spectrum $F$ by the inverse Fourier transform.
A continuous standard inverse Fourier transform of a function $F(\xi)\colon \mathbb{R} \to \mathbb{C}$ is the function
$$\mathcal{F}^{-1}[F(\xi)] = f(x) = \frac{1}{2\pi} \int_{-\infty}^{\infty} F(\xi)\, e^{ix\xi}\, \mathrm{d}\xi$$
(provided that the integral exists and is finite).
A continuous standard inverse Fourier transform of a function $F(\xi;\eta)\colon \mathbb{R}^2 \to \mathbb{C}$ is the function
$$\mathcal{F}^{-1}[F(\xi;\eta)] = f(x;y) = \frac{1}{4\pi^2} \iint_{\mathbb{R}^2} F(\xi;\eta)\, e^{i(x\xi + y\eta)}\, \mathrm{d}\xi\, \mathrm{d}\eta$$
(provided that this integral exists and is finite).

3.2. Discrete Two-Dimensional Fourier Transform and Inverse Transform

A discrete standard Fourier transform of a function $f(x;y)\colon \{0; 1; \dots; M-1\} \times \{0; 1; \dots; N-1\} \to \mathbb{R}$ is the function
$$\mathcal{F}[f(x;y)] = F(\xi;\eta) = \sum_{x=0}^{M-1} \sum_{y=0}^{N-1} f(x;y)\, e^{-2\pi i \left(\frac{x\xi}{M} + \frac{y\eta}{N}\right)} \tag{5}$$
A discrete standard inverse Fourier transform of a function $G(\xi;\eta)\colon \{0; 1; \dots; M-1\} \times \{0; 1; \dots; N-1\} \to \mathbb{C}$ is the function
$$\mathcal{F}^{-1}[G(\xi;\eta)] = g(x;y) = \frac{1}{MN} \sum_{\xi=0}^{M-1} \sum_{\eta=0}^{N-1} G(\xi;\eta)\, e^{2\pi i \left(\frac{x\xi}{M} + \frac{y\eta}{N}\right)} \tag{6}$$
If the function $f(x;y)$ in (5) is real, then
$$F(M-\xi;\, N-\eta) = \overline{F(\xi;\eta)}$$
(where the bar denotes complex conjugation) and the Fast Fourier Transform (FFT) algorithm can be employed to calculate the discrete Fourier transform—see [28] for more information.
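As a quick numerical illustration of this symmetry, the following minimal sketch (assuming numpy and a real-valued grayscale array; the image here is random stand-in data, not from the paper) verifies it directly:

```python
import numpy as np

# Conjugate symmetry of the DFT of a real image: F(M - xi; N - eta) equals
# the complex conjugate of F(xi; eta). Indices are taken modulo M, N.
rng = np.random.default_rng(0)
img = rng.random((256, 256))        # stand-in for a partially focused image
F = np.fft.fft2(img)

M, N = img.shape
xi, eta = 3, 7                      # an arbitrary frequency pair
assert np.allclose(F[(M - xi) % M, (N - eta) % N], np.conj(F[xi, eta]))
```

Real-to-complex FFT routines such as numpy.fft.rfft2 exploit exactly this redundancy and store only about half of the spectrum.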
The discrete Fourier transform, used here for image registration, applies to functions that are, or are assumed to be, periodic. Generally, an image does not have the same values on opposite edges; thus, by periodizing an image, the resulting function may have jumps at the edges of the original image. Such jumps are often the most contrasting structures in the function and may lead to incorrect registration. Therefore, such edges must be suppressed in the images used for the shift estimation. This is carried out by multiplying the image by a suitable function referred to as a window function. Its values must equal zero or almost zero at the image edges and one on a large part of the image. Primarily, the Gaussian and Hanning window functions can be used. Let $\sigma \in \mathbb{R}^+$ be a given number and consider the sets
$$R = \langle -a; a\rangle \times \langle -b; b\rangle; \quad a; b \in \mathbb{R}_0^+$$
$$C = \{(x;y) \in \mathbb{R}^2 \mid \sqrt{x^2 + y^2} \le r\}; \quad r \in \mathbb{R}_0^+$$
Let $\rho(X;S)$ be the distance of point $X = (x;y)$ from set $S$; i.e.,
$$\rho(X;S) = \inf\{d \mid d = \|X - Y\|;\ Y \in S\}$$
Functions
$$g_{GR}(x;y) = e^{-\rho^2(X;R)/\sigma^2} \quad \text{or} \quad g_{GC}(x;y) = e^{-\rho^2(X;C)/\sigma^2}$$
are called rectangular or circular Gaussian window functions. Functions
$$g_{HR}(x;y) = \begin{cases} \dfrac{1}{2} + \dfrac{1}{2}\cos\dfrac{\pi\rho(X;R)}{\sigma} & \text{if } \rho(X;R) \le \sigma \\[4pt] 0 & \text{if } \rho(X;R) > \sigma \end{cases}$$
or
$$g_{HC}(x;y) = \begin{cases} \dfrac{1}{2} + \dfrac{1}{2}\cos\dfrac{\pi\rho(X;C)}{\sigma} & \text{if } \rho(X;C) \le \sigma \\[4pt] 0 & \text{if } \rho(X;C) > \sigma \end{cases}$$
are called rectangular or circular Hanning window functions.
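The following is a minimal numpy sketch of the rectangular Hanning window; the function name and its parameters are ours, chosen for illustration, and not taken from the authors' software:

```python
import numpy as np

def hanning_rect_window(height, width, a, b, sigma):
    """Rectangular Hanning window g_HR: equals 1 on the centred rectangle
    [-a, a] x [-b, b] (where rho(X; R) = 0), falls off as a half cosine over
    a band of width sigma, and is 0 beyond it."""
    y = np.arange(height) - (height - 1) / 2.0
    x = np.arange(width) - (width - 1) / 2.0
    X, Y = np.meshgrid(x, y)
    # Euclidean distance rho(X; R) of each pixel from the rectangle R
    rho = np.hypot(np.maximum(np.abs(X) - a, 0.0),
                   np.maximum(np.abs(Y) - b, 0.0))
    return np.where(rho <= sigma, 0.5 + 0.5 * np.cos(np.pi * rho / sigma), 0.0)

# usage: suppress the image edges before the DFT is taken
# windowed = image * hanning_rect_window(*image.shape, a=300, b=200, sigma=80)
```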

3.3. δ-Distribution

A one-dimensional $\delta$-distribution $\delta(x)$ is a limit of a function sequence $\delta_n(x)$; $n \in \mathbb{N}$, for which
$$\text{(a)} \quad \lim_{n\to\infty} \int_{-\infty}^{\infty} \delta_n(x)\, \mathrm{d}x = 1$$
$$\text{(b)} \quad \lim_{n\to\infty} \frac{\delta_n(x_0)}{\displaystyle\lim_{x\to 0} \delta_n(x)} = 0; \quad x_0 \in \mathbb{R} \setminus \{0\}$$
A two-dimensional $\delta$-distribution $\delta(x;y)$ is a limit of a function sequence $\delta_n(x;y)$; $n \in \mathbb{N}$, for which
$$\lim_{n\to\infty} \iint_{\mathbb{R}^2} \delta_n(x;y)\, \mathrm{d}x\, \mathrm{d}y = 1; \qquad \lim_{n\to\infty} \frac{\delta_n(x_0;y_0)}{\displaystyle\lim_{(x;y)\to(0;0)} \delta_n(x;y)} = 0; \quad (x_0;y_0) \in \mathbb{R}^2 \setminus \{(0;0)\}$$
Example: A well-known example (demonstrated here in 1D for simplicity) is a series of expanding rectangular signals $\delta_n^*(\xi)$ with a unitary intensity, constant on $(-n; n)$; $n \in \mathbb{N}$, and zero elsewhere. The inverse Fourier transform gives
$$\mathcal{F}^{-1}(\delta_n^*)(x) = \delta_n(x) = \frac{1}{2\pi} \int_{-\infty}^{\infty} \delta_n^*(\xi)\, e^{ix\xi}\, \mathrm{d}\xi = \frac{1}{2\pi} \int_{-n}^{n} e^{ix\xi}\, \mathrm{d}\xi = \frac{1}{2\pi} \left[\frac{e^{ix\xi}}{ix}\right]_{\xi=-n}^{n} = \frac{e^{ixn} - e^{-ixn}}{2\pi i x} = \frac{\sin nx}{\pi x}$$
As $\int_{-\infty}^{\infty} \frac{\sin nx}{\pi x}\, \mathrm{d}x = 1$ for each $n \in \mathbb{N}$ (see [28] for proof), condition (a) is fulfilled. Since $\lim_{x\to 0} \frac{\sin nx}{\pi x} = \frac{n}{\pi} \lim_{x\to 0} \frac{\sin nx}{nx} = \frac{n}{\pi}$ for each $n \in \mathbb{N}$, we have
$$\lim_{n\to\infty} \frac{\delta_n(x_0)}{\displaystyle\lim_{x\to 0} \delta_n(x)} = \lim_{n\to\infty} \frac{\sin n x_0}{n x_0} = 0 \quad \text{for each } x_0 \ne 0$$
This means that condition (b) holds as well, and the limit $\delta(x) = \lim_{n\to\infty} \delta_n(x)$ is the (one-dimensional) $\delta$-distribution. The expanded unitary signal $\delta_5^*(\xi)$ with its Fourier transform is illustrated in Figure 6.

3.4. Phase Correlation

Image processing requires the images to be transformed so that the structures studied are at the same position in all of them. The transformation is found by image registration. In some applications, it is possible to assume a shift only, while in others, shift, rotation, and scale change (i.e., similarity), a general linear transformation, or even general transformations may all be present.
The methods used for registration depend on the expected transformation and on the image structures. Some methods first find corresponding structures or points in the images and then determine a global transformation from their measured positions [29,30,31]. For these methods to be applicable, the structures must be clearly visible. Other, correlation-based methods work with the image as a whole. Phase correlation has proved to be a powerful tool (not only) for the registration of partially focused images. For functions $f_1; f_2$, it is defined as
$$P_{f_1;f_2}(x;y) = \mathcal{F}^{-1}\left\{\frac{F_1(\xi;\eta) \cdot \overline{F_2}(\xi;\eta)}{|F_1(\xi;\eta)| \cdot |F_2(\xi;\eta)|}\right\}$$
with its modification being
$$P_{f_1;f_2;H;p;q}(x;y) = \mathcal{F}^{-1}\left\{\frac{H(\xi;\eta) \cdot F_1(\xi;\eta) \cdot \overline{F_2}(\xi;\eta)}{(|F_1(\xi;\eta)| + p) \cdot (|F_2(\xi;\eta)| + q)}\right\}$$
where the bar denotes complex conjugation, $H(\xi;\eta)$ is a bounded real function such that $H(-\xi;-\eta) = H(\xi;\eta)$, and $p; q > 0$ are arbitrary constants. It is not difficult to prove that, for real functions $f_1; f_2$, the phase-correlation function is real [32]. This is very useful since the extrema of the phase-correlation function can then be searched for.
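A minimal numpy sketch of the modified phase correlation with $H \equiv 1$ follows; this is a sketch under these assumptions, not the authors' implementation, and the small constants p, q play the regularizing role from the modified formula:

```python
import numpy as np

def phase_correlation(f1, f2, p=1e-12, q=1e-12):
    """Phase correlation of two equally sized grayscale images: the
    cross-power spectrum is normalized to (almost) unit magnitude and
    transformed back; its peak marks the translation between f1 and f2."""
    F1 = np.fft.fft2(f1)
    F2 = np.fft.fft2(f2)
    cross = F1 * np.conj(F2) / ((np.abs(F1) + p) * (np.abs(F2) + q))
    # real for real inputs, up to numerical round-off
    return np.real(np.fft.ifft2(cross))
```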

3.5. Identical Images

Let $F$ be the Fourier spectrum of the infinite periodic extension of an image $f$. Denote by $a + bi$ its value $F(\xi;\eta)$ at $(\xi;\eta)$, $a + bi \ne 0$. Clearly, the value of the normalized cross-power spectrum of $F$ with itself is
$$\frac{F(\xi;\eta) \cdot \overline{F}(\xi;\eta)}{|F(\xi;\eta)| \cdot |F(\xi;\eta)|} = \frac{(a+bi)(a-bi)}{|a+bi| \cdot |a+bi|} = \frac{a^2+b^2}{a^2+b^2} = 1$$
By the example in Section 3.3, we have
$$P_{f;f}(x;y) = \mathcal{F}^{-1}\left\{\frac{F(\xi;\eta) \cdot \overline{F}(\xi;\eta)}{|F(\xi;\eta)| \cdot |F(\xi;\eta)|}\right\} = \mathcal{F}^{-1}\{1\} = \delta(x;y)$$
which means that the phase correlation of two identical images is the two-dimensional $\delta$-distribution $\delta(x;y)$.

3.6. Shifted Images

If two functions are shifted in arguments, that is, $f_2(x;y) = f_1(x - x_0;\, y - y_0)$, their Fourier transforms are shifted in phase:
$$F_2(\xi;\eta) = F_1(\xi;\eta) \cdot e^{-i(\xi x_0 + \eta y_0)}$$
with their phase-correlation function being the $\delta$-distribution shifted in arguments by the opposite shift vector:
$$P_{f_1;f_2}(x;y) = \mathcal{F}^{-1}\left\{e^{i(\xi x_0 + \eta y_0)}\right\} = \delta(x + x_0;\ y + y_0)$$
This is the principal idea of phase correlation: rather than finding a shift between two images, we just find the only non-zero point of a matrix. If the images are not identical up to a shift, i.e., if the images are not ideal, the phase-correlation function is more complex, but it still has a global maximum at the point whose coordinates correspond to the shift vector.
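Continuing the sketch from Section 3.4, a small self-check (with hypothetical random data) confirms that the peak of the phase-correlation matrix sits at the opposite shift vector, taken modulo the image size:

```python
rng = np.random.default_rng(1)
f1 = rng.random((128, 128))
x0, y0 = 5, 17
f2 = np.roll(f1, shift=(y0, x0), axis=(0, 1))   # f2(x; y) = f1(x - x0; y - y0)
P = phase_correlation(f1, f2)
py, px = np.unravel_index(np.argmax(P), P.shape)
assert (px, py) == ((-x0) % 128, (-y0) % 128)   # peak of delta(x + x0; y + y0)
```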

3.7. Rotated Images

The phase-correlation function can also be used for estimating the image rotation and rescaling. Let $f_2$ be function $f_1$ rotated and shifted in arguments, i.e.,
$$f_2(x;y) = f_1(x\cos\theta - y\sin\theta - x_0;\ x\sin\theta + y\cos\theta - y_0)$$
Their Fourier spectra $F_1; F_2$ and amplitude spectra $A_1; A_2$ are related as follows:
$$F_2(\xi;\eta) = e^{-i(\xi x_0 + \eta y_0)} \cdot F_1(\xi\cos\theta - \eta\sin\theta;\ \xi\sin\theta + \eta\cos\theta)$$
$$A_2(\xi;\eta) = A_1(\xi\cos\theta - \eta\sin\theta;\ \xi\sin\theta + \eta\cos\theta)$$
The shift results in a phase shift, and the spectra are rotated in the same way as the original functions. A crucial step here is the transformation of the amplitude spectra into polar coordinates to obtain functions $A_1^{p}; A_2^{p}\colon \mathbb{R}_0^+ \times \langle 0; 2\pi) \to \mathbb{R}_0^+$ such that $A_1^{p}(\rho;\varphi) = A_2^{p}(\rho;\varphi + \theta)$. The rotation about an unknown centre has thus been transformed into a shift, which is estimated by the standard phase correlation (see the previous paragraph). After a reverse rotation by the angle measured, the shift $(x_0; y_0)$ is then estimated in another computation of the phase correlation.

3.8. Scaled Images

Let $f_2$ be function $f_1$ rotated, shifted, and scaled in arguments, i.e.,
$$f_2(x;y) = f_1\big(\alpha(x\cos\theta - y\sin\theta) - x_0;\ \alpha(x\sin\theta + y\cos\theta) - y_0\big)$$
Their Fourier spectra and amplitude spectra are related as follows:
$$F_2(\xi;\eta) = \frac{1}{\alpha^2}\, e^{-i(\xi x_0 + \eta y_0)} \cdot F_1\!\left(\frac{1}{\alpha}(\xi\cos\theta - \eta\sin\theta);\ \frac{1}{\alpha}(\xi\sin\theta + \eta\cos\theta)\right)$$
$$A_2(\xi;\eta) = \frac{1}{\alpha^2}\, A_1\!\left(\frac{1}{\alpha}(\xi\cos\theta - \eta\sin\theta);\ \frac{1}{\alpha}(\xi\sin\theta + \eta\cos\theta)\right)$$
The shift results in a phase shift; the spectra are rotated in the same way as the original functions and scaled with a reciprocal factor. A crucial step here is the transformation of the amplitude spectra into the logarithmic-polar coordinates
$$e^{\rho} = \sqrt{x^2 + y^2}; \qquad x = e^{\rho}\cos\varphi; \qquad y = e^{\rho}\sin\varphi$$
to obtain $A_1^{lp}; A_2^{lp}\colon \mathbb{R} \times \langle 0; 2\pi) \to \mathbb{R}_0^+$ such that $A_2^{lp}(\rho;\varphi) = A_1^{lp}(\rho - \ln\alpha;\ \varphi + \theta)$ (up to a constant factor).
Both rotation and scale change have been transformed into a shift. The unknown angle $\theta$ and the unknown factor $\alpha$ can be estimated by means of the phase correlation applied to the amplitude spectra in the logarithmic-polar coordinates $A_1^{lp}; A_2^{lp}$. After reverse rotating function $f_2$ by the estimated angle $\theta$ and scaling it by the factor $\alpha^{-1}$, the shift vector $(x_0; y_0)$ can be estimated by means of the standard phase correlation.
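A rough sketch of the logarithmic-polar resampling of an amplitude spectrum follows, assuming scipy is available; the sampling resolutions n_rho and n_phi are illustrative choices of ours:

```python
import numpy as np
from scipy.ndimage import map_coordinates

def log_polar_amplitude(img, n_rho=256, n_phi=256):
    """Amplitude spectrum of img resampled in logarithmic-polar coordinates,
    so that a rotation becomes a cyclic shift along the phi axis and a scale
    change a shift along the log-rho axis (bilinear interpolation)."""
    A = np.abs(np.fft.fftshift(np.fft.fft2(img)))
    cy, cx = (np.array(A.shape) - 1) / 2.0          # spectrum centre
    log_r = np.linspace(0.0, np.log(min(cx, cy)), n_rho)
    phi = np.linspace(0.0, 2.0 * np.pi, n_phi, endpoint=False)
    R, PHI = np.meshgrid(np.exp(log_r), phi, indexing="ij")
    rows = cy + R * np.sin(PHI)
    cols = cx + R * np.cos(PHI)
    return map_coordinates(A, np.array([rows, cols]), order=1)
```

Phase correlation of log_polar_amplitude(f1) and log_polar_amplitude(f2) then locates the pair of shifts $(\ln\alpha, \theta)$ in one step.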

3.9. Multifocal Registration

Let $\{P_1; P_2; \dots; P_n\}$ be the image series to be registered, with image $P_1$ acquired with the biggest angle of view. This image will not be transformed, or (formally) it will be transformed by the identity mapping into the image $P_1^*$. Now we must find the transform $P_2 \to P_1^*$ to obtain image $P_2^*$, which differs from $P_1^*$ only in the focused and blurred parts. In the same way, the transforms $P_3 \to P_2^*$; …; $P_k \to P_{k-1}^*$; …; $P_n \to P_{n-1}^*$ must be found.
After multiplying both images $P_k; P_{k-1}^*$ by the chosen window function, the rotation angle $\theta_k$ and the scale factor $\alpha_k$ are determined by the method described in Section 3.8. Then, image $P_k$ is rotated by the angle $\theta_k$ and scaled by the factor $\alpha_k^{-1}$ to compensate for the rotation and scale change found by the phase correlation, creating image $\overline{P_k}$. Between images $\overline{P_k}$ and $P_{k-1}^*$, only the shift and the differently focused and blurred parts remain. Now we can apply phase correlation to find the shift $(x_0; y_0)$, shifting image $\overline{P_k}$ by the vector $(-x_0; -y_0)$ to compensate for it, creating image $P_k^*$, which differs from $P_{k-1}^*$ only in the focused and blurred parts.
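The loop just described can be summarized by the following structural sketch, expressed in Python but not runnable as-is: estimate_rotation_scale, rotate_and_scale, and estimate_shift are hypothetical placeholders for wrappers around the phase-correlation sketches above, and the integer np.roll stands in for the sub-pixel shift compensation used in the paper:

```python
import numpy as np

def register_series(images, window):
    """Sketch of the Section 3.9 registration: each image is registered to
    its already registered predecessor, rotation and scale first, then shift.
    The three helper functions below are hypothetical placeholders."""
    registered = [images[0]]                    # P1* = P1 (identity mapping)
    for P_k in images[1:]:
        ref = registered[-1]
        theta, alpha = estimate_rotation_scale(ref * window, P_k * window)
        P_bar = rotate_and_scale(P_k, -theta, 1.0 / alpha)   # compensation
        x0, y0 = estimate_shift(ref, P_bar)                  # remaining shift
        # integer-pixel compensation; the paper works with sub-pixel precision
        registered.append(np.roll(P_bar, shift=(-int(y0), -int(x0)), axis=(0, 1)))
    return registered
```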

4. Focusing Criteria

The detectors of blurred areas (sometimes referred to as focusing criteria or sharpness detectors) can be based on different principles. Probably the first attempts to carry out non-confocal reconstructions date back to the 1970s and 1980s [33,34,35,36].
Tenebaum [37] developed the gradient magnitude maximization method for optimizing the focus quality using the sharpness of edges. Jarvis [38] proposed a sum-modulus-difference computed by summing the first intensity differences between neighbouring pixels along a scan-line, using it as a focus quality benchmark. Schlag et al. [39] implemented and tested various self-focusing algorithms. Krotkov [40] evaluated and compared the performance of different focus criterion functions. In [41], he also proposed a method for estimating the depth of an image area. Pentland [42] suggested evaluating the image blur to determine the depth of image points. Grossmann in [43] proposed estimating the depth of edge points by analyzing the blur of the edges due to defocusing. Darrell and Wohn in [29] developed a depth-from-focus method by which an image sequence can be obtained through varying the focus level, using Laplacian and Gaussian pyramids to calculate the depth. Subbarao in [30] suggested changing the intrinsic camera parameters to recover the depth map of a scene. Ohta et al. [31] and Kaneda et al. [44] used images corresponding to different focus levels to obtain a single image of high focus quality.
As follows from the above, the statistical range, the variance, or the standard Fourier transform of a certain neighbourhood of the pixel $(i;j)$ can serve as a sharpness detector (focusing criterion). A neighbourhood of pixel $(i;j)$ is a square of $s \times s$ pixels with $s$ equalling ten to twenty. In the case of the standard Fourier transform, the FFT algorithm is used. Since it requires a square with a side of $s = 2^n$, $s = 8$ or $s = 16$ is used in this case [25,26]. However, because the standard Fourier transform is burdened with jumps along the square edges, it is necessary to use a suitable window function as in Section 3.2. For this reason, the cosine Fourier transform is preferable. It is obtained from the standard Fourier transform applied to an even extension of the neighbourhood to be processed, as illustrated in Figure 7. This extension eliminates jumps on the edges. Since the sine frequencies in (5) are zeroed, there is no need to apply a window function.
Whereas the low frequencies in the amplitude spectrum detect the blurred parts of the image, the very high ones only mean noise. Therefore, a suitable weight must be assigned to each frequency when the sharpness detector is calculated. In our software, the following detectors may be used:
$$P_{a\,ijk} = \sum_{m=-s}^{s} \sum_{n=-s}^{s} p_{a\,i+m;j+n;k} = \sum_{m=-s}^{s} \sum_{n=-s}^{s} (|m| + |n|) \cdot |F_{i+m;j+n;k}| \tag{26}$$
$$P_{b\,ijk} = \sum_{m=-s}^{s} \sum_{n=-s}^{s} p_{b\,i+m;j+n;k} = \frac{1}{S(A_{ij})} \sum_{(m;n) \in A_{ij}} |F_{mnk}| \tag{27}$$
$$P_{c\,ijk} = \sum_{m=-s}^{s} \sum_{n=-s}^{s} p_{c\,i+m;j+n;k} = \sum_{m=-s}^{s} \sum_{n=-s}^{s} |F_{i+m;j+n;k}| \cdot \sin^2\left(\frac{\pi}{s}\sqrt{m^2 + n^2}\right) \tag{28}$$
where $F$ is the cosine spectrum of the pixel neighbourhood and $S(A_{ij})$ is the area of the annulus $A_{ij}$ with the centre at $(i;j)$. The elements $p_a; p_b; p_c$ summed in (26)–(28) are illustrated as pixel values in Figure 8.
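As an illustration, here is an approximate sketch of detector (26) for a single neighbourhood, assuming scipy: the type-II discrete cosine transform realizes the even extension of Figure 7, so no window function is needed, and the non-negative DCT indices play the role of $|m|, |n|$:

```python
import numpy as np
from scipy.fft import dctn

def sharpness_pa(neigh):
    """Approximate P_a of Eq. (26) for one s-by-s pixel neighbourhood: each
    cosine amplitude is weighted by the sum of its frequency indices, so
    blurred (low-frequency) content contributes little."""
    F = np.abs(dctn(neigh.astype(float), norm="ortho"))
    s = neigh.shape[0]
    idx = np.arange(s)
    weights = idx[:, None] + idx[None, :]        # the |m| + |n| weights
    return float(np.sum(weights * F))

# score of the 16x16 neighbourhood of pixel (i, j) in the k-th image:
# P_a_ijk = sharpness_pa(image_k[i - 8:i + 8, j - 8:j + 8])
```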

5. 2D and 3D Reconstructions

A 2D reconstruction consists of composing a new image that contains only the focused parts of a registered multifocal image. The registration and detection of the focused parts were described in Section 3 and Section 4.
The principal deficiency of most of the current methods of 3D reconstruction is that they assume the profile height at a point to be precisely determined by the value of the given sharpness detector. Based on such an unrealistic hypothesis, then, these values are interpolated. Parabolic interpolation [16,34] or Gaussian interpolation [35,36] is used.
This hypothesis, however, is false. We can use the series $\{P_{ijk}\}$; $k = 1; 2; \dots; n$ to assess the height of pixel $(i;j)$. Being of a random rather than deterministic nature, it cannot be interpolated but must be processed by a statistical method. One such method is regression analysis, which, however, is rather complicated. A direct calculation of the mean value is much easier.
For each pixel $(i;j)$, virtually infinitely many probability distribution functions $p_{ij}^{(r)}$ can be constructed using different exponents $r$ applied to the series terms $P_{ijk}$ (the detector value of pixel $(i;j)$ in the $k$-th image of the series):
$$p_{ij}^{(r)}(k) = \frac{P_{ijk}^{\,r}}{\sum_{s=1}^{n} P_{ijs}^{\,r}} \tag{29}$$
The mean values of the random variables $P_{ij}^{(r)}$ given by these probability distribution functions estimate the height $h_{ij}^{(r)}$ of the surface at the pixel $(i;j)$:
$$h_{ij}^{(r)} = E\left(P_{ij}^{(r)}\right) = \sum_{k=1}^{n} k \cdot p_{ij}^{(r)}(k) = \sum_{k=1}^{n} k \cdot \frac{P_{ijk}^{\,r}}{\sum_{s=1}^{n} P_{ijs}^{\,r}} \tag{30}$$
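A minimal sketch of this height estimation follows, assuming the detector values are stacked in a 3D array P with P[k-1, i, j] holding the value for the k-th image of the series:

```python
import numpy as np

def estimate_heights(P, r=2):
    """Eqs. (29) and (30): normalize the r-th powers of the detector values
    along the focus axis into a probability distribution over the image
    index k, and return its mean value as the height map (in focusing steps)."""
    n = P.shape[0]
    k = np.arange(1, n + 1).reshape(-1, 1, 1)
    w = P.astype(float) ** r
    p = w / np.sum(w, axis=0, keepdims=True)     # p_ij^(r)(k), Eq. (29)
    return np.sum(k * p, axis=0)                 # h_ij^(r),    Eq. (30)
```

Larger exponents r concentrate the distribution around the sharpest image, pushing the estimate toward the argmax-style behaviour of the interpolation methods criticized above.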

6. Results and Discussion

6.1. Data Acquisition

The following data were used for the purposes of this paper:
  • A series of fifteen partially focused images of blue marble—Olympus camera;
  • A series of eight partially focused limestone images—Canon camera;
  • A series of thirty partially focused images of the surface of a hydrated cement paste—the Olympus confocal microscope in the standard mode.

6.2. Image Registration

Figure 9 shows the photos of Figure 2 but this time displayed in supplementary pseudo-colours. If these images were totally identical, then the arithmetic mean of the blue-green image on the left and orange image on the right would be “perfectly gray”. The arithmetic mean of these images is shown in Figure 10a. Clearly, the components of this mean value are very different—the values of the orange image are bigger in the yellow parts of the mean value while the values of the blue-green image are bigger in the blue-violet parts of the mean value.
In Figure 10b, the same construction was used after the registration. The very low colour saturation of the arithmetic mean testifies to a very good conformity. Of course, the arithmetic mean cannot be a "perfect gray" in this particular case because the original images also differ in their sharp parts.
Table 1 summarizes the transforms indicated and applied in the separate images of blue marble (see Figure 2, Figure 9 and Figure 10). All transformations, detected with sub-pixel precision, are listed with a precision of one thousandth of a pixel. Obviously, the scaling plays the most important role; the shifts, however, cannot be neglected either. The rotation angle between the first and the last images is larger than five arcminutes, that is, about one pixel on the image periphery (the data resolution used was 1180 × 885 pixels). Such a transformation is marginal in this particular case.

6.3. Focusing Criteria

As implied by (26)–(28) and Figure 8, detector $C_a$ amplifies the high frequencies (bottom right corner of the squares in Figure 8) too much, which means that it is very noise-sensitive. Overly sharp cut-offs removing low and high frequencies are among the disadvantages of detector $C_b$. Although visually minuscule, the differences between detectors $C_a$; $C_b$; and $C_c$ do exist.
For their quantification, the Root Mean Square Error
$$RMSE(h^{(1)};\, h^{(2)}) = \sqrt{\frac{1}{WH} \sum_{i=0}^{W-1} \sum_{j=0}^{H-1} \left(h_{ij}^{(1)} - h_{ij}^{(2)}\right)^2} \tag{31}$$
the Average Deviation
$$AD(h^{(1)};\, h^{(2)}) = \frac{1}{WH} \sum_{i=0}^{W-1} \sum_{j=0}^{H-1} \left|h_{ij}^{(1)} - h_{ij}^{(2)}\right| \tag{32}$$
the Pearson Correlation Coefficient
$$PCC(h^{(1)};\, h^{(2)}) = \frac{\sum_{i=0}^{W-1} \sum_{j=0}^{H-1} \left(h_{ij}^{(1)} - \overline{h^{(1)}}\right)\left(h_{ij}^{(2)} - \overline{h^{(2)}}\right)}{\sqrt{\sum_{i=0}^{W-1} \sum_{j=0}^{H-1} \left(h_{ij}^{(1)} - \overline{h^{(1)}}\right)^2}\, \sqrt{\sum_{i=0}^{W-1} \sum_{j=0}^{H-1} \left(h_{ij}^{(2)} - \overline{h^{(2)}}\right)^2}} \tag{33}$$
and the Difference of surface Information Entropy
$$DIE(h^{(1)};\, h^{(2)}) = \left|IE(h^{(1)}) - IE(h^{(2)})\right| \tag{34}$$
are used. Here, $W; H$ denote the width and height of the surface domain in pixels, $h_{ij}^{(1)}$ and $h_{ij}^{(2)}$ stand for the heights of the first and second surface at the pixel $(i;j)$, $\overline{h^{(1)}}$ and $\overline{h^{(2)}}$ are the average heights of the first and second surface, and
$$IE(h) = -\sum_{i=1}^{W-1} \sum_{j=1}^{H-1} \left(\frac{h_{ij}}{\sum_{m=1}^{W-1} \sum_{n=1}^{H-1} h_{mn}}\ \log_2 \frac{h_{ij}}{\sum_{m=1}^{W-1} \sum_{n=1}^{H-1} h_{mn}}\right) \tag{35}$$
is the Information Entropy of the surface $h$.
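A direct numpy transcription of these four characteristics (a sketch; heights are assumed positive for the entropy term):

```python
import numpy as np

def surface_metrics(h1, h2):
    """RMSE, AD, PCC and DIE of Eqs. (31)-(34) for two equally sized
    height maps h1, h2 given as 2D arrays."""
    rmse = np.sqrt(np.mean((h1 - h2) ** 2))
    ad = np.mean(np.abs(h1 - h2))
    pcc = np.corrcoef(h1.ravel(), h2.ravel())[0, 1]

    def info_entropy(h):                 # IE of Eq. (35)
        p = (h / np.sum(h)).ravel()
        p = p[p > 0]                     # omit zero heights from the sum
        return -np.sum(p * np.log2(p))

    die = abs(info_entropy(h1) - info_entropy(h2))
    return rmse, ad, pcc, die
```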
Values of these characteristics are summarized in Table 2 for data from Figure 3. Note that we have R M S E = A D = D I E = 0 ; P C C = 1 for a pair of identical surfaces.
A similar summary is given by Table 3 for the present reconstruction methods, namely the parabolic interpolation of the echelon approximations $C_a$, $C_b$, and $C_c$, and by Table 4 for the parabolic, hyperbolic, and Gaussian interpolations of the same echelon approximation $C_c$.

6.4. 2D and 3D Reconstructions

We can see the optical cuts detected in the data of Figure 3 using focusing criterion (28) in Figure 11 and a 2D reconstruction (sharp 2D image) of the same data using the same criterion in Figure 12. Echelon approximation is a simple method for constructing a rough 3D model of the object, where all points belonging to the same optical cut have the same height—the height of the corresponding zone of sharpness—see Figure 13.
We can also generalize the notion of low-pass filters used in image processing. They can be used for smoothing an echelon approximation. The smoothed approximation looks much better than the echelon one—see Figure 14.
In Figure 15 and Figure 16, we can see reconstructions of the limestone and blue marble samples by data registration according to Section 3, with focusing criterion (28) and profile height calculations (29) and (30). We can compare a single pore of hydrated Portland cement paste reconstructed by the Olympus factory software (Figure 17) with the same pore reconstructed using focusing criterion (28) and the probability distribution function (29) with height estimate (30) for r = 2 (Figure 18).
Two examples of SFF method results can be downloaded in Supplementary Material.

7. Conclusions

The SFF method based on the Fourier transform can provide correct 3D replicas of rough surfaces. In the case of small samples, a qualified user of this method can obtain results similar to or even better than reconstructions from a confocal microscope. For larger objects, 3D scanners and similar significantly more expensive devices can be simulated by sophisticated mathematical instruments and advanced programming techniques.

Supplementary Materials

Author Contributions

Conceptualization, D.M.; methodology, D.M.; investigation D.M.; software, D.M.; resources, D.M.; data curation, D.M.; writing—original draft preparation, D.M.; visualization, D.M.; project administration, D.M.; funding acquisition, D.M.; validation, K.M.; formal analysis, K.M.; writing—review and editing, K.M.; supervision, K.M. All authors have read and agreed to the published version of the manuscript.

Funding

This research has been funded by Private Institute of Applied Mathematics, Slapanice, Czech Republic.

Data Availability Statement

Conflicts of Interest

The authors declare no conflict of interest. The funder has had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript, or in the decision to publish the results.

References

  1. Halling, J. Introduction to Tribology; John Wiley & Sons: London, UK, 1976. [Google Scholar]
  2. Bennett, J.M.; Matton, L. Introduction to Surface Roughness and Scattering; Optical Society of America: Washington, DC, USA, 1999. [Google Scholar]
  3. Bowen, W.R. Atomic Force Microscopy in Process Engineering; Hilal, N., Ed.; Butterworth-Heinemann: Oxford, UK, 2009. [Google Scholar]
  4. Tolansky, S. A light-profile microscope for surface studies. Z. Elektrochem. 1952, 56, 263–267. [Google Scholar]
  5. Thiery, V.; Green, D.I. The multifocus imaging technique in petrology. Comput. Geosci. 2012, 45, 131–138. [Google Scholar] [CrossRef]
  6. De Groot, P. Principles of interference microscopy for the measurement of surface topography. Adv. Opt. Photonics 2015, 7, 1–65. [Google Scholar] [CrossRef]
  7. Kaufmann, G.H. Advances in Speckle Metrology and Related Techniques; Wiley: Weinheim, Germany, 2011. [Google Scholar]
  8. Mettänen, M.; Hirn, U. A comparison of five optical surface topography measurement methods. TAPPI J. 2015, 14, 27–38. [Google Scholar] [CrossRef]
  9. Bertin, S.; Friedrich, H.; Dekmas, P.; Chan, E.; Gimel’farb, G. Digital stereo photogrammetry for grain-scale monitoring of fluvial surfaces: Error evaluation and workflow optimization. ISPRS J. Photogramm. Remote Sens. 2015, 101, 193–208. [Google Scholar]
  10. Tang, S.; Zhang, X.; Tu, D. Micro-phase measuring profilometry: Its sensitivity analysis and phase unwrapping. Opt. Lasers Eng. 2015, 72, 47–57. [Google Scholar] [CrossRef]
  11. Feng, Q. Novel Methods for 3-D Semi-Automatic Mapping of Fracture Geometry at Exposed Rock Surfaces. Ph.D. Thesis, KTH, Stockholm, Sweden, 2001. [Google Scholar]
  12. Slob, S.; Hack, H.R.G.K.; Van Knapen, B.; Turner, K.; Kemeny, J. A method for automated discontinuity analysis of rock slopes with three-dimensional laser scanning. Transp. Res. Rec. J. Transp. Res. Board. 2005, 1913, 187–194. [Google Scholar] [CrossRef]
  13. Slob, S.; Hack, H.R.G.K. 3D terrestrial laser scanning as a new field measurement and monitoring technique. In Engineering Geology for Infrastructure Planning in Europe. A European Perspective; Azzam, R.H.R.a., Charlier, R., Eds.; Springer: Berlin/Heidelberg, Germany, 2004; pp. 179–190. [Google Scholar]
  14. Pawlus, P.; Wieczorowski, M.; Mathia, T. The Errors of Stylus Methods in Surface Topography Measurements; ZAPOL: Szczecin, Poland, 2014. [Google Scholar]
  15. Hoła, J.; Sadowski, Ł.; Reiner, J.; Stach, S. Usefulness of 3D surface roughness parameters for nondestructive evaluation of pull-off adhesion of concrete layers. Constr. Build. Mater. 2015, 84, 111–120. [Google Scholar] [CrossRef]
  16. Agard, D.A.; Hiraoka, Z.; Shaw, P.; Sedat, J. Fluorescence microscopy in three dimensions, in Methods in Cell Biology. In Fluorescence Microscopy of Living Cells in Culture: Part B: Quantitative Fluorescence Microcopy-Imaging and Spectroscopy; Taylor, D.L., Wang, Y., Eds.; Academic Press: San Diego, CA, USA, 1989; Volume 30, pp. 359–362. [Google Scholar]
  17. Wilson, T. (Ed.) Confocal Microscopy; Academic Press Limited: London, UK, 1990. [Google Scholar]
  18. Pawley, J.B. Handbook of Confocal Microscopy; Plenum Press: New York, NY, USA, 1990. [Google Scholar]
  19. Logali, N. Confocal Laser Microscopy-Principles, Applications in Medicine, Biology, and the Food Sciences; InTech-open access publisher: Rijeka, Croatia, 2013. [Google Scholar]
  20. Lange, D.; Jennings, H.M.; Shah, S.P. Analysis of surface roughness using confocal microscopy. J. Mater. Sci. 1993, 28, 3879–3884. [Google Scholar] [CrossRef]
  21. Ichikawa, Y.; Toriwaki, J.-I. Confocal Microscope 3d Visualizing Method for Fine Surface Characterization of Microstructures; International Society for Optics and Photonics: Denver, CO, USA, 1996. [Google Scholar]
  22. Nadolny, K. Confocal laser scanning microscopy for characterization of surface micro discontinuities of vitrified bonded abrasive tools. Int. J. Mech. Eng. Robot. Res. 2012, 1, 14–29. [Google Scholar]
  23. Martišek, D. The 2D and 3D processing of images provided by conventional microscopes. Scanning 2002, 24, 284–296. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  24. Ficker, T.; Martišek, D. Digital fracture surfaces and their roughness analysis: Applications to cement-based materials. Cem. Concr. Res. 2012, 42, 827–833. [Google Scholar] [CrossRef]
  25. Ficker, T. Sectional techniques for 3D imaging of microscopic and macroscopic objects. Optik 2017, 144, 289–299. [Google Scholar] [CrossRef]
  26. Martišek, D. 3D Reconstruction of the Surface Using a Standard Camera. Math. Probl. Eng. 2017, 2017, 1–11. [Google Scholar] [CrossRef]
  27. Martišek, D. Fast Shape-From-Focus method for 3D object reconstruction. Optik 2018, 169, 16–26. [Google Scholar] [CrossRef]
  28. Sorensen, H.; Jones, D.; Heideman, M.; Burrus, C. Real-valued fast Fourier transform algorithms. In Proceedings of the IEEE Transactions on Acoustics, Speech, and Signal Processing; IEEE: Grenoble, France, 1987; Volume 35, pp. 849–863. [Google Scholar] [CrossRef] [Green Version]
  29. Darrell, T.; Wohn, K. Pyramid Based Depth from Focus. In Proceedings of the CVPR’88: The Computer Society Conference on Computer Vision and Pattern Recognition, Ann Arbor, MI, USA, 5–9 June 1988; Volume 2, pp. 504–509. [Google Scholar]
  30. Subbarao, M. Direct Recovery of Depth Map 2: A New Robust Approach Technical Report 87-03; State University of New York: Stony Brook, NY, USA, 1987. [Google Scholar]
  31. Ohta, T.; Sugihara, K.; Sugie, N. A Method for Image Composition Using Image Variance. Trans. IECE J66-D 2016, 66, 1245–1246. [Google Scholar]
  32. Druckmullerova, H. Phase-Correlation Based Image Registration. Master’s Thesis, Brno University of Technology, Brno-střed, Czech Republic, 2010. [Google Scholar]
  33. Gillespie, J.; King, R. The use of self-entropy as a focus measure in digital holography. Patt. Rec. Lett. 1989, 9, 19–25. [Google Scholar] [CrossRef]
  34. Brenner, J.F.; Dew, B.S.; Horton, J.B.; King, T.; Neurath, P.W.; Selles, W.D. An automated microscope for cytologic research: A preliminary evaluation. J. Histochem. Cytochem. 1976, 24, 100–111. [Google Scholar] [CrossRef]
  35. Pieper, R.J.; Korpel, A. Image processing for extended depth of field. Appl. Opt. 1983, 22, 1449–1453. [Google Scholar] [CrossRef] [PubMed]
  36. Sugimoto, S.A.; Ichioka, Y. Digital composition of images with increased depth of focus considering depth information. Appl. Opt. 1985, 24, 2076–2080. [Google Scholar] [CrossRef] [PubMed]
  37. Tenebaum, J.M. Accomodation in Computer Vision. Ph.D. Thesis, Stanford University, Stanford, CA, USA, 1970. [Google Scholar]
  38. Jarvis, R.A. Focus optimization criteria for computer image processing. Microscope 1976, 24, 163–180. [Google Scholar]
  39. Schlag, J.F.; Sanderson, A.C.; Neumann, C.P.; Wimberly, F.C. Implementation of Automatic Focusing Algorithms for a Computer Vision System with Camera Control; CMU-RI-TR-83-14; Carnegie Mellon University: Pittsburgh, PA, USA, 1983. [Google Scholar]
  40. Krotkov, E. Focusing. Int. J. Comput. Vis. 1987, 1, 223–237. [Google Scholar] [CrossRef]
  41. Krotkov, E. Exploratory Visual Sensing with an Agile Camera. Ph.D. Thesis, TR-87-29. University of Pennsylvania, Philadelphia, PA, USA, 1987. [Google Scholar]
  42. Pentland, A. A new sense for depth of field. IJCAI 1985, PAMI-9, 988–994. [Google Scholar] [CrossRef] [PubMed]
  43. Grossmann, P. Depth from focus. Pattern Recognit. Lett. 1987, 5, 63–69. [Google Scholar] [CrossRef]
  44. Kaneda, K.; Wakasu, Y.; Nakamae, E.; Tazawa, E. A method of pan-focused and stereoscopic display using a series of optical microscopic images. Proc. of Fourth Sym. 1988, 189–194. [Google Scholar]
  45. Martišek, D.; Procházková, J. The analysis of Rock Surface Asperities. Mendel 2018, 24, 135–142. [Google Scholar] [CrossRef]
Figure 1. Optical cut of fracture surface of hydrated cement paste acquired by confocal microscope Olympus LEXT 1000. Confocal mode (a), standard mode (b). Taken from [24].
Figure 2. Different scaling and different sharp and non-sharp regions in images acquired by the classic camera positioned at different distances from the 3D relief—the first (a) and the sixteenth (b) image of a series of sixteen images; blue marble, locality Nedvedice (Czech Republic), photo Pavel Starha. Taken from [26].
Figure 3. Different scaling and different sharp and non-sharp regions in images acquired by the classic camera positioned at different distances from the 3D relief—the first (a) and the fourth (b) image of a series of eight images; limestone, locality Brno-Hady (Czech Republic), photo Tomas Ficker. Taken from [26].
Figure 4. The central projection of a large sample—ideal case (a) can be solved by elementary mathematics, real case (b) necessitates sophisticated mathematical tools. Taken from [26].
Figure 5. 3D reconstruction of the data from Figure 3 after elementary registration by Figure 4a. Reconstruction taken from [27]. Software developed by the author.
Figure 6. Fourier transforms of the fifth term of the series of expanding rectangular signals. The series $\delta_n^*(\xi)$ converges to the $\delta$-distribution. Taken from [26].
Figure 7. Even extension of the 32 × 32 neighborhood (framed) of the pixel processed (cross). Taken from [27].
Figure 8. Graphical representation of sharpness detectors $P_a$ (a); $P_b$ (b); $P_c$ (c). Taken from [26].
Figure 9. The first (a) and the fifteenth (b) image of a series of fifteen images of blue marble (see Figure 2) displayed in supplementary pseudo-colours. The software used was written by the author. Taken from [26].
Figure 10. The arithmetic mean of the images of Figure 9: before registration (a), after registration (b). The software used was written by the author. Taken from [26].
Figure 11. Optical cuts detected on multifocal image of limestone (two images in the series—see Figure 3). The software was written by the first author of this paper. Taken from [45].
Figure 12. A 2D reconstruction of the limestone image by the optical cuts used in Figure 11. The software was written by the first author. Taken from [45].
Figure 13. A 3D echelon approximation of the limestone sample by the optical cuts used in Figure 11. The software was written by the first author. Taken from [45].
Figure 14. A 3D echelon approximation of the limestone sample by the optical cuts used in Figure 13 smoothed by 3D low-pass filters. The software was written by the first author. Taken from [45].
Figure 15. A 3D reconstruction of the limestone sample by data registration according to Section 3, with focusing criterion (28) and profile height calculations (29) and (30) (compare with Figure 5). The software was written by the first author. Taken from [45].
Figure 16. A 3D reconstruction of the blue marble sample by data registration according to Section 3, with focusing criterion (28) and profile height calculations (29) and (30). The software was written by the first author. Taken from [45].
Figure 17. A confocal 3D relief of a single pore of hydrated Portland cement paste; 47 optical cuts with a vertical stepping of 1.2 μm. Olympus LEXT 1000, confocal mode, Olympus factory software. Taken from [27].
Figure 18. The same single pore of hydrated Portland cement paste as in Figure 17; 47 optical cuts with a vertical stepping of 1.2 μm. Olympus LEXT 1000 again, non-confocal mode. A 3D reconstruction by data registration according to Section 3, focusing criterion (28), and profile height calculations (29) and (30). The software was written by the first author. Taken from [27].
Table 1. Parameters of the transforms indicated and applied to the separate images in the blue marble series (stated relative to the first image). Columns 2–5: transforms indicated; columns 6–9: transforms applied. Taken from [26].

| Image No. | Scale | Rotation (arcmin) | x (pixels) | y (pixels) | Scale | Rotation (arcmin) | x (pixels) | y (pixels) |
|---|---|---|---|---|---|---|---|---|
| 2 | 1.01343 | −0.554 | −1.011 | 1.064 | 0.98675 | 0.554 | 1.011 | −1.064 |
| 3 | 1.02351 | −0.221 | −0.990 | 1.861 | 0.97703 | 0.221 | 0.990 | −1.861 |
| 4 | 1.03796 | −0.061 | −1.869 | 3.027 | 0.96343 | 0.061 | 1.869 | −3.027 |
| 5 | 1.04888 | 0.053 | −2.903 | 3.032 | 0.95340 | −0.053 | 2.903 | −3.032 |
| 6 | 1.06228 | −0.409 | −4.085 | 5.142 | 0.94137 | 0.409 | 4.085 | −5.142 |
| 7 | 1.07055 | −0.140 | −4.947 | 4.987 | 0.93410 | 0.140 | 4.947 | −4.987 |
| 8 | 1.08105 | −0.027 | −4.964 | 5.856 | 0.92503 | 0.027 | 4.964 | −5.856 |
| 9 | 1.09173 | 0.134 | −4.847 | 6.141 | 0.91598 | −0.134 | 4.847 | −6.141 |
| 10 | 1.10340 | 4.652 | −4.988 | 7.003 | 0.90629 | −4.652 | 4.988 | −7.003 |
| 11 | 1.11426 | 5.215 | −5.003 | 7.934 | 0.89746 | −5.215 | 5.003 | −7.934 |
| 12 | 1.12590 | 5.728 | −5.895 | 8.972 | 0.88818 | −5.728 | 5.895 | −8.972 |
| 13 | 1.13784 | 5.278 | −5.907 | 8.872 | 0.87886 | −5.278 | 5.907 | −8.872 |
| 14 | 1.14940 | 5.378 | −5.928 | 8.980 | 0.87002 | −5.378 | 5.928 | −8.980 |
| 15 | 1.16066 | 5.275 | −6.935 | 9.137 | 0.86158 | −5.275 | 6.935 | −9.137 |
| 16 | 1.17297 | 5.324 | −8.036 | 9.095 | 0.85254 | −5.324 | 8.036 | −9.095 |
Table 2. Root Mean Square Error (RMSE), Average Deviation (AD), Pearson Correlation Coefficient (PCC) and Difference of surface Information Entropy (DIE) for separate pairs of echelon approximations $C_a$, $C_b$, and $C_c$ for the data of Figure 3. Taken from [26].

RMSE (above the diagonal) and PCC (below the diagonal):

| | $C_a$ | $C_b$ | $C_c$ |
|---|---|---|---|
| $C_a$ | – | 0.34152 | 0.46420 |
| $C_b$ | 0.99320 | – | 0.37476 |
| $C_c$ | 0.98740 | 0.99158 | – |

AD (above the diagonal) and DIE (below the diagonal):

| | $C_a$ | $C_b$ | $C_c$ |
|---|---|---|---|
| $C_a$ | – | 0.10172 | 0.20114 |
| $C_b$ | 0.00505 | – | 0.13145 |
| $C_c$ | 0.00812 | 0.00307 | – |
Table 3. Root Mean Square Error (RMSE), Average Deviation (AD), Pearson Correlation Coefficient (PCC), and Difference of surface Information Entropy (DIE) for separate pairs of parabolic interpolation approximations $C_a$, $C_b$, and $C_c$ for data from Figure 2, Figure 8 and Figure 9. Taken from [26].

RMSE (above the diagonal) and PCC (below the diagonal):

| | $C_a$ | $C_b$ | $C_c$ |
|---|---|---|---|
| $C_a$ | – | 0.18500 | 0.25909 |
| $C_b$ | 0.99850 | – | 0.16511 |
| $C_c$ | 0.99627 | 0.99810 | – |

AD (above the diagonal) and DIE (below the diagonal):

| | $C_a$ | $C_b$ | $C_c$ |
|---|---|---|---|
| $C_a$ | – | 0.10066 | 0.18512 |
| $C_b$ | 0.00464 | – | 0.12144 |
| $C_c$ | 0.00714 | 0.00250 | – |

