Article

Correlation Plenoptic Imaging With Entangled Photons

1 Museo Storico della Fisica e Centro Studi e Ricerche “Enrico Fermi”, I-00184 Roma, Italy
2 Istituto Nazionale di Fisica Nucleare (INFN), Sezione di Bari, I-70126 Bari, Italy
3 Dipartimento Interateneo di Fisica, Università degli studi di Bari, I-70126 Bari, Italy
4 Fischell Department of Bioengineering, University of Maryland, College Park, MD 20742, USA
* Author to whom correspondence should be addressed.
Academic Editors: Yanhua Shih and Ronald E. Meyers
Received: 3 May 2016 / Revised: 30 May 2016 / Accepted: 1 June 2016 / Published: 7 June 2016
(This article belongs to the Special Issue Quantum Imaging)

Abstract

Plenoptic imaging is a novel optical technique for three-dimensional imaging in a single shot. It is enabled by the simultaneous measurement of both the location and the propagation direction of light in a given scene. In the standard approach, the maximum spatial and angular resolutions are inversely proportional, and so are the resolution and the maximum achievable depth of focus of the 3D image. We have recently proposed a method to overcome such fundamental limits by combining plenoptic imaging with an intriguing correlation remote-imaging technique: ghost imaging. Here, we theoretically demonstrate that correlation plenoptic imaging can be effectively achieved by exploiting the position-momentum entanglement characterizing spontaneous parametric down-conversion (SPDC) photon pairs. As a proof-of-principle demonstration, we shall show that correlation plenoptic imaging with entangled photons may enable the refocusing of an out-of-focus image with the same depth of focus as a standard plenoptic device, but without sacrificing diffraction-limited image resolution.
Keywords: entanglement; ghost imaging; three-dimensional imaging

1. Introduction

Plenoptic imaging, also known as light-field or integral imaging, is a novel optical imaging modality [1]. Its key principle is to record the three-dimensional light field of a given scene by measuring both the location and the propagation direction of the incoming light. In particular, several images of the scene, one for each propagation direction of light within the scene, are acquired in a single shot. On one hand, such images correspond to the viewpoints required to enable the three-dimensional reconstruction of the scene. In fact, plenoptic imaging is the simplest method of 3D imaging with the present technological means [2,3,4]. On the other hand, the available angular information also simplifies low-light shooting: The acquired images can be combined, in post-processing, to give an overall image characterized by the same depth of field as the N original images, but a signal-to-noise ratio N times larger [5].
Plenoptic imaging is currently used in digital cameras enhanced by refocusing capabilities [6,7,8]; in fact, in photography, plenoptic imaging highly simplifies both auto-focus and low-light shooting [5]. A plethora of innovative applications in 3D imaging and sensing [2,9], stereoscopy [1,10,11] and microscopy [3,12,13] are also being developed. In particular, high-speed large-scale 3D functional imaging of neuronal activity has been demonstrated [4].
However, the potential of plenoptic imaging is strongly limited by the inherent inverse proportionality between image resolution and maximum achievable depth of field. In fact, plenoptic imaging has so far been implemented by inserting a microlens array in the native image plane and placing the sensor array behind the microlenses. The image of the scene is reproduced on the microlenses, which thus define the spatial resolution of the acquired image. Each microlens also reproduces, on the sensor array, an image of the camera lens, thus providing the angular information associated with each imaging pixel [5]. As a result, a trade-off between spatial and angular resolution is built into the plenoptic imaging process. To recover the lost resolution, signal processing and deconvolution have been implemented [3,4,14,15,16,17].
We have recently proposed a novel approach to plenoptic imaging, named correlation plenoptic imaging (CPI), which exploits the spatio-temporal second-order correlations typical of chaotic light sources to beat the strong coupling between spatial and angular resolution imposed on standard plenoptic imaging devices [18,19]. From a fundamental standpoint, the plenoptic application has been the first physical context in which the counterintuitive properties of chaotic light (namely, the coexistence of momentum and position correlations [20]) are effectively used to beat intrinsic limits of standard imaging systems. From a practical standpoint, our protocol has been shown to dramatically enhance the potential of plenoptic imaging. However, in contrast with chaotic light [21], correlation imaging based on entangled photons has been shown to enable sub-shot-noise imaging [22], as required by biomedical and security applications. Hence, in this paper, we investigate the possibility of performing CPI with entangled photons, or twin beams, from spontaneous parametric down-conversion (SPDC) [23]. We show that the peculiar momentum-momentum and position-position correlations typical of such EPR entangled systems [24,25] can be simultaneously exploited to substantially weaken the connection between spatial resolution and depth of field typical of standard plenoptic imaging.
The proposed setup for CPI with entangled photons from SPDC is reported in Figure 1. In view of plenoptic imaging, the setup must enable the parallel acquisition of several images of the given scene, one for each propagation direction of light. In fact, as we shall soon demonstrate, the sensor array $S_a$ retrieves $N$ coherent ghost images of the object by means of correlation measurements with each of the pixels of the sensor array $S_b$. Such images represent different viewpoints of the desired scene. This is quite intuitive considering that sensor $S_b$ reproduces the image of the light source; hence, each coherent ghost image is associated with a different illumination of the object. Interestingly, the single lens $L_b$ replaces the microlens array required in standard plenoptic imaging. In summary, the basic idea of CPI is to replace the complex system composed of the microlens array followed by a single sensor with a single lens and two separate sensors; spatial and angular measurements are thus physically decoupled, enabling a significant weakening of the inverse proportionality between spatial and angular resolution that characterizes standard plenoptic imaging devices.

2. Theoretical Analysis

2.1. Background

The coincidence detection of entangled photons from SPDC is described by the second order Glauber correlation function [26]:
$$G^{(2)}(\mathbf{r}_a,\mathbf{r}_b;t_a,t_b)=\langle\Psi|\,E_a^{(-)}(\mathbf{r}_a,t_a)\,E_b^{(-)}(\mathbf{r}_b,t_b)\,E_b^{(+)}(\mathbf{r}_b,t_b)\,E_a^{(+)}(\mathbf{r}_a,t_a)\,|\Psi\rangle, \tag{1}$$
where
$$E_j^{(+)}(\mathbf{r}_j,t_j)=\int\mathrm{d}\omega\,\mathrm{d}\boldsymbol{\kappa}\;a_{\mathbf{k}}\,e^{-i\omega t_j}\,g_j(\mathbf{r}_j,\mathbf{k}), \tag{2}$$
is the positive-energy part of the electric field at sensor $j$ (with $j=a,b$), placed in $\mathbf{r}_j=(\boldsymbol{\rho}_j,z_j)$; here $t_j$ is the time of detection, $\omega$ is the frequency, $\mathbf{k}=(\boldsymbol{\kappa},\omega/c)$ is the wave vector of the detected radiation, and $g_j$ is the Green's function propagating the field mode $\mathbf{k}$ from the source to the sensor. The negative-energy part $E_j^{(-)}(\mathbf{r}_j,t_j)$ of the electric field is the Hermitian conjugate of the field $E^{(+)}$ of Equation (2). A scalar approximation for the electric field has been assumed, which physically corresponds to considering a fixed polarization of light. The positive- and negative-energy parts of the electric field involve the photon annihilation and creation operators ($a_{\mathbf{k}}$ and $a_{\mathbf{k}}^{\dagger}$), respectively, associated with wave vector $\mathbf{k}$. The expectation value in Equation (1) is taken over the two-photon signal-idler state produced by SPDC [27,28,29]:
$$|\Psi\rangle=\mathcal{N}\int\mathrm{d}\nu\;s(LD\nu)\int\mathrm{d}\boldsymbol{\kappa}_i\,\mathrm{d}\boldsymbol{\kappa}_s\;h_{\mathrm{tr}}(\boldsymbol{\kappa}_i+\boldsymbol{\kappa}_s)\,a_{\mathbf{k}_i}^{\dagger}\,a_{\mathbf{k}_s}^{\dagger}\,|0\rangle, \tag{3}$$
where $\mathcal{N}$ is a normalization constant, $\nu$ is the detuning with respect to the central frequency of signal and idler $\Omega_s=\Omega_i=\omega_p/2$, which is linked by phase matching to the central frequency $\omega_p$ of the pump laser, $L$ is the length of the SPDC crystal, $D$ is the difference between the inverse group velocities of signal and idler, $s(LD\nu)$ is the spectrum of the SPDC biphoton [30,31], and $h_{\mathrm{tr}}$ is the Fourier transform of the pump transverse profile:
$$\mathcal{F}(\boldsymbol{\rho})=\int\mathrm{d}\boldsymbol{\kappa}\;e^{i\boldsymbol{\kappa}\cdot\boldsymbol{\rho}}\,h_{\mathrm{tr}}(\boldsymbol{\kappa}). \tag{4}$$
We have assumed, for simplicity, degenerate SPDC radiation, but the result can be easily generalized to the non-degenerate situation [32,33]. Without loss of generality, we shall further assume the source to be monochromatic, in such a way that the time dependence of the correlation function will not be relevant. By employing the canonical commutation relations $[a_{\mathbf{k}},a_{\mathbf{k}'}]=0$ and $[a_{\mathbf{k}},a_{\mathbf{k}'}^{\dagger}]=\delta(\mathbf{k}-\mathbf{k}')$, with $\delta$ the Dirac delta distribution, and the inversion symmetry of the Fourier transform of the transverse pump profile, $h_{\mathrm{tr}}(\boldsymbol{\kappa})=h_{\mathrm{tr}}(-\boldsymbol{\kappa})$, the spatial part of the two-photon correlation function reads:
$$\Gamma(\boldsymbol{\rho}_a,\boldsymbol{\rho}_b)=\left|\int\mathrm{d}\boldsymbol{\kappa}_a\,\mathrm{d}\boldsymbol{\kappa}_b\;g_a(\boldsymbol{\rho}_a,\boldsymbol{\kappa}_a)\,g_b(\boldsymbol{\rho}_b,\boldsymbol{\kappa}_b)\,h_{\mathrm{tr}}(\boldsymbol{\kappa}_a+\boldsymbol{\kappa}_b)\right|^2, \tag{5}$$
up to irrelevant constants. This result indicates the strong coupling between the two remote sensors, as enabled by the momentum-momentum entanglement characterizing SPDC biphotons.
Let us now evaluate the propagators in the two arms of the setup depicted in Figure 1; we shall assume for simplicity the lenses to be diffraction-limited. In arm $a$, light propagates in free space for a distance $z_a$ from the source to the lens $L_a$ and is then detected by the sensor $S_a$, placed at a distance $z'_a$ from the lens. In the paraxial approximation, propagation of a field with frequency $\Omega\equiv ck_z$ in free space from $(\boldsymbol{\rho}_1,z_1)$ to $(\boldsymbol{\rho}_2,z_2)$ is described by the function [34]:
$$\mathcal{G}(\boldsymbol{\rho}_2-\boldsymbol{\rho}_1,z_2-z_1)=-\frac{i\Omega\,e^{i\frac{\Omega}{c}(z_2-z_1)}}{2\pi c\,(z_2-z_1)}\;G(\boldsymbol{\rho}_2-\boldsymbol{\rho}_1)\!\left[\frac{\Omega}{c(z_2-z_1)}\right], \tag{6}$$
with $G(\mathbf{x})[y]=e^{iy|\mathbf{x}|^2/2}$. Knowing the initial field $E(\boldsymbol{\rho}_1)$, one can determine the final field $E(\boldsymbol{\rho}_2)=\int\mathrm{d}\boldsymbol{\rho}_1\,E(\boldsymbol{\rho}_1)\,\mathcal{G}(\boldsymbol{\rho}_2-\boldsymbol{\rho}_1,z_2-z_1)$. Propagation through a lens of focal length $f$ is described by the factor $G(\boldsymbol{\rho}_l)[-\Omega/(cf)]$, with $\boldsymbol{\rho}_l$ the transverse coordinate on the lens plane. Hence, the propagator associated with arm $a$ of the setup reads:
$$g_a(\boldsymbol{\rho}_a,\boldsymbol{\kappa}_a)=C_a(z_a,z'_a)\int\mathrm{d}\boldsymbol{\rho}_s\,\mathrm{d}\boldsymbol{\rho}\;e^{i\boldsymbol{\kappa}_a\cdot\boldsymbol{\rho}_s}\,G(\boldsymbol{\rho}-\boldsymbol{\rho}_s)\!\left[\frac{\Omega}{cz_a}\right]G(\boldsymbol{\rho})\!\left[-\frac{\Omega}{cf}\right]G(\boldsymbol{\rho}_a-\boldsymbol{\rho})\!\left[\frac{\Omega}{cz'_a}\right]$$
$$=C'_a(z_a,z'_a)\,G(\boldsymbol{\rho}_a)\!\left[\frac{\Omega}{c}\,\frac{1}{z'_a}\left(1-\frac{\zeta(z_a,z'_a)}{z'_a}\right)\right]\int\mathrm{d}\boldsymbol{\rho}_s\;e^{i\boldsymbol{\kappa}_a\cdot\boldsymbol{\rho}_s}\,G(\boldsymbol{\rho}_s)\!\left[\frac{\Omega}{cz_a}\left(1-\frac{\zeta(z_a,z'_a)}{z_a}\right)\right]e^{-i\frac{\Omega\,\zeta(z_a,z'_a)}{c\,z_a z'_a}\,\boldsymbol{\rho}_s\cdot\boldsymbol{\rho}_a}, \tag{7}$$
where
$$\zeta(z_a,z'_a)=\left(\frac{1}{z_a}+\frac{1}{z'_a}-\frac{1}{f}\right)^{-1}, \tag{8}$$
$\boldsymbol{\rho}_s$ and $\boldsymbol{\rho}$ are the transverse coordinates on the source and the lens $L_a$ plane, respectively, and $C_a$, $C'_a$ contain irrelevant constants. In arm $b$, light propagates for a distance $z_b$ from the source to the object, which represents the desired scene to image, then for a distance $z'_b$ from the object to the lens $L_b$, and a further distance $z''_b$ before being detected by the sensor $S_b$. By indicating with $A$ the aperture function of the object, and assuming the focusing condition $1/(z_b+z'_b)+1/z''_b=1/F$ to be satisfied, the propagator associated with arm $b$ of the setup reads:
$$g_b(\boldsymbol{\rho}_b,\boldsymbol{\kappa}_b)=C_b\int\mathrm{d}\boldsymbol{\rho}_s\,\mathrm{d}\boldsymbol{\rho}_o\,\mathrm{d}\boldsymbol{\rho}\;e^{i\boldsymbol{\kappa}_b\cdot\boldsymbol{\rho}_s}\,A(\boldsymbol{\rho}_o)\,G(\boldsymbol{\rho}_o-\boldsymbol{\rho}_s)\!\left[\frac{\Omega}{cz_b}\right]G(\boldsymbol{\rho}-\boldsymbol{\rho}_o)\!\left[\frac{\Omega}{cz'_b}\right]G(\boldsymbol{\rho})\!\left[-\frac{\Omega}{cF}\right]G(\boldsymbol{\rho}_b-\boldsymbol{\rho})\!\left[\frac{\Omega}{cz''_b}\right]$$
$$=C'_b\,G(\boldsymbol{\rho}_b)\!\left[\frac{\Omega}{cz''_b}\left(1-\frac{1}{z''_b}\left(\frac{1}{z'_b}+\frac{1}{z''_b}-\frac{1}{F}\right)^{-1}\right)\right]\int\mathrm{d}\boldsymbol{\rho}_s\,\mathrm{d}\boldsymbol{\rho}_o\;e^{i\boldsymbol{\kappa}_b\cdot\boldsymbol{\rho}_s}\,G(\boldsymbol{\rho}_s)\!\left[\frac{\Omega}{cz_b}\right]A(\boldsymbol{\rho}_o)\,e^{-i\frac{\Omega}{cz_b}\left(\boldsymbol{\rho}_s+\frac{\boldsymbol{\rho}_b}{M}\right)\cdot\boldsymbol{\rho}_o}, \tag{9}$$
where $\boldsymbol{\rho}_o$ and $\boldsymbol{\rho}$ are the transverse coordinates on the object and the lens $L_b$ planes, respectively, $M=z''_b/(z_b+z'_b)$ is the magnification of the image of the source on the sensor array $S_b$, and $C_b$, $C'_b$ contain irrelevant constants.
By inserting in Equation (5) the Green's functions given by Equations (7) and (9), together with the laser pump profile on the SPDC crystal, as defined in Equation (4), one finds that the second order correlation function associated with signal-idler pairs from SPDC is given by the plenoptic correlation function:
$$\Gamma(\boldsymbol{\rho}_a,\boldsymbol{\rho}_b)=K\left|\int\mathrm{d}\boldsymbol{\rho}_o\,A(\boldsymbol{\rho}_o)\int\mathrm{d}\boldsymbol{\rho}_s\,\mathcal{F}(\boldsymbol{\rho}_s)\,G(\boldsymbol{\rho}_s)\!\left[\frac{\Omega}{c}\left(\frac{1}{z_b}+\frac{1}{z_a}\left(1-\frac{\zeta(z_a,z'_a)}{z_a}\right)\right)\right]e^{-i\frac{\Omega\,\zeta(z_a,z'_a)}{c\,z_a z'_a}\,\boldsymbol{\rho}_s\cdot\boldsymbol{\rho}_a}\,e^{-i\frac{\Omega}{cz_b}\left(\boldsymbol{\rho}_s+\frac{\boldsymbol{\rho}_b}{M}\right)\cdot\boldsymbol{\rho}_o}\right|^2, \tag{10}$$
where K contains irrelevant constants.

2.2. Plenoptic Properties of the Correlation Function and Refocusing Capability

As shown in Equation (10), the proposed CPI protocol is theoretically described by a second order correlation function encoding both spatial and angular information, and hence characterized by the key refocusing capability typical of plenoptic imaging.
To develop an intuition about the result of Equation (10), we consider the simple case in which the distance $z_b=z_b^F$ between the object and the source satisfies the two-photon thin lens equation [25,35]:
$$\frac{1}{z_a+z_b^F}+\frac{1}{z'_a}=\frac{1}{f}. \tag{11}$$
In this case, by integrating the result of Equation (10) over the whole sensor array $S_b$, one gets the standard (incoherent) ghost image of the object, magnified by a factor $m=z'_a/(z_a+z_b^F)$, namely [25,35],
$$\Sigma_F(\boldsymbol{\rho}_a)=\int\mathrm{d}\boldsymbol{\rho}_b\,\Gamma(\boldsymbol{\rho}_a,\boldsymbol{\rho}_b)\propto\int\mathrm{d}\boldsymbol{\rho}_o\,|A(\boldsymbol{\rho}_o)|^2\left|h_{\mathrm{tr}}\!\left[\frac{\Omega}{cz_b^F}\left(\boldsymbol{\rho}_o+\frac{\boldsymbol{\rho}_a}{m}\right)\right]\right|^2, \tag{12}$$
where $h_{\mathrm{tr}}$ is the Fourier transform of the laser pump profile, as defined in Equation (4). The above result is valid under the hypothesis that $h_{\mathrm{tr}}$ is similar to or narrower than the Fourier transform of the aperture of the imaging lens $L_a$. In fact, such an incoherent ghost image is formally equivalent to the incoherent image one would obtain in a standard imaging system, with $h_{\mathrm{tr}}$ playing the role of the point-spread function, which in the standard case is given by the Fourier transform of the imaging lens aperture function.
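To make the structure of Equation (12) concrete, here is a minimal one-dimensional numerical sketch of the focused incoherent ghost image as a convolution of the object intensity profile with the pump-defined point-spread function; the grid units, the double-slit object, and the Gaussian PSF width are our illustrative choices, not values from the paper.

```python
import numpy as np

# 1D sketch of Eq. (12): the focused incoherent ghost image is the object
# intensity |A|^2 convolved with the point-spread function |h_tr|^2 set by
# the pump. Units and widths below are illustrative assumptions.
x = np.linspace(-1.0, 1.0, 1001)     # object-plane coordinate (arb. units)
dx = x[1] - x[0]

# double-slit object: two openings of width 0.1 centered at +/- 0.2
A2 = ((np.abs(x - 0.2) < 0.05) | (np.abs(x + 0.2) < 0.05)).astype(float)

psf = np.exp(-x**2 / (2 * 0.05**2))  # |h_tr|^2 for a hypothetical Gaussian pump
psf /= psf.sum() * dx                # normalize to unit area

ghost = np.convolve(A2, psf, mode='same') * dx   # incoherent ghost image

# The slits remain resolved because the PSF width (0.05) is smaller than their
# separation (0.4); a narrower pump (broader h_tr) would merge them.
```

The same convolution picture explains why the spatial resolution of the ghost image is governed by the pump profile alone, independently of the angular sensor.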
However, the second order correlation function of Equation (10) can do much better than standard ghost imaging: The deep physical difference arises from the coherent nature of the ghost image it describes,
$$\Gamma_F(\boldsymbol{\rho}_a,\boldsymbol{\rho}_b)=K\left|\int\mathrm{d}\boldsymbol{\rho}_o\,A(\boldsymbol{\rho}_o)\,h_{\mathrm{tr}}\!\left[\frac{\Omega}{cz_b^F}\left(\boldsymbol{\rho}_o+\frac{\boldsymbol{\rho}_a}{m}\right)\right]e^{-i\frac{\Omega}{cz_b^F}\frac{\boldsymbol{\rho}_b}{M}\cdot\boldsymbol{\rho}_o}\right|^2, \tag{13}$$
that is obtained from the general expression (10), when the focusing condition in Equation (11) holds.
The coherence of such a ghost image is the immediate consequence of measuring coincidences between the spatial sensor $S_a$ and any single pixel of the angular sensor $S_b$. This can be better understood in terms of the Klyshko picture [35] reported in Figure 2: The light illuminating the object and contributing to the coincidence detection between any pair of pixels $\boldsymbol{\rho}_a$ and $\boldsymbol{\rho}_b$ has a well-defined propagation direction (i.e., it is spatially coherent). As made clear by Figure 2, the Klyshko picture also enables the interpretation of the proposed setup for CPI with entangled photons as a sort of correlation pinhole camera. Such a perspective helps develop an intuition about the analogy between the proposed scheme and standard plenoptic imaging, as well as about the role played by the sensor $S_b$ in retrieving the angular information on the two-photon light field. In fact, due to the quasi one-to-one correspondence between points on the sensor $S_b$ and points on the source, one can trace, in post-processing, the geometrical ray connecting each point of the source with each point of the object. This leads to the peculiar refocusing and 3D imaging capabilities of plenoptic imaging.
Now, to explicitly demonstrate this last point and better highlight the plenoptic properties of the second-order correlation function of Equation (10), we shall consider the more general out-of-focus situation ($z_b\neq z_b^F$) and rewrite the correlation function as an integral of the pump profile $\mathcal{F}$ and the object aperture function $A$, weighted by the phase factor $e^{i\frac{\Omega}{c}\varphi(\boldsymbol{\rho}_o,\boldsymbol{\rho}_s;\boldsymbol{\rho}_a,\boldsymbol{\rho}_b)}$, with:
$$\varphi(\boldsymbol{\rho}_o,\boldsymbol{\rho}_s;\boldsymbol{\rho}_a,\boldsymbol{\rho}_b)=\left[\frac{1}{z_b}+\frac{1}{z_a}\left(1-\frac{\zeta(z_a,z'_a)}{z_a}\right)\right]\frac{|\boldsymbol{\rho}_s|^2}{2}-\frac{\zeta(z_a,z'_a)}{z_a z'_a}\,\boldsymbol{\rho}_s\cdot\boldsymbol{\rho}_a-\frac{1}{z_b}\left(\boldsymbol{\rho}_s+\frac{\boldsymbol{\rho}_b}{M}\right)\cdot\boldsymbol{\rho}_o, \tag{14}$$
namely
$$\Gamma(\boldsymbol{\rho}_a,\boldsymbol{\rho}_b)\propto\left|\int\mathrm{d}\boldsymbol{\rho}_o\,A(\boldsymbol{\rho}_o)\int\mathrm{d}\boldsymbol{\rho}_s\,\mathcal{F}(\boldsymbol{\rho}_s)\,e^{i\frac{\Omega}{c}\varphi(\boldsymbol{\rho}_o,\boldsymbol{\rho}_s;\boldsymbol{\rho}_a,\boldsymbol{\rho}_b)}\right|^2. \tag{15}$$
The stationary points of the phase defined in Equation (14) enable us to determine the geometrical correspondence between points on the object and on the source with points on the sensors $S_a$ and $S_b$, respectively. In particular, the stationarity of $\varphi$ with respect to $\boldsymbol{\rho}_s$ determines the object point that gives the predominant contribution to the integral of Equation (15), that is:
$$\boldsymbol{\rho}_o=-\frac{z_b}{z_b^F}\,\frac{\boldsymbol{\rho}_a}{m}-\frac{\boldsymbol{\rho}_b}{M}\left(1-\frac{z_b}{z_b^F}\right), \tag{16}$$
where the identity $\zeta(z_a,z'_a)=(z_b^F+z_a)\,z_a/z_b^F$ has been used. When the focusing condition of Equation (11) is satisfied ($z_b=z_b^F$), this object point becomes independent of the specific sensor pixel $\boldsymbol{\rho}_b$. Hence, the focused ghost image is not sensitive to the change of perspective enabled by the high resolution of the angular sensor $S_b$. On the other hand, the stationarity of $\varphi$ with respect to $\boldsymbol{\rho}_o$ yields the focusing of the source on the sensor $S_b$:
$$\boldsymbol{\rho}_s=-\frac{\boldsymbol{\rho}_b}{M}. \tag{17}$$
Thus, in the geometrical optics limit, the second order correlation function of Equation (15) reduces to the product of the tilted and rescaled geometrical image of the object and the source profile:
$$\Gamma_G(\boldsymbol{\rho}_a,\boldsymbol{\rho}_b)\propto\left|A\!\left(-\frac{z_b}{z_b^F}\,\frac{\boldsymbol{\rho}_a}{m}-\frac{\boldsymbol{\rho}_b}{M}\left(1-\frac{z_b}{z_b^F}\right)\right)\right|^2\left|\mathcal{F}\!\left(-\frac{\boldsymbol{\rho}_b}{M}\right)\right|^2. \tag{18}$$
Interestingly, by properly rescaling the variable $\boldsymbol{\rho}_a$, the object can be completely decoupled from the source; in fact, the rescaled second order correlation function
$$\Gamma_G^{\mathrm{ref}}(\boldsymbol{\rho}_a,\boldsymbol{\rho}_b)\equiv\Gamma_G\!\left(\frac{z_b^F}{z_b}\,\boldsymbol{\rho}_a+\frac{\boldsymbol{\rho}_b}{M}\,m\left(1-\frac{z_b^F}{z_b}\right),\,\boldsymbol{\rho}_b\right)\propto\left|\mathcal{F}\!\left(-\frac{\boldsymbol{\rho}_b}{M}\right)\right|^2\left|A\!\left(-\frac{\boldsymbol{\rho}_a}{m}\right)\right|^2 \tag{19}$$
gives the perfect geometrical image of the desired scene. Such rescaling is formally identical to the one employed both in standard plenoptic imaging [5] and in correlation plenoptic imaging with chaotic light [18,19].
Similar to standard plenoptic imaging, the signal-to-noise ratio of the refocused image can be improved by integrating the result of Equation (19) over the whole sensor array $S_b$, thus employing light coming from the whole light source:
$$\Sigma^{\mathrm{ref}}(\boldsymbol{\rho}_a)=\int\mathrm{d}\boldsymbol{\rho}_b\;\Gamma\!\left(\frac{z_b^F}{z_b}\,\boldsymbol{\rho}_a+\frac{\boldsymbol{\rho}_b}{M}\,m\left(1-\frac{z_b^F}{z_b}\right),\,\boldsymbol{\rho}_b\right). \tag{20}$$
This result represents the refocused incoherent ghost image of an object placed at a generic distance $z_b$ from the source, and is thus the central result of the present paper.
The possibility of reconstructing the light field and refocusing an out-of-focus image, as reported in Equation (20), relies on the accuracy with which both object and source points are in a one-to-one correspondence with points on sensors $S_a$ and $S_b$, respectively. We have already demonstrated that the Fourier transform of the transverse pump profile determines the object point-spread function (see Equation (12)), with a spot size $\Delta\rho_a\simeq mcz_b^F/(\Omega D_s)$, where $D_s$ is the diameter of the pump profile. On the other hand, it is easy to check that the source is imaged with a point-spread function given by the Fourier transform of the object aperture function. From Equation (10), one can infer that a point on the source corresponds to a spot of width $\Delta\rho_b\simeq Mcz_b/(\Omega d)$ on the sensor $S_b$, with $d$ the smallest length scale of the aperture function of the object. Thus, as long as the pixel sizes lie above these resolution limits, the spatial and angular resolutions are decoupled. The structure of a standard plenoptic device, instead, entails an inverse proportionality between the angular resolution and the spatial resolution of the focused image, even in the geometrical-optics regime [1,5]. Thus, our protocol of plenoptic imaging with entangled photons enables us to beat this intrinsic limitation and achieve a larger depth of field (depending on the angular resolution), while leaving unchanged both the resolution of the focused image and the total number of pixels.

3. Simulation of CPI With Entangled Photons From SPDC

In Figure 3, we show the enhanced depth of field induced by the refocusing capability of the SPDC correlation plenoptic protocol. A mask with a transparent letter E, whose thickness is $d=0.2\,\mathrm{mm}$, is placed in a setup with $z_a=10\,\mathrm{mm}$, $z'_a=30\,\mathrm{mm}$, and $f=12\,\mathrm{mm}$, which would give a focused ghost image magnified by $m=1.5$. The object mask is illuminated by SPDC photons with $\lambda=1\,\mu\mathrm{m}$, generated by a pump whose Gaussian transverse profile has width $\sigma=0.6\,\mathrm{mm}$. With respect to the source, the object is placed at a distance $z_b=3\,\mathrm{mm}$, which is less than one third of the focused plane distance $z_b^F=10\,\mathrm{mm}$. The ghost image of such an object would be focused at $z'^F_a=5.2\,z'_a$. The widths of the sensors $S_a$ and $S_b$ are fixed to $W_a=6md=1.8\,\mathrm{mm}$ and $W_b=4M\sigma=1.9\,\mathrm{mm}$, with $M=0.8$ the magnification of the source image reproduced on $S_b$. Their pixel size $\delta=6\,\mu\mathrm{m}$ is close to both resolution limits, as defined by the source and the object's aperture. The results reported in Figure 3 clearly indicate that the refocusing procedure enables the recovery of the information on the aperture function of the object, which is completely lost in the misfocused ghost image.
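As a sanity check, the quoted geometry can be recomputed in a few lines of Python (lengths in mm; the formulas are the magnification and two-photon thin-lens relations of Section 2, and the variable names are ours):

```python
# Consistency check of the simulation parameters (all lengths in mm).
z_a, z_a1, f = 10.0, 30.0, 12.0      # z_a, z'_a, focal length of L_a
z_bF, z_b = 10.0, 3.0                # focused and actual object-source distances
d, sigma, delta = 0.2, 0.6, 6e-3     # mask thickness, pump width, pixel size
M = 0.8                              # magnification of the source image on S_b

m = z_a1 / (z_a + z_bF)                      # ghost-image magnification -> 1.5
z_a1F = 1.0 / (1.0 / f - 1.0 / (z_a + z_b))  # focusing plane of the z_b ghost image
W_a = 6 * m * d                              # width of sensor S_a -> 1.8 mm
W_b = 4 * M * sigma                          # width of sensor S_b -> 1.92 mm
N_a, N_b = W_a / delta, W_b / delta          # pixels per side -> 300 and 320

# z_a1F evaluates to 156 mm, i.e. 5.2 * z'_a, so the z_b = 3 mm ghost image is
# strongly misfocused on the actual sensor plane z'_a = 30 mm.
```

The pixel counts per side, $N_a=300$ and $N_b=320$, add up to the total of 620 used in the comparison with a standard plenoptic camera below.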
We shall now compare the above results with those achievable by a standard plenoptic camera having the same pixel size and total number of pixels per side ($N_{\mathrm{tot}}=N_a+N_b=620$). To this end, we introduce the parameter $\alpha=S'_i/S_i$, given by the ratio between the distance $S'_i$ from the focusing element to the image plane and the actual distance $S_i$ between the focusing element (imaging lens) and the detector. Generally, perfect refocusing is possible if [5]
$$\left|1-\frac{1}{\alpha}\right|<\frac{\Delta x}{\Delta u}, \tag{21}$$
where $\Delta x$ is the minimum distance that can be resolved on the image plane, and $\Delta u$ the minimum distance that can be resolved on the imaging lens. In a standard plenoptic camera with pixels of size $\delta$, the image resolution is given by $\Delta x^{(p)}=2\delta N_u^{(p)}$, with $N_u^{(p)}$ the number of pixels per side devoted to the angular measurement, while each pixel corresponds to an area of width $\Delta u^{(p)}=2D_s/N_u^{(p)}$ on the lens, with $D_s$ the lens diameter. Hence,
$$\left(\frac{\Delta x}{\Delta u}\right)^{(p)}=\frac{\delta}{D_s}\left(N_u^{(p)}\right)^2. \tag{22}$$
In CPI, instead, $\Delta x^{(c)}=2\delta$, since pixels of width $\delta$ can be used also to retrieve the image. On the other hand, the resolution on the imaging lens is given by $\Delta u^{(c)}=2D'_s/N_b$, where $D'_s$ is the effective diameter of the lens $L_a$, which can be obtained by properly rescaling the size $D_s$ of the pump profile:
$$D'_s=D_s\left(1+\frac{z_a}{z_b}\right). \tag{23}$$
In this case, the right-hand side of the perfect refocusing condition given in Equation (21) reads
$$\left(\frac{\Delta x}{\Delta u}\right)^{(c)}=\frac{\delta}{D'_s}\,N_b. \tag{24}$$
Hence, the maximum achievable depth of focus in the setup employed for the simulation reported in Figure 3 is $|1-1/\alpha|<0.26$. A standard plenoptic camera with the same pixel size and total number of pixels per side would achieve this same depth of focus provided $N_u^{(p)}=18$ pixels are employed for the angular resolution; this condition imposes a loss of spatial resolution by a factor of 18 ($\Delta x^{(p)}=0.1\,\mathrm{mm}$) with respect to the CPI protocol considered above.
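The comparison above can be reproduced numerically. The sketch below follows our reading of Equations (21)–(24): it infers the effective lens diameter $D'_s$ from the stated CPI bound, and assumes the same effective aperture enters the standard-camera formula of Equation (22); both are labeled assumptions.

```python
import math

# Depth-of-focus comparison between CPI and a standard plenoptic camera with
# the same pixel size and total pixel number (lengths in mm).
delta, N_b = 6e-3, 320        # pixel size and number of angular pixels (CPI)
z_a, z_b = 10.0, 3.0
bound = 0.26                  # stated CPI refocusing bound: |1 - 1/alpha| < 0.26

D_s_eff = delta * N_b / bound           # D'_s inferred from Eq. (24): ~7.4 mm
D_s = D_s_eff / (1 + z_a / z_b)         # pump diameter via Eq. (23): ~1.7 mm

# Standard camera matching the same bound with the same effective aperture
# (our assumption): delta * N_u^2 / D'_s = bound  =>  N_u^2 = N_b
N_u = math.sqrt(bound * D_s_eff / delta)   # ~17.9, i.e. 18 angular pixels
```

Under these assumptions the required angular resolution is $N_u^{(p)}=\sqrt{N_b}\approx 18$ pixels, matching the value quoted in the text, and the spatial resolution of the standard camera degrades by the same factor.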

4. Discussion

At the heart of the refocusing capability of the second order correlation function of Equation (10) is the larger depth of focus of the coherent ghost image (Equation (13)) with respect to the incoherent ghost image (Equation (12)), as reported in Figure 4. In fact, the maximum achievable depth of focus of the proposed CPI scheme is the result of the increased depth of focus of coherent ghost imaging with respect to incoherent ghost imaging.
This can be better understood by considering the origin of both the out-of-focus and the refocused image: The first is obtained by integrating the out-of-focus coherent image (Equations (10), (15), or (18)) over the whole sensor $S_b$, exactly as a bucket detector of standard ghost imaging would do; the second is obtained by integrating, over the same sensor $S_b$, the rescaled version of such an out-of-focus coherent image, as indicated in Equation (20). Now, as shown in Figure 5, the out-of-focus coherent image is a projection of the focused image (hence, it is either enlarged or reduced with respect to it), as seen from the viewpoint defined by the specific value of $\boldsymbol{\rho}_b$. The integration of all such coherent images over the whole sensor $S_b$ implies the overlap of all the projections taken from the different viewpoints $\boldsymbol{\rho}_b$; the resulting incoherent image is thus characterized by a loss of resolution, namely, it appears out of focus. The rescaled coherent image restores the correct size of the focused image and, most importantly, tilts the image in such a way as to cancel the specific viewpoint from which it was taken. As a consequence, the integration of all such rescaled coherent images over the whole sensor $S_b$ no longer has a detrimental effect on the resolution of the resulting incoherent image; the post-processed image thus appears refocused.
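The shift-and-sum mechanism just described can be visualized with a minimal geometrical-optics simulation in one transverse dimension (a NumPy sketch; the smooth double-bump object, the grid, and the viewpoint sampling are our illustrative choices, with `u` standing for $\rho_b/M$):

```python
import numpy as np

# 1D toy model of Eqs. (16), (18)-(20): coherent geometrical images from
# different viewpoints u = rho_b / M are summed either directly (out-of-focus
# image) or after the rescaling of Eq. (19) (refocused image). Lengths in mm;
# the object |A|^2 is a pair of smooth bumps mimicking a double slit.
z_b, z_bF, m = 3.0, 10.0, 1.5

def A2(x_o):
    return np.exp(-((x_o - 0.3) / 0.05) ** 2) + np.exp(-((x_o + 0.3) / 0.05) ** 2)

x_a = np.linspace(-2.0, 2.0, 2001)      # coordinate on sensor S_a
views = np.linspace(-0.6, 0.6, 41)      # viewpoints u on the source

def coherent_image(xa, u):
    # object point selected by the stationary phase, Eq. (16)
    x_o = -(z_b / z_bF) * xa / m - u * (1 - z_b / z_bF)
    return A2(x_o)

# plain sum over viewpoints: shifted projections overlap -> blurred image
blurred = sum(coherent_image(x_a, u) for u in views)

# rescaled sum, Eq. (20): every viewpoint maps a given object point to the
# same pixel, so the two bumps stay sharp
refocused = sum(coherent_image((z_bF / z_b) * x_a + m * u * (1 - z_bF / z_b), u)
                for u in views)
```

In this toy model the rescaled sum reproduces, up to the overall factor given by the number of viewpoints, the sharp profile $|A(-x_a/m)|^2$, while the plain sum smears the two bumps across the sensor.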

5. Conclusions and Outlook

In view of practical applications, it is worth mentioning that all the above results apply to both reflective and transmissive objects. In addition, in contrast with chaotic light, entangled photons from SPDC enable us to employ different wavelengths in the two arms of the setup: Light illuminating the object is not required to have the same spectrum as the light remotely detected by $S_a$ to retrieve the desired image [32,33]. This is quite interesting in view of applications requiring specific illumination wavelengths for the object. In this scenario, one may choose two different sensors to maximize the detection efficiency.
As plenoptic imaging is being broadly adopted in diverse fields such as digital photography [6,7,8], microscopy [3,4], and 3D imaging, sensing and rendering [2], our proposed scheme has direct applications in several biomedical and engineering fields. Interestingly, the coherent nature of the correlation plenoptic imaging technique may lead to innovative coherent microscopy modalities.

Acknowledgments

This work has been supported by the MIUR project P.O.N. RICERCA E COMPETITIVITA’ 2007-2013 - Avviso n. 713/Ric. del 29/10/2010, Titolo II - “Sviluppo/Potenziamento di DAT e di LPP” (project n. PON02-00576-3333585), the INFN through the project “QUANTUM”, the UMD Tier 1 program and the Ministry of Science of Korea, under the “ICT Consilience Creative Program” (IITP-2015-R0346-15-1007).

Author Contributions

Milena D’Angelo, Giuliano Scarcelli and Augusto Garuccio conceived and designed the proposed scheme for CPI; Francesco V. Pepe performed the theoretical calculation; Francesco Di Lena performed the simulation; Milena D’Angelo and Francesco V. Pepe wrote the paper. All authors have read and approved the final manuscript.

Conflicts of Interest

The authors declare no conflict of interest. The funding sponsors had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.

References

1. Adelson, E.H.; Wang, J.Y.A. Single Lens Stereo with a Plenoptic Camera. IEEE Trans. Pattern Anal. Mach. Intell. 1992, 14, 99–106.
2. Xiao, X.; Javidi, B.; Martinez-Corral, M.; Stern, A. Advances in three-dimensional integral imaging: Sensing, display, and applications. Appl. Opt. 2013, 52, 546–560.
3. Broxton, M.; Grosenick, L.; Yang, S.; Cohen, N.; Andalman, A.; Deisseroth, K.; Levoy, M. Wave optics theory and 3-D deconvolution for the light field microscope. Opt. Express 2013, 21, 25418–25439.
4. Prevedel, R.; Yoon, Y.-G.; Hoffmann, M.; Pak, N.; Wetzstein, G.; Kato, S.; Schrödel, T.; Raskar, R.; Zimmer, M.; Boyden, E.S.; et al. Simultaneous whole-animal 3D imaging of neuronal activity using light-field microscopy. Nat. Methods 2014, 11, 727–730.
5. Ng, R.; Levoy, M.; Brédif, M.; Duval, G.; Horowitz, M.; Hanrahan, P. Light Field Photography with a Hand-Held Plenoptic Camera; Tech Report CSTR 2005-02; Stanford University Computer Science: Stanford, CA, USA, 2005.
6. Lytro ILLUM. Available online: https://www.lytro.com/illum (accessed on 2 June 2016).
7. Raytrix. Available online: http://www.raytrix.de/ (accessed on 2 June 2016).
8. 3D capture for the next generation. Available online: http://www.pelicanimaging.com (accessed on 2 June 2016).
9. Liu, H.; Jonas, E.; Tian, L.; Jingshan, Z.; Recht, B.; Waller, L. 3D imaging in volumetric scattering media using phase-space measurements. Opt. Express 2015, 23, 14461–14471.
10. Muenzel, S.; Fleischer, J.W. Enhancing layered 3D displays with a lens. Appl. Opt. 2013, 52, D97–D101.
11. Levoy, M.; Hanrahan, P. Light field rendering. In Computer Graphics Annual Conference Series, Proceedings of the SIGGRAPH 1996, New Orleans, LA, USA, 4–9 August 1996; ACM SIGGRAPH: New York, NY, USA, 1996; pp. 31–42.
12. Levoy, M.; Ng, R.; Adams, A.; Footer, M.; Horowitz, M. Light field microscopy. ACM Trans. Graph. 2006, 25, 924–934.
13. Glastre, W.; Hugon, O.; Jacquin, O.; de Chatellus, H.G.; Lacot, E. Demonstration of a plenoptic microscope based on laser optical feedback imaging. Opt. Express 2013, 21, 7294–7303.
14. Waller, L.; Situ, G.; Fleischer, J.W. Phase-space measurement and coherence synthesis of optical beams. Nat. Photonics 2012, 6, 474–479.
15. Georgiev, T.; Zheng, K.C.; Curless, B.; Salesin, D.; Nayar, S.; Intwala, C. Spatio-Angular Resolution Tradeoff in Integral Photography. In Eurographics Symposium on Rendering (2006); Akenine-Möller, T., Heidrich, W., Eds.; The Eurographics Association: Geneva, Switzerland, 2006.
16. Schroff, S.A.; Berkner, K. Image formation analysis and high resolution image reconstruction for plenoptic imaging systems. Appl. Opt. 2013, 52, D22–D31.
17. Pérez, J.; Magdaleno, E.; Pérez, F.; Rodríguez, M.; Hernández, D.; Corrales, J. Super-Resolution in Plenoptic Cameras Using FPGAs. Sensors 2014, 14, 8669–8685.
18. D’Angelo, M.; Pepe, F.V.; Garuccio, A.; Scarcelli, G. Correlation Plenoptic Imaging. Phys. Rev. Lett. 2016, 116, 223602.
19. Pepe, F.V.; Scarcelli, G.; Garuccio, A.; D’Angelo, M. Plenoptic imaging with second-order correlations of light. Quantum Meas. Quantum Metrol. 2016, 3, 20–26.
20. Ferri, F.; Magatti, D.; Gatti, A.; Bache, M.; Brambilla, E.; Lugiato, L.A. High-Resolution Ghost Image and Ghost Diffraction Experiments with Thermal Light. Phys. Rev. Lett. 2005, 94, 183602.
21. Brida, G.; Chekhova, M.V.; Fornaro, G.A.; Genovese, M.; Lopaeva, L.; Ruo Berchera, I. Systematic analysis of signal-to-noise ratio in bipartite ghost imaging with classical and quantum light. Phys. Rev. A 2011, 83, 063807.
22. Brida, G.; Genovese, M.; Ruo Berchera, I. Experimental realization of sub-shot-noise quantum imaging. Nat. Photonics 2010, 4, 227–230.
23. Klyshko, D.N. Photons and Nonlinear Optics; CRC Press: Boca Raton, FL, USA, 1988.
24. D’Angelo, M.; Valencia, A.; Rubin, M.H.; Shih, Y.H. Resolution of quantum and classical ghost imaging. Phys. Rev. A 2005, 72, 013810.
25. D’Angelo, M.; Shih, Y.H. Quantum Imaging. Laser Phys. Lett. 2005, 2, 567–596.
26. Scully, M.O.; Zubairy, M.S. Quantum Optics, 1st ed.; Cambridge University Press: Cambridge, UK, 1997.
27. Rubin, M.H.; Klyshko, D.N.; Shih, Y.H.; Sergienko, A.V. Theory of two-photon entanglement in type-II optical parametric down-conversion. Phys. Rev. A 1994, 50.
28. Rubin, M.H. Transverse correlation in optical spontaneous parametric down-conversion. Phys. Rev. A 1996, 54.
29. Burlakov, A.V.; Chekhova, M.V.; Klyshko, D.N.; Kulik, S.P.; Penin, A.N.; Shih, Y.H.; Strekalov, D.V. Interference effects in spontaneous two-photon parametric scattering from two macroscopic regions. Phys. Rev. A 1997, 56.
30. Kim, Y.-H. Measurement of one-photon and two-photon wave packets in spontaneous parametric downconversion. J. Opt. Soc. Am. B 2003, 20, 1959–1966.
31. Baek, S.-Y.; Kim, Y.-H. Spectral properties of entangled photon pairs generated via frequency-degenerate type-I spontaneous parametric down-conversion. Phys. Rev. A 2008, 77, 043807.
32. Rubin, M.H.; Shih, Y. Resolution of ghost imaging for nondegenerate spontaneous parametric down-conversion. Phys. Rev. A 2008, 78, 033836.
33. Karmakar, S.; Shih, Y. Two-color ghost imaging with enhanced angular resolving power. Phys. Rev. A 2010, 81, 033845.
34. Goodman, J.W. Introduction to Fourier Optics, 2nd ed.; McGraw-Hill Science/Engineering/Math: New York, NY, USA, 1996.
35. Pittman, T.B.; Shih, Y.H.; Strekalov, D.V.; Sergienko, A.V. Optical imaging by means of two-photon quantum entanglement. Phys. Rev. A 1995, 52, R3429.
Figure 1. Schematic setup for correlation plenoptic imaging with entangled photons from SPDC. Signal and idler beams emitted by the SPDC source impinge on a beam splitter (BS). Both beams are split into a reflected path a and a transmitted path b. The reflected beam propagates toward the lens L_a, of focal length f, and is refracted toward the high-resolution sensor array S_a. The transmitted beam propagates through the object, which plays the role of the desired scene, and is collected by the lens L_b, of focal length F, before being detected by the high-resolution sensor array S_b. The two sensors are connected to a coincidence counting circuit. On one hand, the distances z_b, z_b′ and z_b″ are chosen in such a way that the source and the sensor S_b lie in conjugate planes of the lens L_b. On the other hand, the distances z_a and z_a′ are such that, when the two-photon thin-lens equation 1/(z_b + z_a) + 1/z_a′ = 1/f is satisfied, a ghost image of the object is retrieved on sensor S_a, triggered by sensor S_b.
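As a numerical sanity check of the geometry in Figure 1, the sketch below solves the two-photon thin-lens equation 1/(z_b + z_a) + 1/z_a′ = 1/f for the sensor distance z_a′ and evaluates the corresponding magnification of the ghost image. The specific distances are illustrative assumptions, not parameters from this paper.

```python
# Hypothetical distances, in metres; any values satisfying the
# two-photon thin-lens equation 1/(z_b + z_a) + 1/z_a' = 1/f work.
f = 0.10    # focal length of lens L_a
z_b = 0.15  # source-to-object distance in arm b
z_a = 0.05  # source-to-lens distance in arm a

# Solve the two-photon thin-lens equation for the sensor distance z_a'
# that brings the ghost image into focus on S_a:
z_a_prime = 1.0 / (1.0 / f - 1.0 / (z_b + z_a))

# Thin-lens magnification of the ghost image (image distance over
# effective object distance z_b + z_a):
M = z_a_prime / (z_b + z_a)

print(z_a_prime, M)  # z_a' ≈ 0.2 m, M ≈ 1
```

With these numbers the effective object distance is z_b + z_a = 0.2 m = 2f, so the ghost image forms at z_a′ = 2f with unit magnification, as expected from ordinary thin-lens imaging applied to the two-photon geometry.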
Figure 2. Unfolded version, or Klyshko picture, of the setup of Figure 1, in the case in which the ghost image of the object is focused on the sensor S_a. By means of coincidence detection, the lens L_a reproduces on the sensor S_a the ghost image of the object, while the source and the sensor S_b lie in conjugate planes of the lens L_b. Based on the advanced-wave model proposed by Klyshko, the effect can be understood by treating the sensor S_b as the light source and the SPDC source as a simple mirror. The solid and the dashed lines represent two-photon amplitudes that pass through the same slit; hence, at second order, they are focused at the same point of sensor S_a. The dashed and the dotted two-photon amplitudes are emitted by the same source point and are thus focused at the same point of sensor S_b.
Figure 3. Comparison between the focused (left), misfocused (center) and refocused (right) images of a two-dimensional object. Intensities are normalized to their maximum value in all panels. The out-of-focus and the refocused images are taken in the same setup, in which the object is displaced from the focused plane by approximately F/3.
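The refocusing illustrated in Figure 3 can be pictured with a generic shift-and-sum scheme: each viewpoint contributes a sub-image whose out-of-focus features are displaced in proportion to the viewpoint coordinate, and realigning the sub-images before summing recovers a sharp image. The sketch below is a schematic illustration of this principle, not the refocusing formula of the paper; the grid, the disparity parameter alpha, and the synthetic Gaussian feature are all assumptions.

```python
import numpy as np

def refocus(subimages, alpha, x):
    """Shift-and-sum refocusing: the sub-image I_u, seen from viewpoint u,
    shows a defocused feature displaced by alpha*u; resampling it at
    x + alpha*u undoes the displacement before averaging."""
    out = np.zeros_like(x)
    for u, I in subimages.items():
        out += np.interp(x + alpha * u, x, I)
    return out / len(subimages)

# Synthetic data: a Gaussian feature at x0, displaced by alpha*u in the
# sub-image of each viewpoint u (all values are illustrative).
x = np.linspace(-1.0, 1.0, 401)
x0, alpha = 0.3, 0.1
subimages = {u: np.exp(-(x - (x0 + alpha * u))**2 / (2 * 0.05**2))
             for u in (-2, -1, 0, 1, 2)}
refocused = refocus(subimages, alpha, x)
# The realigned sum peaks back at x0 = 0.3.
```

Summing the sub-images without the shift (alpha = 0 in `refocus`) instead reproduces the blurred, misfocused image of the central panel of Figure 3.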
Figure 4. Comparison between the coherent and the incoherent ghost image of a single slit of width a = 26 μm, as given by Equations (12) and (13), respectively, in the same setup described in Section 3. Both functions are normalized to their value at ρ_a = 0 for any value of α.
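Figure 4 contrasts coherent and incoherent imaging of the slit. Since Equations (12) and (13) are not reproduced in this excerpt, the sketch below illustrates the generic distinction instead: a coherent image is |t ⊛ h|², whereas an incoherent image is |t|² ⊛ |h|², for an amplitude transmission t and a coherent point-spread function h. The sinc-shaped PSF and its resolution scale are assumptions chosen for illustration, not the functions of the paper.

```python
import numpy as np

a = 26e-6                                # slit width, as in Figure 4
x = np.linspace(-3 * a, 3 * a, 2001)     # transverse coordinate grid
dx = x[1] - x[0]
t = (np.abs(x) <= a / 2).astype(float)   # single-slit transmission
res = 10e-6                              # assumed resolution scale
h = np.sinc(x / res)                     # assumed coherent PSF

# Coherent image: modulus squared of the field convolution t ⊛ h.
coherent = np.abs(np.convolve(t, h, mode="same") * dx) ** 2
# Incoherent image: convolution of the intensities |t|² ⊛ |h|².
incoherent = np.convolve(t ** 2, np.abs(h) ** 2, mode="same") * dx

# Normalize each profile to its value at the center (ρ_a = 0),
# mirroring the normalization used in Figure 4.
c0 = len(x) // 2
coherent /= coherent[c0]
incoherent /= incoherent[c0]
```

The coherent profile exhibits the ringing (oscillatory sidelobes) typical of amplitude-level interference, while the incoherent profile is a smooth, non-negative broadening of the slit.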
Figure 5. Observation of a double slit of width a = 0.2 mm and center-to-center distance 2a from two different points of view. Here, the setting is one-dimensional, with the same parameters as the setup described in Section 3. The coherent ghost images of Equation (10) enable us to change the point of view on any out-of-focus plane by selecting the point ρ_b on the sensor S_b, corresponding to the source point ρ_s = ρ_b/M. In this case, the axis of the double slit coincides with the optical axis, and the chosen points on S_b are ρ_b = −Mσ (solid line, on the right) and ρ_b = +Mσ (dashed line, on the left).