Before discussing holographic techniques, we begin with a quick overview of non-holographic techniques. A well-known example of a non-holographic technique for turning bad light into good light is adaptive optics24, in which an aberrated wavefront is detected with a wavefront sensor and corrected adaptively with a deformable mirror. With the advent of high-resolution spatial light modulators (SLMs) for phase modulation, various techniques for correcting distorted wavefronts with more complex phase structures have been demonstrated25,26. It should be noted, however, that adaptive optics works effectively only within a restricted field of view (i.e., the isoplanatic patch) over which the aberration does not change; in other words, the point spread function of the overall imaging system, including the random medium, retains shift invariance or isoplanatism. Astronomers are now making efforts to solve this problem using multi-conjugate adaptive optics27. A random medium is said to have a memory effect28 when its scattered field exhibits shift invariance with respect to an angular shift of the incident light. The memory effect is known to be closely related to the thickness of the random medium; a strongly scattering medium can have a memory effect as long as its scattering layer is thin (like ground glass), but it loses the memory effect as its thickness increases. The concept of the memory effect also plays an important role in other techniques, as explained later.
Another type of non-holographic technique takes a mathematical approach to the problem, in which the object information is retrieved numerically from the bad light by solving what is, in a broad sense, an inverse problem. Specific techniques include model-based parametrization of the physical process for the solution search using optimization algorithms29,30, single-pixel imaging based on orthogonal pattern projection31,32, machine learning33, and neural networks34,35. These techniques are now enjoying rapid progress, and their success depends on how well the physical process and prior knowledge are integrated into the mathematical model.
The basic functions of adaptive optics (i.e., wavefront detection and correction) are included in the principle of holography. Wavefront detection is performed in the natural course of hologram recording because the hologram is nothing but an interferogram in which the wavefront,
${\phi _O}({\boldsymbol{r}})$ , of the object beam is recorded as$$ \begin{split} t({\boldsymbol{r}}) \propto & |{u_O}({\boldsymbol{r}}) + {u_R}({\boldsymbol{r}}){|^2}\; = \;|{u_O}({\boldsymbol{r}}){|^2} + 1 \\ & + |{u_O}({\boldsymbol{r}})|\exp [i{\phi _O}({\boldsymbol{r}})]\exp ( - i{\boldsymbol{k}} \cdot {\boldsymbol{r}}) \\ & + |{u_O}({\boldsymbol{r}})|\exp [ - i{\phi _O}({\boldsymbol{r}})]\exp (i{\boldsymbol{k}} \cdot {\boldsymbol{r}}) \end{split} $$ (1) where
$t({\boldsymbol{r}})$ is the amplitude transmittance of the hologram, ${u_O}({\boldsymbol{r}})$ is the object beam, and ${u_R}({\boldsymbol{r}})$ is the reference plane wave with unit amplitude, $|{u_R}({\boldsymbol{r}})| = 1$, and phase, ${\phi _R} = {\boldsymbol{k}} \cdot {\boldsymbol{r}}$, for off-axis holography. Also recorded in the hologram is the phase-conjugated object beam, ${u_O}^* = |{u_O}|\exp ( - i{\phi _O})$, with the reversed sign of phase, which plays a key role in the wavefront correction, as explained below.

As shown schematically in Fig. 3, the light from a coherently illuminated object is perturbed by a random medium and then recorded with a reference plane wave,
$\exp (i{\boldsymbol{k}} \cdot {\boldsymbol{r}})$, in a hologram with the amplitude transmittance given by Eq. 1. When the hologram is read out with a counter-propagating plane wave, $\exp ( - i{\boldsymbol{k}} \cdot {\boldsymbol{r}})$, a phase-conjugated object beam, ${u_O}^* = |{u_O}|\exp ( - i{\phi _O})$, is generated from the last term in Eq. 1. Physically, it is a time-reversed object beam that propagates back into the random medium and retraces its path backward through it, like a movie played in reverse. As long as time-reversal symmetry holds for the physical law of wave propagation in the random medium, the light exiting the random medium becomes the original unperturbed object light, although with the sign of its phase reversed, and converges to form a real image of the object. Thus, bad light is turned into good light.

Fig. 3 Schematic illustration of the concept and principle of wavefront correction by holographic phase conjugation.
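The time-reversal argument can be checked numerically. The following is a minimal 1D sketch (a toy model of our own, not taken from the cited work) that represents the thin random medium as a multiplicative random phase screen and omits free-space propagation; under these assumptions, phase conjugation undoes the perturbation exactly:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 256

# Object field (coherently illuminated amplitude/phase object).
x = np.linspace(0, 1, n)
obj = (1 + 0.5 * np.cos(8 * np.pi * x)) * np.exp(1j * np.pi * x)

# Thin random medium modeled as a multiplicative random phase screen.
screen = np.exp(1j * rng.uniform(0, 2 * np.pi, n))

# Forward pass: the object light is perturbed by the medium ("bad light").
perturbed = obj * screen

# Holographic phase conjugation: reverse the sign of the phase.
conjugated = np.conj(perturbed)

# The time-reversed beam retraces its path through the SAME screen.
restored = conjugated * screen  # = conj(obj): unperturbed, phase sign reversed

assert np.allclose(restored, np.conj(obj))
```

In a real system, the conjugated beam must also propagate back through the same medium with precise alignment, which is exactly the practical difficulty discussed below.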
The unique feature of holographic phase conjugation compared with adaptive optics is as follows. Referring to Fig. 3, we consider two points, A and B, at different locations on the object. The light from these points, acting as point sources (indicated by rays in black and green), passes through different parts of the random medium and receives different perturbations or wavefront distortions. In conventional adaptive optics, wavefront sensing is performed for a single point source (e.g., a guide star in astronomy), and wavefront correction is made for this single point in the field of view. This means that conventional adaptive optics cannot perform wavefront corrections simultaneously for points A and B, which belong to two isoplanatic patches with different perturbations. On the other hand, in holographic phase conjugation, every object point in the field of view serves as its own guide star for imaging itself with light backtracking through the random medium by virtue of time reversal. Therefore, holographic phase conjugation can have a wider field of view for thick and complex random media than conventional adaptive optics. Note, however, that because this advantage is brought about by the coherent holographic recording of objects, the technique of holographic phase conjugation cannot be applied directly to the imaging of incoherently illuminated objects, such as stars in astronomy and fluorescence-labeled cells in biology.
The technique of wavefront correction by holographic phase conjugation has a long history going back to early seminal work in the 1960s by Kogelnik36 and Leith and Upatnieks37. With the availability of novel holographic recording materials (e.g., photorefractive crystals) and high-resolution SLMs, new implementations of holographic phase conjugation have been proposed and demonstrated for imaging through random media, including optical phase conjugation using an Fe-doped LiNbO3 photorefractive crystal38, and a digital optical phase conjugation system39,40 based on the combination of a high-resolution image sensor and an SLM for digital wavefront sensing and correction.
The holographic phase conjugation described above requires duplication of the randomly perturbed object beam, with its phase sign reversed, in the reconstruction process. It further requires precise alignment between the hologram and the random medium for the reconstructed phase-conjugated beam to retrace exactly the same path backward through the random medium. Also, the corrected final image is not available downstream of the hologram but is formed upstream, at the location of the original object itself; a solution to this third problem was given by Kogelnik and Pennington41.
In their seminal paper in 1966, Goodman et al. proposed yet another holographic technique42 (i.e., common-path holographic recording), which is free from the problems mentioned above. Fig. 4 shows the concept and principle of wavefront correction via quasi-common-path holographic recording. For simplicity, the object is assumed to be a single point source at O, although the principle applies to other points on the object. A reference point source is placed at point R, such that the distance, OR, between the object and reference points subtends a small field angle, θ, seen from a recording point, H, on the hologram. Suppose that the random medium is placed at a distance, z, from the hologram; then the object and reference rays received at the recording point, H, have emerged from the random medium with a relative lateral shift
$z\theta $ at the exit. Let Δ represent the lateral correlation length of the perturbed optical field exiting from the random medium, such that Δ is the average linear dimension over which the wavefront emerging from the random medium is nearly constant. If the angular distance between the object and the reference satisfies $\theta < \Delta /z$, then the object and reference rays have passed through identical (i.e., highly correlated) portions of the random medium. In other words, this realizes a geometry for common-path holographic recording in which the common distortions of the reference and object waves cancel each other through interference. The condition, $\theta < \Delta /z$, sets the maximum available field angle. Given a correlation length, Δ (determined by the physical characteristics of the random medium), the field of view can be maximized for $z \approx 0$ by recording the hologram immediately behind the random medium. From the hologram recorded in this technique, the image can be reconstructed in the same manner as with a conventional hologram, without special attention to alignment, and it can be observed downstream of the hologram.

Fig. 4 Schematic illustration of the concept and principle of wavefront correction by common-path holographic recording.
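The cancellation of the common distortion can be illustrated with a short numerical sketch (a 1D toy model of our own, assuming both beams acquire exactly the same phase screen, i.e., the regime θ < Δ/z):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 256
x = np.arange(n)

obj = (1 + 0.3 * np.sin(2 * np.pi * x / 40)) * np.exp(1j * 2 * np.pi * 0.05 * x)
ref = np.exp(1j * 2 * np.pi * 0.12 * x)  # off-axis reference wave

# Common random distortion picked up by BOTH beams on the way to the hologram.
common = np.exp(1j * rng.uniform(0, 2 * np.pi, n))

hologram_distorted = np.abs(obj * common + ref * common) ** 2
hologram_clean = np.abs(obj + ref) ** 2

# The common phase distortion cancels in the intensity interferogram.
assert np.allclose(hologram_distorted, hologram_clean)
```

Because the recording is an intensity interferogram, any phase factor common to both beams drops out, which is why the hologram can be recorded as if the random medium were absent.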
After the early experimental demonstration of long-distance holographic imagery through air turbulence by Goodman et al.43, the concept and principle of common-path holographic recording were combined with digital holography to make new advancements44−46.
As preliminaries for the explanation of holographic correloscopy7, let us start with the principle of non-holographic imaging through a thin scattering medium based on speckle intensity correlation47,48. As shown in Fig. 5, the light from an object is scattered by a thin but strong scattering medium placed at a distance,
$\hat z$ , from the object, and the scattered field is observed on an observation plane at a distance, z, from the scattering medium. If the object is illuminated with spatially incoherent quasi-monochromatic light, and the scattering medium has a memory effect, the intensity distribution on the observation plane is given by$$ I({\boldsymbol{r}}) = S({\boldsymbol{r}},{\boldsymbol{\hat r}}) * O({\boldsymbol{\hat r}}) = \int {S({\boldsymbol{r}} - {\boldsymbol{\hat r}})} \,O({\boldsymbol{\hat r}})d{\boldsymbol{\hat r}} $$ (2) where
$S({\boldsymbol{r}},{\boldsymbol{\hat r}})$ is the shift-invariant point spread function or the intensity speckle pattern created by a point source at point${\boldsymbol{\hat r}}$ on the object,$ O({\boldsymbol{\hat r}}) $ is the object intensity distribution, and$ * $ is the convolution operation with the angularly scaled coordinates,${\boldsymbol{r}} = z\,{\boldsymbol{\theta }}$ , and$ {\boldsymbol{\hat r}} = - \hat z\,{\boldsymbol{\theta }} $ ; the convolution represents the angular memory effect. We can now compute the autocorrelation of the observed intensity distribution.$$ \begin{split} I({\boldsymbol{r}}) \otimes I({\boldsymbol{r}}) = &\left[ {S({\boldsymbol{r}},{\boldsymbol{\hat r}}) * O({\boldsymbol{\hat r}})} \right] \otimes \left[ {S({\boldsymbol{r}},{\boldsymbol{\hat r}}) * O({\boldsymbol{\hat r}})} \right] \\ =& \left[ {S({\boldsymbol{r}},{\boldsymbol{\hat r}}) \otimes S({\boldsymbol{r}},{\boldsymbol{\hat r}})} \right] * \left[ {O({\boldsymbol{\hat r}}) \otimes O({\boldsymbol{\hat r}})} \right] \end{split} $$ (3) where
$ \otimes $ denotes autocorrelation; the passage to the second line of Eq. 3 can easily be confirmed from the corresponding relationship in the spectral domain. Remembering that $S({\boldsymbol{r}},{\boldsymbol{\hat r}})$ is the speckle pattern created by a point source at point ${\boldsymbol{\hat r}}$ on the object, we note that $S({\boldsymbol{r}},{\boldsymbol{\hat r}}) \otimes S({\boldsymbol{r}},{\boldsymbol{\hat r}})$ has a sharp delta-correlation peak, with its width corresponding to the average speckle size, which stands on a constant background resulting from the nonnegativity of $S({\boldsymbol{r}},{\boldsymbol{\hat r}})$. By writing$$ S({\boldsymbol{r}},{\boldsymbol{\hat r}}) \otimes S({\boldsymbol{r}},{\boldsymbol{\hat r}}) \approx \delta ({\boldsymbol{\hat r}}) + {\rm{const}}. $$ (4) we obtain from Eq. 3,
$$ I({\boldsymbol{r}}) \otimes I({\boldsymbol{r}}) \approx O({\boldsymbol{\hat r}}) \otimes O({\boldsymbol{\hat r}}) + {\rm{const}}{\rm{. }} $$ (5) which states that, apart from the contrast reduction caused by the constant background, the intensity correlation of the scattered field gives the autocorrelation function of the intensity distribution of the hidden object. Hence, the remaining problem is determining the object intensity from its autocorrelation function. This is mathematically equivalent to the phase recovery problem of the power spectrum studied in the field of X-ray diffraction. Bertolotti et al.47 and Katz et al.48 used iterative phase-retrieval algorithms, such as the Fienup algorithm49, which make use of a priori knowledge about the object, as constraints in the solution search.
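The key step from the first to the second line of Eq. 3 (the autocorrelation of a convolution equals the convolution of the autocorrelations) can be verified numerically. The sketch below is a toy model of our own, using circular (FFT-based) convolution and an exponentially distributed, delta-correlated, speckle-like point spread function:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 512

def conv(a, b):
    # circular convolution via the Fourier domain
    return np.fft.ifft(np.fft.fft(a) * np.fft.fft(b)).real

def xcorr(a, b):
    # circular cross-correlation: sum over x of a(x + r) * b(x)
    return np.fft.ifft(np.fft.fft(a) * np.conj(np.fft.fft(b))).real

# Hidden object: a few incoherent point sources (intensity distribution).
O = np.zeros(n)
O[[100, 120, 150]] = [1.0, 0.6, 0.8]

# Speckle-like, nonnegative, delta-correlated intensity PSF.
S = rng.exponential(1.0, n)

I = conv(S, O)  # camera-plane intensity, Eq. 2

# Correlation theorem behind Eq. 3: (S*O) x (S*O) = (S x S) * (O x O)
lhs = xcorr(I, I)
rhs = conv(xcorr(S, S), xcorr(O, O))
assert np.allclose(lhs, rhs)

# S x S peaks sharply at zero lag, on a constant background (Eq. 4).
assert np.argmax(xcorr(S, S)) == 0
```

The delta-peak-plus-background approximation of Eq. 4 then carries the speckle autocorrelation over to the object autocorrelation of Eq. 5.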
We are now ready to explain holographic correloscopy7,50 as another solution for phase recovery, remembering that holography was invented as a technique to preserve phase information that was lost by intensity recording in traditional photography. We add a reference point source at point
${{\boldsymbol{\hat r}}_0}$, as shown in Fig. 6, so that the intensity distribution in the object space becomes
$$ {I_O}({\boldsymbol{\hat r}}) = O({\boldsymbol{\hat r}}) + \delta ({\boldsymbol{\hat r}} - {{\boldsymbol{\hat r}}_0}) $$ (6)
Fig. 6 Holographic correloscopy for imaging through a scattering medium based on speckle intensity correlation.
Note that this is the intensity distribution under spatially incoherent illumination, and it differs from the familiar complex-amplitude distribution under coherent illumination, such as that shown in Fig. 4. Replacing
$ O({\boldsymbol{\hat r}}) $ in Eq. 5 with ${I_O}({\boldsymbol{\hat r}})$, we have$$ \begin{split} I({\boldsymbol{r}}) \otimes I({\boldsymbol{r}}) \approx &\left[ {O({\boldsymbol{\hat r}}) + \delta ({\boldsymbol{\hat r}} - {{{\boldsymbol{\hat r}}}_0})} \right] \otimes \left[ {O({\boldsymbol{\hat r}}) + \delta ({\boldsymbol{\hat r}} - {{{\boldsymbol{\hat r}}}_0})} \right] + {\rm{const}}. \\ \approx & O({\boldsymbol{\hat r}} + {{{\boldsymbol{\hat r}}}_0}) + O\left( { - ({\boldsymbol{\hat r}} - {{{\boldsymbol{\hat r}}}_0})} \right) \\&+ O({\boldsymbol{\hat r}}) \otimes O({\boldsymbol{\hat r}}) + \delta ({\boldsymbol{\hat r}}) + {\rm{const.}} \end{split} $$ (7) As in conventional holography, the first term,
$ O({\boldsymbol{\hat r}} + {{\boldsymbol{\hat r}}_0}) $, gives a laterally shifted image of the object, whereas the second term, $ O( - ({\boldsymbol{\hat r}} - {{\boldsymbol{\hat r}}_0})) $, gives a symmetrically shifted image. We regard the remaining terms as noise appearing in the center and background. Thus, we can reconstruct the image using speckle intensity correlation. Singh et al.50 demonstrated full 3D holographic correloscopy by making use of the memory effect in the axial direction.

Eq. 7 suggests that the real essence of the principle lies in the cross-correlation term between the object and reference,
$ \delta ({\boldsymbol{\hat r}} - {{\boldsymbol{\hat r}}_0}) \otimes O({\boldsymbol{\hat r}}) $, which generates the original image, corresponding to the interference between the reference and object beams in conventional holography. Then, an idea naturally arises: record the speckle pattern, $ S({\boldsymbol{r}},{\boldsymbol{\hat r}}) $, or the point spread function, in advance with a reference point source only, and then compute its cross-correlation with the speckle pattern, $ I({\boldsymbol{r}}) $, of the object:$$ \begin{split} S({\boldsymbol{r}},{\boldsymbol{\hat r}}) \otimes I({\boldsymbol{r}}) =& S({\boldsymbol{r}},{\boldsymbol{\hat r}}) \otimes \left[ {S({\boldsymbol{r}},{\boldsymbol{\hat r}}) * O({\boldsymbol{\hat r}})} \right] \\ = &\left[ {S({\boldsymbol{r}},{\boldsymbol{\hat r}}) \otimes S({\boldsymbol{r}},{\boldsymbol{\hat r}})} \right] * O({\boldsymbol{\hat r}})\\ \approx & O({\boldsymbol{\hat r}}) + {\rm{const}}. \end{split} $$ (8) where use is made of the relation in Eq. 4. In fact, this idea was devised by Freund51 in 1990 in his seminal paper on the wall lens, which largely influenced recent research in scatter-based lensless microscopy52. This cross-correlation technique requires two steps of speckle recording; therefore, it is not suitable for imaging through dynamic random media, but it has the advantage of removing the noise terms. The principles explained so far are based on intensity correlation under spatially incoherent illumination. Next, we consider the case of field correlation under coherent illumination.
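The two-step cross-correlation of Eq. 8 can be sketched numerically (again a 1D toy model of our own with circular convolution; all names and values are illustrative): the prerecorded point-spread speckle is cross-correlated with the object speckle, and the object points emerge on a roughly constant background.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 4096

def conv(a, b):
    # circular convolution (toy stand-in for the shift-invariant imaging of Eq. 2)
    return np.fft.ifft(np.fft.fft(a) * np.fft.fft(b)).real

def xcorr(a, b):
    # circular cross-correlation: sum over x of a(x + r) * b(x)
    return np.fft.ifft(np.fft.fft(a) * np.conj(np.fft.fft(b))).real

# Hidden object: three incoherent point sources.
O = np.zeros(n)
O[[100, 120, 150]] = [1.0, 0.6, 0.8]

# Step 1: prerecord the speckle PSF with the reference point source only.
S = rng.exponential(1.0, n)

# Step 2: record the object speckle and cross-correlate with the PSF.
I = conv(S, O)           # object speckle on the observation plane (Eq. 2)
recovered = xcorr(I, S)  # Eq. 8: (S x S) * O  ~  O + const.

# The three brightest correlation peaks sit at the object positions.
top3 = set(np.argsort(recovered)[-3:])
assert top3 == {100, 120, 150}
```

The constant background stems from the nonnegativity of the intensity PSF (Eq. 4); the peaks ride on it with a contrast that improves with the number of speckle grains averaged.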
Referring to Fig. 6, a similar discussion can be made by replacing the intensity variables (denoted by uppercase letters) with the complex-amplitude variables (denoted by the corresponding lowercase letters, apart from the field on the observation plane denoted by
$u({\boldsymbol{r}})$ ). Assuming the memory effect, the complex field on the observation plane is given by$$ u({\boldsymbol{r}}) = s({\boldsymbol{r}},{\boldsymbol{\hat r}}) * {u_O}({\boldsymbol{\hat r}}) $$ (9) where
$s({\boldsymbol{r}},{\boldsymbol{\hat r}})$ is the amplitude point spread function, and${u_O}({\boldsymbol{\hat r}})$ is the field resulting from the superposition of the object and the reference point source.$$ {u_O}({\boldsymbol{\hat r}}) = o({\boldsymbol{\hat r}}) + \delta ({\boldsymbol{\hat r}} - {{\boldsymbol{\hat r}}_0}) $$ (10) The autocorrelation function (i.e., the spatial coherence function) of the field on the observation plane is given by
$$ \begin{split} \Gamma ({\boldsymbol{r}}) =& u({\boldsymbol{r}}) \otimes u({\boldsymbol{r}}) \\ =& \left[ {s({\boldsymbol{r}},{\boldsymbol{\hat r}}) * {u_O}({\boldsymbol{\hat r}})} \right] \otimes \left[ {s({\boldsymbol{r}},{\boldsymbol{\hat r}}) * {u_O}({\boldsymbol{\hat r}})} \right] \\ =& \left[ {s({\boldsymbol{r}},{\boldsymbol{\hat r}}) \otimes s({\boldsymbol{r}},{\boldsymbol{\hat r}})} \right] * \left[ {{u_O}({\boldsymbol{\hat r}}) \otimes {u_O}({\boldsymbol{\hat r}})} \right] \\ \end{split} $$ (11) An important difference from the previous intensity case is that the autocorrelation of the complex-valued speckle field does not have an unwanted constant background.
$$ s({\boldsymbol{r}},{\boldsymbol{\hat r}}) \otimes s({\boldsymbol{r}},{\boldsymbol{\hat r}}) \approx \delta ({\boldsymbol{\hat r}}) $$ (12) so that we have, from Eqs. 10−12,
$$ \begin{split} \Gamma ({\boldsymbol{r}}) \approx & \left[ {o({\boldsymbol{\hat r}}) + \delta ({\boldsymbol{\hat r}} - {{{\boldsymbol{\hat r}}}_0})} \right] \otimes \left[ {o({\boldsymbol{\hat r}}) + \delta ({\boldsymbol{\hat r}} - {{{\boldsymbol{\hat r}}}_0})} \right] \\ \approx & \,o({\boldsymbol{\hat r}} + {{{\boldsymbol{\hat r}}}_0}) + o\left( { - ({\boldsymbol{\hat r}} - {{{\boldsymbol{\hat r}}}_0})} \right) + o({\boldsymbol{\hat r}}) \otimes o({\boldsymbol{\hat r}}) + \delta ({\boldsymbol{\hat r}}) \end{split} $$ (13) Thus, the object hidden behind the scattering medium can be reconstructed from the spatial coherence function of the scattered optical field. The most important difference from the intensity correlation is that we can reconstruct the complex amplitude of the object, including the phase information, which permits the holographic interferometry of an object hidden behind the scattering medium44.
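The coherent case of Eqs. 9−13 can be sketched in the same way, now with a complex circular-Gaussian speckle amplitude whose zero mean removes the constant background; the model, positions, and tolerances below are our own illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(4)
n = 4096

def conv(a, b):
    return np.fft.ifft(np.fft.fft(a) * np.fft.fft(b))

def xcorr(a, b):
    # complex circular correlation: sum over x of a(x + r) * conj(b(x))
    return np.fft.ifft(np.fft.fft(a) * np.conj(np.fft.fft(b)))

# Complex object point (amplitude AND phase) plus reference point source, Eq. 10.
a = 0.8 * np.exp(0.5j)
uO = np.zeros(n, complex)
uO[100] = a     # object point
uO[1000] = 1.0  # reference point source at r0

# Delta-correlated complex Gaussian amplitude PSF (zero mean -> no background).
s = (rng.normal(size=n) + 1j * rng.normal(size=n)) / np.sqrt(2)

u = conv(s, uO)  # scattered field on the observation plane, Eq. 9

# Eq. 11 holds exactly for circular correlations.
Gamma = xcorr(u, u)
assert np.allclose(Gamma, conv(xcorr(s, s), xcorr(uO, uO)))

# Eq. 12: sharp peak with no constant background (unlike the intensity case).
ac = xcorr(s, s)
assert np.abs(ac[1:]).max() < ac[0].real / 5

# Eq. 13: the shifted image term returns the COMPLEX object value, phase included.
assert abs(Gamma[(100 - 1000) % n] / ac[0].real - a) < 0.2
```

The last assertion is the payoff of the field-correlation approach: the reconstructed value carries the object phase, which the intensity-correlation method cannot provide.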
So far, the principle of correloscopy has been explained without providing a clear explanation of the definition of the correlation operation denoted by
$ \otimes $. Mathematically, the correlation between statistical signals is defined using the ensemble average as $g({\boldsymbol{r}}) \otimes g({\boldsymbol{r}}) = < g({\boldsymbol{r}}' + {\boldsymbol{r}})g^*({\boldsymbol{r}}') > $. In practice, however, the ensemble average, <…>, needs to be replaced either by the temporal average or by the spatial average, assuming the stationarity of the statistical process53. Although the temporal average is suitable for imaging with time-varying speckle fields through a dynamic scattering medium, such as rotating ground glass, the spatial average is used in imaging through a static scattering medium, as is often the case with speckle intensity correlation.

Here, we show two different ways to implement the principle of field-correlation correloscopy. Fig. 7 shows an example of a fully optical implementation based on a shearing interferometer. The spatial coherence function of the scattered field is detected and visualized in real time as the contrast and shift of interference fringes using a radial shearing interferometer54. This scheme, which originated from coherence holography55, is suitable for imaging through a dynamic scattering medium that permits spatial coherence detection through the time averaging of the speckle fields with the image sensor as an integrator.
Fig. 7 Interferometric implementation of holographic correloscopy for imaging through a scattering medium.
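The replacement of the ensemble average by a temporal average over frames of a dynamic medium can be illustrated numerically (a toy model of our own, assuming a delta-correlated complex speckle amplitude): averaging over many independent realizations drives the speckle autocorrelation toward the ideal delta peak.

```python
import numpy as np

rng = np.random.default_rng(5)
n, M = 1024, 200

def autocorr(a):
    # complex circular autocorrelation via the Fourier domain
    return np.fft.ifft(np.fft.fft(a) * np.conj(np.fft.fft(a)))

def speckle_field():
    # one realization of a delta-correlated complex speckle amplitude PSF
    return (rng.normal(size=n) + 1j * rng.normal(size=n)) / np.sqrt(2)

# Single realization: the delta-peak approximation is noisy.
single = autocorr(speckle_field())
single_ratio = np.abs(single[1:]).max() / single[0].real

# Temporal average over M frames of a dynamic medium: sidelobes average down.
avg = np.mean([autocorr(speckle_field()) for _ in range(M)], axis=0)
avg_ratio = np.abs(avg[1:]).max() / avg[0].real

assert avg_ratio < single_ratio
```

This is why an image sensor acting as a time integrator can serve as the averaging device when the scatterer is in fast motion.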
For the case of a dynamic scattering medium, such as rotating ground glass, we can find a much simpler solution than the interferometric implementation (shown in Fig. 7) in which the coherence function,
$\Gamma ({\boldsymbol{r}})$, of the scattered field was detected on the observation plane, so as to follow the result in Eq. 13. Let us first note that the thin but strong scattering medium in fast motion destroys the coherence of both the object and the reference beams and serves as a spatially incoherent extended source with the light intensity distribution, ${I_S}({\boldsymbol{\tilde r}})$. Next, let us note further that, from the van Cittert-Zernike theorem1, the coherence function of the scattered field, $\Gamma ({\boldsymbol{r}})$, can be obtained from the Fourier transform of the incoherent source intensity distribution, ${I_S}({\boldsymbol{\tilde r}})$. Now, we have reached a simpler solution44. Instead of detecting the coherence function, we detect the intensity distribution of the field exiting the scattering medium by using imaging optics focused on the scattering surface, as illustrated in Fig. 8. Because the scattering medium is thin, the intensity distribution, ${I_S}({\boldsymbol{\tilde r}})$, represents the lensless Fourier-transform hologram created by the object and the point source. Hence, we can understand, even without knowledge of the van Cittert-Zernike theorem, that the object can be reconstructed by computing the inverse Fourier transform of the intensity distribution, ${I_S}({\boldsymbol{\tilde r}})$, detected with an image sensor; the time-averaging operation takes place over one frame period of the image sensor, which averages out speckles.

It is of interest to note that this simple implementation (remote imaging digital holography7) of holographic correloscopy can be interpreted as an extreme case of common-path holography (shown in Fig. 4), in which the locations of the random medium and the hologram are identical with
$z = 0$ , so that the perfect common-path geometry is realized.
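The simple implementation amounts to recording a lensless Fourier-transform hologram on the scatterer and inverse-Fourier-transforming it. A minimal numerical sketch (1D toy model of our own, with the discrete FFT standing in for free-space propagation; positions and values are illustrative):

```python
import numpy as np

n = 1024
r0 = 300                    # position of the reference point source
a = 0.8 * np.exp(1j * 0.7)  # complex object point (amplitude and phase)

uO = np.zeros(n, complex)
uO[100] = a   # "object": a single point, for clarity
uO[r0] = 1.0  # reference point source

# The fast-moving thin scatterer acts as an incoherent source whose intensity
# is the lensless Fourier-transform hologram of object + reference
# (van Cittert-Zernike theorem).
I_S = np.abs(np.fft.fft(uO)) ** 2

# Inverse Fourier transform reconstructs the shifted object and its twin.
recon = np.fft.ifft(I_S)

assert np.isclose(recon[(100 - r0) % n], a)           # shifted object image
assert np.isclose(recon[(r0 - 100) % n], np.conj(a))  # twin image
```

The twin image and the zero-order terms appear exactly as in Eq. 13, and the complex value of the object point, including its phase, is recovered.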
Holographic 3D Imaging through Random Media: Methodologies and Challenges
- Light: Advanced Manufacturing 3, Article number: (2022)
- Received: 09 September 2021
- Revised: 26 January 2022
- Accepted: 08 February 2022
- Published online: 01 May 2022
doi: https://doi.org/10.37188/lam.2022.014
Abstract: Imaging through random media continues to be a challenging problem of crucial importance in a wide range of fields of science and technology, ranging from telescopic imaging through atmospheric turbulence in astronomy to microscopic imaging through scattering tissues in biology. To meet the scope of this anniversary issue in holography, this review places a special focus on holographic techniques and their unique functionality, which play a pivotal role in imaging through random media. This review comprises two parts. The first part is intended to be a mini tutorial in which we first identify the true nature of the problems encountered in imaging through random media. We then explain through a methodological analysis how unique functions of holography can be exploited to provide practical solutions to problems. The second part introduces specific examples of experimental implementations for different principles of holographic techniques, along with their performance results, which were taken from some of our recent work.
Research Summary
Holographic 3D imaging through random media: Holography unveils hidden objects
Imaging through random media is a challenging problem of crucial importance in a wide range of fields of science and technology, ranging from telescopic imaging through atmospheric turbulence in astronomy to microscopic imaging through scattering tissues in biology. Mitsuo Takeda from Utsunomiya University, Wolfgang Osten from Stuttgart University, and Eriko Watanabe from the University of Electro-Communications review principles and techniques for holographic imaging through random media. First, the true nature of the problems encountered in imaging through random media is identified. Then, through a methodological analysis, how unique functions of holography can be exploited to provide practical solutions to problems is explained. Specific examples of experimental implementations for different principles of holographic techniques, along with their performance results, are introduced.
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The images or other third party material in this article are included in the article′s Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the article′s Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this license, visit http://creativecommons.org/licenses/by/4.0/.