-
As has been observed repeatedly in the past22, 32-34, 36, LSFM can suffer from optical artifacts caused by non-fluorescent absorbing materials. There is no intrinsic solution to this issue in LSFM because such materials cannot be characterized directly by fluorescence imaging. We demonstrated that our novel method of attenuation correction in LSFM using OPTiSPIM data can significantly reduce these artifacts, restoring biological details to the image (such as fine nerve structures) and recovering overall greyscale levels very similar to those of the unattenuated control. It achieves this improvement by directly measuring the absorption rather than attempting to avoid it. As the comparison of the embryonic mouse head in Fig. 4a–c shows, our corrected version of the data (Fig. 4b) is clearly much more similar to the unattenuated region of the control image (Fig. 4c) than to the raw data (Fig. 4a). Similarly, for the in situ stained lymph nodes in Supplementary Fig. 3, the corrected data give a more accurate representation of the lymph node than the uncorrected data.
Some residual shadow artifacts may remain even after correction is applied (the dark horizontal line extending to the right from the bottom of the eye in Fig. 4b and the vertical streaks in Supplementary Fig. 3d). These generally occur where the attenuation reduces the fluorescence signal to or below background levels. Because our correction is intended to enhance only "real" signal, our implementation (see the Materials and methods section and Eq. 16) explicitly suppresses amplification in these regions; a direct application of the Beer–Lambert law (see the Materials and methods section and Eq. 9) would instead amplify background/noise there. Our use of Eq. 16 rather than Eq. 9 was motivated by the "first, do no harm" principle: in regions where we suspect the measured signal to be simply background or noise, rather than amplify this "signal", we chose to suppress the amplification that the Beer–Lambert law would suggest and retain the raw, measured values. See Supplementary Text Section 2 and Supplementary Fig. 2 for further information. Such artifacts nevertheless remain in only a small region of the corrected image, and approaches for reducing or removing them may be developed in the future.
One of the advantages of the attenuation correction system presented here is that it can be used as a complement to previous methods developed to avoid attenuation artifacts. Both mSPIM and multi-view LSFM are capable of reducing these artifacts, but their effectiveness can be compromised in samples with complex geometries (see the schematic in Fig. 1a). In this paper, we show that our method is compatible with single-sided mSPIM (described by Eq. 5 in the Materials and methods section), and the generalization to multi-sided illumination and multi-view imaging should be straightforward. Although "self-healing" light sheets (e.g., using Bessel or Airy beams) can help to reduce illumination artifacts, neither they nor the mSPIM technique can deal with artifacts caused by attenuation of the detected fluorescence. Since the OPTiSPIM-based method described here relies on measurement and correction of the attenuation, as opposed to approaches that attempt to "view around" attenuating features, we expect that it will serve as a complementary addition to self-healing light sheets and mSPIM. Table 1 summarizes important approaches that have been described to combat attenuation artifacts in LSFM data, together with some of their benefits and limitations.
Table 1. Methods to alleviate attenuation in LSFM

Chemical clearing
• Benefits: reduces optical scattering; compatible with other (non-LSFM) optical microscopy methods
• Limitations: not compatible with live imaging; complete clearing difficult with large/dense samples; protocols can be slow (weeks) and use toxic reagents; most protocols do not reduce absorption
• References: 19, 27, 34

Purely computational methods
• Benefits: require no extra imaging hardware/data acquisition
• Limitations: discrepancies between the theory used in corrections and practical imaging conditions can introduce artifacts
• References: Supp. Mat. 3, Supp. Mat. 4

Multi-photon excitation
• Benefits: reduced scattering of excitation light; reduced photo-toxicity; "all-optical" method
• Limitations: does not correct artifacts in the detected fluorescence; depth of imaging limited (< 1 mm)
• References: 22, 35

"Self-healing" excitation
• Benefits: "all-optical" method
• Limitations: does not correct artifacts in the detected fluorescence
• References: 35, 36, 37

Multi-view SPIM
• Benefits: can also improve resolution
• Limitations: sequential view acquisition (slow); computational post-processing required
• References: 18, 33

mSPIM
• Benefits: "all-optical" method; straightforward, economical implementation
• Limitations: does not correct artifacts in the detected fluorescence
• References: 32, 39

Multi-arm LSFM
• Benefits: high data acquisition rates
• Limitations: requires complicated, expensive hardware
• References: 40-42, 43

OPTiSPIM
• Benefits: corrects regions where absorption cannot be avoided; 3D attenuation maps available "for free"
• Limitations: computational post-processing required; (currently) custom-built setup required; OPT of small samples (< 100 µm) may require methods to extend the depth of field
• References: present work
The samples considered in this study were either optically cleared (biological specimens) or intrinsically very low scattering (the fluorescent beads in aqueous agarose, Fig. 2). In these cases, as discussed in the Materials and methods section, Eq. 2 provides a good model of the attenuation. However, many applications of LSFM involve imaging of living samples that cannot be optically cleared by the methods we employed. These live samples can introduce two types of problems that were avoided in this study. First, our method requires generating both LSFM and OPT scans of the sample, which will limit the temporal resolution that is achievable (compared to LSFM alone). Although this may be a significant issue when high speed is critical, tOPT has an advantage because it does not rely on fluorescence contrast, and thus very short exposure times can be used. Bassi et al.47 demonstrated combining LSFM and tOPT for living zebrafish embryos; thus, at least in this widely used model organism, we expect that our method will be applicable. The second issue that may arise with our method when imaging live, uncleared biological samples is that refraction/scattering may be significant enough that Eq. 2 is no longer valid. In this case, our attenuation model (and tOPT apparatus) would have to be modified to account for (and quantify) the sample's refractive index variations. Such a method might be possible by implementing a diffraction tomography system48; however, this is beyond the scope of this study.
In principle, our method for correcting attenuation artifacts can be applied to other microscopy techniques besides LSFM, such as confocal microscopy. Computationally, all that would be required would be a change in the integration paths in Eqs. 5 and 6 (see the Materials and methods section). However, we are unaware of any imaging system besides the OPTiSPIM that allows the collection of both fluorescence data and a map of the attenuating features of the sample.
In summary, we present a novel method for the correction of attenuation artifacts in LSFM that takes advantage of two different imaging modalities: (1) the measurement of fluorescence data (via the SPIM mode of OPTiSPIM) and the distribution of the attenuation coefficient (via tOPT) and (2) the computational correction of the former by using a physical model based on the latter. Our method is easy to incorporate into most LSFM platforms that allow sample rotation. Importantly, the proposed method is compatible with previously published techniques for attenuation artifact correction and can act as a complement to techniques such as mSPIM and multi-view LSFM imaging.
-
Imaging was performed using the OPTiSPIM setup described in Mayer et al.46. Briefly, for SPIM illumination, a single arm employing a cylindrical lens to create the light sheet was used. Detection was via a CCD camera coupled to a telecentric optical lens system. The sample was mounted from above and suspended in an imaging chamber located at the intersection of the illumination and detection arms. Within the imaging chamber, the sample can be translated along the three orthogonal spatial axes and rotated about the vertical axis; these degrees of freedom permit both OPT (rotational) and SPIM (translational) scanning. A schematic of the setup is shown in Supplementary Fig. 4.
-
We used tOPT to reconstruct the 3D map of the attenuation coefficient of the sample. OPT is designed so that a raw image measured in transmission mode is the shadow projection of the sample onto the camera. Because diffraction limits both the imaging resolution and the depth of field, without using techniques to extend the depth of field, OPT is generally best suited to samples larger than ~100 µm (see Supplementary Text Section 4 for a discussion of this issue)5. Quantitative reconstruction of the attenuation requires that some light be transmitted through the sample; for regions that are completely opaque, no information is available.
To correct for attenuation, we first consider the Beer–Lambert law:49
$$ I = I_0 \cdot \exp \left( { - \alpha \cdot x} \right) $$ (1) where I0 is the incident intensity, α is the attenuation coefficient, and x is the thickness of the object. This formula represents the case for spatially uniform attenuation; in a more general case where the attenuation can vary spatially, the product α·x becomes a path integral along a light ray:
$$ I\left( {\mathop{r}\limits^{\rightharpoonup} } \right) = I_{0} \cdot \exp \left( {\mathop {\int}\limits_{ - \infty }^{\mathop{r}\limits^{\rightharpoonup} } { - \alpha \left( {\mathop{s}\limits^{\rightharpoonup} } \right) \cdot {{d}}\mathop{s}\limits^{\rightharpoonup} } } \right) $$ (2) where $\alpha \left({\mathop{r}\limits^{\rightharpoonup} } \right)$ is the attenuation coefficient at position $\mathop{r}\limits^{\rightharpoonup}$ in the sample. Here, we assume that the imaging processes can be described by a ray optics model, that is, diffraction and refraction are not taken into account. This may be responsible for minor artifacts in the corrected data when imaging at a high resolution using high NA optics or in samples with significant variations of refractive indices (e.g., see the "stripe" artifact extending from the bottom of the eye in Fig. 4b and the discussion in the Results section).
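In discrete form, the path integral in Eq. 2 becomes a cumulative sum over the voxels a ray has traversed. A minimal numpy sketch (the function name and sampling scheme are ours, for illustration only):

```python
import numpy as np

def transmitted_intensity(alpha, dx, i0=1.0):
    """Discrete Beer-Lambert law (Eq. 2) along a single ray.

    alpha : 1D array of attenuation coefficients sampled every dx
    Returns the intensity remaining after each successive voxel.
    """
    optical_depth = np.cumsum(alpha) * dx  # running path integral of alpha
    return i0 * np.exp(-optical_depth)
```

For a uniform absorber this reproduces Eq. 1: with α = 2 mm⁻¹ over a 1 mm path, the exit intensity is exp(−2) ≈ 0.135 of the incident intensity.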
Equation 2 (and the following equations that are based on it) contains the implicit assumption that photons attenuated by the sample do not contribute to the image formation process. This will be the case when the attenuation is due to absorption during tOPT imaging of the sample (we neglect the possibility of significant fluorescence emission subsequent to the absorption, which can be eliminated by spectral filtering). Attenuation via scattering can also be modeled by Eq. 2, provided that the scattered light is not collected by the imaging optics. However, samples that can scatter light in such a way that it does contribute to the imaging process (e.g., back-reflected light or diffuse scattering in turbid media) will not be correctly modeled by Eq. 2. A correct treatment of these types of samples would require a more detailed model of the scattering process, which is beyond the scope of this paper. However, even with this restriction, there is a wide range of biological samples for which attenuation can be corrected via this method.
We take advantage of the fact that a reconstructed tOPT data set is a good approximation to the attenuation coefficient, $\alpha \left({\mathop{r}\limits^{\rightharpoonup} } \right)$, of the sample50. Thus, the calculation of the effect of the attenuation on the fluorescence SPIM image—what we term the AM—can be based on the tOPT reconstruction, $\alpha \left({\mathop{r}\limits^{\rightharpoonup} } \right)$. This approximation may fail for very strongly absorbing regions of the sample: as the measured value of $I\left({\mathop{r}\limits^{\rightharpoonup} } \right)$ in Eq. 2 approaches zero, the back-projection algorithm that is used to calculate $\alpha \left({\mathop{r}\limits^{\rightharpoonup} } \right)$ becomes less accurate.
The formation of a fluorescence image can be thought of as the combination of two processes: light from the excitation source (in the case of LSFM, the light sheet) must propagate to (and excite) the fluorophore to be imaged, and the light emitted by the fluorophore must propagate to the detector (for LSFM, a camera). This geometry is sketched in Fig. 2b.
We first consider the simpler process of LSFM illumination: the light sheet is modeled as a non-diffracting plane of light, which we consider to be propagating along the x-axis of the microscope. In this case, we rewrite Eq. 2 to define the illumination AM, ${{AM}}_{{ill}}$:
$$ {{AM}}_{{ill}}\left( {x, y, z} \right) = \frac{{I\left( {x, y, z} \right)}}{{I_0}} = \exp \left( { - \mathop {\int}\limits_{ - \infty }^x {\alpha \left( {x{'}, y, z} \right) \cdot {{d}}x{'}} } \right) $$ (3) Integration is performed up to point x where the fluorophore under consideration is located. We approximate the light sheet as an infinitesimally thin plane of light:
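On a voxel grid, Eq. 3 can be evaluated for the whole volume at once with a cumulative sum along the propagation axis. A sketch assuming the array is ordered alpha[x, y, z] (our convention, not prescribed by the text):

```python
import numpy as np

def am_ill_static(alpha, dx):
    """Illumination attenuation map (Eq. 3) for a static light sheet
    propagating along +x (axis 0 of the alpha array)."""
    depth = np.cumsum(alpha, axis=0) * dx  # integral of alpha up to each voxel
    return np.exp(-depth)
```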
$$ I\left( {x, y, z} \right) = I_0 \cdot \delta \left( z \right) \cdot H\left( {y, \Delta y} \right) $$ (4) where $\delta \left( z \right)$ is a delta function, $H\left( {y, \Delta y} \right) = \begin{cases} 1, & \left| y \right| < \Delta y \\ 0, & \text{otherwise} \end{cases}$, and $2\Delta y$ is the height of the light sheet.
If a resonant scan mirror (RSM) is used to tilt the light sheet, as in mSPIM32, we can modify Eq. 3 to account for this:
$$ \begin{array}{l}{{AM}}_{{ill}}\left( {x{{, }}y{{, }}z} \right) = \frac{1}{{\varphi _{{max}} - \varphi _{{min}}}}\\ \quad {\int}_{\varphi _{{min}}}^{\varphi _{{max}}} {\exp \left( -{\mathop {\int}\limits_{ - \infty }^x {{{Rot}}\left( {\alpha \left( {x' , y, z} \right), \varphi ' } \right)} \cdot {{d}}x' } \right) \cdot {{d}}\varphi ' } \end{array} $$ (5) where {φmin, φmax} is the range of angles through which the light sheet is scanned by the RSM, and ${{Rot}}\left({\alpha, \varphi } \right)$ denotes a function that rotates the 3D distribution of the attenuation coefficient, $\alpha \left({\mathop{r}\limits^{\rightharpoonup} } \right)$, by angle φ in the plane of the light sheet (the xz plane). For a static light sheet with φmin = φmax = 0, Eq. 5 reduces to Eq. 3.
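Eq. 5 can be approximated numerically by rotating the sampled attenuation volume for each pivot angle, applying the static correction, and averaging. A sketch using scipy.ndimage.rotate (the angle sampling and axis conventions here are our assumptions):

```python
import numpy as np
from scipy.ndimage import rotate

def am_ill_mspim(alpha, dx, angles_deg):
    """mSPIM illumination attenuation map (Eq. 5): average the static
    AM_ill (Eq. 3) over the sampled light-sheet pivot angles.

    alpha      : 3D array alpha[x, y, z]
    angles_deg : pivot angles in degrees (e.g., np.linspace(-10, 10, 5))
    """
    acc = np.zeros(alpha.shape, dtype=float)
    for phi in angles_deg:
        # Rot(alpha, phi): rotate alpha in the plane of the light sheet (x-z)
        a_rot = rotate(alpha, phi, axes=(0, 2), reshape=False, order=1)
        acc += np.exp(-np.cumsum(a_rot, axis=0) * dx)
    return acc / len(angles_deg)
```

For a single angle of 0°, this reduces to the static Eq. 3, as noted in the text.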
For LSFM detection, the emitted light is collected over the entire aperture cone of the objective lens used for detection (see Fig. 2b). As with the illumination, we assume that the emitted light travels along straight paths (neglecting scattering and refraction), but in a sample of non-uniform attenuation, each distinct path within the detection cone may pass through regions with different attenuation. The effect of the attenuation on the fluorescence emitted by a fluorophore at {x, y, z} is therefore the integral over all the paths within the detection cone. For convenience, we perform the integral using polar coordinates centered at {x, y, z}:
$$ {{AM}}_{{det}}\left( {x, y, z} \right) = \frac{1}{{2\pi \vartheta _{{max}}}}{\int}_0^{\vartheta _{{max}}} {\int}_{ - \pi }^\pi {\exp \left( { - {\int}_0^{r_{{max}}} {\alpha \left( {r, \varphi , \vartheta } \right) \cdot {{d}}r} } \right) \cdot {{d}}\varphi \cdot {{d}}\vartheta } $$ (6) As Eq. 6 requires a triple integral, whereas ${{AM}}_{{ill}}$ requires only a single (Eq. 3) or a double (Eq. 5) integral, the determination of ${{AM}}_{{det}}$ is the more computationally intensive calculation.
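A direct (if slow) way to evaluate Eq. 6 for one fluorophore is to march along rays sampled over the detection cone and average their transmissions. A coarse nearest-neighbour sketch, taking the detection axis along +z (all sampling choices here are ours, not the paper's implementation):

```python
import numpy as np

def am_det_point(alpha, point, dr, theta_max, n_theta=6, n_phi=12):
    """Eq. 6 for a single fluorophore at `point` (voxel indices):
    average transmission over the detection cone of half-angle theta_max,
    with the cone axis taken along +z (axis 2 of the alpha array)."""
    origin = np.asarray(point, dtype=float)
    shape = np.array(alpha.shape)
    transmissions = []
    for theta in np.linspace(0.0, theta_max, n_theta):
        for phi in np.linspace(-np.pi, np.pi, n_phi, endpoint=False):
            # unit ray direction within the detection cone
            d = np.array([np.sin(theta) * np.cos(phi),
                          np.sin(theta) * np.sin(phi),
                          np.cos(theta)])
            depth, r = 0.0, 0.0
            while True:
                idx = np.round(origin + r * d).astype(int)
                if np.any(idx < 0) or np.any(idx >= shape):
                    break  # ray has left the volume
                depth += alpha[tuple(idx)] * dr  # accumulate optical depth
                r += dr
            transmissions.append(np.exp(-depth))
    return float(np.mean(transmissions))
```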
Because excitation and emission are independent processes, once ${{AM}}_{{ill}}$ and ${{AM}}_{{det}}$ have been calculated, the total AM of the complete LSFM imaging process is given by their product:
$$ {{AM}} = {{AM}}_{{ill}} \cdot {{AM}}_{{det}} $$ (7) Having determined the AM, we next consider the form of the detected fluorescence signal that will be generated by an LSFM measurement. To a good approximation, this is given by
$$ F_{{det}} = {{AM}} \cdot F_0 + B $$ (8) where F0 is the "real" signal (that we want to recover), and B is the background signal. B represents the fact that during the fluorescence imaging process the measured signal may have received a contribution that is not directly related to the concentration of the fluorophore at point {x, y, z} being imaged. Examples of processes that would contribute to this contamination are background room lights that are not completely blocked or thermal noise in the CCD detector. In practice, B can be determined by measuring the mean detected signal in a region of a fluorescence image outside the sample (where it is known that there are no fluorophores present). This equation for $F_{{det}}$ is easily inverted to solve for F0:
$$ F_0 = (F_{{det}} - B)/{{AM}} $$ (9) Although the variables in the above equations (and those that follow) are often 3D matrices, the operations $X \cdot Y$ (multiplication), $X/Y$ (division), and $X^{ - 1}$ (inverse) are performed element-wise rather than as matrix operations. Supplementary Fig. 5 graphically illustrates the steps involved in collecting and processing the data. Fig. 2c–j depicts these steps using experimentally measured data from a simple fluorescent beads-and-ink phantom.
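Eqs. 8 and 9 translate directly into element-wise array operations; a minimal sketch (the function name is ours):

```python
import numpy as np

def correct_direct(f_det, b, am):
    """Direct inversion of the image-formation model (Eq. 9):
    F0 = (F_det - B) / AM, applied element-wise to whole stacks."""
    return (f_det - b) / am
```

Where AM approaches zero, this division amplifies whatever survives the background subtraction, which is exactly the instability that motivates the modified form in Eq. 16.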
Although Eq. 9 is theoretically valid, we found in practice that several modifications to it result in more stable and accurate corrections of attenuation artifacts in LSFM images.
-
First, considering the form of Eqs. 3–7, it is clear that for a finite α, the value(s) of the AM(s) will fall within the range $0\, < \, {{AM}} \le 1$. ${{AM}}(x, y, z) = 1$ implies that the fluorescent signal from point {x, y, z} in the sample is unaffected by attenuation, and the smaller the value of ${{AM}}$, the more the fluorescence has been attenuated.
For Eq. 9 to be physically meaningful, we require that the background B be positive and that $F_{{det}} \ge B$. Ideally, these conditions will be satisfied; however, in real measurements, noise may play a significant role. To investigate this role further, we re-formulate Eq. 9 to explicitly account for errors/uncertainties in the various parameters:
$$ \begin{array}{*{20}{c}} {F_0 \pm \Delta F_0 = \frac{{\left( {F_{{det}} \pm \Delta F_{{det}}} \right) - \left( {B \pm \Delta B} \right)}}{{{{AM}} \pm \Delta {{AM}}}}} \\ {F_0 \pm \Delta F_0 = \frac{{\left( {F_{{det}} - B} \right) \pm \sqrt {\Delta F_{{det}}^2 + \Delta B^2} }}{{{{AM}} \pm \Delta {{AM}}}}} \end{array} $$ (10) Because our measurements of $F_{{det}}$, B, and ${{AM}}$ are independent, the relative error in the calculation of the signal F0 is
$$ \frac{{\Delta F_0}}{{F_0}} = \sqrt {\frac{{\Delta F_{{det}}^2 + \Delta B^2}}{{\left( {F_{{det}} - B} \right)^2}} + \frac{{\Delta {{AM}}^2}}{{{{AM}}^2}}} $$ (11) This equation indicates that the error in our calculation of the real signal F0 will be large when $F_{{det}} - B$ or ${{AM}}$ is small, that is, when either the detected signal is close to the background level $\left({F_{{det}}\sim B} \right)$ or when the attenuation is large $\left({{{AM}} \to 0} \right)$. Therefore, we chose to modify Eq. 9 to avoid the high-error regime as follows. We first rewrite Eq. 9 as
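Eq. 11 is straightforward to evaluate numerically; a sketch (function and argument names are ours):

```python
import numpy as np

def relative_error_f0(f_det, d_f_det, b, d_b, am, d_am):
    """Relative uncertainty of the corrected signal F0 (Eq. 11),
    combining the independent errors in F_det, B, and AM in quadrature."""
    return np.sqrt((d_f_det**2 + d_b**2) / (f_det - b)**2
                   + d_am**2 / am**2)
```

As the text notes, the error diverges as F_det → B or AM → 0, the two regimes the weighting factor S is designed to avoid.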
$$ \begin{array}{l}F_0 = \left( {F_{{det}} - B} \right) - \left( {F_{{det}} - B} \right) + \left( {F_{{det}} - B} \right) \cdot \frac{1}{{{AM}}}\\ F_0 = \left( {F_{{det}} - B} \right) + \left( {F_{{det}} - B} \right) \cdot \left( {\frac{1}{{{AM}}} - 1} \right)\end{array} $$ (12) Written this way, the "real" signal F0 is composed of the raw data $\left({F_{{det}} - B} \right)$ plus a term that takes attenuation into account. We next introduce a weighting factor, S, to the second term (the one that compensates attenuation):
$$ F_{{est}} = \left( {F_{{det}} - B} \right) + S \cdot \left( {F_{{det}} - B} \right) \cdot \left( {\frac{1}{{{AM}}} - 1} \right) $$ (13) where $F_{{est}}$ is now our estimate of the real signal, F0. Thus, we define S so that when our attenuation correction is trustworthy, S ≈ 1, and Eq. 13 is a good approximation to Eq. 9. However, in situations in which Eq. 9 may just amplify the noise, we want to have S ≈ 0 so that $F_{{est}} \approx F_{{det}} - B$.
The weighting factor, S, that we use in Eq. 13 should be our best estimate of the likelihood that our measured signal, $F_{{det}}$, is primarily real and not background, that is, we chose S to be
$$ S = \frac{{\text{detected signal in the absence of background}}}{{\text{detected signal}}} $$ (14) to satisfy the above requirements. From Eq. 9, this becomes
$$ \begin{array}{l}S = \frac{{{AM} \cdot F_0}}{{F_{{det}}}}\\ S = \frac{{{{AM}} \cdot \left( {(F_{{det}} - B)/{{AM}}} \right)}}{{F_{{det}}}}\\ S = \frac{{F_{{det}} - B}}{{F_{{det}}}}\end{array} $$ (15) Substituting Eq. 15 into Eq. 13 and simplifying:
$$ \begin{array}{l}F_{{est}} = (F_{{det}} - B) + \frac{{F_{{det}} - B}}{{F_{{det}}}} \cdot \left( {\frac{1}{{{AM}}} - 1} \right) \cdot (F_{{det}} - B)\\ F_{{est}} = (F_{{det}} - B) + \frac{{\left( {F_{{det}} - B} \right)^2}}{{F_{{det}}}} \cdot \left( {\frac{{1 - {{AM}}}}{{{AM}}}} \right)\\ F_{{est}} = \left( {F_{{det}} - B} \right) \cdot \left[ {1 + \frac{{\left( {F_{{det}} - B} \right) \cdot \left( {1 - {{AM}}} \right)}}{{{AM} \cdot F_{{det}}}}} \right]\end{array} $$ (16) This is the equation that we have implemented to perform our attenuation correction calculations. For $F_{{det}} \gg B$ (i.e., when the measured signal is substantially greater than the background and we can trust our method of attenuation correction), Eq. 16 reduces to Eq. 9. Additionally, in the low-attenuation regime where ${{AM}} \to 1$, Eq. 16 becomes $F_{{est}} = F_{{det}} - B$ as expected.
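Eq. 16 is simple to apply element-wise; a numpy sketch (names ours) that also makes the two limiting cases easy to check:

```python
import numpy as np

def correct_attenuation(f_det, b, am):
    """Stabilized attenuation correction (Eq. 16).

    Reduces to the direct inversion (F_det - B)/AM when F_det >> B,
    and to the plain background-subtracted data when AM -> 1."""
    signal = f_det - b
    return signal * (1.0 + signal * (1.0 - am) / (am * f_det))
```

Note that for B = 0 the weighting factor S is exactly 1, so the result coincides with Eq. 9.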
To estimate the value of B for a given experiment, we take "dark" images of the sample, with all filter, camera, and other settings identical to those used for imaging, but with the light sheet power set to zero. In principle, the average signal level in this "dark" image can be taken as the value of B. In practice, we found that better results (i.e., better suppression of noise amplification) were obtained by setting B equal to the mean of the "dark" image signal plus one standard deviation, because this gives a more conservative estimate of the background level.
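The background estimate described above amounts to a single line of numpy; a sketch (our naming):

```python
import numpy as np

def estimate_background(dark_frames):
    """Conservative background level B: mean of the "dark" images
    plus one standard deviation, as described in the text."""
    dark = np.asarray(dark_frames, dtype=float)
    return float(dark.mean() + dark.std())
```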
-
Another issue that has not been explicitly accounted for in either Eq. 9 or Eq. 16 is that, in general, we cannot expect $\alpha \left({\mathop{r}\limits^{\rightharpoonup} } \right)$, and thus the AM, to be wavelength-independent. The extent to which this has a significant effect on our results will depend on the properties of the attenuating material. The ink used in the bead phantom (Fig. 2c–j) does not have a strong spectral dependence, at least in the visible region of the spectrum; however, the NBT/BCIP staining used in the lymph nodes (Supplementary Fig. 3) does have a noticeable chromaticity. This means that when we perform tOPT to generate the AM, we should ensure that the wavelengths used are appropriate. For example, because of the Stokes shift between the excitation and emission wavelengths in fluorescence, ideally, ${{AM}}_{{ill}}$ and ${{AM}}_{{det}}$ will each be generated from their own $\alpha _{{ill}}$ and $\alpha _{{det}}$ at the appropriate wavelengths. In practice, we achieved this by using a halogen lamp as a transmission source and by placing the appropriate filters in the detection path (see the Scanning section below). This results in a slight modification of Eqs. 5 and 6 into the forms
$$ \begin{array}{l}{{AM}}_{{ill}}\left( {x, y, z} \right) = \frac{1}{{\varphi _{{max}} - \varphi _{{min}}}}\\ \quad {\int}_{\varphi _{{min}}}^{\varphi _{{max}}} {\exp \left( { - \mathop {\int}\limits_{ - \infty }^x {{Rot}\left( {\alpha _{{ill}}\left( {x' , y, z} \right), \varphi ' } \right)} \cdot {{d}}x' } \right) \cdot {{d}}\varphi ' } \end{array} $$ (17) and
$$ \begin{array}{l}AM_{{det}}\left( {x, y, z} \right) = \\ \quad \frac{1}{{2\pi \vartheta _{{max}}}}{\int}_0^{\vartheta _{{max}}} {{\int}_{ - \pi }^\pi {\exp \left( { - {\int}_0^{r_{{max}}} {\alpha _{{det}}\left( {r, \varphi , \vartheta } \right) \cdot {{d}}r} } \right) \cdot {{d}}\varphi } \cdot {{d}}\vartheta } \end{array} $$ (18) where the wavelength dependence of the attenuation coefficients is explicit.
Because of hardware constraints, it was not possible to scan the embryonic mouse head (Figs. 1, 3 and 4) using the halogen lamp, and thus we measured our $\alpha \left({\mathop{r}\limits^{\rightharpoonup} } \right)$ at a wavelength (660 nm) that was significantly different from either the excitation (488 nm) or emission (~525 nm) wavelengths of the fluorophores in the sample. We realized that in many OPTiSPIM setups, it may not be possible to generate spectrally accurate $\alpha \left({\mathop{r}\limits^{\rightharpoonup} } \right)$. Thus, we decided to adapt our procedure to take this into account. To make the problem tractable we assumed that, in spectral terms, there is only one important type of attenuating substance in the sample. This is a reasonable assumption in the case of the lymph node shown in Supplementary Fig. 3, where the attenuation is predominantly caused by NBT/BCIP staining, or in the case of the embryonic mouse head in Figs. 1, 3 and 4, where the only significant attenuation is from the eye pigmentation. It would probably not be valid, for example, if we performed in situ NBT/BCIP staining on the mouse head, which would then contain two strong sources of attenuation with presumably uncorrelated spectral properties.
For samples with a single attenuating species, we assumed that a shift in wavelength will result in a rescaling of the attenuation coefficient, but that this rescaling is independent of the position in the sample. Thus, instead of directly applying Eqs. 17 and 18, we first applied the transformations
$$ \alpha _{{ill}} = K_{{ill}} \cdot \alpha _{{measured}} $$ (19) and
$$ \alpha _{{det}} = K_{{det}} \cdot \alpha _{{measured}} $$ (20) where $K_{{ill}}$ and $K_{{det}}$ are factors of proportionality between the attenuation at the measured wavelength and at the illumination and detection wavelengths, respectively.
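Under the single-species assumption, the wavelength adjustment of Eqs. 19 and 20 is a global rescaling of the tOPT map; a trivial sketch (the scale factors K_ill and K_det must be supplied externally, e.g., from the absorber's known spectrum):

```python
import numpy as np

def rescale_attenuation(alpha_measured, k_ill, k_det):
    """Eqs. 19-20: scale the attenuation map measured at one wavelength
    to the illumination and detection wavelengths.  Valid only when a
    single attenuating species dominates, so the spectral ratio is
    spatially constant."""
    alpha = np.asarray(alpha_measured, dtype=float)
    return k_ill * alpha, k_det * alpha
```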
See also Supplementary Text Section 3.