Fast feature identification for holographic tracking: The orientation alignment transform

Bhaskar Jyoti Krishnatreya
David G. Grier
Department of Physics and Center for Soft Matter Research, New York University, New York, NY 10003
Abstract.

The concentric fringe patterns created by features in holograms may be associated with a complex-valued orientational order field. Convolution with an orientational alignment operator then identifies centers of symmetry that correspond to the two-dimensional positions of the features. Feature identification through orientational alignment is reminiscent of voting algorithms such as Hough transforms, but may be implemented with fast convolution methods, and so can be orders of magnitude faster.

Holographic microscopy records information about the spatial distribution of illuminated objects through their influence on the phase and intensity distribution of the light they scatter. This information can be retrieved from a hologram, at least approximately, by reconstructing the three-dimensional light field responsible for the recorded intensity distribution (1); (2). Alternatively, features of interest in a hologram can be interpreted with predictions of the theory of light scattering to obtain exceedingly precise measurements of a scattering object's three-dimensional position, size and refractive index (3). The availability of so much high-quality information about the properties and motions of individual colloidal particles has proved a boon for applications as varied as product quality assessment (4), microrheology (5); (6), porosimetry (7), microrefractometry (8), and flow velocimetry (9); (10), as well as for molecular binding assays (9), and as a tool for fundamental research in statistical physics (11); (12); (13) and materials science (14).

Fitting measured holograms to theoretical predictions requires an initial estimate for each scatterer's position. This can pose challenges for conventional image analysis algorithms because the hologram of a small object consists of alternating bright and dark fringes covering a substantial area in the field of view (9). Here, we introduce a fast, robust and accurate feature-identification algorithm that not only meets the needs of holographic particle tracking, but also should be useful in other image analysis applications.

Figure 1. Feature detection by orientation alignment. (a) Normalized hologram b(\boldsymbol{r}) of a 0.8 \mathrm{\upmu}\mathrm{m}-radius polystyrene sphere in water. (b) Magnitude \left|\nabla b(\boldsymbol{r})\right| of the gradient of the image in (a). (c) The orientation, 2\phi(\boldsymbol{r}), of the gradients. Inset: phase angle of the orientation alignment convolution kernel. (d) Orientation alignment transform of the image in (a). Inset: Schematic representation of how three pixels (colored red) contribute to the real part of the transform. Blue lobes represent real-valued contributions to \Psi(\boldsymbol{r}) associated with each highlighted pixel.

Figure 1(a) shows a typical hologram of a colloidal polystyrene sphere in water. This hologram was recorded with an in-line holographic video microscope (1); (2) using a collimated linearly polarized laser for illumination (Coherent Cube, vacuum wavelength \lambda=447~\mathrm{n}\mathrm{m}). Light scattered by the sphere interferes with the rest of the beam in the focal plane of a microscope objective (Nikon Plan Apo, 100\times oil immersion, numerical aperture 1.4). The objective, in combination with a tube lens, relays the interference pattern to a video camera (NEC TI-324A II) with an effective magnification of 135 \mathrm{n}\mathrm{m}\mathrm{/}pixel. The intensity distribution recorded by the video camera is normalized by a background image (3); (9) to suppress spurious interference fringes. Figure 1(a) shows a 480\times 480 pixel region of the normalized intensity, b(\boldsymbol{r}).

The sphere's hologram features bright and dark circular fringes all centered on a point in the focal plane that coincides with the sphere's center. This point could be identified by performing a circular Hough transform, which additionally would identify the radii of all the rings (15). Hough transforms, however, have a computational complexity of \mathcal{O}\!\left\{ N^{4}\right\} in the number N of pixels on the side of an N\times N image (15), and thus are prohibitively costly. Variants of Hough transforms that identify centers but not radii can achieve a computational complexity of \mathcal{O}\!\left\{ N^{3}\log N\right\} (16).

More efficient searches for centers of rotational symmetry take advantage of the observation that gradients in the intensity of images such as Fig. 1(a) either point toward or away from the centers. Figure 1(b) shows the magnitude, \left|\nabla b(\boldsymbol{r})\right|, of the image's gradient. Each pixel in the gradient image, \nabla b(\boldsymbol{r}), is associated with a direction,

\phi(\boldsymbol{r})=\tan^{{-1}}\left(\frac{\partial _{y}b(\boldsymbol{r})}{\partial _{x}b(\boldsymbol{r})}\right), (1)

relative to the image's \hat{x} axis. Figure 1(c) shows \phi(\boldsymbol{r}) for the image in Fig. 1(a). Each pixel therefore offers information that the center of a feature might lie somewhere along direction \phi(\boldsymbol{r}) relative to its position \boldsymbol{r}. Voting algorithms (9) make use of this information by allowing each pixel to cast votes for pixels along its preferred direction, the votes of all pixels being tallied in an accumulator array. Hough transforms operate on a similar principle, but also incorporate distance information. Pixels in the transformed image that accumulate the most votes then are candidates for center positions, and may be located with sub-pixel accuracy using standard methods of particle tracking (17). Alternatively, each intersection between pixels' votes can be computed directly as the solution of a set of simultaneous equations (18). Voting algorithms typically identify the centers of features such as the example in Fig. 1(a) to within 1/10 pixel. Efficient implementations (9); (18) have a computational complexity of \mathcal{O}\!\left\{ N^{3}\right\}. Achieving this efficiency involves first identifying pixels with the strongest gradients, typically by imposing a threshold on \left|\nabla b(\boldsymbol{r})\right|.

Here, we introduce an alternative to discrete voting algorithms that is based on a continuous transform of the local orientation field. This approach eliminates the need for threshold selection and further reduces the computational burden of localizing circular features in an image. The spatially varying orientation of gradients in b(\boldsymbol{r}) may be described with the two-fold orientational order parameter (19); (20)

\psi(\boldsymbol{r})=\left|\nabla b(\boldsymbol{r})\right|^{2}e^{{2i\phi(\boldsymbol{r})}}. (2)

The factor of two in the exponent accounts for the bidirectional nature of orientation information obtained from gradients, as can be seen in Fig. 1(c). Weighting the order parameter by \left|\nabla b(\boldsymbol{r})\right|^{2} ensures that pixels in regions with stronger gradients contribute more to the estimate for the local orientation.
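For concreteness, \psi(\boldsymbol{r}) is simple to evaluate with standard array tools. The short Python sketch below is an illustration rather than the authors' IDL implementation: it estimates the gradient with central differences, a simple stand-in for the Savitzky-Golay derivative filters discussed below, and uses the identity (\partial_{x}b+i\partial_{y}b)^{2}=\left|\nabla b\right|^{2}e^{2i\phi} to evaluate Eq. (2) without trigonometric calls.

import numpy as np

def orientation_field(b):
    # Two-fold orientational order parameter psi(r) of Eq. (2).
    # np.gradient returns derivatives along rows (y) first, then columns (x).
    dy, dx = np.gradient(b.astype(float))
    # (dx + i dy)^2 has modulus |grad b|^2 and phase 2 phi(r).
    return (dx + 1j*dy)**2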

To identify symmetry-ordained coincidences in the orientation field, we convolve \psi(\boldsymbol{r}) with the two-fold symmetric transformation kernel,

K(\boldsymbol{r})=\frac{1}{r}e^{{-2i\theta}}, (3)

to obtain the orientation alignment transform

\Psi(\boldsymbol{r})=\int K(\boldsymbol{r}-\boldsymbol{r}^{\prime})\psi(\boldsymbol{r}^{\prime})\, d^{2}r^{\prime}. (4)

The phase of K(\boldsymbol{r}) complements the phase of \psi(\boldsymbol{r}), as can be seen in the inset to Fig. 1(c). The integrand of Eq. (4) therefore is real-valued and non-negative along the line \boldsymbol{r}^{\prime}-\boldsymbol{r} that is oriented along \theta=\phi(\boldsymbol{r}^{\prime}), and is complex-valued along other directions. Real-valued contributions directed along gradients of b(\boldsymbol{r}) accumulate at points \boldsymbol{r} in \Psi(\boldsymbol{r}) that are centers of symmetry of the gradient field, as illustrated schematically in the inset to Fig. 1(d). Complex-valued contributions, by contrast, tend to cancel out. Centers of symmetry in b(\boldsymbol{r}) therefore are transformed into centers of brightness in B(\boldsymbol{r})=\left|\Psi(\boldsymbol{r})\right|^{2}, as can be seen in Fig. 1(d). The centroid of the peak then can be identified and located (17).
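For illustration, Eq. (4) can be evaluated as written by sampling the kernel of Eq. (3) on the pixel grid and convolving it with \psi(\boldsymbol{r}). The Python sketch below makes two labeled assumptions: the kernel's singular value at the origin is set to zero, and scipy.signal.fftconvolve serves purely as a convenient convolution routine. The faster spectral formulation described below is preferable in practice.

import numpy as np
from scipy.signal import fftconvolve

def alignment_transform_direct(psi):
    # Evaluate Eq. (4) with the real-space kernel K(r) = exp(-2 i theta)/r.
    ny, nx = psi.shape
    y, x = np.mgrid[-(ny//2):ny - ny//2, -(nx//2):nx - nx//2].astype(float)
    r = np.hypot(x, y)
    theta = np.arctan2(y, x)
    with np.errstate(divide='ignore', invalid='ignore'):
        K = np.exp(-2j*theta)/r
    K[r == 0] = 0          # assumption: regularize the singularity at the origin
    Psi = fftconvolve(psi, K, mode='same')
    return np.abs(Psi)**2  # B(r): centers of symmetry become centers of brightness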

Figure 2. Feature identification in a multi-particle hologram. The greyscale hologram b(\boldsymbol{r}) of 12 colloidal spheres is transformed by the orientation alignment transform into sharply resolved peaks in B(\boldsymbol{r}). The peaks' centers are indicated by crosses plotted over the original image. The scale bar indicates 10 \mathrm{\upmu}\mathrm{m}.

Circular features at larger radii from a center of symmetry subtend more pixels in b(\boldsymbol{r}) and thus would tend to have more influence over the position of the center of brightness in B(\boldsymbol{r}). The factor of 1/r in Eq. (3) ensures that all of the interference fringes comprising the hologram of a sphere contribute with equal weight to the estimate for the centroid.

The orientation alignment transform defined by Eqs. (2), (3) and (4) is related to the Fourier-Mellin transform, which is used to detect geometrically invariant features in images (21); (22). It can be computed efficiently using the Fourier convolution theorem,

\tilde{\Psi}(\boldsymbol{k})=\tilde{K}(\boldsymbol{k})\tilde{\psi}(\boldsymbol{k}), (5)

where \tilde{\psi}(\boldsymbol{k}) is the Fourier transform of \psi(\boldsymbol{r}), and where

\tilde{K}(\boldsymbol{k})=\frac{1}{k}e^{{-2i\theta}} (6)

is the Fourier transform of K(\boldsymbol{r}), with \theta now denoting the polar angle of \boldsymbol{k}. The orientation alignment transform therefore can be calculated by performing a fast Fourier transform (FFT) on \psi(\boldsymbol{r}), multiplying by a precomputed kernel, \tilde{K}(\boldsymbol{k}), and then performing an inverse FFT. Computing the gradient image by convolution with a Savitzky-Golay filter (23) reduces sensitivity to noise in b(\boldsymbol{r}) and can be performed in \mathcal{O}\!\left\{ N^{2}\right\} operations. The transform's overall computational complexity is set by the \mathcal{O}\!\left\{ N^{2}\log N\right\} cost of the forward and inverse FFTs, and so the transform is more efficient than voting algorithms. Rather than requiring sequential analysis of above-threshold pixels, moreover, the orientation alignment transform lends itself to implementation on parallel processors. Our implementation in the IDL programming language achieves real-time performance (30 frames\mathrm{/}\mathrm{s}) on a 2 \mathrm{G}flop\mathrm{/}\mathrm{s} processor for holograms such as the example in Fig. 2.
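In Python, this spectral route might look like the following sketch, again an illustration rather than the authors' IDL code. Here the polar angle of the wavevector plays the role of \theta, the ill-defined k=0 component is discarded, and a brightness-weighted centroid stands in for the tracking methods of Ref. (17).

import numpy as np

def alignment_transform(psi):
    # Eqs. (5) and (6): forward FFT, multiply by the analytic kernel, inverse FFT.
    ny, nx = psi.shape
    ky = np.fft.fftfreq(ny)[:, None]
    kx = np.fft.fftfreq(nx)[None, :]
    k = np.hypot(kx, ky)
    with np.errstate(divide='ignore', invalid='ignore'):
        Ktilde = np.exp(-2j*np.arctan2(ky, kx))/k
    Ktilde[0, 0] = 0       # discard the ill-defined k = 0 term
    Psi = np.fft.ifft2(Ktilde*np.fft.fft2(psi))
    return np.abs(Psi)**2  # B(r)

def peak_position(B, w=5):
    # Brightness-weighted centroid of the strongest peak, refined within a
    # (2w+1)-pixel window; assumes the peak lies at least w pixels from the edge.
    iy, ix = np.unravel_index(np.argmax(B), B.shape)
    win = B[iy - w:iy + w + 1, ix - w:ix + w + 1]
    yy, xx = np.mgrid[-w:w + 1, -w:w + 1]
    return ix + (xx*win).sum()/win.sum(), iy + (yy*win).sum()/win.sum()

Chaining the two stages, B = alignment_transform(orientation_field(b)) maps a normalized hologram onto a brightness map such as the one in Fig. 1(d).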

Figure 2 illustrates the orientation alignment transform's performance for identifying and locating multiple particles in a single image simultaneously. This hologram records twelve 3 \mathrm{\upmu}\mathrm{m}-diameter colloidal silica spheres that were arranged in four different planes using holographic optical tweezers (24). Despite interference between the spheres' scattering patterns and uncorrected motion artifacts in the hologram, each sphere's contribution to the hologram is resolved into a single peak by the orientation alignment transform. The peaks' locations are identified by crosses superimposed on b(\boldsymbol{r}), and constitute estimates for the particles' two-dimensional positions in that hologram.

The widths and heights of peaks in B(\boldsymbol{r}) depend on the particles' axial positions, as can be seen in Fig. 2. Particles closer to the focal plane are transformed into sharper, brighter peaks. This dependence can be calibrated on a particle-by-particle basis to facilitate real-time three-dimensional tracking with minimal additional computational burden. Two-dimensional tracking requires no separate calibration.

Unambiguously identifying and locating individual spheres becomes increasingly difficult as their scattering patterns overlap and centers of symmetry become obscured. Spurious features can be introduced, moreover, by superposition of symmetrically arranged scattering patterns. Symmetry considerations therefore should provide little information on individual particles' positions for dense samples in the multiple-scattering limit. Results such as those in Fig. 2 confirm reliable detection of micrometer-scale spheres down to separations of two or three wavelengths.

Figure 3. (a) Trajectory of a colloidal sphere obtained by analyzing a holographic video with the orientation alignment transform. The 9.1 \mathrm{min} trajectory is colored by time. (b) The mean-squared displacement along \hat{x} and \hat{y} computed from the trajectory in (a), together with linear fits to Eq. (8), which are plotted as dashed curves.

Applying the same analysis to each snapshot in a holographic video sequence yields the in-plane trajectory for each sphere in the field of view. Figure 3(a) shows the trajectory of the sphere from Fig. 1 obtained in this way from 16,500 consecutive video frames. Each frame, moreover, yields two measurements of the particle's position because the even and odd scan lines are recorded separately. Given the recording rate of 29.97 frames\mathrm{/}\mathrm{s}, the time interval between interleaved video fields is \Delta t=16.68~\mathrm{m}\mathrm{s}. The camera's exposure time, 0.1 \mathrm{m}\mathrm{s}, is fast enough that the sphere will not have diffused appreciably during image acquisition (25); (26). The 33,000 position measurements plotted in Fig. 3(a) record the particle's Brownian motion over more than 9 \mathrm{min}.

Assuming that the sphere diffuses freely without significant hydrodynamic coupling to surrounding surfaces, the mean-squared displacement,

\Delta r_{j}^{2}(\tau)=\left<\left[r_{j}(t+\tau)-r_{j}(t)\right]^{2}\right> (7)

should satisfy the Einstein-Smoluchowski equation

\Delta r_{j}^{2}(\tau)=2D_{j}\tau+2\epsilon _{j}^{2}, (8)

where r_{j}(t) is the sphere's position along one of the Cartesian coordinates with r_{0}(t)=x(t) and r_{1}(t)=y(t), where D_{j} is the diffusion coefficient along that direction, and where \epsilon _{j} is the error in the associated position measurement. Analyzing measured trajectories with Eq. (8) therefore provides a method to measure the precision with which a particle's position can be measured (17); (25); (26).
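As a sketch of this procedure, the following Python fragment computes Eq. (7) for one Cartesian coordinate and extracts D_{j} and \epsilon_{j} from a linear fit to Eq. (8). The trajectory here is simulated Brownian motion with parameters comparable to those reported below; measured positions would simply take its place.

import numpy as np

def msd(x, dt, max_lag=50):
    # Mean-squared displacement, Eq. (7), for a 1D trajectory sampled at interval dt.
    lags = np.arange(1, max_lag + 1)
    values = np.array([np.mean((x[lag:] - x[:-lag])**2) for lag in lags])
    return lags*dt, values

rng = np.random.default_rng(0)
dt = 16.68e-3                                     # field interval, s
D_true, eps_true = 0.29, 0.008                    # um^2/s and um, for illustration
x = np.cumsum(rng.normal(0.0, np.sqrt(2*D_true*dt), 33000))
x += rng.normal(0.0, eps_true, x.size)            # additive localization noise
tau, dr2 = msd(x, dt)
slope, intercept = np.polyfit(tau, dr2, 1)
print(slope/2, np.sqrt(max(intercept, 0.0)/2))    # recovers D and eps approximately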

The data in Fig. 3(b) show the mean-squared displacements along the \hat{x} and \hat{y} directions computed from the trajectory in Fig. 3(a) using Eq. (7). Error bars in Fig. 3 reflect statistical uncertainties in the computed values. Although results along the two directions appear to agree with each other to within these uncertainties, least-squares fits to the Einstein-Smoluchowski prediction in Eq. (8) yield slightly different values for the particle's diffusion coefficient: D_{x}=0.292\pm 0.002~\mathrm{\mathrm{\upmu}\mathrm{m}}^{{2}}\mathrm{/}\mathrm{s} and D_{y}=0.281\pm 0.002~\mathrm{\mathrm{\upmu}\mathrm{m}}^{{2}}\mathrm{/}\mathrm{s}. This discrepancy may be attributed to blurring along the \hat{y} direction that arises when the even and odd scan lines are extracted from each interlaced video frame. The resulting loss of spatial resolution along \hat{y} tends to suppress the apparent diffusivity along that direction (25); (26). This artifact may be avoided by using a progressive scan camera. The larger of the measured diffusion coefficients is consistent with the Stokes-Einstein prediction D=k_{B}T/(6\pi\eta a_{p})=0.296\pm 0.002~\mathrm{\mathrm{\upmu}\mathrm{m}}^{{2}}\mathrm{/}\mathrm{s} for a sphere of radius a_{p}=0.805\pm 0.001~\mathrm{\upmu}\mathrm{m} (27) diffusing through water with viscosity \eta=0.912\pm 0.005~\mathrm{m}\mathrm{Pa}\;\mathrm{s} at absolute temperature T=297.1\pm 0.2~\mathrm{K}.
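The quoted Stokes-Einstein value follows directly from the stated parameters, as this short check (using the CODATA value of the Boltzmann constant) confirms:

import math

kB = 1.380649e-23                       # Boltzmann constant, J/K
T, eta, a = 297.1, 0.912e-3, 0.805e-6   # temperature (K), viscosity (Pa s), radius (m)
D = kB*T/(6*math.pi*eta*a)              # Stokes-Einstein diffusion coefficient, m^2/s
print(D*1e12)                           # approximately 0.296 um^2/s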

Fits to Eq. (8) also yield estimates for errors in the particle's position of \epsilon _{x}=8~\mathrm{n}\mathrm{m} and \epsilon _{y}=9~\mathrm{n}\mathrm{m}, or roughly 0.06 pixel in each direction. This performance is comparable to the precision obtained with voting algorithms (9); (18). Because of its speed advantage, the orientation alignment transform in conjunction with centroid-locating algorithms (17) should be immediately useful for in-plane particle tracking applications. Its results also can be used to bootstrap more detailed analyses (9) for applications that require greater precision or simultaneous tracking and characterization.

The orientation alignment transform performs well for identifying features composed of large numbers of closely spaced concentric fringes. It does not fare so well with simple disk-like features whose few alignment coincidences occur at comparatively large ranges. Such images are better analyzed with Hough transforms, voting algorithms, or related morphological methods. The orientation alignment transform, by contrast, is better suited to holographic images whose gradient-rich structure is computationally burdensome for conventional methods.

Open-source software implementing the orientation alignment transform is available online at http://physics.nyu.edu/grierlab/software/. This work was supported primarily by a grant from Procter & Gamble and in part by the MRSEC program of the National Science Foundation through Grant Number DMR-0820341.

References

  • (1)
    J. Sheng, E. Malkiel and J. Katz.
    “Digital holographic microscope for measuring three-dimensional particle distributions and motions.”
    Appl. Opt. 45, 3893–3901 (2006).
  • (2)
    S.-H. Lee and D. G. Grier.
    “Holographic microscopy of holographically trapped three-dimensional structures.”
    Opt. Express 15, 1505–1512 (2007).
  • (3)
    S.-H. Lee, Y. Roichman, G.-R. Yi, S.-H. Kim, S.-M. Yang, A. van Blaaderen, P. van Oostrum and D. G. Grier.
    “Characterizing and tracking single colloidal particles with video holographic microscopy.”
    Opt. Express 15, 18275–18282 (2007).
  • (4)
    F. C. Cheong, K. Xiao and D. G. Grier.
    “Characterization of individual milk fat globules with holographic video microscopy.”
    J. Dairy Sci. 92, 95–99 (2009).
  • (5)
    F. C. Cheong, S. Duarte, S.-H. Lee and D. G. Grier.
    “Holographic microrheology of polysaccharides from Streptococcus mutans biofilms.”
    Rheol. Acta 48, 109–115 (2009).
  • (6)
    G. Bolognesi, S. Bianchi and R. Di Leonardo.
    “Digital holographic tracking of microprobes for multipoint viscosity measurements.”
    Opt. Express 19, 19245–19254 (2011).
  • (7)
    F. C. Cheong, K. Xiao, D. J. Pine and D. G. Grier.
    “Holographic characterization of individual colloidal spheres' porosities.”
    Soft Matter 7, 6816–6819 (2011).
  • (8)
    H. Shpaisman, B. J. Krishnatreya and D. G. Grier.
    “Holographic microrefractometer.”
    Appl. Phys. Lett. 101, 091102 (2012).
  • (9)
    F. C. Cheong, B. Sun, R. Dreyfus, J. Amato-Grill, K. Xiao, L. Dixon and D. G. Grier.
    “Flow visualization and flow cytometry with holographic video microscopy.”
    Opt. Express 17, 13071–13079 (2009).
  • (10)
    L. Dixon, F. C. Cheong and D. G. Grier.
    “Holographic particle-streak velocimetry.”
    Opt. Express 19, 4393–4398 (2011).
  • (11)
    Y. Roichman, B. Sun, A. Stolarski and D. G. Grier.
    “Influence of non-conservative optical forces on the dynamics of optically trapped colloidal spheres: The fountain of probability.”
    Phys. Rev. Lett. 101, 128301 (2008).
  • (12)
    K. Xiao and D. G. Grier.
    “Multidimensional optical fractionation with holographic verification.”
    Phys. Rev. Lett. 104, 028302 (2010).
  • (13)
    J. Fung and V. N. Manoharan.
    “Holographic measurements of anisotropic three-dimensional diffusion of colloidal clusters.”
    Phys. Rev. E 88, 020302 (2013).
  • (14)
    J. Fung, K. E. Martin, R. W. Perry, D. M. Kaz, R. McGorty and V. N. Manoharan.
    “Measuring translational, rotational, and vibrational dynamics in colloids with digital holographic microscopy.”
    Opt. Express 19, 8051–8065 (2011).
  • (15)
    D. H. Ballard.
    “Generalizing the Hough transform to detect arbitrary shapes.”
    Pattern Recognition 13, 111–122 (1981).
  • (16)
    C. Hollitt.
    “A convolution approach to the circle Hough transform for arbitrary radius.”
    Machine Vision and Applications 24, 683–694 (2013).
  • (17)
    J. C. Crocker and D. G. Grier.
    “Methods of digital video microscopy for colloidal studies.”
    J. Colloid Interface Sci. 179, 298–310 (1996).
  • (18)
    R. Parthasarathy.
    “Rapid, accurate particle tracking by calculation of radial symmetry centers.”
    Nature Methods 9, 724–726 (2012).
  • (19)
    B. I. Halperin and D. R. Nelson.
    “Theory of two-dimensional melting.”
    Phys. Rev. Lett. 41, 121–124 (1978).
  • (20)
    D. R. Nelson and B. I. Halperin.
    “Dislocation-mediated melting in two dimensions.”
    Phys. Rev. B 19, 2457–2484 (1979).
  • (21)
    J. Rubinstein, J. Segman and Y. Zeevi.
    “Recognition of distorted patterns by invariance kernels.”
    Pattern Recognition 24, 959–967 (1991).
  • (22)
    T. J. Atherton and D. J. Kerbyson.
    “Size invariant circle detection.”
    Image and Vision Computing 17, 795–803 (1999).
  • (23)
    A. Savitzky and M. J. E. Golay.
    “Smoothing and differentiation of data by simplified least squares procedures.”
    Anal. Chem. 36, 1627–1639 (1964).
  • (24)
    D. G. Grier.
    “A revolution in optical manipulation.”
    Nature 424, 810–816 (2003).
  • (25)
    T. Savin and P. S. Doyle.
    “Role of finite exposure time on measuring an elastic modulus using microrheology.”
    Phys. Rev. E 71, 041106 (2005).
  • (26)
    T. Savin and P. S. Doyle.
    “Static and dynamic errors in particle tracking microrheology.”
    Biophys. J. 88, 623–638 (2005).
  • (27)
    B. J. Krishnatreya, A. Colen-Landy, P. Hasebe, B. A. Bell, J. R. Jones, A. Sunda-Meya and D. G. Grier.
    “Measuring Boltzmann's constant through holographic video microscopy of a single sphere.”
    Am. J. Phys. 82, 23–31 (2014).