Machine-learning techniques for fast and accurate feature localization in holograms of colloidal particles

Mark D. Hannel1, Aidan Abdulali2, Michael O’Brien1, and David G. Grier1

Holograms of colloidal particles can be analyzed with the Lorenz-Mie theory of light scattering to measure individual particles’ three-dimensional positions with nanometer precision while simultaneously estimating their sizes and refractive indexes. Extracting this wealth of information begins by detecting and localizing features of interest within individual holograms. Conventionally approached with heuristic algorithms, this image analysis problem can be solved faster and more generally with machine-learning techniques. We demonstrate that two popular machine-learning algorithms, cascade classifiers and deep convolutional neural networks (CNNs), can solve the feature-localization problem orders of magnitude faster than current state-of-the-art techniques. Our CNN implementation localizes holographic features precisely enough to bootstrap more detailed analyses based on the Lorenz-Mie theory of light scattering. The wavelet-based Haar cascade proves to be less precise, but is so computationally efficient that it creates new opportunities for applications that emphasize speed and low cost. We demonstrate its use as a real-time targeting system for holographic optical trapping.

1Department of Physics and Center for Soft Matter Research, New York University, New York, NY 10003, USA
2Packer Collegiate Institute, Brooklyn, NY 11201, USA

OCIS codes: (180.6900) Three-dimensional microscopy; (090.1995) Digital holography; (100.4996) Pattern recognition, neural networks; (290.5850) Scattering, particles.

1 Introduction: Holographic Particle Characterization

Holographic particle characterization [1] uses quantitative analysis of holographic video microscopy images to measure the size, shape, and composition of individual colloidal particles, in addition to their three-dimensional positions. When applied to a stream of dispersed particles, holographic characterization measurements provide insights into the joint distribution of particle size and composition that cannot be obtained in any other way. This technique has been demonstrated on both homogeneous and heterogeneous [2, 3] dispersions of colloidal spheres, and has been extended to work for colloidal clusters [4, 5, 6], and aggregates [7, 8], as well as colloidal rods [9] and other aspherical particles [10, 11]. Applications include monitoring protein aggregation in biopharmaceuticals [7], detecting agglomeration in semiconductor polishing slurries [12], gauging the progress of colloidal synthesis reactions [13, 14], performing microrheology [15], microrefractometry [16], and microporosimetry [17] measurements, assessing the quality of dairy products [18], and monitoring contaminants in wastewater [3].

The critical first step in holographic particle characterization is to detect features of interest within a recorded video frame, and to localize them well enough to enable subsequent analysis [19, 20, 2, 21]. False positive and negative detections clearly are undesirable. Poor localization slows downstream analysis [2, 21] and can prevent fitting algorithms from converging to reasonable results. Here, we demonstrate that machine-learning algorithms can meet the need for reliable feature detection and precise object localization in holographic video microscopy. This complements the previously reported [2] use of machine-learning regression to estimate characteristics such as particle size from holographic features that already have been detected, localized and isolated by other means. With appropriate training, machine-learning algorithms surpass standard image-analysis techniques in their ability to cope with common image defects such as overlapping features. They also operate significantly faster, thereby enabling applications that benefit from real-time performance on low-cost hardware.

Figure 1: Overview of holographic particle characterization. (a) Plane-wave illumination is scattered by colloidal particles (red spheres). The field scattered by a particle at 𝐫p interferes with the plane wave to produce a hologram in the focal plane of a microscope. (b) Features in a digitally recorded hologram are detected with a machine-learning algorithm before being analyzed with light-scattering theory to estimate the particles’ physical properties.

2 Detecting and Localizing Holographic Features

Figure 1 illustrates the challenge of recognizing features in holograms. Light scattered by a particle spreads as it propagates to the focal plane of a conventional microscope. There, it interferes with the remainder of the illuminating beam to create a pattern of concentric interference fringes. The microscope magnifies this interference pattern and projects it onto the detector of a video camera. The intensity variations associated with a single colloidal particle typically span many pixels in a recorded image and display rich internal structure. Their scale and complexity render such features difficult to recognize by conventional particle-tracking techniques.

2.1 Heuristic Algorithms

One practical method for detecting holographic features and locating their centers involves transforming extended interference patterns into compact peaks [20, 21], and then locating the peaks with standard centroid detectors [19, 22]. Successful implementations of this two-step approach have been based on voting algorithms such as the circular Hough transform [20, 23, 22] and on the orientation alignment transform [21]. Both of these feature-coalescence algorithms rely on the radial symmetry of typical single-particle holograms without reference to the underlying image-formation mechanism. For an image of N pixels on a side, voting algorithms have a computational complexity of 𝒪(N³ log N) [24], whereas the convolution-based orientation alignment transform runs in 𝒪(N² log N) operations [20]. For this reason, we transform holograms with the orientation alignment transform to assess the performance of heuristic algorithms, using an open-source implementation described in [20].
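The convolve-then-localize strategy can be sketched in a few lines. The concentric-ring kernel below is a simplified matched-filter stand-in for the published transforms, chosen only to show how an FFT-based convolution (the source of the 𝒪(N² log N) scaling) collapses extended fringes into a compact peak; the fringe frequency q and kernel radius are assumed values, not parameters of the orientation alignment transform:

```python
import numpy as np
from scipy.signal import fftconvolve

def coalesce(hologram, q=0.5, radius=15):
    """Collapse extended ring patterns into compact peaks by convolving
    with a concentric-ring kernel. A simplified matched-filter stand-in
    for the published feature-coalescence transforms, not a
    reimplementation of them; q (radial fringe frequency in rad/pixel)
    and radius (pixels) are illustrative values.
    """
    y, x = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    r = np.hypot(x, y)
    kernel = np.where(r <= radius, np.cos(q * r), 0.0)
    kernel -= kernel.mean()  # zero total weight: a flat background maps to zero
    # FFT-based convolution gives the O(N^2 log N) scaling quoted above
    return fftconvolve(hologram - hologram.mean(), kernel, mode='same')
```

A synthetic ring pattern centered anywhere in the field of view is transformed into a single bright peak at its center, which a centroid detector can then localize.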

The peaks created by feature coalescence can be detected and their centroids localized as local maxima in the transformed images. When presented with holograms of well-separated colloidal spheres, heuristic algorithms provide sub-pixel precision for particle localization [20, 21]. This easily meets the need to localize features for subsequent analysis. We use the open-source TrackPy implementation [22] of the Crocker-Grier algorithm [19].

Detecting and localizing local maxima can be very efficient if the peaks have well-defined widths, heights and separations [19, 22]. Transformed holograms of colloidal particles, however, can have widely varying contrasts and extents depending on the particles’ properties and heights above the focal plane. Thresholds for feature detection and localization therefore must be assessed from the transformed images themselves. This can create a bottleneck for heuristic feature detection and localization.
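One common way to assess thresholds from the transformed image itself is to demand that candidate peaks stand several standard deviations above the image mean. The sketch below illustrates the idea; the size and nsigma parameters are illustrative choices, not TrackPy's actual defaults:

```python
import numpy as np
from scipy.ndimage import maximum_filter

def find_peaks(transformed, size=15, nsigma=4.0):
    """Locate local maxima whose significance is assessed from the
    image itself: a pixel counts as a peak if it equals the maximum
    within a (size x size) neighborhood and stands nsigma standard
    deviations above the image mean. Parameter values are
    illustrative, not tuned to any particular instrument.
    """
    threshold = transformed.mean() + nsigma * transformed.std()
    local_max = (transformed == maximum_filter(transformed, size=size))
    peaks = np.argwhere(local_max & (transformed > threshold))
    # argwhere yields (row, col); return (x, y) coordinates
    return [(float(x), float(y)) for y, x in peaks]
```

Because the threshold is recomputed for every frame, this approach copes with the widely varying contrasts of transformed holograms, at the cost of the extra pass over the image noted above.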

2.2 Machine-Learning Algorithms

Machine-learning techniques can reduce the computational burden of detecting and localizing features of interest in holographic microscopy images, and also prove to be more robust against false positive and negative feature detections. We have implemented two such approaches: a cascade of boosted classifiers based on Haar-like wavelets, and a deep convolutional neural network (CNN). Both approaches yield estimates for the in-plane coordinates, (x_p, y_p), for every particle in the field of view, as well as the extent of the region of interest encompassing the scattering pattern. The box superimposed on the two-particle hologram in Fig. 1(b) represents a region of interest centered on one of the particles that was computed by a CNN.

Cascade classifiers and convolutional neural networks both work by convolving holograms with small arrays and interpreting the results. They thus require 𝒪(N²) operations, which gives them the potential to run significantly more quickly than heuristic algorithms, particularly for larger images. Each has particular strengths for particle localization in holographic microscopy images.

Cascade classifiers were originally developed for detecting faces in photographs [25]. They work by convolving an image with a sequence of selected wavelets, each of which is considered to be a “weak classifier” for objects of interest. An above-threshold response from a linear combination of such weak classifiers signifies the presence of a feature of interest centered at the point of strongest response. Regions containing such above-threshold responses are analyzed with the weak classifiers at the next step of the cascade. Any regions that remain after analysis by the full cascade are considered to be features. The analysis is performed at a sequence of resolutions to capture features at different scales. Haar wavelets are particularly attractive for this application because they are implemented in integer arithmetic with highly efficient algorithms. The training process determines which Haar wavelets constitute useful weak classifiers at each level of the cascade, and which combinations best serve as strong classifiers for features of interest. Training also optimizes the number of stages of increasingly fine resolution required to detect features reliably and to localize them with a requested precision. This approach has been adapted for a wide range of object recognition and image segmentation tasks [26]. Our application of this technique to holographic feature localization is based on an open-source implementation of Haar cascade classifiers made available by the OpenCV project [27]. This cascade classifier can be trained to recognize non-standard features of interest, such as holograms of colloidal particles. For each such feature in a hologram, it yields a candidate set of rectangular regions of interest that may include multiple estimates for each feature. Any such overlapping detections can be coalesced with standard methods for non-maximum suppression [28]. 
The center of each resulting rectangle constitutes an estimate for the associated feature’s position in the focal plane.
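The coalescence of overlapping candidate rectangles can be performed with standard greedy non-maximum suppression, keeping the strongest detection from each overlapping cluster. A minimal numpy sketch (the intersection-over-union threshold is an illustrative value, not the one used in our classifier):

```python
import numpy as np

def non_max_suppression(boxes, scores, iou_threshold=0.3):
    """Coalesce overlapping detections by greedily keeping the
    highest-scoring box and discarding any remaining box whose
    intersection-over-union with it exceeds iou_threshold.
    Boxes are (x0, y0, x1, y1); returns indexes of kept boxes.
    """
    boxes = np.asarray(boxes, dtype=float)
    order = np.argsort(scores)[::-1]  # strongest detection first
    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(int(i))
        # intersection of box i with each remaining box
        x0 = np.maximum(boxes[i, 0], boxes[order[1:], 0])
        y0 = np.maximum(boxes[i, 1], boxes[order[1:], 1])
        x1 = np.minimum(boxes[i, 2], boxes[order[1:], 2])
        y1 = np.minimum(boxes[i, 3], boxes[order[1:], 3])
        inter = np.clip(x1 - x0, 0, None) * np.clip(y1 - y0, 0, None)
        area_i = (boxes[i, 2] - boxes[i, 0]) * (boxes[i, 3] - boxes[i, 1])
        areas = (boxes[order[1:], 2] - boxes[order[1:], 0]) * \
                (boxes[order[1:], 3] - boxes[order[1:], 1])
        iou = inter / (area_i + areas - inter)
        order = order[1:][iou <= iou_threshold]
    return keep
```

Two strongly overlapping candidates for the same feature collapse to the single strongest detection, while well-separated features survive untouched.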

Convolutional neural networks also solve image recognition tasks through convolutions with selected kernels. In this case, the convolutions are integrated into the network’s multi-layered, feed-forward architecture [29] and employ kernels that are designed and optimized during training. Constructing a CNN to perform general image classification requires massive computational resources [30]. Once constructed, however, a CNN can be retrained easily to recognize particular features of interest. Our application of CNNs for feature localization is based on TensorBox [31], an open-source package built on the GoogLeNet-OverFeat network [29], specifically on Inception v1 [32]. TensorBox provides a convenient interface for training the input layers of Inception to recognize features of interest and for training the output layers to associate these features with regression estimates for the locations and extents of detected features.

Both types of supervised machine-learning algorithms require sets of sample data for training and validation. Each training element consists of an image containing zero, one or more features, together with a “ground truth” annotation specifying each feature’s location and extent. Normally, these images are obtained experimentally and are annotated by hand. We instead train with synthetic holograms that are computed with the same light-scattering theory [1] used to analyze experimental holograms. Using the physics of image formation as the ground truth for training eliminates the effort and errors inherent in empirical annotation.

3 Holographic Image Formation

Referring to Fig. 1, we model the holographic microscope’s illumination as a plane wave at frequency ω propagating along the ẑ axis and linearly polarized along x̂:

𝐄_0(𝐫, t) = u_0 e^{ikz} e^{iωt} x̂,  (1)

where k = n_m ω/c is the wavenumber of the light in a medium of refractive index n_m. A particle at position 𝐫_p scatters the incident wave, thereby creating the scattered field

𝐄_s(𝐫, t) = u_0 e^{ikz_p} 𝐟_s(k[𝐫 − 𝐫_p]) e^{iωt},  (2)

where 𝐟_s(k𝐫) is the Lorenz-Mie scattering function [33, 34]. For the particular case of scattering by a sphere, 𝐟_s(k𝐫) is parameterized by the sphere’s radius a_p and refractive index n_p [33]. The field that reaches point 𝐫 in the focal plane (z = 0) is the superposition of these two contributions,

𝐄(𝐫, t) = 𝐄_0(𝐫, t) + 𝐄_s(𝐫, t).  (3)

The dimensionless intensity, b(𝐫) ≡ u_0^{-2} |𝐄(𝐫, t)|^2, is then given by

b(𝐫) = |x̂ + e^{ikz_p} 𝐟_s(k[𝐫 − 𝐫_p])|^2.  (4)
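Equation (4) can be evaluated numerically once 𝐟_s is specified. The sketch below substitutes a scalar outgoing spherical wave for the full vector Lorenz-Mie function, so it reproduces only the concentric-ring geometry of a single-particle hologram. The wavelength and magnification defaults follow the instrument described in the text; the scattering amplitude alpha and the medium's refractive index n_m = 1.34 (water) are assumptions:

```python
import numpy as np

def point_scatterer_hologram(xp, yp, zp, alpha=20.0,
                             wavelength=0.447, n_m=1.34,
                             mpp=0.135, shape=(480, 640)):
    """Evaluate b(r) from Eq. (4) with a scalar point-scatterer
    stand-in for the Lorenz-Mie function, f_s ~ alpha e^{ikR}/(kR).
    The full vector f_s requires a Mie-series code; this sketch only
    reproduces the concentric-ring geometry. Lengths are in
    micrometers; mpp is the magnification in micrometers per pixel.
    The amplitude alpha is a hypothetical parameter, not a physical
    property of any particular sphere.
    """
    k = 2 * np.pi * n_m / wavelength           # wavenumber in the medium
    y, x = np.mgrid[0:shape[0], 0:shape[1]] * mpp
    R = np.sqrt((x - xp)**2 + (y - yp)**2 + zp**2)
    fs = alpha * np.exp(1j * k * R) / (k * R)  # outgoing spherical wave
    return np.abs(1.0 + np.exp(1j * k * zp) * fs)**2
```

The result is a normalized hologram fluctuating about unity, with the familiar pattern of concentric interference fringes centered on the particle's in-plane position.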

In addition to a_p and n_p, this model for the image formation process depends on a small number of parameters that characterize the instrument. Our holographic microscope is powered by a 15 mW fiber-coupled diode laser (Coherent Cube) operating at a vacuum wavelength of λ = 447 nm. The combination of a half-wave plate and a polarizing beam splitter reduces the power incident on the sample to 3 mW and ensures that the light is linearly polarized along x̂, as required by Eq. (4). A 100× oil-immersion objective lens (Nikon S-Plan Apo, numerical aperture 1.3) and a matched 200 mm tube lens provide a total magnification of 135 nm/pixel on a standard video camera (NEC TI-324AII). The 640 × 480 pixel grid is digitized at 8 bits/pixel and recorded as uncompressed digital video at 29.97 frames/s with a commercial digital video recorder (Pioneer DVR-560H). The refractive index of the medium, n_m, is determined to within a part per ten thousand using an Abbe refractometer (Edmund Scientific).

Having determined the calibration constants, we treat the particle’s position and properties as adjustable parameters and fit predictions of Eq. (4) to experimentally measured holograms. To do so, each video frame must first be corrected by subtracting off the camera’s dark count [8], and then normalizing by the microscope’s background intensity distribution [1]. Such fits typically yield a sphere’s position with a precision of 1 nm in the plane and 3 nm axially [20, 35]. Characterization results are similarly precise, with the radius of a micrometer-diameter sphere typically being resolved to within 3 nm and the refractive index to within a part per thousand [36, 16].
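The correction step can be written compactly. In this sketch the dark count is a placeholder argument, not a measured property of the camera described above:

```python
import numpy as np

def normalize_hologram(raw, background, dark_count=13.0):
    """Correct a raw video frame for the camera's dark count and the
    instrument's background intensity distribution to obtain the
    normalized hologram b(r) used for fitting. The default dark-count
    value is illustrative; in practice it is measured for the camera.
    """
    raw = np.asarray(raw, dtype=float)
    background = np.asarray(background, dtype=float)
    # clip the denominator to avoid division by zero in dead pixels
    return (raw - dark_count) / np.clip(background - dark_count, 1.0, None)
```

A frame identical to the background normalizes to unity everywhere, so deviations from 1 carry the particle's interference pattern.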

This excellent performance requires starting estimates for the adjustable parameters that are good enough for the fitting algorithm to converge to a globally optimal solution. The fitter computes trial holograms according to Eq. (4), which is computationally expensive. It has to perform fewer of these computations when it is provided with better estimates for the starting parameters. Whereas heuristic localization algorithms meet this need, machine-learning algorithms are substantially faster and more robust, and can be comparably precise.

4 Applying Machine Learning to Holographic Particle Localization

We used Eq. (4) to generate training images of particles with radii ranging from a_p = 0.25 µm to 5 µm, refractive indexes from n_p = 1.4 to 2.5, and axial positions from z_p = 5 µm to 50 µm. Each training hologram has parameters selected at random from this range and is centered at random within the field of view. Normalized experimental holograms have uncorrelated white noise that we model as additive Gaussian noise with a standard deviation of five percent.
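The sampling of the training distribution can be sketched as follows. The parameter ranges and the five-percent noise level come from the text; the rule used here for the region of interest's extent is a hypothetical stand-in for the fringe-counting rule used in the actual training sets:

```python
import numpy as np

rng = np.random.default_rng(7)

def random_training_annotation(shape=(480, 640), mpp=0.135):
    """Draw one set of ground-truth parameters for a synthetic
    training hologram, following the ranges quoted in the text.
    The region of interest is a hypothetical stand-in: a square
    whose half-width grows with the particle's height, rather than
    the fringe-counting rule used for the actual training sets.
    """
    params = {
        'a_p': rng.uniform(0.25, 5.0),    # radius [um]
        'n_p': rng.uniform(1.4, 2.5),     # refractive index
        'z_p': rng.uniform(5.0, 50.0),    # axial position [um]
        'x_p': rng.uniform(0, shape[1]),  # in-plane position [pixels]
        'y_p': rng.uniform(0, shape[0]),
    }
    w = params['z_p'] / mpp / 10.0        # assumed extent heuristic
    params['roi'] = (params['x_p'] - w, params['y_p'] - w,
                     params['x_p'] + w, params['y_p'] + w)
    return params

def add_noise(hologram, level=0.05):
    """Model normalized-hologram noise as 5% additive Gaussian noise."""
    return hologram + rng.normal(0.0, level, hologram.shape)
```

Pairing each synthetic hologram with its annotation dictionary yields the (image, ground truth) training elements described above without any hand labeling.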

Our cascade classifier was trained with 6000 synthetic images of colloidal spheres. These were combined with a complementary set of 4000 particle-free images recorded by the instrument itself. Each computed image is annotated with the coordinates of the corners defining that feature’s region of interest. The region is centered on the feature’s actual position and has an extent that encloses 10 interference fringes. The classifier was trained until its rate of false positive detections fell to 8 × 10⁻⁴. This was achieved with a classifier that searches for features through five resolution stages, each stage comprising a distinct set of five wavelets. This geometry and the choice of weak classifiers were arrived at by the training algorithm’s optimizer.

The convolutional neural network was trained with 3000 synthetic holographic images; another 600 were used for validation. These images also were annotated with feature positions and extents drawn from the ground truth for the image-formation process. CNN training converged after 50 000 cycles of training and validation.

4.1 Precision and Accuracy

Figure 2: Each localization technique provides estimates for the trajectory of a simulated Brownian particle. (a) Probability distribution functions for the localization error achieved by (top) the heuristic algorithm, (middle) the convolutional neural network, and (bottom) the cascade classifier. The inset shows an expanded view of the sub-pixel range. The vertical dashed line indicates single-pixel precision. (b) Mean-square displacement computed from trajectories obtained with the three detection algorithms. Short-time asymptotes yield dynamical estimates for the localization error. Open circles represent experimental data, as explained in Sec. 4.4.
Figure 3: Localization errors as a function of particle radius and refractive index at a height of z_p = 13.5 µm above the focal plane. (a) Cascade classifier. (b) Convolutional neural network. (c) Rate of false positive detections for the cascade classifier. (d) Hologram of a 500 nm-diameter silica sphere that was overlooked by the cascade classifier. This particle was localized to within one pixel by the CNN. (e) Hologram of a 2.4 µm-diameter polystyrene sphere (upper left) interfering with the hologram of a 4.0 µm-diameter TPM sphere located 15 µm above it. Blue dots show feature locations proposed by the heuristic algorithm; red boxes enclose features detected by the CNN; dashed black boxes are proposed by the cascade classifier. (f) Hologram of four particles overlaid with regions of interest identified by the CNN. One occluded feature was overlooked by the CNN.

We assess the detectors’ localization precision by comparing detection results with known input parameters. A typical example for a particular choice of particle properties is shown in Fig. 2. The three probability distributions in Fig. 2(a) present the root-mean-square localization error obtained by each of the algorithms when tracking particles with a_p = 1.0 µm and n_p = 1.5. We generate data for these plots by simulating the diffusion of such a particle through water at a temperature of 20 °C, starting from the center of the field of view at z_p = 13.5 µm and proceeding for 3000 steps at 33 ms per step.

The heuristic algorithm consistently yields sub-pixel precision with a median error of 0.04 pixels. The convolutional neural network also yields sub-pixel precision with a median localization error of 0.61 pixels. The cascade classifier performs less well, with a median localization error of 1.81 pixels and a substantial probability for errors extending to several pixels. For applications such as Lorenz-Mie microscopy that require input estimates with sub-pixel precision, the cascade classifier’s localization precision may not be sufficient.

The inset of Fig. 2(b) shows the trajectory reconstructed by each of the algorithms. The measured trajectory’s mean-squared displacement (MSD) provides an estimate for the particle’s diffusion coefficient. All three methods yield results that are consistent with the particle’s true diffusivity, D = 0.482 µm²/s, which suggests that their localization errors are normally distributed. Extrapolating the MSD to zero lag time provides an estimate for the localization error [19, 37]. In all three cases, the extrapolated measurement error is consistent with the median values from Fig. 2(a).
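The dynamical estimate of the localization error follows from the relation MSD(τ) = 4Dτ + 4ε² for a two-dimensional trajectory contaminated by normally distributed localization errors of scale ε. A minimal sketch:

```python
import numpy as np

def msd(track, lags):
    """Mean-square displacement of a 2D trajectory at the given lags."""
    track = np.asarray(track)
    return np.array([np.mean(np.sum((track[lag:] - track[:-lag])**2, axis=1))
                     for lag in lags])

def localization_error(track, dt, nlags=5):
    """Estimate the diffusion coefficient D and the static localization
    error eps from a linear fit to the short-time MSD:
    MSD(tau) = 4 D tau + 4 eps^2. The number of lags is illustrative.
    """
    lags = np.arange(1, nlags + 1)
    slope, intercept = np.polyfit(lags * dt, msd(track, lags), 1)
    return slope / 4.0, np.sqrt(max(intercept, 0.0) / 4.0)
```

Applied to a simulated random walk with added measurement noise, the fit recovers both the input diffusivity and the imposed localization error.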

Applying the same techniques across the entire range of particle sizes and refractive indexes yields results for the median localization error summarized in Fig. 3(a) and 3(b). Results from the cascade classifier in Fig. 3(a) range from single-pixel precision under most conditions to more than 20 pixels for the largest spheres we considered. These errors are dominated by the cascade classifier’s tendency to displace location estimates toward the center of the field of view when presented with features that extend outside the observation window. This problem is more pronounced for the larger holographic features created by larger scatterers. Smaller particles create holograms with low signal-to-noise ratio that can be overlooked by the cascade classifier, leading to false negative detections. Such conditions are indicated by white crosses in Fig. 3(a). A typical example of such a challenging hologram is shown in Fig. 3(d).

The results plotted in Fig. 3(b) show that the CNN yields much smaller localization errors than the cascade classifier. The CNN achieves sub-pixel resolution over the entire range of parameters, although localization precision is worse for weak scatterers and large spheres. Unlike the cascade classifier, it returned no false negative results, and even achieved single-pixel precision for the low-contrast hologram in Fig. 3(d).

Both the cascade classifier and the CNN return a small rate of false positive detections. Figure 3(c) reports the false-positive rate for the cascade classifier, which ranges from 10⁻¹ frame⁻¹ for holograms of particles with a_p < 3 µm to 3 frame⁻¹ for holograms of larger spheres. In all cases, these false positive detections come in addition to the correct particle detection, and result from the classifier’s failure to correctly coalesce multiple detections of the same particle. Such false positive detections contribute to the very large localization error for large spheres in Fig. 3(a). The CNN performs substantially better, with fewer than one false positive per thousand frames.

4.2 Multiple Particles

The results presented so far apply to holograms of single particles. In practice, it is not unusual for multiple particles to enter the microscope’s field of view simultaneously. Their scattering patterns interfere to create intensity variations that can confuse heuristic detection algorithms. Depending on the particles’ proximity and alignment, their holograms can merge into irregular patterns whose analysis requires more specialized techniques [4, 6]. The hologram in Fig. 3(e) illustrates the effect of more modest overlap. It captures a 2.4 µm-diameter polystyrene sphere 17 µm above the focal plane whose hologram is partially occluded by that of a 4.0 µm-diameter TPM sphere situated 15 µm above and 15 µm off to the side. Discrete points overlaid on this image show the positions that the heuristic algorithm identified as centers of candidate features. Of the 10 proposed features, 8 are false positive detections and one is poorly localized.

Both machine-learning algorithms perform better than the heuristic algorithm for this image. The cascade classifier correctly detects both particles, as indicated by dashed rectangles in Fig. 3(e). The estimated locations, however, are displaced significantly from the features’ true centers, presumably because of interference between the two scattering patterns. The CNN not only detects and localizes both particles correctly, but also provides useful estimates for the extent of the scattering patterns, as denoted by the solid (red) squares overlaid on Fig. 3(e).

More substantial overlap can confound the CNN, leading to false-negative detections. Figure 3(f) shows a hologram with overlapping features due to four spheres located in four different planes over a 50 µm axial range. The CNN correctly detects and localizes three particles, and provides reasonable estimates for their features’ extents. The fourth feature, which is larger and has lower contrast, is omitted. Such false negatives become more common as the number and extent of features in a hologram increase. Standard bright-field imaging would miss even more of these particles because the axial range over which they are distributed greatly exceeds a conventional microscope’s depth of focus.

These results illustrate that machine-learning algorithms can be more reliable than heuristic algorithms for detecting and localizing features in non-ideal holograms. For applications such as monitoring colloidal concentrations, this benefit alone might recommend machine-learning algorithms over other approaches. The principal benefit of machine-learning algorithms, however, is their ability to detect features rapidly, even on low-power computational platforms.

4.3 Computation Speed

Table 1: Analysis times in ms/frame for the heuristic algorithm, the convolutional neural network (CNN) implemented on CPU and GPU, and the cascade classifier implemented on a workstation and on a Raspberry Pi 3 single-board computer.
                 Mean [ms]  Median [ms]  Std. [ms]  Min [ms]  Max [ms]
Heuristic (CPU)     695        700          11         670      1000
CNN (CPU)           278        278           2.8       271       315
CNN (GPU)            52         52           4.8        50        70
Cascade (CPU)        17         17           1.0        15        81
Cascade (RPi)       173        171          12         159       275

Table 1 presents timing data for holographic feature detection on a 1 Gflops desktop workstation outfitted with an nVidia GTX 680 GPU. This system can detect a single feature in just under 700 ms using the heuristic algorithm described in Sec. 2.1. Of this, 150 ms is required for the orientation alignment transform and half a second is required to analyze the transformed image and then to detect and localize its peaks. This bottleneck can be reduced to 50 ms by specifying the anticipated width, height and separation of the transformed peaks. In this case, the present implementation’s processing speed is consistent with previous reports [1, 20, 22] when account is taken of image size and processor speed. No single set of such parameters, however, successfully detects features over the entire range of parameters considered in Fig. 3. The slow operation reported in Table 1 therefore represents the cost of generality.

The CNN routinely outperforms the heuristic algorithm by a factor of 2.5 on the same hardware over the entire range of parameters. Transferring the CNN calculation to the GPU increases this advantage to a factor of 11. Most remarkably, the cascade classifier is 40 times faster than the reference heuristic algorithm, even without GPU acceleration, processing features fast enough to keep up with the 33 ms frame interval of a standard video camera.

The cascade classifier is so computationally efficient that it can be deployed usefully on a lightweight embedded computer. We demonstrated this by analyzing holograms on a Raspberry Pi 3 single-board computer. Even though the lightweight computer runs the cascade classifier 10 times slower than the workstation, it is still 4 times faster than the heuristic algorithm running on the workstation. Reducing the resolution by half improves the Raspberry Pi’s detection time to 40 ms per image, which corresponds to 25 frames/s.

4.4 Experimental Demonstrations

Figure 4: (a) Cascade classifier tracking 2 µm-diameter colloidal spheres diffusing through water in a holographic optical trapping system. Each trace shows 5 seconds of the associated particle’s motion. The associated video (Visualization 1) shows the tracking data being used to alternately trap and release the particles. (b) CNN detection of holographic features. The high-contrast feature is created by a 1.5 µm-diameter silica sphere. The low-contrast feature represents a coliform bacterium in the dispersion.

Real-time detection of holographic features has applications beyond holographic particle characterization. The implementations presented here are suitable for targeting optical traps in holographic trapping systems [38]. We have demonstrated this by integrating machine-learning particle detection into an automated trapping system that projects optical traps onto the particles’ positions to acquire them for subsequent processing. Pioneering implementations of automated trapping [39] rely on conventional imaging and so require target particles to lie near the microscope’s focal plane. Holographic targeting works over a much larger axial range. Both the CNN and the cascade classifier locate particles in the plane with sufficient precision to ensure reliable trapping. The axial coordinate required for three-dimensional targeting can be extracted from holographic features using previously reported techniques [2]. Because of its speed, the cascade classifier is particularly useful for targeting fast-moving particles. Figure 4(a) shows the cascade classifier tracking colloidal particles in real time as they diffuse in a holographic trapping system. The instrument uses this tracking information to trap the detected particles, as shown in the associated video (Visualization 1).

Figure 4(b) shows typical results obtained with the CNN analyzing experimental data. The image is a normalized hologram of a 1.5 µm-diameter silica sphere dispersed in water flowing down a 100 µm-deep microfluidic channel. The second holographic feature in this image is due to a coliform bacterium in the sample [3]. The CNN detects and correctly localizes both features despite their substantial difference in contrast.

We can estimate the feature-detection algorithms’ precision for particle localization by tracking diffusing particles [19]. The open circles in Fig. 2(b) show results obtained with heuristic and machine-learning algorithms for the mean-square displacement of a single colloidal polystyrene sphere (Duke Scientific, catalog no. 4016A, nominal diameter 1.587 ± 0.018 µm) diffusing through water at room temperature. Because polystyrene is 5% more dense than water, the particle sediments 11 µm over the course of this 3 min measurement. In all three cases, the in-plane localization error obtained by extrapolating these results is consistent with that reported for synthetic data.

5 Discussion

The use of machine-learning algorithms for detecting and localizing holographic features enables and enhances a host of applications for holographic video microscopy. CNNs detect colloidal particles faster than conventional image-analysis techniques and localize them well enough for subsequent processing. Our implementation also estimates the extent of each holographic feature, thereby bypassing the standard next step in Lorenz-Mie microscopy [20] and saving additional time. These substantial speed enhancements make it possible to perform holographic particle characterization measurements in real time rather than requiring off-line processing. CNNs also are more successful at interpreting overlapping features in multi-particle holograms and thus can be used to analyze more concentrated suspensions.

The Haar-based cascade classifier also outstrips the heuristic algorithms’ ability to detect colloidal particles, particularly in heterogeneous samples and crowded fields of view. Although it cannot match the localization precision of CNNs, its speed and modest computational requirements create new opportunities. We have deployed our cascade classifier on a light-weight single-board computer and have demonstrated its utility for counting particles and thus for measuring colloidal concentrations. Such a low-cost instrument should be useful for routine monitoring of industrial processes and products and for environmental monitoring. We also have demonstrated the cascade classifier’s utility for high-speed targeting in holographic trapping. In this case, speed is more important than localization precision for interacting with processes as they unfold.

While the present study focuses on detecting and localizing holographic features with radial symmetry, the machine-learning framework can be applied equally well to asymmetric holograms produced by rods, clusters or biological samples. By reducing the computational burden of analyzing holograms, machine-learning algorithms extend the reach of holographic tracking and holographic characterization. More generally, machine-learning algorithms are well-suited to bootstrapping the more detailed analysis involved in holographic particle characterization. We anticipate that more of these physics-based processing steps will be taken over by machine-learning algorithms as that technology advances.

Open-source software for holographic particle tracking and characterization is available online at


This work was supported primarily by the MRSEC program of the National Science Foundation through Award no. DMR-1420073 and in part by the SBIR program of the NSF through Award no. IPP-1519057. The holographic trapping instrument was developed under the MRI program of the NSF through Grant Number DMR-0922680.


  • [1] S.-H. Lee, Y. Roichman, G.-R. Yi, S.-H. Kim, S.-M. Yang, A. van Blaaderen, P. van Oostrum, and D. G. Grier, “Characterizing and tracking single colloidal particles with video holographic microscopy,” Opt. Express 15, 18,275–18,282 (2007). 0712.1738.
  • [2] A. Yevick, M. Hannel, and D. G. Grier, “Machine-learning approach to holographic particle characterization,” Opt. Express 22, 26,884–26,890 (2014).
  • [3] L. A. Philips, D. B. Ruffner, F. C. Cheong, J. M. Blusewicz, P. Kasimbeg, B. Waisi, J. McCutcheon, and D. G. Grier, “Holographic characterization of contaminants in water: Differentiation of suspended particles in heterogeneous dispersions,” Water Research 122, 431–439 (2017).
  • [4] R. W. Perry, G. N. Meng, T. G. Dimiduk, J. Fung, and V. N. Manoharan, “Real-space studies of the structure and dynamics of self-assembled colloidal clusters,” Faraday Discuss. 159, 211–234 (2012).
  • [5] J. Fung, R. W. Perry, T. G. Dimiduk, and V. N. Manoharan, “Imaging multiple colloidal particles by fitting electromagnetic scattering solutions to digital holograms,” J. Quant. Spectr. Rad. Trans. 113, 2482–2489 (2012).
  • [6] J. Fung and V. N. Manoharan, “Holographic measurements of anisotropic three-dimensional diffusion of colloidal clusters,” Phys. Rev. E 88, 020,302 (2013).
  • [7] C. Wang, X. Zhong, D. B. Ruffner, A. Stutt, L. A. Philips, M. D. Ward, and D. G. Grier, “Holographic characterization of protein aggregates,” J. Pharm. Sci. 105, 1074–1085 (2016).
  • [8] C. Wang, F. C. Cheong, D. B. Ruffner, X. Zhong, M. D. Ward, and D. G. Grier, “Holographic characterization of colloidal fractal aggregates,” Soft Matter 12, 8774–8780 (2016).
  • [9] F. C. Cheong and D. G. Grier, “Rotational and translational diffusion of copper oxide nanorods measured with holographic video microscopy,” Opt. Express 18, 6555–6562 (2010).
  • [10] A. Wang, T. G. Dimiduk, J. Fung, S. Razavi, I. Kretzschmar, K. Chaudhary, and V. N. Manoharan, “Using the discrete dipole approximation and holographic microscopy to measure rotational dynamics of non-spherical colloidal particles,” J. Quant. Spectr. Rad. Trans. 146, 499–509 (2014).
  • [11] M. Hannel, C. Middleton, and D. G. Grier, “Holographic characterization of imperfect colloidal spheres,” Appl. Phys. Lett. 107, 141,905 (2015). 1508.01710.
  • [12] F. C. Cheong, P. Kasimbeg, D. B. Ruffner, E. H. Hlaing, J. M. Blusewicz, L. A. Philips, and D. G. Grier, “Holographic characterization of colloidal particles in turbid media,” Appl. Phys. Lett. 111, 153,702 (2017).
  • [13] C. Wang, H. Shpaisman, A. D. Hollingsworth, and D. G. Grier, “Celebrating Soft Matter’s 10th Anniversary: Monitoring colloidal growth with holographic microscopy,” Soft Matter 11, 1062–1066 (2015).
  • [14] C. Wang, H. W. Moyses, and D. G. Grier, “Stimulus-responsive colloidal sensors with fast holographic readout,” Appl. Phys. Lett. 107, 051,903 (2015). 1507.06680.
  • [15] F. C. Cheong, S. Duarte, S.-H. Lee, and D. G. Grier, “Holographic microrheology of polysaccharides from Streptococcus mutans biofilms,” Rheol. Acta 48, 109–115 (2008).
  • [16] H. Shpaisman, B. J. Krishnatreya, and D. G. Grier, “Holographic microrefractometer,” Appl. Phys. Lett. 101, 091,102 (2012).
  • [17] F. C. Cheong, K. Xiao, D. J. Pine, and D. G. Grier, “Holographic characterization of individual colloidal spheres’ porosities,” Soft Matter 7, 6816–6819 (2011).
  • [18] F. C. Cheong, K. Xiao, and D. G. Grier, “Characterization of individual milk fat globules with holographic video microscopy,” J. Dairy Sci. 92, 95–99 (2009).
  • [19] J. C. Crocker and D. G. Grier, “Methods of digital video microscopy for colloidal studies,” J. Colloid Interface Sci. 179, 298–310 (1996).
  • [20] F. C. Cheong, B. Sun, R. Dreyfus, J. Amato-Grill, K. Xiao, L. Dixon, and D. G. Grier, “Flow visualization and flow cytometry with holographic video microscopy,” Opt. Express 17, 13,071–13,079 (2009).
  • [21] B. J. Krishnatreya and D. G. Grier, “Fast feature identification for holographic tracking: The orientation alignment transform,” Opt. Express 22, 12,773–12,778 (2014).
  • [22] D. Allan, T. Caswell, N. Keim, and C. van der Wel, “Trackpy v0.3.2,” (2016).
  • [23] R. Parthasarathy, “Rapid, accurate particle tracking by calculation of radial symmetry centers,” Nature Methods 9, 724–726 (2012).
  • [24] C. Hollitt, “A convolution approach to the circle Hough transform for arbitrary radius,” Machine Vision and Applications 24, 683–694 (2013).
  • [25] P. Viola and M. Jones, “Rapid object detection using a boosted cascade of simple features,” in IEEE Proceedings on Computer Vision and Pattern Recognition, vol. 1, pp. 511–518 (IEEE, 2001).
  • [26] R. Lienhart and J. Maydt, “An extended set of Haar-like features for rapid object detection,” in IEEE Proceedings on Image Processing, vol. 1, pp. 900–903 (IEEE, 2002).
  • [27] Itseez, “Open Source Computer Vision Library,” (2015).
  • [28] A. Neubeck and L. Van Gool, “Efficient non-maximum suppression,” in 18th International Conference on Pattern Recognition (ICPR’06), vol. 3, pp. 850–855 (2006).
  • [29] P. Sermanet, D. Eigen, X. Zhang, M. Mathieu, R. Fergus, and Y. LeCun, “Overfeat: Integrated recognition, localization and detection using convolutional networks,” preprint (2013). 1312.6229.
  • [30] M. Abadi, A. Agarwal, P. Barham, E. Brevdo, Z. Chen, C. Citro, G. S. Corrado, A. Davis, J. Dean, M. Devin, S. Ghemawat, I. Goodfellow, A. Harp, G. Irving, M. Isard, Y. Jia, R. Jozefowicz, L. Kaiser, M. Kudlur, J. Levenberg, D. Mané, R. Monga, S. Moore, D. Murray, C. Olah, M. Schuster, J. Shlens, B. Steiner, I. Sutskever, K. Talwar, P. Tucker, V. Vanhoucke, V. Vasudevan, F. Viégas, O. Vinyals, P. Warden, M. Wattenberg, M. Wicke, Y. Yu, and X. Zheng, “TensorFlow: Large-Scale Machine Learning on Heterogeneous Systems,” (2015).
  • [31] R. Stewart and M. Andriluka, “End-to-end people detection in crowded scenes,” preprint (2015). 1506.04878.
  • [32] C. Szegedy, W. Liu, Y. Jia, P. Sermanet, S. Reed, D. Anguelov, D. Erhan, V. Vanhoucke, and A. Rabinovich, “Going deeper with convolutions,” in IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (2015).
  • [33] C. F. Bohren and D. R. Huffman, Absorption and Scattering of Light by Small Particles (Wiley Interscience, New York, 1983).
  • [34] M. I. Mishchenko, L. D. Travis, and A. A. Lacis, Scattering, Absorption and Emission of Light by Small Particles (Cambridge University Press, Cambridge, 2001).
  • [35] F. C. Cheong, B. J. Krishnatreya, and D. G. Grier, “Strategies for three-dimensional particle tracking with holographic video microscopy,” Opt. Express 18, 13,563–13,573 (2010).
  • [36] B. J. Krishnatreya, A. Colen-Landy, P. Hasebe, B. A. Bell, J. R. Jones, A. Sunda-Meya, and D. G. Grier, “Measuring Boltzmann’s constant through holographic video microscopy of a single sphere,” Am. J. Phys. 82, 23–31 (2014).
  • [37] X. Michalet and A. J. Berglund, “Optimal diffusion coefficient estimation in single-particle tracking,” Phys. Rev. E 85, 061,916 (2012).
  • [38] D. G. Grier, “A revolution in optical manipulation,” Nature 424(6950), 810–816 (2003).
  • [39] S. C. Chapin, V. Germain, and E. R. Dufresne, “Automated trapping, assembly, and sorting with holographic optical tweezers,” Opt. Express 14, 13,095–13,100 (2006).