CATCH: Characterizing and Tracking Colloids Holographically using deep neural networks

Lauren E. Altman and David G. Grier
Department of Physics and Center for Soft Matter Research, New York University, New York, NY 10003
Abstract

In-line holographic microscopy provides an unparalleled wealth of information about the properties of colloidal dispersions. Analyzing one colloidal particle’s hologram with the Lorenz-Mie theory of light scattering yields the particle’s three-dimensional position with nanometer precision while simultaneously reporting its size and refractive index with part-per-thousand resolution. Analyzing a few thousand holograms in this way provides a comprehensive picture of the particles that make up a dispersion, even for complex multicomponent mixtures. All of this valuable information comes at the cost of three computationally expensive steps: identifying and localizing features of interest within recorded holograms, estimating each particle’s properties based on characteristics of the associated features, and finally optimizing those estimates through pixel-by-pixel fits to a generative model. Here, we demonstrate an end-to-end implementation that is based entirely on machine-learning techniques. Characterizing and Tracking Colloids Holographically (CATCH) with deep convolutional neural networks is fast enough for real-time applications and otherwise outperforms conventional analytical algorithms, particularly for heterogeneous and crowded samples. We demonstrate this system’s capabilities with experiments on free-flowing and holographically trapped colloidal spheres.

I Introduction

Lorenz-Mie microscopy is a powerful technology for analyzing the properties of colloidal particles and measuring their three-dimensional motions [4]. Starting from in-line holographic microscopy images [32, 5], Lorenz-Mie microscopy measures the three-dimensional location, size and refractive index of each micrometer-scale particle in the microscope’s field of view. A typical measurement yields each particle’s position with nanometer precision over a hundred-micrometer range [8], its size with few-nanometer precision and its refractive index to within a part per thousand [21]. Results from sequences of holograms can be linked into trajectories for flow visualization [9], microrheology [6], photonic force microscopy [18], and for monitoring transformations in colloidal dispersions’ properties [9, 33, 36, 35]. The availability of in situ data on particles’ sizes, compositions and concentrations is valuable for product development, process control and quality assurance in such areas as biopharmaceuticals [37], semiconductor processing [7], and wastewater management [28].

Unlocking the full potential of Lorenz-Mie microscopy requires an implementation that operates in real time and robustly interprets the non-ideal holograms that emerge from real-world samples. Here, we demonstrate that this challenge can be met with machine-learning techniques, specifically deep convolutional neural networks that are trained with synthetic data derived from physics-based models.

The analytical pipeline for Lorenz-Mie microscopy involves (1) identifying and localizing features of interest in recorded holograms and (2) estimating single-particle properties from the measured intensity pattern in each feature [4, 34, 14, 27, 13]. The CATCH network performs these analytical steps over an exceptionally wide range of operating conditions, yielding results more robustly and 100 times faster than the best reference implementations based on conventional methods [9, 22, 12, 16]. The results are sufficiently accurate to solve outstanding real-world materials-analysis problems and can bootstrap nonlinear least-squares fits for the most demanding applications.

II Methods

II.1 Lorenz-Mie Microscopy

The holographic microscope used for Lorenz-Mie microscopy is shown schematically in Fig. 1(a). It illuminates the sample with a collimated laser beam whose electric field may be modeled as a plane wave of frequency ω and vacuum wavelength λ propagating along the ẑ axis,

\mathbf{E}_0(\mathbf{r}) = u_0 \, e^{ikz} e^{-i\omega t} \, \hat{x}. (1)

Here, u0 is the beam’s amplitude and k=2πnm/λ is the wavenumber of light in a medium of refractive index nm. The beam is assumed to be linearly polarized along x̂. Our implementation uses a fiber-coupled diode laser (Coherent Cube) operating at λ=447 nm. The 10 mW beam is collimated at 3 mm diameter, which more than fills the input pupil of the microscope’s objective lens (Nikon Plan Apo, 100×, numerical aperture 1.4, oil immersion). In combination with a 200 mm tube lens, this objective relays images to a grayscale camera (FLIR Flea3 USB 3.0) with a 1280×1024 pixel sensor, yielding a system magnification of 48 nm/pixel.

Figure 1: Schematic representation of Lorenz-Mie microscopy. (a) A fiber-coupled laser illuminates a colloidal sample. Light scattered by a particle interferes with the rest of the illumination in the focal plane of a microscope that magnifies and relays the interference pattern to a video camera. (b) Each recorded hologram is analyzed to detect features of interest. (c) Each feature is localized within a region whose size is dictated by the local signal-to-noise ratio. (d) Fitting a feature to the model in Eq. (3) yields estimates for 𝐫p, ap and np.

A colloidal particle located at 𝐫p relative to the center of the microscope’s focal plane scatters a small proportion of the illumination to position 𝐫 in the focal plane of the microscope,

\mathbf{E}_s(\mathbf{r}) = E_0(\mathbf{r}_p) \, \mathbf{f}_s(k(\mathbf{r} - \mathbf{r}_p)). (2)

The scattered wave’s relative amplitude, phase and polarization are described by the Lorenz-Mie scattering function, 𝐟s(k𝐫), which generally depends on the particle’s size, shape, orientation and composition [3, 24, 15]. For simplicity, we model the particle as an isotropic homogeneous sphere, so that 𝐟s(k𝐫) depends only on the particle’s radius, ap, and its refractive index, np.

The incident and scattered waves interfere in the microscope’s focal plane. The resulting interference pattern is magnified by the microscope and is relayed to the camera, which records its intensity. Each snapshot in the camera’s video stream constitutes a hologram of the particles in the observation volume. The image in Fig. 1(b) shows a typical hologram of four colloidal silica spheres.

The distinguishing feature of Lorenz-Mie microscopy is the method used to extract information from recorded holograms. Rather than attempting to reconstruct the three-dimensional light field that created the hologram, Lorenz-Mie microscopy instead treats the analysis as an inverse problem, modeling the recorded intensity pattern as [4]

I(\mathbf{r}) = u_0^2 \left| \hat{x} + e^{ikz_p} \, \mathbf{f}_s(k(\mathbf{r} - \mathbf{r}_p)) \right|^2 + I_0, (3)

where I0 is the calibrated dark count of the camera. Fitting Eq. (3) to a measured hologram, as shown in Fig. 1(d), yields the particle’s three-dimensional position, 𝐫p, as well as its radius, ap, and its refractive index, np, at the imaging wavelength.
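For the most demanding applications, this fit can be implemented with standard nonlinear least-squares machinery. The sketch below assumes a background-normalized hologram (u0 = 1) and uses `lm_scattering` as a hypothetical stand-in for a Lorenz-Mie scattering-function implementation such as those provided by packages like HoloPy [12]; it illustrates the structure of the fit rather than reproducing any reference code.

```python
import numpy as np
from scipy.optimize import least_squares

def hologram_model(params, coords, k):
    """Normalized intensity from Eq. (3) at in-plane coordinates.

    coords is an (Npix, 2) array of focal-plane positions. lm_scattering
    is a hypothetical stand-in that returns the (Npix, 3) complex
    scattering amplitude f_s for a sphere at height z_p with radius a_p
    and refractive index n_p.
    """
    x_p, y_p, z_p, a_p, n_p = params
    f_s = lm_scattering(coords - [x_p, y_p], z_p, a_p, n_p, k)  # hypothetical
    field = np.array([1.0, 0.0, 0.0]) + np.exp(1j * k * z_p) * f_s
    return np.sum(np.abs(field)**2, axis=-1)

def refine(measured, coords, initial_guess, k):
    """Bootstrap a pixel-by-pixel fit of Eq. (3) from initial estimates."""
    residuals = lambda p: hologram_model(p, coords, k) - measured
    return least_squares(residuals, initial_guess, method='lm')
```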

II.1.1 Holographic optical trapping

Holographic optical traps at a vacuum wavelength of 1064 nm are projected into the sample using the same objective lens that is used for holographic microscopy. The traps are powered by a fiber laser (IPG Photonics YLR-10-LP) whose wavefronts are imprinted with computer-generated phase holograms [19] using a liquid crystal spatial light modulator (Holoeye Pluto). The modified beam is relayed into the objective lens with a dielectric multilayer dichroic mirror (Semrock), which permits simultaneous holographic trapping and holographic imaging.

II.2 Conventional Analysis

The first challenge in using Eq. (3) to analyze a hologram is to detect features of interest due to particles in the field of view. The concentric-ring pattern of a colloidal particle’s hologram can confound traditional object-detection algorithms that seek out simply connected regions of similar intensity. This problem has been addressed with two-dimensional mappings such as circular Hough transforms that coalesce concentric rings into compact peaks [9, 26, 22] that can be detected and localized with standard peak-finding algorithms [11]. This approach is reasonably effective for detecting and localizing holograms of well-separated particles. It performs poorly for concentrated samples, however, because overlapping scattering patterns create spurious peaks in the transformed image that can trigger false positive detections. These artifacts can be mitigated by limiting the range over which rings are coalesced at the cost of reducing sensitivity to larger holographic features. Optimizing the trade-off between false-positive and false-negative detections requires tuning the search range in parameter space and therefore creates a barrier to a fully-automated implementation.
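The ring-coalescing idea can be sketched with standard scikit-image routines; the following is an illustration of the strategy, not the cited reference implementations, and the upper limit of `radii` is precisely the tuning parameter discussed above.

```python
import numpy as np
from skimage.feature import canny, peak_local_max
from skimage.transform import hough_circle

def detect_ring_centers(norm_hologram, radii=np.arange(5, 60, 2)):
    """Coalesce concentric fringes into compact peaks with a circular
    Hough transform, then locate candidate particle centers."""
    edges = canny(norm_hologram, sigma=2)           # trace fringe boundaries
    votes = hough_circle(edges, radii).sum(axis=0)  # coalesce all radii
    return peak_local_max(votes, min_distance=int(radii.max()),
                          threshold_rel=0.5)        # (row, col) candidates
```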

Having selected regions of interest such as the examples in Fig. 1(c), the next step is to obtain estimates for the particles’ positions and properties that are good enough to bootstrap nonlinear least-squares fits. Reference implementations [12, 4] of Lorenz-Mie microscopy use the initial localization estimate for the in-plane position, and wavefront-curvature estimates for ap and zp. The initial value of np often is based on a priori knowledge.

Ideally, these initial stages of analysis should proceed with minimal intervention, robustly identifying features and yielding reasonable estimates for parameters over the widest domain of possible values. Applications that would benefit from real-time performance also place a premium on fast algorithms, particularly those that perform effectively on standard computer hardware. These requirements can be satisfied with machine-learning algorithms, which surpass conventional algorithms in robustness and speed.

II.3 Machine Learning Analysis

Previous efforts to streamline holographic particle characterization with machine-learning techniques [38, 16] have addressed separate facets of the problem, specifically feature localization [16] and property estimation [38]. Although each effort has been successful in its domain, their approaches are not easily combined. The improvement in end-to-end processing speed and robustness consequently has been modest.

Figure 2: CATCH uses a deep convolutional neural network (YOLOv3) to detect, localize and estimate the extent, wn, of features in normalized holograms. Each feature is cropped from the image, scaled to a standard 201×201 pixel format, and transferred to a second network that estimates the particle’s axial position, radius and refractive index. Each network consists of convolutional layers (CL) that analyze image data and feed their results into fully connected (FC) layers that perform regression.
Figure 3: False negative detections in simulated holograms. (a) Conventional feature detection algorithms miss up to 40% of particles in simulated single-particle holograms. (b) The convolutional neural network misses fewer than 0.1% of the features in the same data set.

We address the need for fast, fully automated hologram analysis with a modular machine-learning system based on highly optimized deep convolutional neural networks. The system, shown in Fig. 2, is trained with synthetic data that cover the entire anticipated domain of operating conditions without requiring manual annotation. Each module yields useful intermediate results, and the end-to-end system effectively bootstraps full-resolution fits, which we validate with experimental data.

The first module identifies features of interest in whole-field holograms, localizes them, and estimates their extents. Each detected feature then is cropped from the image and passed on to the second module, which estimates the particle’s radius, refractive index and axial position. A feature’s pixels and parameter estimates then can be passed on to the third module, not depicted in Fig. 2, which refines the parameter estimates by performing a nonlinear least squares fit to Eq. (3). This modular architecture permits limiting the analysis to just what is required for the application at hand.

II.3.1 Detection and Localization

The detection module is based on the darknet implementation of YOLOv3, a state-of-the-art real-time object-detection framework that uses a convolutional neural network to identify features of interest in images, to localize them and, optionally, to classify them [29]. Given our focus on detection and localization, we adopt the comparatively simple and fast TinyYOLO variant, which consists of 23 convolutional layers with a total of 25 620 adjustable parameters defining convolutional masks and their weighting factors.

Starting with the default darknet weights, we trained the detection network to recognize features in holographic microscopy images using a custom data set consisting of 10 000 synthetic holographic images for training and an additional 1000 images for validation. The synthetic images are designed to mimic experimental holograms over the anticipated domain of operation, with between zero and five particles positioned randomly in the field of view. Particles are randomly assigned radii between ap=200 nm and ap=5 µm and refractive indexes between np=1.338 and np=2.5, and are located along the optical axis at distances from the focal plane between zp=50 pixels and zp=600 pixels, with each axial pixel corresponding to the in-plane scale of 48 nm. The extent of each holographic feature is defined to be the radius enclosing 20 interference fringes and therefore scales with the particle’s size and axial position. Ideal holograms are degraded with 5% additive Gaussian noise. The ground truth for localization training consists of the in-plane coordinates, (xp,yp), of each feature in a hologram together with the features’ extents. We trained for 500 000 epochs with a batch size of 64 images per epoch and 32 subdivisions.
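The ground-truth parameters for such a data set can be sampled in a few lines. The sketch below (NumPy) uses the ranges quoted above with lengths in micrometers at 48 nm/pixel; rendering the corresponding holograms requires a Lorenz-Mie generative model and is omitted.

```python
import numpy as np

rng = np.random.default_rng()

def sample_particles(max_particles=5, mpp=0.048):
    """Draw ground-truth parameters for one synthetic training hologram
    (lengths in micrometers, mpp = micrometers per pixel)."""
    n = rng.integers(0, max_particles + 1)        # zero to five particles
    return [{'x_p': rng.uniform(0, 1280) * mpp,   # in-plane position
             'y_p': rng.uniform(0, 1024) * mpp,
             'z_p': rng.uniform(50, 600) * mpp,   # axial position
             'a_p': rng.uniform(0.2, 5.0),        # radius, um
             'n_p': rng.uniform(1.338, 2.5)}      # refractive index
            for _ in range(n)]
```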

Taking a grayscale image as input, the model returns estimates for each of the detected features’ in-plane positions, (xp,yp), and their extents. These regions of interest can be used immediately to measure particle concentrations, for example, or they can be passed on to the next module for further analysis.

II.3.2 Parameter Estimation

CATCH estimates a particle’s axial position, size, and refractive index by passing the associated block of pixels through a second deep convolutional neural network for regression analysis. The regression network, depicted schematically in Fig. 2, consists of 19 layers with a total of 34 983 trainable parameters and is constructed with the open-source TensorFlow framework [1] using the Keras API. The network’s input is a 201×201 block of pixels cropped from the holographic image. Image data initially passes through a series of convolutional and pooling layers that reduce the dimensionality of the regression space. The flattened output then is fed through a shared fully connected layer into three separate fully connected networks, each of which yields one of the three parameters, zp, ap, and np.
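The topology can be sketched with the Keras functional API as follows. Layer counts and widths here are illustrative placeholders rather than the published 19-layer architecture; the scalar `scale` input anticipates the multi-scale scheme described in the next paragraph.

```python
import tensorflow as tf
from tensorflow.keras import layers

def build_estimator(input_shape=(201, 201, 1)):
    """Sketch of the estimator: a convolutional trunk feeding a shared
    fully connected layer and three single-output regression heads."""
    pixels = layers.Input(shape=input_shape, name='pixels')
    scale = layers.Input(shape=(1,), name='scale')  # crop scale factor
    x = pixels
    for filters in (16, 32, 64):                    # conv + pooling trunk
        x = layers.Conv2D(filters, 3, activation='relu', padding='same')(x)
        x = layers.MaxPooling2D(3)(x)
    x = layers.Concatenate()([layers.Flatten()(x), scale])
    shared = layers.Dense(20, activation='relu', name='shared')(x)
    heads = [layers.Dense(1, name=name)(
                 layers.Dense(20, activation='relu')(shared))
             for name in ('z_p', 'a_p', 'n_p')]     # one head per parameter
    return tf.keras.Model(inputs=[pixels, scale], outputs=heads)
```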

The shared fully connected layer enables the estimator to accommodate holographic features with different extents. Each detected feature is cropped to an integer multiple of the network’s input size based on its estimated extent and then is scaled down by decimation. The scale factor then is fed into the shared layer along with the results of the convolutional analysis. The resulting multi-scale analysis yields consistent results across the entire range of particle properties and positions in the observation volume.
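A minimal sketch of this crop-and-decimate step, assuming integer pixel coordinates and omitting treatment of features near the image edges:

```python
import numpy as np

def crop_and_decimate(hologram, center, extent, shape=201):
    """Crop a feature to the smallest integer multiple of the network's
    input size that covers its estimated extent, decimate the crop to
    shape x shape pixels, and return the scale factor for the shared layer."""
    scale = max(1, int(np.ceil(2 * extent / shape)))  # integer scale factor
    side = scale * shape
    x0, y0 = center[0] - side // 2, center[1] - side // 2
    crop = hologram[y0:y0 + side, x0:x0 + side]
    return crop[::scale, ::scale], scale
```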

We trained the estimator on 10 000 synthetic single-particle holograms and a validation set of 1000 images, using minimal dropout and L2 regularization to prevent overfitting. Particle properties were drawn from the same distributions used to train the localization network. We degrade these holograms with additive noise, as well as with in-plane radial localization errors of up to 10 pixels and estimated extent errors of up to 10%, to simulate worst-case errors from the first stage of analysis. Training with the Adam optimizer [20] proceeded for 5000 epochs with a batch size of 64.
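The worst-case degradation might be implemented as in the following sketch, which draws the noise level and error bounds from the figures quoted above and assumes a hologram normalized to unit background; the jittered center and extent would then feed the crop-and-decimate step.

```python
import numpy as np

rng = np.random.default_rng()

def degrade(hologram, center, extent):
    """Augmentation: 5% additive Gaussian noise, up to 10 pixels of
    in-plane localization error and up to 10% extent error."""
    noisy = hologram + rng.normal(0.0, 0.05, hologram.shape)
    theta = rng.uniform(0.0, 2.0 * np.pi)
    r = rng.uniform(0.0, 10.0)                      # radial error in pixels
    jittered = (center[0] + r * np.cos(theta),
                center[1] + r * np.sin(theta))
    return noisy, jittered, extent * rng.uniform(0.9, 1.1)
```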

III Results and Discussion

III.1 Validation with Synthetic Data

The CATCH network’s performance is validated first with synthetic data and then through experiments on model systems. The synthetic validation data set consists of 10 000 holograms that were generated independently of the data used for training. This data set is designed to assess performance under ideal imaging conditions without the additional complexity of overlapping features in multi-particle holograms. Each synthetic hologram contains one particle with randomly selected properties placed at random in a 1280×1024 pixel field of view and includes 5% additive Gaussian noise.

III.1.1 Processing Speed

Tests were performed with hardware-accelerated versions of each algorithm running on a desktop workstation equipped with an NVIDIA Titan Xp GPU. On this hardware, conventional algorithms [9, 22, 11, 2, 12] perform a complete end-to-end single-frame analysis in roughly 1 s. By contrast, the CATCH network’s detector and localizer requires 20 ms per frame and the machine-learning estimator requires an additional 0.9 ms per feature. This is fast enough for real-time performance at a typical frame rate of 30 frames/s.

III.1.2 Detection Accuracy

When assessing the detection accuracy, we are concerned primarily with the rate of false negative detections. False positive detections are less concerning because they can be identified and filtered through post-processing, but false negatives represent lost information. Conventional feature detection algorithms [9, 26, 22] have been shown to work well for small, weakly-scattering particles. Over the larger range of parameter space plotted in Fig. 3(a), however, conventional algorithms fail to detect up to 40% of particles, even under ideal conditions. Over the same range, the neural network misses fewer than 0.1% of features, as shown in Fig. 3(b), and proposes no false positives. The false negatives occur for very small particles that are nearly index-matched to the medium, whose holograms have the poorest signal-to-noise ratios in this study.

This dramatic improvement in detection reliability greatly expands the parameter space for unattended Lorenz-Mie particle characterization. It allows for automated analysis of larger volumes, larger particles and wider ranges of particle characteristics in a single sample. Such samples could have been analyzed previously, but only with more detailed human intervention.

III.1.3 Localization and Feature Extent

Localization accuracy is assessed for true-positive detections on synthetic images using the input particle locations as the ground truth. As presented in Fig. 4, the net in-plane localization error is smaller than 1.5 pixels (70 nm) across the entire range of parameters and typically is better than 1 pixel. Estimates for the features’ extents vary from the ground truth by 15% with a bias toward underprediction. This figure of merit is a target for future improvement because scaling errors propagate into the regression analysis and increase errors in the estimates for ap and zp.

Figure 4: In-plane localization error, Δr, as a function of particle radius and refractive index, averaged over axial position, obtained with the TinyYOLO implementation of the network localizer.

III.1.4 Characterization

Fig. 5 summarizes the regression network’s performance for estimating axial position, particle size and refractive index in the validation set of synthetic data. Each panel shows the root-mean-square error for one parameter as a function of ap and np, averaged over 𝐫p. Over most of the parameter domain, the estimator predicts the relevant parameter within 10%.

The errors in conventional gradient-descent fits to the Lorenz-Mie theory display pronounced anticorrelations between ap and np [31]. No such cross-parameter correlation is evident in the error surfaces plotted in Fig. 5. This difference highlights a potential benefit of machine-learning regression for complex image-analysis tasks. Unlike conventional fitters, machine-learning algorithms do not attempt to follow smooth paths through complex error landscapes, but rather rely upon an internal representation of the error landscape that is built up during training. Directly reading out an optimal solution from such an internal representation is computationally efficient and less prone to trapping in local minima of the conventional error surface. Most importantly, unsupervised parameter estimation eliminates the need for a priori information or human intervention in colloidal materials characterization.

Figure 5: Root-mean-square errors in (a) axial position, Δzp, (b) radius, Δap and (c) refractive index, Δnp, as a function of radius and refractive index. Results are averaged over placement errors.

III.2 Experimental Validation

Having validated the CATCH system’s performance with synthetic data, we use it to analyze experimental data. Some applications, such as measuring particle concentrations, can be undertaken with the detection and localization module alone. Some other tasks require characterization data and can be performed with the output of the estimation module. Still others use machine-learning estimates to bootstrap nonlinear least-squares fits to Eq. (3). The full end-to-end mode of operation benefits from the speed and robustness of machine-learning estimation and delivers the precision of nonlinear optimization [4, 21].

III.2.1 Fast and accurate colloidal concentrations

CATCH’s detection subsystem rapidly counts particles passing through the microscope’s observation volume and thus can measure their concentration. Its ability to detect particles over a large axial range is an advantage relative to conventional image-based particle-counting techniques [30], which have a limited depth of focus and thus a more restricted observation volume.

Although the holographic microscope’s measurement volume might be known a priori, CATCH also can estimate the effective observation volume from the least bounding rectangular prism that encloses all detected particle locations. This internal calibration is most effective for particles that remain dispersed throughout the height of the channel. For such samples, it addresses uncertainties due to variations in actual channel dimensions and accounts for detection limits near the boundaries of the observation volume.
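A minimal sketch of this internal calibration, assuming an N×3 array of detected positions in micrometers:

```python
import numpy as np

def observation_volume(positions):
    """Volume of the least rectangular prism bounding all detected
    particle positions (N x 3 array, micrometers), in cubic micrometers."""
    span = positions.max(axis=0) - positions.min(axis=0)
    return float(np.prod(span))
```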

We demonstrate machine-learning concentration measurements on a heterogeneous sample composed of two different populations of colloidal spheres of similar sizes, but different refractive indexes dispersed in water at nearly equal number densities. The low-index population (n=1.37) is composed of mesoporous silica (Sigma-Aldrich, CAS no. 7631-86-9), and has a relatively broad size distribution. The high-index (n=1.60) population is composed of monodisperse polystyrene spheres (Duke Standards, catalog no. 4025A). Both populations have a nominal radius of ap=1.25 µm, so conventional particle-characterization techniques would not be able to distinguish between the two populations. The low-contrast features created by low-index particles can be difficult to detect in combination with the more prominent features created by high-index particles, particularly when both appear in the same field of view.

A 30 µL aliquot of this dispersion is introduced into a channel formed by bonding the edges of a #1.5 cover glass to the face of a glass microscope slide with UV-cured adhesive (Norland Products, catalog no. NOA81). The resulting channel is roughly 1 mm wide, 2 cm long and 20 µm deep. Once the cell is mounted on the stage of the holographic microscope, we transport roughly 10 µL of this sample through the microscope’s observation volume at roughly 1 mm/s in a capillary-driven flow. This is fast enough to ensure that most particles are recorded just once with the video camera running at 24 frames/s. A data set of 19 000 video frames recorded over 13 min probes a total volume of 0.36±0.05 µL given the effective observation volume of 20×32×(29±4) µm³, or 19±3 pL. The 15% uncertainty in the axial extent dominates the uncertainty in the effective observation volume. The in-plane dimensions are determined to better than 1%.

CATCH reports 907 features in this sample, which corresponds to a net concentration of (2.5±0.4)×10⁶ particles/mL. This value is consistent with expectations based on the concentrations of the stock dispersions and also is consistent with the value of 2.6×10⁶ particles/mL obtained with a commercial xSight holographic particle characterization instrument. CATCH is fast enough to complete the concentration estimate in the time required to record the images.
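As a check of the arithmetic:

```python
count = 907                    # features detected by CATCH
volume_mL = 0.36e-3            # probed volume: 0.36 uL (15% uncertainty)
print(f'{count / volume_mL:.1e} particles/mL')   # 2.5e+06
```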

III.2.2 Characterizing heterogeneous dispersions

Figure 6: Machine-learning estimates for radius, ap, and refractive index, np, of a mixture of polystyrene and mesoporous silica spheres. Each point represents the properties of a single particle and is colored by the probability density of the observations, P(ap,np). Side bars show projections of this joint probability distribution into the probability distributions for particle radius, P(ap), and particle refractive index, P(np). The red dashed curve is a fit to a sum of two Gaussian distributions.

The detection subsystem does not distinguish between different populations of particles in the same sample. The estimation subsystem, however, can differentiate particles by refractive index. The scatter plot in Fig. 6 shows machine-learning estimates for the radius and refractive index of the particles detected in the mixed sample, each point representing the properties of one sphere. Points are colored by the density of observations, P(ap,np). The projected size distribution, P(ap), shows only a single peak because the two populations of spheres have the same mean radius. Two peaks are clearly resolved, however, in the projected distribution of refractive indexes, P(np), which also is plotted in Fig. 6. The dashed (red) curve superimposed on this distribution is the sum of two Gaussian probability distributions, which resolves two distinct populations, one at np=1.36±0.04 and the other at np=1.57±0.12. These values are consistent with expectations for mesoporous silica and polystyrene respectively, and were obtained from the machine-learning estimates without invoking a priori knowledge.
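A sketch of such a two-Gaussian decomposition with SciPy, where `n_p` is the array of single-particle index estimates and the initial guesses bracket the expected silica and polystyrene peaks:

```python
import numpy as np
from scipy.optimize import curve_fit

def two_gaussians(n, a1, mu1, s1, a2, mu2, s2):
    """Sum of two Gaussians modeling P(n_p)."""
    g = lambda mu, s: np.exp(-0.5 * ((n - mu) / s)**2)
    return a1 * g(mu1, s1) + a2 * g(mu2, s2)

def resolve_populations(n_p, p0=(1.0, 1.36, 0.04, 1.0, 1.57, 0.1)):
    """Fit the projected distribution P(n_p) to two Gaussian peaks."""
    counts, edges = np.histogram(n_p, bins=50, density=True)
    centers = 0.5 * (edges[:-1] + edges[1:])
    popt, pcov = curve_fit(two_gaussians, centers, counts, p0=p0)
    return popt, np.sqrt(np.diag(pcov))   # parameters and uncertainties
```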

III.2.3 Tracking confined sedimentation

To illustrate fully refined tracking based on CATCH estimation, we measure the sedimentation of a colloidal sphere between two parallel horizontal surfaces. The influence of slot confinement on a colloidal sphere’s in-plane drag coefficient has been reported previously using conventional imaging [10]. The axial drag coefficient has not been reported, presumably because of the difficulty of measuring the axial position with sufficient accuracy.

We perform the measurement on a colloidal silica sphere (Bangs Laboratories, catalog number SS05N) dispersed in 30 µL of deionized water that is contained in a glass sample chamber formed by bonding the edges of a glass cover slip to the face of a glass microscope slide. Holographic optical traps are projected into the sample using the same objective lens that is used to record holograms [19]. We lift the sphere to the top of its sample chamber using a holographic optical trap [19, 25] and then release it. Analyzing the particle’s trajectory then yields an estimate for the buoyant mass density that can be compared with an orthogonal estimate based on the particle’s holographically measured size and refractive index.

The data points in Fig. 7 are machine-learning estimates of the particle’s axial position, zp(t), as a function of time, recorded at 24 frames/s. The solid (black) curve is obtained by fitting the sphere’s hologram to Eq. (3) starting from machine-learning estimates for 𝐫p(t), ap and np. These fits converge to ap=1.14±0.04 µm and np=1.398±0.005, which are consistent with the manufacturer’s specification and with the population-averaged properties, ap=1.17±0.15 µm and np=1.42±0.02, obtained with a commercial holographic particle characterizer (Spheryx xSight) operating at the same laser wavelength.

The particle sediments under gravity at a rate,

\frac{dz_p}{dt} = -\frac{4}{3} \pi a_p^3 \, (\rho_p - \rho_m) \, g \, \mu(z_p), (4a)
that depends on the difference between its mass density, ρp, and the mass density of the medium, ρm. Hydrodynamic coupling to the parallel glass walls reduces the sphere’s mobility, μ(zp), by an amount that depends on its axial position within the channel. Specifically, the flow field due to the sedimenting sphere is modified by no-slip boundary conditions at the lower and upper walls, which are located at z=z0 and z=z0+H relative to the microscope’s focal plane, respectively. For simplicity, we model the resulting dependence by combining lowest-order single-wall corrections [17] with a linear superposition approximation to obtain
\mu(z) \approx \frac{1}{6 \pi \eta a_p} \left( 1 - \frac{9}{8} \frac{a_p}{z - z_0} - \frac{9}{8} \frac{a_p}{H - z + z_0} \right), (4b)

where η = 0.89 mPa·s is the viscosity of water. The solid (red) curve in Fig. 7 is a fit to the prediction of Eq. (4) with z0, H and ρp as adjustable parameters. This fit yields ρp = 2.18^{+0.07}_{−0.20} g/cm³ assuming ρm = 0.997 g/cm³ for water.
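Eq. (4) can be integrated numerically and adjusted to the measured trajectory. The following SciPy sketch works in SI units with the fitted radius quoted above; the initial guesses are illustrative, not the values used for Fig. 7.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import least_squares

a_p, eta, rho_m, g = 1.14e-6, 0.89e-3, 997.0, 9.81   # SI units

def mobility(z, z0, H):
    """Eq. (4b): Stokes mobility with lowest-order wall corrections."""
    return (1 - 9 * a_p / (8 * (z - z0)) - 9 * a_p / (8 * (H - z + z0))) \
        / (6 * np.pi * eta * a_p)

def trajectory(t, z_init, z0, H, rho_p):
    """Integrate Eq. (4a) for the settling trajectory z_p(t)."""
    rhs = lambda t, z: (-4 * np.pi * a_p**3 * (rho_p - rho_m) * g / 3
                        * mobility(z, z0, H))
    return solve_ivp(rhs, (t[0], t[-1]), [z_init], t_eval=t).y[0]

def fit_sedimentation(t, z_data):
    """Adjust z0, H and rho_p to a measured axial trajectory (meters)."""
    resid = lambda p: trajectory(t, z_data[0], *p) - z_data
    return least_squares(resid, [z_data.min() - 1e-6, 2.0e-5, 2000.0])
```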

The particle’s comparatively low mass density is consistent with its low refractive index. Maxwell Garnett effective medium theory [23] suggests that the particle’s density may be estimated from its refractive index as

\rho_p = \rho_0 \, \frac{L_m(n_p) - L_m(1)}{L_m(n_0) - L_m(1)}, (5)

where ρ0 = 2.20 g/cm³ is the density of fused silica, n0 = 1.465 is the refractive index of fused silica at the imaging wavelength, and

L_m(n) = \frac{n^2 - n_m^2}{n^2 + 2 n_m^2} (6)

is the Lorentz-Lorenz function. The result, ρp = 1.90±0.10 g/cm³, is consistent with the lower bound of the kinematic estimate, and so helps to validate [21] the accuracy and precision with which CATCH characterizes and tracks colloidal particles.
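Eqs. (5) and (6) reduce to a few lines. With an assumed value of nm = 1.340 for water at the imaging wavelength, the fitted index np = 1.398 indeed yields approximately 1.90 g/cm³:

```python
def lorentz_lorenz(n, n_m=1.340):
    """Eq. (6); n_m = 1.340 is an assumed value for water at 447 nm."""
    return (n**2 - n_m**2) / (n**2 + 2.0 * n_m**2)

def effective_density(n_p, n_0=1.465, rho_0=2.20):
    """Eq. (5): density of a porous silica sphere from its index."""
    L = lorentz_lorenz
    return rho_0 * (L(n_p) - L(1.0)) / (L(n_0) - L(1.0))

print(effective_density(1.398))   # ~1.90 g/cm^3
```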

Figure 7: Estimated (points) and refined (solid curve) axial trajectory of a colloidal silica sphere being lifted to the upper wall of a water-filled channel and allowed to sediment to the lower wall under gravity. The heavy (red) curve is a fit to Eq. (4) for the density of the particle and the positions of the walls, which are indicated by horizontal dashed lines.

IV Conclusions

CATCH is an end-to-end machine-learning system for analyzing the properties of colloidal dispersions from holographic microscopy images. Based on YOLO and a custom-designed deep convolutional neural network, this system delivers the full characterization and tracking power of Lorenz-Mie microscopy with greatly improved speed and robustness. This implementation has been validated both with simulated data and also through experimental measurements on model colloidal dispersions. These measurements illustrate the utility of CATCH for measuring the concentrations of colloidal dispersions, for characterizing the particles in heterogeneous dispersions, and for measuring single-particle dynamics.

More generally, CATCH embodies a paradigm shift in measurement theory, with machine-learning algorithms replacing physical mechanisms and physics-based models in precision measurements. The availability of such "brain-in-a-box" instruments increases the speed and robustness of such measurements and also promises access to physical phenomena that cannot readily be measured by other means.

Our open-source implementation of the end-to-end CATCH system is available online at https://github.com/laltman2/CNNLorenzMie.

Acknowledgements

This work was supported primarily by the MRSEC program of the National Science Foundation under Award Number DMR-1420073. Additional support was provided by the SBIR program of the National Institutes of Health under Award Number R44TR001590. The Titan Xp GPU used for this work was provided by a GPU Grant from NVIDIA. The Spheryx xSight holographic characterization instrument was acquired by the NYU MRSEC as shared instrumentation. The holographic trapping and microscopy system used for experiments in Sec. III.2 was developed under the MRI program of the NSF under Award Number DMR-0922680.

References

  • [1] M. Abadi, A. Agarwal, P. Barham, E. Brevdo, Z. Chen, C. Citro, G. S. Corrado, A. Davis, J. Dean, M. Devin, S. Ghemawat, I. Goodfellow, A. Harp, G. Irving, M. Isard, Y. Jia, R. Jozefowicz, L. Kaiser, M. Kudlur, J. Levenberg, D. Mané, R. Monga, S. Moore, D. Murray, C. Olah, M. Schuster, J. Shlens, B. Steiner, I. Sutskever, K. Talwar, P. Tucker, V. Vanhoucke, V. Vasudevan, F. Viégas, O. Vinyals, P. Warden, M. Wattenberg, M. Wicke, Y. Yu, and X. Zheng (2015) TensorFlow: large-scale machine learning on heterogeneous systems. Software available from tensorflow.org.
  • [2] D. Allan, T. Caswell, N. Keim, and C. van der Wel (2016) Trackpy v0.3.2. http://doi.org/10.5281/zenodo.60550.
  • [3] C. F. Bohren and D. R. Huffman (1983) Absorption and Scattering of Light by Small Particles. Wiley Interscience, New York.
  • [4] S. Lee, Y. Roichman, G. Yi, S. Kim, S. Yang, A. van Blaaderen, P. van Oostrum, and D. G. Grier (2007) Characterizing and tracking single colloidal particles with video holographic microscopy. Opt. Express 15, pp. 18275–18282.
  • [5] S. Lee and D. G. Grier (2007) Holographic microscopy of holographically trapped three-dimensional structures. Opt. Express 15, pp. 1505–1512.
  • [6] F. C. Cheong, S. Duarte, S. Lee, and D. G. Grier (2008) Holographic microrheology of polysaccharides from Streptococcus mutans biofilms. Rheol. Acta 48, pp. 109–115.
  • [7] F. C. Cheong, P. Kasimbeg, D. B. Ruffner, E. H. Hlaing, J. M. Blusewicz, L. A. Philips, and D. G. Grier (2017) Holographic characterization of colloidal particles in turbid media. Appl. Phys. Lett. 111, pp. 153702.
  • [8] F. C. Cheong, B. J. Krishnatreya, and D. G. Grier (2010) Strategies for three-dimensional particle tracking with holographic video microscopy. Opt. Express 18, pp. 13563–13573.
  • [9] F. C. Cheong, B. Sun, R. Dreyfus, J. Amato-Grill, K. Xiao, L. Dixon, and D. G. Grier (2009) Flow visualization and flow cytometry with holographic video microscopy. Opt. Express 17, pp. 13071–13079.
  • [10] E. R. Dufresne, D. Altman, and D. G. Grier (2001) Brownian dynamics of a sphere between parallel walls. Europhys. Lett. 53 (2), pp. 264–270.
  • [11] J. C. Crocker and D. G. Grier (1996) Methods of digital video microscopy for colloidal studies. J. Colloid Interface Sci. 179, pp. 298–310.
  • [12] T. G. Dimiduk, J. Fung, R. W. Perry, and V. N. Manoharan. HoloPy – hologram processing and light scattering in Python.
  • [13] J. Fung, R. W. Perry, T. G. Dimiduk, and V. N. Manoharan (2012) Imaging multiple colloidal particles by fitting electromagnetic scattering solutions to digital holograms. J. Quant. Spectr. Rad. Trans. 113, pp. 2482–2489.
  • [14] J. Fung, K. E. Martin, R. W. Perry, D. M. Kaz, R. McGorty, and V. N. Manoharan (2011) Measuring translational, rotational, and vibrational dynamics in colloids with digital holographic microscopy. Opt. Express 19, pp. 8051–8065.
  • [15] G. Gouesbet and G. Gréhan (2011) Generalized Lorenz-Mie Theories. Springer-Verlag, Berlin.
  • [16] M. D. Hannel, A. Abdulali, M. O’Brien, and D. G. Grier (2018) Machine-learning techniques for fast and accurate feature localization in holograms of colloidal particles. Opt. Express 26, pp. 15221–15231.
  • [17] J. Happel and H. Brenner (1991) Low Reynolds Number Hydrodynamics. Kluwer, Dordrecht.
  • [18] Y. Roichman, B. Sun, A. Stolarski, and D. G. Grier (2008) Influence of non-conservative optical forces on the dynamics of optically trapped colloidal spheres: the fountain of probability. Phys. Rev. Lett. 101, pp. 128301.
  • [19] D. G. Grier (2003) A revolution in optical manipulation. Nature 424 (6950), pp. 810–816.
  • [20] D. P. Kingma and J. Ba (2014) Adam: a method for stochastic optimization. arXiv:1412.6980.
  • [21] B. J. Krishnatreya, A. Colen-Landy, P. Hasebe, B. A. Bell, J. R. Jones, A. Sunda-Meya, and D. G. Grier (2014) Measuring Boltzmann’s constant through holographic video microscopy of a single sphere. Am. J. Phys. 82, pp. 23–31.
  • [22] B. J. Krishnatreya and D. G. Grier (2014) Fast feature identification for holographic tracking: the orientation alignment transform. Opt. Express 22, pp. 12773–12778.
  • [23] V. Markel (2016) Introduction to the Maxwell Garnett approximation: tutorial. J. Opt. Soc. Am. A 33, pp. 1244–1256.
  • [24] M. I. Mishchenko, L. D. Travis, and A. A. Lacis (2001) Scattering, Absorption and Emission of Light by Small Particles. Cambridge University Press, Cambridge.
  • [25] M. J. O’Brien and D. G. Grier (2019) Above and beyond: holographic tracking of axial displacements in holographic optical tweezers. Opt. Express 27, pp. 24866–25435.
  • [26] R. Parthasarathy (2012) Rapid, accurate particle tracking by calculation of radial symmetry centers. Nature Methods 9, pp. 724–726.
  • [27] R. W. Perry, G. N. Meng, T. G. Dimiduk, J. Fung, and V. N. Manoharan (2012) Real-space studies of the structure and dynamics of self-assembled colloidal clusters. Faraday Discuss. 159, pp. 211–234.
  • [28] L. A. Philips, D. B. Ruffner, F. C. Cheong, J. M. Blusewicz, P. Kasimbeg, B. Waisi, J. McCutcheon, and D. G. Grier (2017) Holographic characterization of contaminants in water: differentiation of suspended particles in heterogeneous dispersions. Water Research 122, pp. 431–439.
  • [29] J. Redmon and A. Farhadi (2018) YOLOv3: an incremental improvement. arXiv:1804.02767.
  • [30] D. C. Ripple and Z. Hu (2016) Correcting the relative bias of light obscuration and flow imaging particle counters. Pharm. Res. 33 (3), pp. 653–672.
  • [31] D. B. Ruffner, F. C. Cheong, J. M. Blusewicz, and L. A. Philips (2018) Lifting degeneracy in holographic characterization of colloidal particles using multi-color imaging. Opt. Express 26, pp. 13239–13251.
  • [32] J. Sheng, E. Malkiel, and J. Katz (2006) Digital holographic microscope for measuring three-dimensional particle distributions and motions. Appl. Opt. 45 (16), pp. 3893–3901.
  • [33] H. Shpaisman, B. J. Krishnatreya, and D. G. Grier (2012) Holographic microrefractometer. Appl. Phys. Lett. 101, pp. 091102.
  • [34] F. Soulez, L. Denis, C. Fournier, É. Thiébaut, and C. Goepfert (2007) Inverse-problem approach for particle digital holography: accurate location based on local optimization. J. Opt. Soc. Am. A 24, pp. 1164–1171.
  • [35] C. Wang, H. W. Moyses, and D. G. Grier (2015) Stimulus-responsive colloidal sensors with fast holographic readout. Appl. Phys. Lett. 107, pp. 051903.
  • [36] C. Wang, H. Shpaisman, A. D. Hollingsworth, and D. G. Grier (2015) Celebrating Soft Matter’s 10th anniversary: monitoring colloidal growth with holographic microscopy. Soft Matter 11, pp. 1062–1066.
  • [37] C. Wang, X. Zhong, D. B. Ruffner, A. Stutt, L. A. Philips, M. D. Ward, and D. G. Grier (2016) Holographic characterization of protein aggregates. J. Pharm. Sci. 105, pp. 1074–1085.
  • [38] A. Yevick, M. Hannel, and D. G. Grier (2014) Machine-learning approach to holographic particle characterization. Opt. Express 22, pp. 26884–26890.