Machine learning enables precise holographic characterization of colloidal materials in real time

Lauren E. Altman and David G. Grier
Department of Physics and Center for Soft Matter Research, New York University, New York, NY 10003, USA
Abstract

Holographic particle characterization uses in-line holographic video microscopy to track and characterize individual colloidal particles dispersed in their native fluid media. Applications range from fundamental research in statistical physics to product development in biopharmaceuticals and medical diagnostic testing. The information encoded in a hologram can be extracted by fitting to a generative model based on the Lorenz-Mie theory of light scattering. Treating hologram analysis as a high-dimensional inverse problem has been exceptionally successful, with conventional optimization algorithms yielding nanometer precision for a typical particle’s position and part-per-thousand precision for its size and index of refraction. Machine learning previously has been used to automate holographic particle characterization by detecting features of interest in multi-particle holograms and estimating the particles’ positions and properties for subsequent refinement. This study presents an updated end-to-end neural-network solution called CATCH (Characterizing and Tracking Colloids Holographically) whose predictions are fast, precise, and accurate enough for real-world high-throughput applications. The ability of CATCH to learn a representation of Lorenz-Mie theory that fits within a diminutive 200 kB hints at the possibility of developing a greatly simplified formulation of light scattering by small objects.

I Introduction

Machine learning algorithms are revolutionizing measurement science by decoupling quantitative analysis of experimental data from the mathematical representation of the underlying theory [1, 2]. The abstract representation of a measurement principle that is encoded in a well-designed and well-trained machine-learning system can rival the precision and accuracy attained by fitting to an analytic theory and typically yields results substantially faster. Gains in speed and robustness have been particularly impressive for measurement techniques based on video streams [3, 4, 5, 6], which typically involve distilling small quantities of valuable information from large volumes of noisy data. Previous studies have demonstrated that machine-learning algorithms dovetail well with holographic video microscopy [7], identifying features of interest within experimentally recorded holograms [8, 4, 9] and extracting individual particles’ positions and characteristics from the information encoded in those features [10, 4, 9].

Using holography to count, track and characterize colloidal particles provides unprecedented insights into the composition and microscopic dynamics of colloidal dispersions [11, 12, 13], with applications ranging from fundamental research in statistical physics [14, 15] to formulation and manufacture of biopharmaceuticals [16, 17, 18, 19] and medical testing [20, 21]. Hologram analysis is a challenging inverse problem [22, 7] both because recorded intensity patterns necessarily omit half of the information about the light’s amplitude and phase profiles and also because the underlying Lorenz-Mie theory of light scattering is notoriously complicated [23, 24, 25]. Extracting quantitative information from holograms is an unusual application for machine learning in two respects: (1) it involves regression of continuously varying properties from experimental data and (2) the machine-learning system can be trained with synthetic data generated from an exact theory [10, 4, 26]. The trained system therefore embodies a simplified representation of the underlying theory over a specified parameter domain that can be computed rapidly enough to be useful for real-world applications.

Figure 1: Schematic representation of Lorenz-Mie microscopy using CATCH machine-learning analysis. (a) Collimated laser light illuminates a colloidal sample. Light scattered by a particle interferes with the rest of the illumination in the focal plane of a microscope, which magnifies and relays the interference pattern to a video camera. (b) A typical recorded hologram of micrometer-diameter silica and polystyrene colloidal spheres. Superimposed boxes denote features corresponding to individual particles. (c) The CATCH machine-learning pipeline consists of two modules. The Localizer, implemented with YOLOv5, finds in-plane coordinates, xp and yp, for detected features and generates bounding boxes such as the examples in (b). Feeding these features into the Estimator yields predictions for 𝐫p, ap and np.

Previous machine-learning implementations of holographic particle characterization surpassed conventional algorithms [27] for detecting features associated with particles in complicated multi-particle holograms [8, 4, 9]. They fared less well, however, at reliably extracting information from those features [10, 4, 9], typically resolving particle radius and refractive index with 5 % accuracy [9], compared with the part-per-thousand resolution obtained with iterative optimization [14]. Even so, the precision afforded by such machine-learning implementations is competitive with standard particle-resolved sizing techniques such as electron microscopy and is good enough to bootstrap iterative optimization for especially demanding applications. Most importantly, machine-learning analysis can be applied to novel systems without requiring a priori knowledge of their composition.

Guided by an analysis of the information content encoded in colloidal particles’ holograms, we designed and implemented a deep neural network called CATCH that rapidly performs fully-automated analyses of in-line holographic microscopy images to detect, localize and characterize individual colloidal particles [4]. Here, we introduce enhancements to the CATCH architecture that improve the precision and accuracy of parameter estimation substantially enough to rival iterative optimization algorithms across best-case parameter ranges. The availability of a fast end-to-end solution for colloidal characterization creates opportunities for high-throughput applications in areas such as medical diagnostics [28, 20, 29] and industrial process control [18]. The ability of CATCH to encapsulate the complexities of Lorenz-Mie theory in a small memory footprint furthermore hints at the existence of a simplified representation of light-scattering theory that would benefit areas as diverse as astrophysics and industrial materials characterization.

I.1 Lorenz-Mie Microscopy

Figure 1(a) schematically represents an in-line holographic microscope that is suitable for characterizing and tracking colloidal particles [11, 30]. A sample containing colloidal particles is illuminated by a collimated laser beam whose electric field may be modeled as a plane wave of frequency ω and vacuum wavelength λ propagating along the ẑ axis,

$\mathbf{E}_0(\mathbf{r}, t) = E_0\, e^{ikz}\, e^{-i\omega t}\, \hat{x}.$   (1)

Here, $E_0$ is the field's amplitude and $k = 2\pi n_m/\lambda$ is the wavenumber of light in a medium of refractive index $n_m$. The beam is assumed to be linearly polarized along $\hat{x}$. Our implementation uses a fiber-coupled diode laser (Coherent, Cube) operating at λ = 447 nm. The 10 mW beam is collimated at a diameter of 3 mm, which more than fills the input pupil of the microscope's objective lens (Nikon Plan Apo, 100×, numerical aperture 1.4, oil immersion). The objective lens relays images through a 200 mm tube lens to a gray-scale camera (FLIR, Flea3 USB 3.0) with a 1280×1024 pixel sensor, yielding a system magnification of 48 nm/pixel and a dynamic range of 8 bits/pixel.

A colloidal particle located at 𝐫p scatters a small proportion of the illumination to position 𝐫 in the focal plane of the microscope,

$\mathbf{E}_s(\mathbf{r}, t) = E_0\, e^{ikz_p}\, \mathbf{f}_s(k(\mathbf{r} - \mathbf{r}_p))\, e^{-i\omega t}.$   (2)

The scattered wave’s relative amplitude, phase and polarization are described by the Lorenz-Mie scattering function, 𝐟s(k𝐫), which generally depends on the particle’s size, shape, orientation and composition [23, 24, 25]. For simplicity, we model the particle as an isotropic homogeneous sphere, so that 𝐟s(k𝐫) depends only on the particle’s radius, ap, and refractive index, np.

The incident and scattered waves interfere in the microscope’s focal plane. The resulting interference pattern is magnified by the microscope and is relayed to the camera [31], which records its intensity. Each snapshot in the camera’s video stream constitutes a hologram of the particles in the observation volume. The image in Fig. 1(b) is a typical experimentally recorded hologram of colloidal silica and polystyrene spheres.

The distinguishing feature of Lorenz-Mie microscopy is the method used to extract information from recorded holograms. Rather than attempting to reconstruct the three-dimensional light field that created the hologram, Lorenz-Mie microscopy instead treats the analysis as an inverse problem, modeling the intensity pattern recorded in the plane z=0 as [11]

$I(\mathbf{r}) = E_0^2\, \bigl| \hat{x} + e^{ikz_p}\, \mathbf{f}_s(k(\mathbf{r} - \mathbf{r}_p)) \bigr|^2 + I_0,$   (3)

where I0 is the calibrated dark count of the camera. Fitting Eq. (3) to a measured hologram yields estimates for the three-dimensional position, 𝐫p, radius, ap, and refractive index, np, for each particle in the field of view.
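To make the inverse problem concrete, the sketch below fits the scalar form of Eq. (3) with scipy.optimize.least_squares. The Lorenz-Mie scattering function itself is left as a placeholder to be supplied by a library such as pylorenzmie; the function names and the scalar simplification are ours and do not represent any published API.

```python
import numpy as np
from scipy.optimize import least_squares

def f_s(coords, r_p, a_p, n_p, k):
    """Lorenz-Mie scattering function f_s(k(r - r_p)). A full vector
    implementation is provided by packages such as pylorenzmie; here it
    is a placeholder so the fitting logic can be read on its own."""
    raise NotImplementedError('supplied by a Lorenz-Mie library')

def model(params, coords, k):
    """Scalar form of Eq. (3), normalized by E0^2 with the dark
    count I0 already subtracted."""
    x_p, y_p, z_p, a_p, n_p = params
    field = f_s(coords, np.array([x_p, y_p, z_p]), a_p, n_p, k)
    return np.abs(1.0 + np.exp(1j * k * z_p) * field) ** 2

def fit_hologram(intensity, coords, k, p0):
    """Refine (x_p, y_p, z_p, a_p, n_p) by nonlinear least squares,
    starting from the initial estimate p0."""
    residuals = lambda p: model(p, coords, k) - intensity
    return least_squares(residuals, p0, method='lm')
```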

II Algorithms for hologram analysis

II.1 Feature Detection and Localization

Analyzing a hologram such as the example in Fig. 1(b) begins with detecting features of interest in the recorded image. This is a challenging image analysis problem because the number of features typically is not known a priori, each feature can cover a large area with alternating bright and dark fringes, and neighboring particles’ fringes can interfere with each other. Circular Hough transforms [32, 33, 27], voting algorithms [32] and symmetry-based transforms [27, 34] leverage a feature’s radial symmetry to coalesce its concentric rings into a simple peak that can be detected with standard particle-tracking algorithms [35]. Image noise and interference artifacts can violate the assumptions underlying these algorithms, leading to poor localization and an undesirable rate of false-positive and false-negative detections [4].
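As an illustration of the conventional approach, the following sketch detects ring-like features with scikit-image's circular Hough transform; the radius range and peak count are illustrative values rather than settings from the reference pipeline.

```python
import numpy as np
from skimage.feature import canny
from skimage.transform import hough_circle, hough_circle_peaks

def detect_features(hologram, radii=np.arange(20, 120, 10)):
    """Locate ring-like holographic features with a circular Hough transform."""
    edges = canny(hologram, sigma=2)           # trace the bright/dark fringes
    accumulator = hough_circle(edges, radii)   # vote for candidate centers
    _, cx, cy, r = hough_circle_peaks(accumulator, radii,
                                      total_num_peaks=10)
    return np.column_stack([cx, cy, r])        # candidate (x, y, extent) triples
```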

II.2 Pixel Selection

Having detected and localized a feature, the analytical pipeline selects pixels for further analysis. Limiting the selection to too small a range discards information from the diffraction pattern’s outer fringes. Selecting too large a range reduces the sample’s signal-to-noise ratio and, worse, can introduce interference from neighboring spheres. A suitable range can be estimated by counting diffraction fringes [27]. Additional efficiency can be gained by sampling a subset of the pixels within that range [36].
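A minimal sketch of this selection step, assuming the detector supplies a centroid and an extent measured in pixels; the sampling fraction is illustrative and follows the random-subset strategy of Ref. [36].

```python
import numpy as np

def select_pixels(hologram, center, extent, fraction=0.1,
                  rng=np.random.default_rng()):
    """Crop a feature to its estimated extent, then keep a random
    subset of the enclosed pixels to speed up subsequent fitting."""
    x0, y0 = center
    ny, nx = hologram.shape
    y, x = np.mgrid[0:ny, 0:nx]
    inside = (x - x0) ** 2 + (y - y0) ** 2 <= extent ** 2
    idx = np.flatnonzero(inside)
    keep = rng.choice(idx, size=int(fraction * idx.size), replace=False)
    return x.ravel()[keep], y.ravel()[keep], hologram.ravel()[keep]
```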

II.3 Parameter Estimation

Information is extracted from the selected pixels by fitting their intensity values with the generative model in Eq. (3). Such fits typically involve iterative nonlinear refinement of the adjustable parameters whose convergence to an optimal solution is never certain [37]. Successful optimization relies on good starting estimates for the adjustable parameters and typically yields values with part-per-thousand precision [14].

Pioneering implementations of holographic particle characterization relied on manual annotation of features in holograms and a priori knowledge of particle properties to initialize fits to generative models [11]. Automated initialization might use wavefront curvature to estimate axial position [38, 39, 11] and fringe spacings to estimate particle size [40, 41, 38, 11]. These methods typically work well over a limited range of parameters. Monte Carlo methods can cover a wider range by initializing fits from multiple starting points and selecting the best solution overall [42]. This approach achieves robust convergence, but at a considerably higher computational cost.
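A sketch of the multi-start strategy, reusing the hypothetical fit_hologram routine sketched in Sec. I.1; the bounds and trial count are illustrative.

```python
import numpy as np

def multistart_fit(intensity, coords, k, bounds, trials=10,
                   rng=np.random.default_rng()):
    """Fit from several random starting points and keep the result
    with the lowest residual cost (proportional to chi-squared)."""
    lo, hi = np.asarray(bounds).T     # per-parameter (lower, upper) limits
    best = None
    for _ in range(trials):
        p0 = rng.uniform(lo, hi)      # random starting point in the domain
        result = fit_hologram(intensity, coords, k, p0)
        if best is None or result.cost < best.cost:
            best = result
    return best
```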

II.4 Effective parameter ranges

The Lorenz-Mie theory for light scattering by homogeneous spheres is the simplest and most effective model for analyzing holograms of colloidal particles. Real-time implementations return tracking and characterization data as fast as the camera records holograms. Applying this analysis to holograms of aspherical and inhomogeneous particles yields values for particle position, size and refractive index that reflect the properties of an effective sphere enclosing the particle [43]. Effective-sphere properties can be related to an inhomogeneous particle's true properties through effective-medium theory [43, 44, 16, 45, 46, 47]. When applied to colloidal dimers, for example, effective-sphere analysis can yield the asymmetric particle's three-dimensional orientation in addition to its three-dimensional position [29]. Practical implementations of Lorenz-Mie analysis, however, are limited by instrumental and computational constraints.

Figure 2: Schematic overview of the Estimator network. A cropped hologram with dimensions w×w is scaled to a standard size of 201×201 pixels before being fed into a cascade of convolutional layers that distill it into a 400-wide vector. These values together with the scale factor are analyzed by a fully-connected layer to produce a 20-wide vector that serves as an optimal representation of the information in the original hologram. This representation is then parsed by separate fully-connected layers into estimates for the particle's axial position, zp, radius, ap, and refractive index, np.

II.4.1 Axial Position

The scale and nature of a recorded holographic feature depend on how far the scattered light propagates before it reaches the imaging plane and on the phase of the reference beam at that plane. As the particle approaches the imaging plane, the separation between diffraction fringes becomes smaller than the camera's pixel size, and information about the particle's properties is lost. The spatial resolution of our reference instrument sets the lower bound for axial tracking at roughly zp ≳ 5 µm. Conversely, as the particle moves away from the focal plane, its scattering pattern spreads over increasingly many pixels to the detriment of the signal-to-noise ratio. This sets the upper limit on axial tracking in our microscope at zp ≲ 50 µm [31].

II.4.2 Particle Size

Both commercial and academic implementations of holographic particle characterization work with particles ranging in diameter from 500 nm to 10 µm [11]. The lower limit is set by the low signal-to-noise ratio for in-line holograms created by weak scatterers. Switching to off-axis or dark-field holography improves the signal-to-noise ratio for weak scatterers and extends the lower size limit down to 50 nm [48]. The upper limit is set by the tendency of large particles to scatter light strongly enough to saturate the camera. Mitigating this effect by moving to lower magnification, including lensless implementations, provides tracking and sizing information for particles as large as 50 µm using simplified generative models [49].

II.4.3 Refractive Index

Hologram fitting has been demonstrated for dielectric spheres with refractive indexes ranging from bubbles with np = 1.0 [18] up to titanium dioxide with np = 2.8 [11]. The particles can have refractive indexes either higher or lower than that of the surrounding medium [47]. Successful tracking and characterization require only that the refractive index of the particle differ from that of the medium by Δn = np − nm = ±0.002 [18].

II.4.4 Morphology

The choice of scattering function, 𝐟s(k(𝐫 − 𝐫p)), in Eq. (3) establishes what kinds of particles can be analyzed. The present study focuses on the Lorenz-Mie scattering function for homogeneous spheres. Other choices include scattering functions for core-shell particles and layered spheres, for ellipsoids, and for spherocylinders [50, 51, 52]. Elementary scattering functions can be combined to treat more highly structured particles such as dimers and clusters of spheres [53, 54]. Increasing the complexity of the model increases the demands on the analytical pipeline to find optimal solutions for each of the model's adjustable parameters.

III CATCH

Figure 1(c) presents the CATCH machine-learning system that performs all of the analytical operations identified in Sec. II with an integrated pipeline [4]. Machine-learning algorithms have been adopted for holographic particle characterization to expand the effective range, improve robustness against false positive and false negative detections, and reduce processing time compared with conventional image-analysis techniques [10, 8, 4, 55]. The original implementations used distinct types of trainable algorithms for feature detection [8, 4], pixel selection, and parameter regression [10, 4]. CATCH, by contrast, uses the widely-adopted YOLO family of object-detection networks [56, 57, 58, 59] to detect and localize features [4, 55] and then feeds the selected pixels directly into a custom Estimator network that extracts optimal values for the adjustable parameters.

The original implementation of CATCH [4] uses YOLOv3, which is based on the open-source darknet library [58]. The complementary Estimator network is written in TensorFlow [60]. Having both darknet and TensorFlow as separate requirements complicates installation, maintenance and customization. The two systems, furthermore, require distinct training protocols.

The version of CATCH developed for this study (CATCHv2) uses YOLOv5, which is built with the PyTorch [61] machine-learning framework. The updated Estimator also is defined in PyTorch, thereby fully integrating the two stages, simplifying installation and maintenance, and facilitating training.

III.1 Continuous Scaling

CATCHv2 features a set of critical innovations that dramatically improve its performance. The most important of these involves how features identified by the Localizer are transferred into the Estimator. Features vary in size depending on the nature and position of the particle. As shown in Fig. 2, each feature must be scaled from its true dimensions, w×w, to a standard size of 201×201 pixels before it can be processed by the Estimator. The original implementation of CATCH either cropped a given feature to this size or else scaled it by an integer factor before cropping, depending on the ideal size determined by the Localizer. CATCHv2 instead continuously scales the block of pixels to the required size with bilinear interpolation.
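In PyTorch, this continuous scaling reduces to a single call to torch.nn.functional.interpolate. The helper below is a sketch; the convention of reporting w/201 as the scale factor is our assumption.

```python
import torch
import torch.nn.functional as F

def scale_feature(crop: torch.Tensor, size: int = 201):
    """Continuously rescale a w-by-w cropped feature to the Estimator's
    standard input size with bilinear interpolation, returning the
    scaled image together with its scale factor."""
    w = crop.shape[-1]
    image = crop.reshape(1, 1, w, w).float()    # NCHW layout for interpolate
    scaled = F.interpolate(image, size=(size, size),
                           mode='bilinear', align_corners=False)
    return scaled, w / size                     # scale factor fed to the Estimator
```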

Continuous scaling allows features to be precisely cropped for analysis, which improves the signal-to-noise ratio. It also increases the system’s reliance on the Localizer to estimate feature extents accurately. CATCHv2 achieves this by adopting the “small” variant of YOLOv5, which estimates feature extent with single-pixel precision. The original CATCH implementation, by contrast, achieved ten-pixel precision using the “tiny” variant of YOLOv3 [4]. Improving the estimate for feature extent also encodes more information about the particle’s size and axial position in the scale factor. CATCHv2 leverages this to improve the precision of its overall parameter estimation.

III.2 Architecture of the Estimator

CATCH’s Estimator uses four convolutional layers to distill the 40 401 8-bit values that constitute a scaled feature into a set of 400 single-precision floating-point values. This set together with the scale factor is then processed by a fully-connected neural-network layer into a 20-element vector that optimally represents the particle’s properties in an abstract vector space that is parameterized by zp, ap and np. The vector of values computed from the feature is parsed by three specialized fully-connected layers into each of these parameters.

The Estimator’s convolutional layers distill information from a feature using sets of 3×3 pixel masks. Intermediate results are combined with three stages of two-fold max-pooling and one stage of four-fold max-pooling. The fully-connected layers use rectified linear unit (ReLU) activation, which has been shown to facilitate rapid training in regression networks [62]. Final results are scaled into physical units by a linear fully-connected layer.

The vector space of optimal representations may be viewed as an idealized model of the Lorenz-Mie scattering theory in the relevant range of parameter values. If the network can be appropriately trained, the 20 values that span the space could encode 10⁹⁶ distinct values of zp, ap and np, which would be more than enough to estimate parameters with the part-per-thousand precision provided by conventional optimization algorithms. The entire Estimator has 34 983 trainable parameters that can be stored in 200 kB. Whether such a small network can achieve the potential suggested by this naive interpretation of the optimal representation depends on the success of the training protocol.
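The PyTorch module below sketches this architecture. The layer sequence follows the description above, but the channel counts are assumptions, and an adaptive pooling stage stands in for the final four-fold pooling so that the convolutional stack flattens to exactly 400 values; its parameter count therefore differs from the 34 983 of the actual network.

```python
import torch
import torch.nn as nn

class Estimator(nn.Module):
    """Sketch of the Estimator: four 3x3 convolutional layers with
    pooling distill a scaled feature into 400 values, which are combined
    with the scale factor and compressed into a 20-wide representation
    that three separate heads parse into z_p, a_p and n_p."""

    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(8, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 16, 3, padding=1), nn.ReLU(),
            nn.AdaptiveMaxPool2d(5),            # 16 channels x 5 x 5 = 400 values
        )
        # 400 convolutional outputs + 1 scale factor -> 20-wide representation
        self.representation = nn.Sequential(nn.Linear(401, 20), nn.ReLU())
        # separate fully-connected heads parse the representation
        self.heads = nn.ModuleDict({
            name: nn.Sequential(nn.Linear(20, 20), nn.ReLU(), nn.Linear(20, 1))
            for name in ('z_p', 'a_p', 'n_p')
        })

    def forward(self, image, scale):
        x = self.features(image).flatten(1)               # (N, 400)
        x = torch.cat([x, scale.reshape(-1, 1)], dim=1)   # append scale factor
        rep = self.representation(x)                      # (N, 20)
        return {name: head(rep) for name, head in self.heads.items()}
```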

III.3 Training

Like its predecessors [10, 8, 4], CATCHv2 streamlines training by using a generative model such as Eq. (3) to produce synthetic data with established ground-truth parameters, rather than relying on manually annotated experimental data. Synthetic training data not only eliminates annotation errors but also can cover the parameter space more comprehensively than would be feasible with experimentally-derived data.

III.3.1 Training Data

The training set consists of 10⁵ synthetic holograms for training and an additional 10⁴ for validation. Each hologram spans the same field of view as the reference microscope and is computed as the superposition of up to 6 particles' fields, which corresponds to experimental concentrations up to 10⁶ particles/mL. The simulated particles' properties are drawn at random from the ranges ap ∈ [200 nm, 5 µm] and np ∈ [1.38, 2.5]. Particles are placed at random in the field of view in the range zp ∈ [2.5 µm, 29 µm], with the caveat that collisions between particles are not allowed. To further mimic experimental data, each calculated hologram is scaled to a mean intensity value of 100 and cast to 8 bits/pixel. These images are degraded with 5 % additive Gaussian noise, which is consistent with the median-absolute-deviation noise estimate for holograms recorded by the reference holographic microscope, including the example in Fig. 1(b). Incorporating noise into the training holograms and allowing for overlapping features helps to prevent overtraining and improves the network's performance with experimental holograms.
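A sketch of the data-generation loop under these assumptions; synthesizing the holograms themselves (for example, with the generative model sketched in Sec. I.1) is omitted.

```python
import numpy as np

rng = np.random.default_rng()

def random_parameters(nmax=6):
    """Draw ground-truth particle parameters from the training distribution."""
    n = rng.integers(1, nmax + 1)               # up to 6 particles per frame
    return dict(a_p=rng.uniform(0.2, 5.0, n),   # radius [um]
                n_p=rng.uniform(1.38, 2.5, n),  # refractive index
                z_p=rng.uniform(2.5, 29.0, n))  # axial position [um]

def degrade(hologram, noise=0.05):
    """Scale to a mean of 100, add 5% Gaussian noise, cast to 8 bits/pixel."""
    image = 100.0 * hologram / hologram.mean()
    image += rng.normal(0.0, noise * 100.0, hologram.shape)
    return np.clip(image, 0, 255).astype(np.uint8)
```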

Figure 3: False negative detections from the CATCHv1 and CATCHv2 Localizers as a function of ap and np. Green points represent the 25 000 sets of properties tested. The 6 features not detected by CATCHv1 (orange) are all small and weakly scattering. The 86 particles missed by CATCHv2 (teal) are either weakly scattering because they are nearly index matched to the medium, or else scatter light especially strongly because they are large and have high refractive indexes.

III.3.2 Training Protocol

Both the Localizer and the Estimator are trained by backpropagation. The Localizer uses stochastic gradient descent (SGD) as its optimizer, and the Estimator is trained with root-mean-square propagation (RMSprop) [63]. The original implementation of CATCH minimized the L2 loss for both the Localizer and the Estimator, which is equivalent to minimizing squared errors in the estimates for the features' centroids and extents, and also for the particles' properties and axial positions. While effective for training YOLO, L2 loss overemphasizes outliers due to particularly problematic holograms. Such bad outcomes are inherent in the optimization problem because the Lorenz-Mie theory admits near-degeneracies in which distinct sets of parameters produce nearly identical holograms [37]. CATCHv2 deemphasizes degeneracies by minimizing the smooth-L1 loss, which interpolates between the mean-square error for small errors and the mean-absolute error for large errors [64].
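PyTorch provides this loss directly as nn.SmoothL1Loss, which is quadratic for errors smaller than its beta parameter and linear beyond; a minimal usage example:

```python
import torch
import torch.nn as nn

criterion = nn.SmoothL1Loss(beta=1.0)   # quadratic for |error| < beta, linear beyond

predicted = torch.tensor([1.02, 0.95, 3.40])
target = torch.tensor([1.00, 1.00, 1.00])
loss = criterion(predicted, target)     # the outlier at 3.40 contributes linearly
```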

The training protocol for CATCHv2 achieves convergence for all of its outputs without overtraining any of them by incorporating early-stopping callbacks that monitor the loss metric of each output for validation data. Once the network converges on a solution for one of its outputs, the callback freezes the values of all of the network parameters that contribute to that output, thereby preventing overfitting. Training continues for the remaining network parameters until the second output converges, and then the third. This training protocol requires no user input and accounts naturally for differences in learning speed for each of the three output parameters of our Estimator model.
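A sketch of such a callback, assuming per-output heads like those in the Estimator sketched in Sec. III.2; the patience threshold is illustrative.

```python
def freeze_converged(model, val_losses, history, patience=50):
    """Freeze an output head once its validation loss stops improving.
    val_losses maps output names to the current epoch's validation loss;
    history tracks each output's best loss and epochs since improvement."""
    for name, loss in val_losses.items():
        best, stale = history.get(name, (float('inf'), 0))
        best, stale = (loss, 0) if loss < best else (best, stale + 1)
        history[name] = (best, stale)
        if stale >= patience:       # converged: stop updating this head
            for p in model.heads[name].parameters():
                p.requires_grad = False
```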

Using this protocol, the Estimator was trained for 7882 epochs with a batch size of 64. The Localizer was trained using a batch size of 32 for 3163 epochs. Training was performed on a desktop workstation outfitted with an NVIDIA Titan RTX graphical processing unit (GPU) for hardware acceleration. The two modules were trained sequentially using an average of 80 % of the GPU’s processors for a total of seven days.

The trained networks can be adapted through transfer learning [65] to work with microscopes with different wavelengths and magnifications and with media of different refractive indexes. Typically, this only requires retraining the fully-connected layers in the final stage of the Estimator, which can be completed in two hours once a suitable set of training data has been computed.
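In terms of the Estimator sketched in Sec. III.2, transfer learning might be set up as follows; the optimizer settings are illustrative.

```python
import torch

def prepare_transfer(model):
    """Freeze the convolutional stack and retrain only the fully-connected
    stages for a new instrument's wavelength, magnification and medium."""
    for p in model.features.parameters():
        p.requires_grad = False
    trainable = [p for p in model.parameters() if p.requires_grad]
    return torch.optim.RMSprop(trainable, lr=1e-4)   # learning rate is illustrative
```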

IV Performance

IV.1 Validation with Synthetic Data

We first evaluate the performance of CATCH on a set of synthetic images similar to those used in training. The Localizer is evaluated on a set of 25 000 full-frame images of size 1280×1024 pixels, each containing exactly one holographic feature with randomized properties. The images are degraded with 5 % Gaussian noise. This test establishes the system’s performance under ideal conditions without the added complication of overlapping features.

IV.1.1 Detection Accuracy

Detection accuracy is assessed with two metrics: the rates of false-positive and false-negative detections. Of these two kinds of errors, false negatives pose a greater challenge because they correspond to a loss of information. False positives generally can be identified and eliminated at later stages of analysis. The false-negative detections from both Localizer versions are plotted in Fig. 3. CATCHv1 (orange) boasted an impressive false-negative rate of 0.02 %, missing only 6 out of 25 000 holographic features. By comparison, CATCHv2 (teal) performs with a false-negative rate of 0.3 % on the same data set, missing a total of 86 holographic features. Neither CATCHv1 nor CATCHv2 produces false-positive detections for these test images. In both cases, losses are limited to either the most weakly scattering particles or the largest and most strongly scattering particles. The overall detection efficiency of 99.7 % achieved by CATCHv2 greatly improves upon the 60 % rate previously reported for conventional algorithms over the same range of parameters [4] and matches that of CATCHv1 over most of the parameter range. The slight loss of detection efficiency is compensated by a very substantial gain in localization accuracy, which is critical for accurate parameter estimation.

IV.1.2 Localization Accuracy

We evaluate localization accuracy using the true positive detections from the previous analysis. Of those 24 914 detections, 14 997 were situated such that their bounding box was not cut off by the edge of the field of view. We compute the radial distance, Δr, of those features’ predicted centroids from the ground truth. The original implementation of CATCH has a mean in-plane localization error of Δr=2.7 pixels=130 nm for this data set. The updated Localizer achieves a mean localization error of Δr=0.63 pixel=30 nm. Its performance across the range of parameters is summarized in Fig. 4 and Table 1.
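The error metric can be computed as in the sketch below, which excludes detections whose bounding boxes extend beyond the field of view; the array layouts are assumptions.

```python
import numpy as np

def inplane_errors(pred, truth, extents, shape=(1024, 1280)):
    """Radial localization error, in pixels, for detections whose
    bounding boxes lie fully inside the field of view. pred and truth
    hold (x, y) centroids; extents holds the bounding-box widths."""
    half = extents / 2
    inside = ((pred[:, 0] - half >= 0) & (pred[:, 0] + half < shape[1]) &
              (pred[:, 1] - half >= 0) & (pred[:, 1] + half < shape[0]))
    d = pred[inside] - truth[inside]
    return np.hypot(d[:, 0], d[:, 1])
```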

Localization errors contribute to errors in parameter estimation if the Estimator is trained to expect perfectly centered features. This source of error can be mitigated by training the Estimator with synthetic holograms that are randomly offset to reflect the Localizer’s performance. Improving the Localizer reduces these offsets and therefore reduces the complexity of the data-analysis problem that the Estimator is required to solve. This, in turn, improves the Estimator’s performance.

Alternative deep-learning particle trackers such as DeepTrack [66] and LodeSTAR [67] offer substantially better in-plane localization accuracy than YOLOv5. YOLO, however, provides the reproducibly accurate bounding boxes that the CATCHv2 Estimator requires for successful particle characterization [59].

Figure 4: In-plane localization error, Δr, for CATCHv2 averaged over axial position, zp. The maximum error is smaller than 2 pixels over the entire parameter space, with the largest errors occurring for the largest, most weakly refracting particles.

IV.1.3 Parameter Estimation

Figure 5: Performance of the Estimator module on synthetic data for (a-c) CATCHv1 and (d-i) CATCHv2. Results are presented as a function of particle radius, ap, and refractive index, np, and are averaged over axial position, zp. (a) and (d) Absolute error in axial position, Δzp. (b) and (e) Absolute error in particle radius, Δap. (c) and (f) Absolute error in refractive index, Δnp. (g), (h) and (i) recast the CATCHv2 errors from (d), (e) and (f), respectively, as percentages of the ground-truth values. Color bars have consistent scales to aid comparison.

We evaluate the Estimator using a separate data set consisting of 10⁴ cropped holograms of spheres with randomly selected properties in the ranges zp ∈ [2.5 µm, 29 µm], ap ∈ [0.2 µm, 5.0 µm], and np ∈ [1.38, 2.5]. Consistent with training conditions, a feature's ideal extent is set to twice the radius of the twentieth interference node. We introduce 5 % Gaussian random offsets into the feature's extent to simulate errors by the Localizer, and then add 5 % Gaussian noise to the feature's calculated intensity.

Figure 5 and Table 1 illustrate the extent to which the modified architecture improves the network’s performance. Accuracy in axial localization improves by better than a factor of two across the entire range of parameters, with a median error of Δzp=0.35 µm. Accuracy in particle sizing also improves by a factor of two, with a median error of just Δap=40 nm. Errors in refractive index are reduced to a median value of Δnp=0.027, which is more than sufficient to differentiate particles by their composition [11, 68]. The updated network also resolves property-dependent variations in the error that are most evident when comparing Fig. 5(b) with Fig. 5(e). Relative errors, plotted in Fig. 5(g), (h) and (i), are smaller than 10 % over the entire parameter domain and are smaller than 3 % for all but the smallest and most weakly scattering particles.

Table 1: Median and maximum errors in r = (xp, yp), zp, ap and np predicted by CATCHv1 and CATCHv2.

                        CATCHv1   CATCHv2
Δr [pixel]    Median    2.7       0.63
              Max       59        7
Δzp [pixel]   Median    15.0      7.2
              Max       659       167
Δap [µm]      Median    0.09      0.04
              Max       2.62      1.01
Δnp [ppt]     Median    45.5      26.6
              Max       781       886

Most of the improvements in CATCHv2’s accuracy relative to CATCHv1 can be ascribed to incorporating continuous scaling into the network’s optimal representation for a particle’s properties. This innovation’s effectiveness hinges on coordinated improvements in localization and feature-extent estimation afforded by adopting a larger and more capable Localizer network. Training with the robust smooth-L1 loss metric speeds convergence and also contributes secondarily to improvements in accuracy.

IV.2 Experimental Validation

Figure 6: (a) Experimentally recorded hologram of a colloidal silica sphere at the upper wall of a rectangular channel that is filled with water. (b) Hologram of the same sphere after it sediments to the lower wall of the channel. Both holograms are cropped to include 20 diffraction fringes. This sphere’s trajectory is used to assess the performance of CATCHv1 and CATCHv2 for tracking (c, d) and characterization (e, f). Machine-learning estimates are compared with ground truth values obtained from fitting to the generative model (red). Values for the axial position, zp, obtained from the holograms in (a) and (b) are plotted with large (red) symbols in (c) and (d). (c) Axial trajectory, zp(t), compared with predictions of CATCHv1 (orange) and (d) CATCHv2 (blue). (e) Values for the particle radius, ap, and refractive index, np, estimated at each time step in the trajectory by CATCHv1 (orange) and (f) CATCHv2 (blue). Ground truth values for ap and np are estimated by conventional optimization.
Figure 7: Detection and characterization of particles in a heterogeneous sample. (a) Typical hologram of colloidal spheres flowing through the microscope’s field of view overlaid with bounding boxes automatically detected by CATCHv2. (b) Characterization estimates provided by CATCHv2 (circles) for 1133 particles together with refined fits (hexagons) that were initialized with those estimates. CATCH results are colored by the relative density of observations, ρ(ap,np).

Training and validation with synthetic data does not guarantee a network’s performance with experimental data. Confounding factors such as correlated noise, artifacts from normalization, and instrumental imperfections may cause experimental data to differ enough from ideal synthetic data that the trained model cannot make accurate predictions.

We illustrate the performance of CATCH on experimental data by measuring the sedimentation of a colloidal sphere between parallel walls [69, 4]. A 3 µm diameter silica bead (Bangs Laboratories, catalog number SS05N) is dispersed in a 30 µL aliquot of water that is confined between a glass #1.5 cover slip and a glass microscope slide. Because silica is twice as dense as water, the bead tends to settle to the bottom of the chamber. Using a holographic optical trap [70], we lift the bead to the top of the chamber, release it and record its subsequent trajectory [71]. Examples of experimentally recorded holograms of this particle are presented in Fig. 6(a), when the particle is at the top of the chamber, and Fig. 6(b), when the particle is at the bottom. We analyze the resulting holographic video with CATCHv1, with CATCHv2, and by fitting to the generative model in Eq. (3) using a conventional least-squares fitter. These analyses also yield estimates for the particle's radius and refractive index.

Figure 6(c) and (e) compare results obtained with the original implementation of CATCH with results obtained by conventional fitting. Figure 6(d) and (f) present complementary results for CATCHv2. In both cases, we treat the nonlinear least-squares fit as the ground truth for the comparison. The sedimenting particle’s axial position, plotted as (red) points in Fig. 6(c) and (d), follows the sigmoidal trajectory expected for confined sedimentation in a horizontal slit pore [72]. Predictions for zp(t) by CATCHv1 generally follow this trend, but with substantial random and systematic errors. CATCHv2, by contrast, tracks the particle’s motion in excellent quantitative agreement with the ground truth. The updated machine-learning system improves mean errors in axial tracking by nearly a factor of ten, from Δzp=4.2 µm to Δzp=0.46 µm, which is consistent with expectations based on the numerical validation data in Fig. 5.

The particle's radius and refractive index, plotted in Fig. 6(e) and (f), form a tight cluster when reported by the conventional fitter (red). All of these values represent properties of a single particle that should not change as the particle moves through the sample cell. Results from CATCHv1 in Fig. 6(e) generally cluster in the correct region of parameter space, albeit with a systematic error of Δap = 1 µm and a standard deviation of 500 nm. CATCHv2, by contrast, accurately estimates the radius and refractive index of the particle. The Δap = 12 nm precision and accuracy for particle size is consistent with expectations from the numerical study in Fig. 5 and would suffice for the differential measurements required for holographic molecular binding assays [73, 74, 21, 20, 28]. CATCHv2 therefore could provide a computationally cost-effective basis for label-free bead-based medical diagnostic testing.

Whereas the single-particle study in Fig. 6 is useful for illustrating the performance of CATCHv2 for an individual colloidal sphere moving in three dimensions, Fig. 7 illustrates its performance for heterogeneous dispersions of colloidal particles. The sample for this demonstration is composed of equal concentrations of silica spheres (Thermo Fisher, catalog no. 8150) and polystyrene spheres (Bangs Laboratories, catalog no. NT16N), each with a nominal radius of ap = 0.75 µm, dispersed in water. The hologram in Fig. 7(a) captures four of those particles as they move through the 61×49 µm field of view in a pressure-driven flow. Superimposed bounding boxes are identified automatically by the Localizer stage of CATCHv2. The scatter plot in Fig. 7(b) presents characterization results from the Estimator stage for 1133 particles that flowed through the observation volume in 5 min, together with refined estimates for those particles' characteristics obtained by fitting to Lorenz-Mie theory. Each data point represents the radius and refractive index of one particle. Machine-learning estimates are colored by the relative density of observations, ρ(ap, np). The two populations of particles are clearly differentiated by refractive index, even though their size distributions overlap.

        CATCHv2          Lorenz-Mie       Error        Expected
        ap [µm]          ap [µm]          Δap [nm]     Δap [nm]
PS      0.770 ± 0.053    0.729 ± 0.008    42 ± 54      57 ± 74
SiO2    0.953 ± 0.070    0.779 ± 0.035    173 ± 79     65 ± 78
        np               np               Δnp [ppt]    Δnp [ppt]
PS      1.631 ± 0.040    1.598 ± 0.006    33 ± 40      42 ± 44
SiO2    1.455 ± 0.008    1.449 ± 0.006    6 ± 11       26 ± 40
Table 2: Particle-characterization performance of CATCHv2 for the two-component dispersion presented in Fig. 7. Population-averaged values for the radius and refractive index for the two types of particles are compared with refined estimates obtained by fitting the same set of holograms to Eq. (3). Differences between estimates and refined values are compared with the expected performance from Fig. 5.

Table 2 reports the average radii and refractive indexes estimated by CATCHv2 for the two types of particles in the sample. These values are compared with the averages obtained by fitting the same holographic features to the Lorenz-Mie model. Uncertainties are computed as standard deviations of the single-particle results and therefore combine estimation errors with intrinsic particle-to-particle variations in the two populations. We interpret differences between machine-learning estimates and refined values as errors in the machine-learning estimates. Table 2 compares these discrepancies with expectations for the performance of CATCHv2 based on the numerical validation results presented in Fig. 5.

Systematic discrepancies between machine-learning estimates for the particle radii and refined values are not surprising because errors in machine-learning estimates generally are not normally distributed. Such discrepancies are likely to be exacerbated by defects such as aberrations in experimentally recorded holograms [31, 50] that are not accounted for in the generative model used to synthesize training data. Even so, errors in the polystyrene spheres' radii fall within the expected range of Δap = ±60 nm, as do the sizing errors for the 1.5 µm-radius silica sphere reported in Fig. 6. By contrast, the 170 nm systematic offset for the more weakly scattering 0.8 µm-radius silica spheres in Fig. 7 is nearly three times larger than expected. In assessing this performance, it should be noted that the only particle-resolved sizing technique that consistently surpasses the precision and accuracy of CATCHv2 in this range is full Lorenz-Mie analysis.

Machine-learning estimates for the two populations’ refractive indexes agree well with the refined values. Results for the refractive index, in particular, are sufficiently precise to distinguish the two types of particles by composition [10], which represents a very substantial improvement over CATCHv1.

Because CATCHv2 performs independent estimates for size and refractive index, errors in these estimates tend not to be correlated [4]. This contrasts with conventional optimization techniques whose results more directly reflect the structure of the error surface for the theory. Correlations are less prominent, for example, in the machine-learning estimates plotted in Fig. 7(b) than in the iteratively optimized results.

Predictions by CATCHv2 are sufficiently accurate and precise that refinement may not be necessary for many applications, including classifying impurity particles for quality control in biopharmaceuticals [17, 18], semiconductor processing [75] and environmental monitoring [76]. Reliable detection of particles across a wide range of parameters ensures accurate measurement of particle concentrations [76, 19], which is valuable across many industries. Combining this with accurate sizing and differentiation by material composition affords a particularly detailed view into the composition of heterogeneous dispersions. All such applications will benefit from the processing speed afforded by an end-to-end machine-learning implementation.

IV.3 Speed

Both CATCHv1 and CATCHv2 make efficient use of hardware acceleration on CUDA-capable graphics cards. CATCHv1 processes a single 1280×1024 pixel hologram in 21 ms on an NVIDIA Titan Xp GPU, with 20 ms required for the Localizer and 0.9 ms for the Estimator [4]. These times are reduced by 5 % when the same code is run on an NVIDIA Titan RTX GPU.

Moving from the C-language implementation of CATCHv1 to the pure Python implementation of CATCHv2 incurs a performance penalty, all the more so because of the increased size and complexity of the Localizer module. Running on the Titan RTX platform, the updated Localizer requires 24 ms to process one frame, and the Estimator requires an additional 1.3 ms to analyze each feature. This is still fast enough to process frames in real time at 30 frames/s.

The dramatic improvement in prediction accuracy gained with CATCHv2 translates into particularly substantial performance gains for those applications that no longer require optimization by conventional algorithms. Even when further refinement is required, the improved initial estimates provided by CATCHv2 increase the likelihood of successful convergence and reduce the time to convergence.

Performance differences between C-based and Python-based implementations should decrease as development efforts continue to improve the processing speed of Python programs. CATCHv2 also would benefit from optimizations such as parameter pruning and quantization, neither of which has been applied to the demonstration implementation presented here.

V Discussion

The implementation of the CATCH machine-learning system presented in this study solves a central problem in soft-matter science: characterizing and tracking individual colloidal particles in their native media in real time. When analyzing data from the reference holographic microscopy instrument, CATCH provides three-dimensional tracking data with Δr = 50 nm accuracy in-plane and Δzp = 350 nm along the axial direction in a 100×100×30 µm observation volume. CATCH simultaneously measures a micrometer-scale particle's radius with a median accuracy of Δap = 40 nm for particles ranging in radius from ap = 200 nm to ap = 5 µm. Over large regions of parameter space, CATCH achieves precision and accuracy that rival those of conventional algorithms.

Holographic characterization offers the substantial advantage relative to other particle-characterization technologies of measuring a recorded particle’s refractive index, thereby providing information about its composition. CATCH estimates the refractive index with an accuracy of Δnp=0.026 over the range from near-index-matching, np=1.38, to very strong scattering, np=2.5.

These estimates for the system’s accuracy are consistent with the illustrative example of a colloidal sphere sedimenting through water. The training protocol therefore appropriately accounts for instrumental imperfections that might otherwise degrade prediction accuracy.

The ability of CATCH to estimate particles’ positions and characteristics with an accuracy of one or two percent is sufficient for many of the applications that already have been identified for holographic particle characterization, including particle characterization in biopharmaceuticals [18, 19], agglomerate detection in semiconductor polishing slurries [75] and process control for materials synthesis [73]. In more specialized applications where part-per-thousand accuracy is desirable, predictions from CATCH can be used to initialize parameter refinement using the generative model from Eq. (3). Both the speed and reliability of iterative optimization are improved by the high quality of the starting estimates provided by CATCH [37].

CATCH can be readily adapted to work with new instruments and can be trained automatically to cover different parameter ranges. For example, CATCH can be trained to accommodate particles with refractive indexes smaller than that of the medium. Training over a smaller parameter range can improve the accuracy of CATCH’s predictions to the point that machine-learning estimates rival the precision and accuracy of state-of-the-art optimization while retaining their substantial speed advantage.

Having demonstrated that a machine-learning system can provide precise end-to-end holographic analysis of colloidal spheres in real time, we can speculate on possible generalizations of our implementation. CATCH currently treats the refractive index of the medium as a fixed parameter, for example. More generally, nm can be allowed to vary at the cost of increased training complexity, and indeed could be obtained as an output of the Estimator. Such a generalized model would be useful for analyzing most dispersions of micrometer-scale colloids without any a priori knowledge about their composition and without requiring any retraining. The Localizer can be trained to differentiate holograms into categories such as “spherical”, “rod-like” and “irregular”, “large” and “small”, “high-index” and “low-index”. Such classifications could be used to transfer holograms to specialized variants of the Estimator for detailed analysis. The value of such elaborations hinges on the rapidly increasing variety of applications for holographic particle characterization.

As the simplest and presumably smallest machine-learning implementation of holographic particle characterization, the present implementation of CATCH can be incorporated readily into commercial instrumentation. CATCH is small enough, for example, to be realized on a field-programmable gate array (FPGA) suitable for board-level integration.

The diminutive 200 kB memory footprint of the CATCH model also hints at opportunities for recasting Lorenz-Mie theory itself. The standard formulation of light scattering by small particles is technically challenging to compute. It is possible that the condensed representation learned by CATCH can guide the development of a greatly simplified analytic formulation [77], which would be broadly useful. CATCH therefore can play a role in the emerging paradigm shift toward machine-driven discovery of fundamental principles.

The full open-source implementation of CATCHv2 is available at https://github.com/laltman2/CATCH/. The open-source pylorenzmie package for Lorenz-Mie analysis is available at https://github.com/davidgrier/pylorenzmie/.

Conflicts of Interest

David Grier is a founder of Spheryx, Inc., which manufactures instruments for holographic particle characterization.

Acknowledgements

This work was primarily supported by the SBIR program of the National Institutes of Health under Award Number R44TR001590. Additional support was provided by the National Science Foundation under Award Number DMR-2104837. The Titan Xp and Titan RTX GPUs used for this work were provided by GPU Grants from NVIDIA. The custom holographic characterization instrument was constructed with support from the MRI program of the NSF under Award Number DMR-0922680.

References

  • [1] M. Wittwer and M. Seita. “A machine learning approach to map crystal orientation by optical microscopy.” npj Comput. Mater. 8, 1–9 (2022).
  • [2] G. Carleo, I. Cirac, K. Cranmer, L. Daudet, M. Schuld, N. Tishby, L. Vogt-Maranto and L. Zdeborová. “Machine learning and the physical sciences.” Rev. Mod. Phys. 91, 045002 (2019).
  • [3] A. W. Long, J. Zhang, S. Granick and A. L. Ferguson. “Machine learning assembly landscapes from particle tracking data.” Soft Matter 11, 8141–8153 (2015).
  • [4] L. E. Altman and D. G. Grier. “CATCH: Characterizing and tracking colloids holographically using deep neural networks.” J. Phys. Chem. B 124, 1602–1610 (2020).
  • [5] E. N. Minor, S. D. Howard, A. A. S. Green, M. A. Glaser, C. S. Park and N. A. Clark. “End-to-end machine learning for experimental physics: Using simulated data to train a neural network for object detection in video microscopy.” Soft Matter 16, 1751–1759 (2020).
  • [6] W. F. Reinhart, A. W. Long, M. P. Howard, A. L. Ferguson and A. Z. Panagiotopoulos. “Machine learning for autonomous crystal structure identification.” Soft Matter 13, 4733–4745 (2017).
  • [7] C. Martin, L. E. Altman, S. Rawat, A. Wang, D. G. Grier and V. N. Manoharan. “In-line holographic microscopy with model-based analysis.” Nat. Rev. Methods Primers 2, 1–17 (2022).
  • [8] M. D. Hannel, A. Abdulali, M. O’Brien and D. G. Grier. “Machine-learning techniques for fast and accurate feature localization in holograms of colloidal particles.” Opt. Express 26, 15221–15231 (2018).
  • [9] B. Midtvedt, E. Olsèn, F. Eklund, F. Höök, C. B. Adiels, G. Volpe and D. Midtvedt. “Fast and accurate nanoparticle characterization using deep-learning-enhanced off-axis holography.” ACS nano 15, 2240–2250 (2021).
  • [10] A. Yevick, M. Hannel and D. G. Grier. “Machine-learning approach to holographic particle characterization.” Opt. Express 22, 26884–26890 (2014).
  • [11] S.-H. Lee, Y. Roichman, G.-R. Yi, S.-H. Kim, S.-M. Yang, A. Van Blaaderen, P. Van Oostrum and D. G. Grier. “Characterizing and tracking single colloidal particles with video holographic microscopy.” Opt. Express 15, 18275–18282 (2007).
  • [12] P. Memmolo, L. Miccio, M. Paturzo, G. Di Caprio, G. Coppola, P. A. Netti and P. Ferraro. “Recent advances in holographic 3D particle tracking.” Adv. Opt. Photonics 7, 713–755 (2015).
  • [13] Z. Wang, L. Miccio, S. Coppola, V. Bianco, P. Memmolo, V. Tkachenko, V. Ferraro, E. Di Maio, P. L. Maffettone and P. Ferraro. “Digital holography as metrology tool at micro-nanoscale for soft matter.” Light Adv. Manuf. 3, 151–176 (2022).
  • [14] B. J. Krishnatreya, A. Colen-Landy, P. Hasebe, B. A. Bell, J. R. Jones, A. Sunda-Meya and D. G. Grier. “Measuring Boltzmann’s constant through holographic video microscopy of a single sphere.” Am. J. Phys. 82, 23–31 (2014).
  • [15] J. Sheng, E. Malkiel and J. Katz. “Using digital holographic microscopy for simultaneous measurements of 3D near wall velocity and wall shear stress in a turbulent boundary layer.” Exp. Fluids 45, 1023–1035 (2008).
  • [16] C. Wang, X. Zhong, D. B. Ruffner, A. Stutt, L. A. Philips, M. D. Ward and D. G. Grier. “Holographic characterization of protein aggregates.” J. Pharm. Sci. 105, 1074–1085 (2016).
  • [17] P. N. Kasimbeg, F. C. Cheong, D. B. Ruffner, J. M. Blusewicz and L. A. Philips. “Holographic characterization of protein aggregates in the presence of silicone oil and surfactants.” J. Pharm. Sci. 108, 155–161 (2019).
  • [18] A. Winters, F. C. Cheong, M. A. Odete, J. Lumer, D. B. Ruffner, K. I. Mishra, D. G. Grier and L. A. Philips. “Quantitative differentiation of protein aggregates from other subvisible particles in viscous mixtures through holographic characterization.” J. Pharm. Sci. 109, 2405–2412 (2020).
  • [19] H. Rahn, M. Oeztuerk, N. Hentze, F. Junge and M. Hollmann. “The Strengths of Total Holographic Video Microscopy in detecting sub-visible protein particles in Biopharmaceuticals: A comparison to Flow Imaging and Resonant Mass Measurement.” J. Pharm. Sci. in press (2022).
  • [20] K. Snyder, R. Quddus, A. D. Hollingsworth, K. Kirshenbaum and D. G. Grier. “Holographic immunoassays: direct detection of antibodies binding to colloidal spheres.” Soft Matter 16, 10180–10186 (2020).
  • [21] L. E. Altman and D. G. Grier. “Interpreting holographic molecular binding assays with effective medium theory.” Biomed. Opt. Express 11, 5225–5236 (2020).
  • [22] M. Bertero, P. Boccacci and C. De Mol. Introduction to Inverse Problems in Imaging (CRC press, 2021).
  • [23] C. F. Bohren and D. R. Huffman. Absorption and Scattering of Light by Small Particles (Wiley Interscience, New York, 1983).
  • [24] M. I. Mishchenko, L. D. Travis and A. A. Lacis. Scattering, Absorption and Emission of Light by Small Particles (Cambridge University Press, Cambridge, 2001).
  • [25] G. Gouesbet and G. Gréhan. Generalized Lorenz-Mie Theories (Springer-Verlag, Berlin, 2011).
  • [26] S. Shao, K. Mallery, S. S. Kumar and J. Hong. “Machine learning holography for 3D particle field imaging.” Opt. Express 28, 2987–2999 (2020).
  • [27] B. J. Krishnatreya and D. G. Grier. “Fast feature identification for holographic tracking: The orientation alignment transform.” Opt. Express 22, 12773–12778 (2014).
  • [28] Y. Zagzag, M. F. Soddu, A. D. Hollingsworth and D. G. Grier. “Holographic molecular binding assays.” Sci. Rep. 10, 1932 (2020).
  • [29] L. E. Altman, R. Quddus, F. C. Cheong and D. G. Grier. “Holographic characterization and tracking of colloidal dimers in the effective-sphere approximation.” Soft Matter 17, 2695–2703 (2021).
  • [30] J. Sheng, E. Malkiel and J. Katz. “Digital holographic microscope for measuring three-dimensional particle distributions and motions.” Appl. Opt. 45, 3893–3901 (2006).
  • [31] B. Leahy, R. Alexander, C. Martin, S. Barkley and V. N. Manoharan. “Large depth-of-field tracking of colloidal spheres in holographic microscopy by modeling the objective lens.” Opt. Express 28, 1061–1075 (2020).
  • [32] F. C. Cheong, B. Sun, R. Dreyfus, J. Amato-Grill, K. Xiao, L. Dixon and D. G. Grier. “Flow visualization and flow cytometry with holographic video microscopy.” Opt. Express 17, 13071–13079 (2009).
  • [33] R. Parthasarathy. “Rapid, accurate particle tracking by calculation of radial symmetry centers.” Nat. Methods 9, 724–726 (2012).
  • [34] A. D. Kashkanova, A. B. Shkarin, R. G. Mahmoodabadi, M. Blessing, Y. Tuna, A. Gemeinhardt and V. Sandoghdar. “Precision single-particle localization using radial variance transform.” Opt. Express 29, 11070–11083 (2021).
  • [35] J. C. Crocker and D. G. Grier. “Methods of digital video microscopy for colloidal studies.” J. Colloid Interface Sci. 179, 298–310 (1996).
  • [36] T. G. Dimiduk, R. W. Perry, J. Fung and V. N. Manoharan. “Random-subset fitting of digital holograms for fast three-dimensional particle tracking.” Appl. Opt. 53, G177–G183 (2014).
  • [37] D. B. Ruffner, F. C. Cheong, J. M. Blusewicz and L. A. Philips. “Lifting degeneracy in holographic characterization of colloidal particles using multi-color imaging.” Opt. Express 26, 13239–13251 (2018).
  • [38] L. Denis, C. Fournier, T. Fournel, C. Ducottet and D. Jeulin. “Direct extraction of the mean particle size from a digital hologram.” Appl. Opt. 45, 944–952 (2006).
  • [39] J. Öhman and M. Sjödahl. “Off-axis digital holographic particle positioning based on polarization-sensitive wavefront curvature estimation.” Appl. Opt. 55, 7503–7510 (2016).
  • [40] B. J. Thompson. “Holographic particle sizing techniques.” J. Phys. E: Sci. Instru. 7, 781–788 (1974).
  • [41] S. Soontaranon, J. Widjaja and T. Asakura. “Improved holographic particle sizing by using absolute values of the wavelet transform.” Opt. Commun. 240, 253–260 (2004).
  • [42] S. Barkley, T. G. Dimiduk, J. Fung, D. M. Kaz, V. N. Manoharan, R. J. McGorty, R. W. Perry and A. Wang. “Holographic Microscopy With Python and HoloPy.” Comput. Sci. Eng. 22, 72–82 (2020).
  • [43] F. C. Cheong, K. Xiao, D. J. Pine and D. G. Grier. “Holographic characterization of individual colloidal spheres’ porosities.” Soft Matter 7, 6816–6819 (2011).
  • [44] V. Markel. “Introduction to the Maxwell Garnett approximation: tutorial.” J. Opt. Soc. Am. A 33, 1244–1256 (2016).
  • [45] C. Wang, F. C. Cheong, D. B. Ruffner, X. Zhong, M. D. Ward and D. G. Grier. “Holographic characterization of colloidal fractal aggregates.” Soft Matter 12, 8774–8780 (2016).
  • [46] J. Fung and S. Hoang. ‘‘Computational assessment of an effective-sphere model for characterizing colloidal fractal aggregates with holographic microscopy.” J. Quant. Spectr. Rad. Trans. 236, 106591 (2019).
  • [47] M. A. Odete, F. C. Cheong, A. Winters, J. J. Elliott, L. A. Philips and D. G. Grier. “The role of the medium in the effective-sphere interpretation of holographic particle characterization data.” Soft Matter 16, 891–898 (2020).
  • [48] A. Ray, M. A. Khalid, A. Demčenko, M. Daloglu, D. Tseng, J. Reboud, J. M. Cooper and A. Ozcan. “Holographic detection of nanoparticles using acoustically actuated nanolenses.” Nature Commun. 11, 1–10 (2020).
  • [49] G. Ding, J. Wang, J. Zou, C. Wang, T. Wang, J. Meng and D. Li. “A Novel Method Based on Optofluidic Lensless-Holography for Detecting the Composition of Oil Droplets.” IEEE Sensors J. 20, 6928–6936 (2020).
  • [50] R. Alexander, B. Leahy and V. N. Manoharan. “Precise measurements in digital holographic microscopy by modeling the optical train.” J. Appl. Phys. 128, 060902 (2020).
  • [51] A. Wang, W. B. Rogers and V. N. Manoharan. “Effects of contact-line pinning on the adsorption of nonspherical colloids at liquid interfaces.” Phys. Rev. Lett. 119, 108004 (2017).
  • [52] A. Wang, T. G. Dimiduk, J. Fung, S. Razavi, I. Kretzschmar, K. Chaudhary and V. N. Manoharan. “Using the discrete dipole approximation and holographic microscopy to measure rotational dynamics of non-spherical colloidal particles.” J. Quant. Spectr. Rad. Trans. 146, 499–509 (2014).
  • [53] M. I. Mishchenko, L. D. Travis and D. W. Mackowski. “T-matrix computations of light scattering by nonspherical particles: A review.” J. Quant. Spectr. Rad. Trans. 55, 535–575 (1996).
  • [54] D. W. Mackowski and M. I. Mishchenko. “Calculation of the T matrix and the scattering matrix for an ensemble of spheres.” J. Opt. Soc. Am. A 13, 2266–2278 (1996).
  • [55] Y. Zhang, Y. Zhu and E. Y. Lam. “Holographic 3D particle reconstruction using a one-stage network.” Appl. Opt. 61, B111–B120 (2022).
  • [56] J. Redmon, S. Divvala, R. Girshick and A. Farhadi. “You Only Look Once: Unified, real-time object detection.” In “Proc. IEEE Comp. Vision Pattern Recog.”, 779–788 (2016).
  • [57] J. Redmon and A. Farhadi. “YOLO9000: better, faster, stronger.” In “Proc. IEEE Comput. Soc. Conf. Comput. Vis. Pattern Recognit.”, 7263–7271 (2017).
  • [58] J. Redmon and A. Farhadi. “YOLOv3: An Incremental Improvement.” CoRR abs/1804.02767, 1–6 (2018).
  • [59] G. Jocher, A. Chaurasia, A. Stoken, J. Borovec, NanoCode012, Y. Kwon, TaoXie, J. Fang, imyhxy, K. Michael, Lorna, A. V, D. Montes, J. Nadar, Laughing, tkianai, yxNONG, P. Skalski, Z. Wang, A. Hogan, C. Fati, L. Mammana, AlexWang1900, D. Patel, D. Yiwei, F. You, J. Hajek, L. Diaconu and M. T. Minh. “ultralytics/yolov5: v6.1 - TensorRT, TensorFlow Edge TPU and OpenVINO Export and Inference.” (2022).
  • [60] M. Abadi, A. Agarwal, P. Barham, E. Brevdo, Z. Chen, C. Citro, G. S. Corrado, A. Davis, J. Dean, M. Devin, S. Ghemawat, I. Goodfellow, A. Harp, G. Irving, M. Isard, Y. Jia, R. Jozefowicz, L. Kaiser, M. Kudlur, J. Levenberg, D. Mané, R. Monga, S. Moore, D. Murray, C. Olah, M. Schuster, J. Shlens, B. Steiner, I. Sutskever, K. Talwar, P. Tucker, V. Vanhoucke, V. Vasudevan, F. Viégas, O. Vinyals, P. Warden, M. Wattenberg, M. Wicke, Y. Yu and X. Zheng. “TensorFlow: Large-Scale Machine Learning on Heterogeneous Systems.” (2015).
  • [61] A. Paszke, S. Gross, F. Massa, A. Lerer, J. Bradbury, G. Chanan, T. Killeen, Z. Lin, N. Gimelshein, L. Antiga, A. Desmaison, A. Kopf, E. Yang, Z. DeVito, M. Raison, A. Tejani, S. Chilamkurthy, B. Steiner, L. Fang, J. Bai and S. Chintala. “PyTorch: An Imperative Style, High-Performance Deep Learning Library.” In “Advances in Neural Information Processing Systems 32,” 8024–8035 (Curran Associates, Inc., 2019).
  • [62] X. Glorot, A. Bordes and Y. Bengio. “Deep Sparse Rectifier Neural Networks.” In “Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics,” edited by G. Gordon, D. Dunson and M. Dudík, vol. 15 of Proceedings of Machine Learning Research, 315–323 (PMLR, Fort Lauderdale, FL, USA, 2011).
  • [63] T. Kurbiel and S. Khaleghian. “Training of Deep Neural Networks based on Distance Measures using RMSProp.” (2017).
  • [64] R. Girshick. “Fast R-CNN.” In ‘‘Proceedings of the IEEE International Conference on Computer Vision,” 1440–1448 (2015).
  • [65] H.-C. Shin, H. R. Roth, M. Gao, L. Lu, Z. Xu, I. Nogues, J. Yao, D. Mollura and R. M. Summers. “Deep convolutional neural networks for computer-aided detection: CNN architectures, dataset characteristics and transfer learning.” IEEE Trans. Med. Imag. 35, 1285–1298 (2016).
  • [66] B. Midtvedt, S. Helgadottir, A. Argun, J. Pineda, D. Midtvedt and G. Volpe. “Quantitative digital microscopy with deep learning.” Appl. Phys. Rev. 8, 011310 (2021).
  • [67] B. Midtvedt, J. Pineda, F. Skärberg, E. Olsén, H. Bachimanchi, E. Wesén, E. K. Esbjörner, E. Selander, F. Höök, D. Midtvedt et al. “Single-shot self-supervised object detection in microscopy.” Nat. Commun. 13, 7492 (2022).
  • [68] K. Xiao and D. G. Grier. “Multidimensional optical fractionation with holographic verification.” Phys. Rev. Lett. 104, 028302 (2010).
  • [69] E. R. Dufresne, D. Altman and D. G. Grier. “Brownian dynamics of a sphere between parallel walls.” Europhys. Lett. 53, 264–270 (2001).
  • [70] D. G. Grier. “A revolution in optical manipulation.” Nature 424, 810–816 (2003).
  • [71] M. J. O’Brien and D. G. Grier. “Above and beyond: Holographic tracking of axial displacements in holographic optical tweezers.” Opt. Express 27, 24866–25435 (2019).
  • [72] L. E. Altman and D. G. Grier. “Holographic analysis of colloidal spheres sedimenting in horizontal slit pores.” Phys. Rev. E 106, 044605 (2022).
  • [73] C. Wang, H. Shpaisman, A. D. Hollingsworth and D. G. Grier. “Celebrating Soft Matter’s 10th Anniversary: Monitoring colloidal growth with holographic microscopy.” Soft Matter 11, 1062–1066 (2015).
  • [74] C. Wang, H. W. Moyses and D. G. Grier. “Stimulus-responsive colloidal sensors with fast holographic readout.” Appl. Phys. Lett. 107, 051903 (2015).
  • [75] F. C. Cheong, P. Kasimbeg, D. B. Ruffner, E. H. Hlaing, J. M. Blusewicz, L. A. Philips and D. G. Grier. “Holographic characterization of colloidal particles in turbid media.” Appl. Phys. Lett. 111, 153702 (2017).
  • [76] L. A. Philips, D. B. Ruffner, F. C. Cheong, J. M. Blusewicz, P. Kasimbeg, B. Waisi, J. R. McCutcheon and D. G. Grier. “Holographic characterization of contaminants in water: Differentiation of suspended particles in heterogeneous dispersions.” Water Res. 122, 431–439 (2017).
  • [77] M. Cranmer, A. Sanchez Gonzalez, P. Battaglia, R. Xu, K. Cranmer, D. Spergel and S. Ho. “Discovering symbolic models from deep learning with inductive biases.” Adv. Neural Inf. Process. Syst. 33, 17429–17442 (2020).