Methods of Digital Video Microscopy for Colloidal Studies

John C. Crocker and David G. Grier
The James Franck Institute and Department of Physics
The University of Chicago, Chicago, IL 60637
Abstract.

We describe a set of image processing algorithms for extracting quantitative data from digitized video microscope images of colloidal suspensions. In a typical application, these direct imaging techniques can locate submicron spheres to within 10 nm in the focal plane and 150 nm in depth. Combining information from a sequence of video images into single-particle trajectories makes possible measurements of quantities of fundamental and practical interest such as diffusion coefficients and pair-wise interaction potentials. The measurements we describe in detail combine the outstanding resolution of digital imaging with video-synchronized optical trapping to obtain highly accurate and reproducible results very rapidly.

§ I. Introduction

Recent developments in digital image processing provide revolutionary new tools for studying colloidal suspensions. Combined with video microscopy, image processing greatly facilitates time-resolved measurements of individual colloidal particles' trajectories. In the 90 years since Perrin's pioneering photographic study of diffusion and Brownian motion(1), quantitative analysis of colloidal images has been used to study phase transitions in three-(2) and two-dimensional systems (3), to probe the effects of external fields on colloidal dynamics(4), and to measure directly the interaction between isolated pairs of colloidal microspheres(5); (6); (7). While both image processing and the ultramicroscopy of colloidal suspensions are well developed fields, applying the former to the latter poses some problems whose resolution has not been discussed in a unified way. A rapidly growing community of researchers will come up against these practical hurdles as technological advances associated with multimedia computing make these techniques more widely available. In this light, we present some practical methods which we have found useful in studying the microscopic structure and dynamics of colloidal systems. While this treatment centers on suspensions of submicron spheres, many of the methods described below can be generalized to suspensions of non-spherical particles.

In section II, we describe typical instrumentation required for acquiring digital video images of colloidal particles. Section III describes in some detail the steps required to convert a digital movie of colloidal particles into an ensemble of single-particle trajectories. We stress those aspects of the analysis which allow us to track particles with spatial resolution much finer than the wavelength of light used to create the images. High-resolution trajectory data makes possible a wide range of quantitative measurements of colloidal processes at the microscopic scale. As practical examples, in Section IV we describe measurements of microspheres' self-diffusion coefficients and of the separation dependence of the pair-wise interaction potential for charge-stabilized colloid.

§ II. Typical Instrumentation

Individual colloidal particles larger than roughly 200 nm can be resolved with a conventional light microscope(8). We use an Olympus IMT2 inverted microscope with a 100\times N.A. 1.2 oil immersion objective. The objective lens' \pm 400 nm depth of focus is comparable to a typical sphere diameter, so that only a single layer of spheres is in focus at any time. This is convenient for distinguishing particles in dense suspensions. Three-dimensional views can be reconstructed from a series of images at different focal planes, although multiple light scattering rapidly reduces contrast with increasing depth. The limited working distance of high-powered objective lenses (typically around 200 \mathrm{\upmu}\mathrm{m} including the thickness of the cover glass) poses the ultimate constraint to viewing in depth. Scanning laser confocal microscopes offer true three-dimensional microscopy but are at least an order of magnitude more expensive than conventional microscopes.

Conventional photomicrographs of colloidal suspensions such as that in Fig. 1(a) can capture thousands of particles with spatial resolution limited by the wavelength of light. An optical scanner then can convert these photographs to digital arrays with a degree of detail limited only by the amount of computer memory available. While the image processing methods discussed below can be applied to photographic data, our main interest is in the quantitative analysis of video images which record both spatial and temporal information.

Standard commercial video cameras produce 30 complete images per second but with poorer spatial resolution than is offered by photographic film. The convenience, flexibility, and economy of video technology, however, encourage its use. The usable portion of a single video image typically consists of 480 horizontal lines of 640 pixels, where each pixel records the average brightness of a discrete area in the original scene.

Figure 1. Stages of image processing. (a) Detail of a video micrograph of the (111) plane of a face-centered cubic colloidal crystal. The radius of each polystyrene sulfonate sphere is \sigma=0.163~\mathrm{\upmu}\mathrm{m}. The scale bar indicates 2~\mathrm{\upmu}\mathrm{m}. (b) The same image filtered with the convolution kernel in eqn. (4). (c) Gray-scale dilation of the image in (b). Dark spots represent the initial estimates for particle locations based on the neighborhood maximum algorithm. (d) Final particle location estimates. The lines connecting sites constitute the network of nearest-neighbor bonds computed as a Delaunay triangulation [F. P. Preparata and M. I. Shamos, Computational Geometry (New York, Springer-Verlag, 1985)]. Such a network is useful as the basis of many measurements of local ordering.

Of the many cameras available, those based on charge-coupled device (CCD) technology are superior to older vidicon tube models which suffer from geometric distortions, nonuniform sensitivity, and various nonlinearities which vary with time. Tube cameras also are quite bulky. Monochrome CCD cameras may be preferable to color models not only because they are less expensive but also because they tend to have superior noise figures and greater sensitivity to subtle brightness variations. Color information, furthermore, is not used in the techniques we describe below. We use an NEC TI-324A CCD camera attached to our microscope's video port. The addition of a 5\times video eyepiece provides a total system magnification of M = 85 nm/pixel on the CCD. The choice of system magnification is a trade-off among several figures of merit including size of field of view, degree of image contrast, desired tracking precision, and speed of image processing. In our studies of phase transitions and dynamics in suspensions of monodisperse latex microspheres, such compromises typically dictate selecting the magnification to produce images with apparent radii of s\approx M\sigma\approx 3 pixels.

Video images must be converted into digital format before they can be analyzed. Digitizing video frames requires a dedicated frame grabber which typically takes the form of an add-on board for a computer. The frame grabber used in this study is a Data Translation DT-3851A installed in a 486-class personal computer. Frame grabbers such as the DT-3851A convert the analog video stream to digital images in real time, a process which requires more than 12 million analog to digital (A/D) conversions per second. Such high-speed digitization limits most frame grabbers to 8 bits of dynamic range, or 256 gradations of gray scale per pixel. While photographs capture much more subtle gradations, 8 bit resolution suffices if the video signal is adjusted to fill the grabber's dynamic range.

Although frame grabbers can digitize and display images in real time, storing a digital image is time consuming. A single uncompressed gray-scale image takes up about one third of a megabyte. Typical hard disks can store such an image in a quarter second, which is not fast enough to acquire full motion video in real time. Disk arrays and high speed video drives can archive full-screen full-motion digital video in real time and probably will become cost-effective solutions to the video storage problem in the near future. Storing images to fast memory before transferring to disk also is becoming increasingly feasible with the advent of local bus frame grabbers. Real-time hardware compression schemes such as MPEG generally are not appropriate since they achieve their results by throwing out the subtle gradations which we hope to study. Recordable video disks offer another hardware solution, but are prohibitively expensive for routine recording, particularly when only a small percentage of the recorded video is likely to be analyzed. Real-time digital video recording, however, is not the only practical approach for time-resolved digital microscopy.

Video images can be recorded in analog form with commercial video tape recorders. The more advanced formats such as S-VHS or Hi8 faithfully preserve most of the information from the original video signal. Video tape decks with computer interfaces such as the SONY EVO-9650 can be controlled by the same computer which hosts the frame grabber card. A fairly straightforward program then can direct the tape deck to seek out and pause at a particular video frame, have the frame grabber digitize the paused image, and store the result to disk. Repeating this process permits digitizing any sequence of video frames. It should be noted that some low cost frame grabbers have difficulty digitizing pause-mode video signals. While such problems are becoming less common as video technology progresses, they should still be considered when designing a video acquisition system.

§ III. Five Stages of Colloidal Particle Tracking

Digital video analysis enables us to extract the trajectories of individual colloidal microspheres from a video tape of their microscope images. The time evolution of the distribution of particles,

\rho({\bf r},t)=\sum _{{i=1}}^{N}\delta({\bf r}-{\bf r}_{i}(t)), (1)

then can be used to calculate quantities of interest, some examples of which we discuss in later sections. In eqn. (1), {\bf r}_{i}(t) is the location of the i-th particle in a field of N particles at time t.

The software we have developed to extract \rho({\bf r},t) from a sequence of digital images consists of five logical steps: correcting imperfections in the individual images, locating candidate particle positions, refining these positions, discriminating “false” particles, and finally linking the time-resolved particle locations into trajectories. In this section, we discuss our solutions to these interrelated problems.

The difficulty of measuring \rho({\bf r},t) can vary greatly from system to system. For instance, images of a dilute suspension whose particles are geometrically confined at the microscope's focal plane are simpler to process than pictures of a dense suspension of colloid moving in three dimensions. In the latter case, the tracking algorithm has to deal with the changing appearance of the spheres as they move in and out of focus. Furthermore, it has to distinguish marginally focused particles from noise. The need to locate a large number of spheres by processing the much larger number of pixels in each of a stream of images also places stringent efficiency constraints on the code. The set of algorithms presented below reliably processes “easy” and “difficult” images alike, and does so with minimal intervention in the form of adjustable tuning parameters.

Our image processing system is implemented in IDL, a programming language optimized for visual data analysis(9). We have found IDL to be more convenient than conventional languages such as C/C++ or Fortran for rapidly developing, testing, and modifying image analysis software. Although IDL is an interpreted language, its performance on typical tasks such as matrix convolutions is comparable to that of compiled languages thanks to the availability of highly optimized analysis modules and array processing primitives.

§ III.1. Image Restoration

Digitized images typically suffer from a range of imperfections including geometric distortion, nonuniform contrast, and noise. These all introduce errors into \rho({\bf r},t) unless steps are taken to restore the image to its “ideal” state. Some geometric distortions are caused by defects in the microscope optics, but most are introduced in later stages of digitization. Video signals adhering to the RS-170 standard, for example, consist of rectangular pixels with a 4:3 aspect ratio. A circle imaged by a video camera appears uniaxially distorted into an ellipse when digitized and displayed by a computer, whose pixels are square. The analysis routines we describe below are most easily implemented for images consisting of square pixels. While many digitizing boards attempt to correct for uniaxial distortion, they often leave a residual anisotropy of a few percent. Both uniform and nonuniform geometric distortions can be measured by creating images of standard grids, identifying features in the images with features in the standards, and determining how far the image features are displaced from their ideal locations in an undistorted image. The algorithms we describe below for locating colloidal spheres also are useful for locating features in such calibration standards. Standard image processing texts describe algorithms for measuring apparent distortions in the calibration grid image and removing the distortion by spatial warping(10); (11); (12). Many image processing packages such as IDL include efficient implementations.

Contrast gradients can arise from nonuniform sensitivity among the camera's pixels. More significant variation often is due to uneven illumination. Long wavelength modulation of the background brightness complicates the design of criteria capable of locating spheres' images throughout an entire image. Subtracting off such a background is not difficult if the features of interest are relatively small and well separated as is frequently the case for colloidal images. Under these circumstances, the background is reasonably well modeled by a boxcar average over a region of extent 2w+1, where w is an integer larger than a single sphere's apparent radius in pixels, but smaller than an intersphere separation:

A_{w}(x,y)={1\over(2w+1)^{2}}\sum _{{i,j=-w}}^{w}A(x+i,y+j). (2)

While long-wavelength contrast variations waste the digital imaging system's dynamic range, noise actually destroys information. Coherent noise from radio frequency interference (RFI) can be removed with Fourier transform techniques (10); (11); (12) but is best avoided with proper electrical shielding. Digitization noise in the CCD camera and the frame grabber, however, is unavoidable. Such noise tends to be purely random with a correlation length \lambda _{n}\approx 1 pixel. Convolving an image A(x,y) with a Gaussian surface of revolution of half width \lambda _{n} strongly suppresses such noise without unduly blurring the image:

A_{{\lambda _{n}}}(x,y)={1\over B}\sum _{{i,j=-w}}^{w}A(x+i,y+j)\exp\left(-{i^{2}+j^{2}\over 4\lambda _{n}^{2}}\right), (3)

with normalization B=\left[\sum\limits _{{i=-w}}^{w}\exp\left(-{i^{2}\over 4\lambda _{n}^{2}}\right)\right]^{2}.

The difference between the noise-reduced and background images is an estimate of the ideal image. Since both eqn. (2) and eqn. (3) can be implemented as convolutions of the image A(x,y) with simple kernels of support 2w+1, we can compute both in a single step with the convolution kernel

K(i,j)={1\over K_{0}}\left[{1\over B}\exp\left(-{i^{2}+j^{2}\over 4\lambda _{n}^{2}}\right)-{1\over(2w+1)^{2}}\right]. (4)

The normalization constant K_{0}={1\over B}\left[\sum\limits _{{i=-w}}^{w}\exp\left(-{i^{2}\over 2\lambda _{n}^{2}}\right)\right]^{2}-{B\over(2w+1)^{2}} facilitates comparison among images filtered with different values of w. The correlation length of the noise generally is not used as an input parameter, with \lambda _{n} instead being set to unity. The efficacy of the filter can be judged from the example in Fig. 1(b).

In practice, the image A(x,y) must be cast from an array of bytes to a higher precision data format, such as a floating point array, before convolution. This scaling, together with the actual convolution operation can be implemented in hardware with an array processor such as the Data Translation DT-2878. Further speed enhancement is realized by decomposing the circularly symmetric two-dimensional convolution kernel K(i,j) into four one-dimensional convolution kernels, so that filtering can be computed in O(w) operations rather than O(w^{2}).
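For concreteness, the combined background subtraction and noise suppression of eqns. (2)-(4) can be sketched in a few lines of Python with NumPy and SciPy. This is offered only as an illustration of the kernel construction; the IDL implementation described above differs in detail, and the function names and the use of scipy.ndimage.convolve are choices made for this sketch.

import numpy as np
from scipy.ndimage import convolve

def restoration_kernel(w, lambda_n=1.0):
    # Convolution kernel of eqn. (4): Gaussian smoothing minus boxcar background.
    i = np.arange(-w, w + 1)
    gauss_1d = np.exp(-i**2 / (4.0 * lambda_n**2))
    B = gauss_1d.sum()**2                              # normalization of eqn. (3)
    gauss_2d = np.outer(gauss_1d, gauss_1d)            # exp(-(i^2 + j^2) / 4 lambda_n^2)
    K0 = (1.0 / B) * np.exp(-i**2 / (2.0 * lambda_n**2)).sum()**2 - B / (2 * w + 1)**2
    return (gauss_2d / B - 1.0 / (2 * w + 1)**2) / K0

def restore(image, w, lambda_n=1.0):
    # Cast to floating point before filtering, as discussed in the text.
    return convolve(image.astype(float), restoration_kernel(w, lambda_n), mode='constant')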

§ III.2. Locating Particles

We identify local brightness maxima within an image as candidate particle locations. In practice, a pixel is adopted as a candidate if no other pixel within a distance w is brighter. Extending the comparison beyond a pixel's immediate neighborhood in this way accounts for the center-to-center separation of non-overlapping spheres and so greatly reduces the number of duplicate candidate sites. Because only the brightest pixels correspond to particle locations, we further require candidates to be in the upper 30th percentile of brightness for the entire image.

While not the most computationally efficient approach, the gray-scale dilation operation (10); (12) provides a conceptually clear implementation of the regional maximum selection criterion. Gray-scale dilation is an elementary morphological operation which sets the value of pixel A(x,y) to the maximum value within a distance w of coordinates (x,y), as is shown in Fig. 1(c). A pixel in the original image which has the same value in the dilated image is then a candidate particle location. We use the same value of w as was used in the filtering step.
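The neighborhood-maximum criterion lends itself to an equally compact sketch. Here scipy.ndimage.grey_dilation stands in for the dilation operation described above; the function name and the percentile argument are illustrative.

import numpy as np
from scipy.ndimage import grey_dilation

def find_candidates(image, w, percentile=70.0):
    # Candidate particle locations: pixels unchanged by grey-scale dilation over a
    # radius-w neighborhood which also lie in the upper 30% of image brightness.
    y, x = np.ogrid[-w:w + 1, -w:w + 1]
    footprint = x**2 + y**2 <= w**2                    # circular neighborhood of radius w
    dilated = grey_dilation(image, footprint=footprint)
    bright = image > np.percentile(image, percentile)
    return np.column_stack(np.nonzero((image == dilated) & bright))   # (row, column) pairs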

§ III.3. Refining Location Estimates

In principle, any single-pixel regional maximum algorithm should be able to locate particle centroids to within half a pixel. This is the accuracy estimated by Schaertl and Sillescu (13) for their particle locating algorithm. In practice, such an approach suffers from poor noise rejection and includes false identifications. It is not difficult, however, to reduce the standard deviation of the position measurement to better than 1/10 pixel even with moderate signal noise. Other information gathered in the process can be used to estimate the spheres' displacements in the z-direction and to reject spurious identifications.

Having already found a locally brightest pixel at (x,y), which presumably is near a sphere's geometric center at (x_{0},y_{0}), we calculate the offset from (x,y) to the brightness-weighted centroid of the pixels in a region around (x,y):

{\epsilon _{x}\choose\epsilon _{y}}={1\over m_{0}}\sum\limits _{{i^{2}+j^{2}\leq w^{2}}}{i\choose j}A(x+i,y+j), (5)

where m_{0}=\sum\limits _{{i^{2}+j^{2}\leq w^{2}}}A(x+i,y+j) is the integrated brightness of the sphere's image. The refined location estimate is then (x_{0},y_{0})=(x+\epsilon _{x},y+\epsilon _{y}). Neglecting the background subtraction performed by the convolution kernel in eqn. (4) would bias \epsilon _{x} and \epsilon _{y} toward the center of the fitting region and away from the particle image's centroid. If either |\epsilon _{x}| or |\epsilon _{y}| exceeds 0.5, the candidate centroid location can be moved accordingly and the refinement recalculated.

§ III.4. Noise Discrimination and Tracking in Depth

While looping over candidate particle locations to calculate centroid refinements, we calculate other moments of each sphere image's brightness distribution:

m_{0}=\sum _{{i^{2}+j^{2}\leq w^{2}}}A(x+i,y+j) (6)

and

m_{2}={1\over m_{0}}\sum _{{i^{2}+j^{2}\leq w^{2}}}(i^{2}+j^{2})A(x+i,y+j), (7)

where (x,y) are the coordinates of the sphere's centroid. These additional moments are useful for distinguishing spheres from noise and for estimating their displacements from the focal plane.

Figure 2. Clustering of colloidal images in the (m_{0},m_{2}) plane. 15000 images of \sigma=0.325~\mathrm{\upmu}\mathrm{m} radius spheres.

Colloidal spheres tend to fall into broad yet well-separated clusters in the (m_{0},m_{2}) plane as can be seen in Fig. 2. Non-particle identifications, including colloidal aggregates, misidentified noise, and imperfections in the optical system, generally fall well outside the target cluster. The breadth of the cluster of valid points arises from the changing appearance of spheres as they move out of the microscope's focal plane. The exact nature of the broadening depends on whether spheres are being imaged in transmitted or reflected light. In the absence of a convenient formulation for the anticipated distribution of sphere images in the (m_{0},m_{2}) plane, we find that statistical cluster analysis(14) is effective for categorizing candidate identifications as either particles or noise and at distinguishing different classes of particles in bi- and polydisperse suspensions. Consequently, the spatial coordinates of features selected in the (m_{0},m_{2}) cluster analysis, such as those shown in Fig. 1(d), constitute the measured particle locations \rho({\bf r},t) in the snap-shot at time t.
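The moments of eqns. (6) and (7) are computed over the same circular mask. The rectangular gate in the second function below is only a crude stand-in for the statistical cluster analysis of reference (14), included to illustrate how candidates might be accepted or rejected in the (m_{0},m_{2}) plane.

import numpy as np

def brightness_moments(image, x, y, w):
    # Integrated brightness m0 and second moment m2, eqns. (6) and (7).
    j, i = np.ogrid[-w:w + 1, -w:w + 1]
    mask = i**2 + j**2 <= w**2
    window = image[int(y) - w:int(y) + w + 1, int(x) - w:int(x) + w + 1] * mask
    m0 = window.sum()
    m2 = ((i**2 + j**2) * window).sum() / m0
    return m0, m2

def keep_particles(m0, m2, m0_range, m2_range):
    # Keep candidates whose moments fall inside a gate drawn around the valid cluster.
    return ((m0_range[0] < m0) & (m0 < m0_range[1]) &
            (m2_range[0] < m2) & (m2 < m2_range[1]))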

The distribution of data in the (m_{0},m_{2}) plane reflects the spheres' positions along the direction normal to the imaging plane. This dependence is difficult to calculate, but straightforward to calibrate. We obtain calibration data by preparing a single layer sample of each of the monodisperse colloidal suspensions in our study, either by confining the spheres between parallel glass walls, or by allowing spheres to aggregate onto a glass substrate. The first method more closely mimics the configuration in our investigations. The second has the advantage of being easier to prepare. The calibration sample is mounted on the microscope and aligned so that the plane containing the particles is parallel to the focal plane. An electric motor then is coupled to the microscope's focusing knob so that the layer of particles moves through the usable depth of focus at a rate of about 1~\mathrm{\upmu}\mathrm{m} per second. When such a focus scan is digitized at 30 frames per second, each frame is displaced vertically by about dz=33 nm relative to the one before.

Since all the particles in a given frame of a focus scan are at the same displacement z from the focal plane, their images form a compact, roughly elliptical cluster in the (m_{0},m_{2}) plane. The mean and standard deviations of m_{0} and m_{2} for each discrete step z_{i} in z then can be collected into a probability distribution P(z_{i}|m_{0},m_{2})dz for a given particle to be within dz of z_{i} given its descriptors m_{0} and m_{2}. This probability distribution is then used to estimate particles' vertical positions through

z=\sum _{i}P(z_{i}|m_{0},m_{2})z_{i}, (8)

where the sum runs over the frames from the focus scan.

Applying eqn. (8) to the original calibration data provides an estimate for the error in z as a function of z. The values of z measured for the i-th frame fall in a Gaussian distribution about z_{i}. We adopt the width of this Gaussian as an estimate of the error in the location estimate for spheres near z_{i}. In practice, we find this value to be at best 10 times larger than the in-plane location error.
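Given the tabulated means and standard deviations of m_{0} and m_{2} from a focus scan, eqn. (8) reduces to a weighted sum. The sketch below models P(z_{i}|m_{0},m_{2}) as a product of Gaussians in m_{0} and m_{2}; that choice, like the array names, is an assumption made for illustration rather than a statement of our calibration procedure.

import numpy as np

def estimate_z(m0, m2, z_cal, m0_mean, m0_std, m2_mean, m2_std):
    # z_cal and the mean/std arrays come from the focus-scan calibration, one entry
    # per discrete step z_i.  The weights approximate P(z_i | m0, m2).
    likelihood = (np.exp(-(m0 - m0_mean)**2 / (2.0 * m0_std**2)) / m0_std *
                  np.exp(-(m2 - m2_mean)**2 / (2.0 * m2_std**2)) / m2_std)
    weights = likelihood / likelihood.sum()            # normalize over the calibration frames
    return np.sum(weights * z_cal)                     # eqn. (8)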

§ III.5. Linking Locations into Trajectories

Having located colloidal particles in a sequence of video images, we match up locations in each image with corresponding locations in later images to produce the trajectories in \rho({\bf r},t). This requires determining which particle in a given image most likely corresponds to one in the preceding image. Tracking more than one particle requires care since any particle can be identified with only one particle in each of the successive and preceding frames. Thus, we seek the most probable set of N identifications between N locations in two consecutive images. If the particles are indistinguishable, as for monodisperse colloidal spheres, this likelihood can be estimated only by proximity in the two images. The corresponding algorithm for trajectory linking can be motivated by consideration of the dynamics of noninteracting Brownian particles.

The probability that a single Brownian particle will diffuse a distance \delta in the plane in time \tau is

P(\delta|\tau)={1\over 4\pi D\tau}\exp\left(-{\delta^{2}\over 4D\tau}\right), (9)

where D is the particle's self-diffusion coefficient. For an ensemble of N noninteracting identical particles, the corresponding probability distribution is the product of single particle results:

P(\{\delta _{i}\}|\tau)=\left({1\over 4\pi D\tau}\right)^{N}\exp\left(-\sum\limits _{{i=1}}^{N}{\delta _{i}^{2}\over 4D\tau}\right). (10)

The most likely assignment of particle labels from one image to the next is the one which maximizes P(\{\delta _{i}\}|\tau), or, equivalently, minimizes \sum\limits _{{i=1}}^{N}\delta _{i}^{2}. While this criterion is rigorously correct for noninteracting systems, it also performs well in practice for interacting spheres provided the time interval between frames is sufficiently small. We discuss this restriction in more detail below.

Each label assignment can be thought of as a bond drawn between a pair of particles in consecutive frames. Calculating P(\{\delta _{i}\}|\tau) for all possible combinations of label assignments would require O(N!) computations, which is impractical for all but the most trivial distributions. The set of all possible label assignments forms a network of N_{n}N_{{n+1}} bonds connecting the N_{n} particles in the n-th frame with the N_{{n+1}} particles in the next. To reduce the complexity of assigning labels, we select only those bonds shorter than a characteristic length scale L to be possible candidates for the assignment of particle labels between two frames. This is equivalent to truncating the single-particle probability distribution P(\delta|\tau) at \delta=L. If L is somewhat smaller than the typical interparticle spacing in a snapshot, the remaining network usually is reduced to a collection of disconnected subnetworks. We then can solve for the optimal set of identifications within each of these subnetworks separately. For small enough values of L, most subnetworks contain only single bonds for which the assignment of labels is trivial. Trajectory linking proceeds in O(N\ln N) time for such trivial cases.

The least difficult nontrivial subnetworks to resolve consist of bonds linking M particles in one frame to M in the next, with M\ll N. The assignment of labels in this case proceeds according to eqn. (10) with no more than O(M!) operations and often much fewer. Subnetworks connecting unequal numbers of sites in two frames pose more of a problem since some labeling assignments then cannot be resolved. Such subnetworks are not unusual because particles often wander into and out of the observable sample volume, particularly near the edges. To proceed with labeling, we add as many “missing” bonds as are needed to complete trial labeling assignments. These missing bonds are assigned the length \delta _{i}=L for the purpose of evaluating eqn. (10). The most probable set of identifications therefore requires labeling some particles as “missing” in individual time steps. The last known locations of missing particles are retained in case unassigned particles reappear sufficiently nearby to resume the trajectory. This process is repeated for the particle locations in each frame until \rho({\bf r},t) is completely determined. Trajectories for monodisperse colloidal spheres at a crystal-fluid interface appear in Fig. 3.
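The matching criterion for a single pair of frames can be sketched as follows. This illustration minimizes the same total squared displacement by means of the Hungarian algorithm (scipy.optimize.linear_sum_assignment) rather than the subnetwork decomposition described above, and it treats bonds longer than L as “missing” with cost L^{2}.

import numpy as np
from scipy.optimize import linear_sum_assignment
from scipy.spatial.distance import cdist

def link_frames(pos_a, pos_b, L):
    # pos_a and pos_b are (N, 2) arrays of particle locations in consecutive frames.
    # Returns (index_in_a, index_in_b) pairs for bonds shorter than the cutoff L;
    # unmatched particles are implicitly labeled "missing".
    na, nb = len(pos_a), len(pos_b)
    cost = np.full((na + nb, na + nb), L**2, dtype=float)   # dummy rows/columns = missing bonds
    d2 = cdist(pos_a, pos_b, 'sqeuclidean')
    d2[d2 > L**2] = 1e9 * L**2                              # forbid bonds longer than L
    cost[:na, :nb] = d2
    rows, cols = linear_sum_assignment(cost)
    return [(r, c) for r, c in zip(rows, cols)
            if r < na and c < nb and cost[r, c] <= L**2]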

Figure 3. Trajectories of \sigma=0.163~\mathrm{\upmu}\mathrm{m} radius colloidal microspheres at a crystal-fluid interface over 1 sec [C. A. Murray, W. O. Sprenger, and D. G. Grier (preprint)].

Linking particle distributions into trajectories is only feasible if the typical single particle displacement \delta in one time step is sufficiently smaller than the typical interparticle spacing, a. Otherwise, particle positions will become inextricably confused between snapshots. The optimal cutoff parameter falls in the range \delta<L<a/2. Colloidal particles large enough to image with a conventional light microscope (\sigma>150 nm) typically diffuse through water a distance smaller than their diameters in a 1/60 sec video field interval. Consequently, particle misidentification rarely imposes a practical limit on particle tracking at video frame rates.

§ III.6. Error Estimates and Optimal Settings

A simple model suffices to gauge the performance of the brightness-weighted centroid estimation. Although scattering of light by submicron dielectric spheres is fairly complicated (15), a typical sphere's image is reasonably well modeled by a Gaussian surface of revolution,

A(x,y)=A_{0}\exp\left(-{2|{\bf r}-{\bf r}_{0}|^{2}\over s^{2}}\right) (11)

with apparent radius s centered at {\bf r}_{0}=(x_{0},y_{0}). We assume implicitly in eqn. (5) that the center coordinates (x_{0},y_{0}) are registered with the camera's digitizing grid. This need not be the case. If the estimating mask is not much broader than the image, then uneven clipping at the edges skews the centroid estimate. Say the ideal image were offset along one of the grid's axes by a small amount \epsilon which we wish to estimate. The corresponding error due to clipping in the displacement estimate is

\Delta _{\epsilon}^{{\rm clipping}}\approx\epsilon\left({2w^{2}\over s^{2}}\right)\exp\left({-{2w^{2}\over s^{2}}}\right)\left[1-\exp\left({-{2w^{2}\over s^{2}}}\right)\right]^{{-1}}. (12)

Figure 4. Error in estimating the displacement \epsilon of a particle location from the digitizing grid as a function of the size w of the mask used for centroid refinement. Heavy lines are quadrature sums of the error estimates in eqns. (12) and (13). Circles are Monte-Carlo calculations for Gaussian surfaces of revolution with additive noise \Delta _{A}/A=0.02 and halfwidths s=6 pixels (a) and s=2 pixels (b). In both cases, the error estimate provides an adequate value for both the optimal kernel support, w, and for the accuracy of the position estimate at that value.

The value at each pixel, furthermore, has an associated measurement error due to noise of rms magnitude \Delta _{A} which contributes

\Delta _{\epsilon}^{{\rm noise}}\approx\left({\Delta _{A}\over A_{0}}\right)\left({2w^{2}\over s^{2}}\right){1\over 2\pi^{{1\over 2}}}\left[1-\exp\left({-{2w^{2}\over s^{2}}}\right)\right]^{{-1}} (13)

to the error in estimating \epsilon. The expected average displacement for an ensemble of spheres is \epsilon=0.382 pixel, and we estimate \Delta _{A}/A_{0} by measuring the rms variation in background brightness. The combined error for locating stationary particles, \Delta _{\epsilon}^{{\rm static}}=\left[\left(\Delta _{\epsilon}^{{\rm clipping}}\right)^{2}+\left(\Delta _{\epsilon}^{{\rm noise}}\right)^{2}\right]^{{1\over 2}}, appears in Fig. 4 for typical values of s and \Delta _{A}/A_{0} and has a minimum value somewhat better than 0.05 pixel in each direction for the optimal choice of w. A conservative estimate for the measurement error in our system therefore is \Delta _{\epsilon}^{{\rm static}} = 10 nm.
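The Monte-Carlo calculations summarized in Fig. 4 are straightforward to reproduce in outline. The sketch below generates noisy realizations of the Gaussian image model of eqn. (11) at random sub-pixel offsets and reports the rms error of the centroid estimator of eqn. (5); all parameter values are illustrative.

import numpy as np

def centroid_error(s=2.0, w=4, noise=0.02, n_trials=500, size=15, rng=None):
    # rms error (in pixels) of the brightness-weighted centroid for a Gaussian image
    # of apparent radius s, mask radius w, and relative noise Delta_A / A_0 = noise.
    rng = np.random.default_rng() if rng is None else rng
    j, i = np.ogrid[-size:size + 1, -size:size + 1]
    mask = (i**2 + j**2 <= w**2).astype(float)
    errors = []
    for _ in range(n_trials):
        eps = rng.uniform(-0.5, 0.5, size=2)               # true offset from the pixel grid
        image = np.exp(-2.0 * ((i - eps[0])**2 + (j - eps[1])**2) / s**2)
        image += noise * rng.standard_normal(image.shape)  # additive pixel noise
        window = image * mask
        m0 = window.sum()
        estimate = np.array([(i * window).sum() / m0, (j * window).sum() / m0])
        errors.append(estimate - eps)
    return np.sqrt(np.mean(np.square(errors)))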

Using video images to make uncorrelated measurements of fluctuating particle locations requires the shutter interval, \tau _{e}, to be considerably shorter than the 1/30 sec interval between consecutive images. Video cameras such as the NEC TI-324A have adjustable shutters with exposure times ranging from 1/60 sec down to 1/10000 sec. Shortening the exposure time, however, reduces the contrast level A_{0} and also may increase the noise level \Delta _{A} in some cameras. The dependence of the relative noise level on adjustable parameters is given by the rule of thumb

{\Delta _{A}\over A_{0}}\propto{M^{2}\over\tau _{e}}, (14)

where M\approx s/\sigma is the system magnification. The choice of magnification thus is constrained by two mutually incompatible considerations: increasing the apparent particle size s and reducing \Delta _{A}/A_{0}. Studies of ordering in suspensions further require as many spheres as possible to be in the field of view and so place an additional constraint on M. Given such a system of constraints, our system produces images of acceptable quality for \tau _{e}\geq 1 msec.

Interlaced video images pose an additional problem for video microscopists studying rapidly moving particles. A single interlaced frame consists of two fields, one for the odd lines and one for the even. Usually, these two fields are not exposed simultaneously, but rather 1/60 sec apart regardless of the shutter speed. A particle which moves significantly in the period between the two field exposures will produce a jagged image such as that shown in Fig. 5. While some video cameras can be adjusted to produce non-interlaced images, not all video recorders and frame grabbers process such signals correctly. When interlacing poses problems, we analyze the even and odd fields separately, and thereby acquire data at 1/60 sec intervals. Since each field has only half as many lines as a full frame, the tracking accuracy is degraded and differs in the two directions. Whenever possible, we arrange our experiments so that interesting motion occurs along the row direction to exploit its higher spatial resolution.
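In practice, splitting an interlaced frame amounts to separating the even and odd lines and interpolating each field back to the full line count, for example:

import numpy as np
from scipy.ndimage import zoom

def split_fields(frame):
    # Separate an interlaced frame into its even and odd fields and interpolate each
    # along the line (row) direction only, restoring a 1:1 aspect ratio (cf. Fig. 5).
    even, odd = frame[0::2, :].astype(float), frame[1::2, :].astype(float)
    return zoom(even, (2, 1), order=1), zoom(odd, (2, 1), order=1)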

Figure 5. Top: An interlaced image of a pair of rapidly moving colloidal spheres 1~\mathrm{\upmu}\mathrm{m} in diameter shows significant displacement between the even and odd fields. Scale bar indicates 1~\mathrm{\upmu}\mathrm{m}. Bottom: The even (left) and odd (right) fields of the same frame at half scale bilinearly interpolated to a 1:1 aspect ratio.

§ IV. Dynamical Measurements

§ IV.1. Diffusion Coefficients

A Brownian particle's trajectory {\bf r}(t) is parameterized by its self-diffusion coefficient D through the Einstein-Smoluchowski equation

\langle|{\bf r}(t+\tau)-{\bf r}(t)|^{2}\rangle=2dD\tau, (15)

where d is the number of dimensions of trajectory data. The angle brackets indicate a thermodynamic average over many starting times t for a single particle or over many particles for an ensemble. Time-resolved particle trajectories from digital video microscopy observations therefore provide a simple means of measuring colloidal particles' self-diffusion coefficients. As a typical application of the imaging techniques described above, diffusion coefficient measurements provide a quantitative consistency check of our particle-locating and track-reconstruction methods' accuracy. As an analytical tool, such measurements complement traditional light scattering techniques by permitting direct measurements on polydisperse, inhomogeneous, strongly interacting, or extremely dilute suspensions.

While eqn. (15) can be used to measure D directly, fitting the histogram of particle displacements \delta=|{\bf r}(t+\tau)-{\bf r}(t)| to the expected Gaussian distribution

P(\delta|\tau)=P_{0}(\tau)\exp\left(-{|{\bf\delta}-{\bf\delta}_{0}(\tau)|^{2}\over\Delta^{2}(\tau)}\right) (16)

also affords consistency checks. The offset \delta _{0}(\tau) reflects secular drift in the sample of particles perhaps due to flow in the supporting fluid, while P_{0}(\tau) is a normalization constant. Information regarding the particle dynamics appears in the width of the distribution \Delta(\tau).

Figure 6. (a) Number of particles which diffused a distance \delta in \tau=33 msec from a sample of 26000 trajectory steps. The thin line is a fit to eqn. (16). (b) The evolution in time of the square of the width, \Delta(\tau), from fits such as that in (a). The linear least-squares fit to eqn. (17) provides a measurement of the spheres' long-time self-diffusion coefficient. The unresolvable offset \Delta _{0}^{2} in this fit is consistent with our estimated measurement error for particle locations for a dilute suspension with minimal caging.

Typical diffusion data for a single isolated sphere with radius \sigma=0.498~\mathrm{\upmu}\mathrm{m} appear in Fig. 6(a) together with a least squares fit to eqn. (16). The ionic strength of this suspension (\kappa\sigma\approx 5) is sufficiently large to suppress long-range electrostatic interactions. Optical tweezers described in the next section are used to position the test sphere at least 10~\mathrm{\upmu}\mathrm{m} from the container walls and other spheres to minimize external interactions. These precautions were used to closely mimic the behavior of isolated Brownian hard spheres for this measurement and are not necessary in general. Indeed, we regularly measure diffusion data from ensembles of strongly interacting spheres with comparable techniques. Interpretation of the data in these cases, however, is less straightforward.

The quality of the fit to eqn. (16) is a sensitive test of the proper functioning of the image processing software. For instance, if the size w of the convolution kernels used for sub-pixel position refinement is too small or if the image has an uncorrected background brightness, the histogram of displacement probabilities, P(\delta|\tau), shows strong modulation with a wavelength of one pixel. A strong peak at zero displacement usually indicates that the software is mistaking motionless image defects (such as dust on the optics) for actual particles. Outliers and shoulders on the histogram usually signify unreliable particle identifications and could be warnings of poor image quality or an inappropriate choice of system parameters. Steady flow of the suspending medium shifts the distribution P(\delta|\tau) uniformly by an amount \delta _{0}(\tau). Such an effect can be seen in Fig. 6(a). Care should be taken to control non-steady flows which tend to broaden and distort P(\delta|\tau). Dynamical data averaged over an ensemble of particles rather than over an ensemble of trajectory steps for a single particle can be corrected for the effects of non-steady flows.

The long-time self-diffusion coefficient D can be extracted from the time dependence of the distribution function's width through

\Delta^{2}(\tau)=2dD\tau+\Delta _{0}^{2}. (17)

The additive offset \Delta _{0}^{2} arises in part from rapid short-time diffusion and in part from measurement errors which contribute 2d\Delta _{\epsilon}^{2}. The error in centroid location estimated from the fit in Fig. 6(b), \Delta _{\epsilon}=15\pm 15 nm, is consistent with the 10 nm resolution quoted in section III.6. Non-linear evolution of \Delta^{2}(\tau) can reflect such effects as caging in dense suspensions (16), non-Newtonian behavior in the suspending fluid, or two-dimensional corrections for geometrically confined suspensions. For the spheres in the example data, the slope of the fit to eqn. (17) shown in Fig. 6(b) indicates a self-diffusion coefficient of D=0.46\pm 0.01~\mathrm{\mathrm{\upmu}\mathrm{m}}^{{2}}\mathrm{/}\mathrm{s}.
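In outline, extracting D from trajectory data amounts to computing the mean-squared displacement as a function of lag time and fitting a line, per eqns. (15) and (17). The sketch below operates on a single two-dimensional trajectory sampled at regular intervals and fits the raw mean-squared displacement rather than the histogram width \Delta^{2}(\tau); for drift-free data the two are equivalent, and the maximum lag is an illustrative choice.

import numpy as np

def self_diffusion_coefficient(track, dt, max_lag=10):
    # track is a (T, 2) array of positions (e.g. in microns) sampled every dt seconds.
    # The intercept of the linear fit absorbs short-time effects and the static
    # measurement error 2 d Delta_eps^2 of eqn. (17).
    lags = np.arange(1, max_lag + 1)
    msd = np.array([np.mean(np.sum((track[lag:] - track[:-lag])**2, axis=1))
                    for lag in lags])
    slope, intercept = np.polyfit(lags * dt, msd, 1)
    d = track.shape[1]                                  # number of spatial dimensions
    return slope / (2 * d), intercept                   # D and the offset Delta_0^2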

The self diffusion coefficient for an isolated Brownian sphere is given by the Stokes-Einstein equation,

D_{0}={k_{B}T\over 6\pi\eta\sigma}, (18)

where \eta is the viscosity of the suspending fluid and \sigma is the sphere's radius. The viscosity of water at the temperature T=22^{{\circ}}\mathrm{C} of the example experiment is \eta=0.955~\text{cP} so that D_{0}=0.469~\mathrm{\mathrm{\upmu}\mathrm{m}}^{{2}}\mathrm{/}\mathrm{s} in good agreement with the measured value. In addition to providing values of D, diffusion measurements on tracer particles also can be used to measure suspension properties at very small length scales such as the local viscosity \eta of the suspending medium.

A variety of hydrodynamic effects tend to reduce the observed diffusion coefficient below the Stokes-Einstein value. Hydrodynamic coupling between the sphere and a flat wall (such as the microscope cover slip) a distance h away is both predicted (17) and measured (18) to reduce the lateral diffusivity to approximately

{D\over D_{0}}\approx 1-{9\over 16}{\sigma\over h} (19)

for h\gg\sigma. Similarly, coupling between a highly charged particle and its surrounding counterions can reduce the particle's diffusivity by as much as 10 percent when the particle's dimensions are comparable to the Debye-Hückel screening length (19). The double-layer effect was minimized in the example data in Fig. 6 by ensuring that the screening length was much shorter than the sphere radius.

Extracting local-scale information is greatly simplified if extraneous coupling to the system's walls and neighboring particles can be minimized by restricting observations to dilute suspensions far from walls. Measurements at high dilution can become prohibitively time consuming, however, since particles readily diffuse out of the observation volume and away from experimentally desirable configurations. Optical trapping provides a means to reproduce useful arrangements of particles and thereby to take maximum advantage of the accuracy and resolution offered by digital video microscopy.

§ IV.2. Blinking Optical Tweezers

Optical tweezers (20) exploit optical gradient forces to trap dielectric particles in three dimensions. An impinging beam of light induces an electric dipole moment in a dielectric particle. This dipole senses gradients in the field intensity and is drawn to the brightest region. Tightly focusing a laser beam with a high numerical aperture lens creates a local intensity maximum which can attract a particle strongly enough to overcome both radiation pressure and thermal forces. The same high numerical aperture objective lens used to image particles also can be used to form optical tweezers. Since the focal point can be arranged to lie in the microscope's focal plane, trapped particles are guaranteed to be in focus for the imaging system.

Our dual optical tweezer system is powered by a Spectra Diode Labs 100 mW diode laser operating at 780 nm. We introduce laser light into the objective lens with a dichroic beam splitter which facilitates simultaneous trapping and imaging. Varying the angle at which the collimated beam enters the back aperture of the objective lens moves the trapping point across the focal plane. This allows precise placement of one or more optical tweezers in the field of view. Projecting the conjugate plane to the objective lens' back aperture out of the microscope with a Keplerian telescope allows us to steer each trap with an external gimbal mounted mirror. Moving optical tweezers have been used in video microscopy studies of DNA and other polymers tethered to dielectric spheres (21). They also have been used to mimic a temporally varying spatially anisotropic potential for imaging studies of directed diffusion (22).

We perform dynamical measurements by locating our tweezers conveniently in the field of view and chopping the laser beam several times a second. Particles are free to diffuse while the laser is interrupted and are drawn back to the starting point in three dimensions when the traps reform. By collecting data only when the traps are off, we readily amass a large sample of dynamical data for isolated freely moving particles without having to chase them or refocus the microscope. We also can ensure that particles never wander close to walls or to other particles and thereby control systematic effects in our measurements. Synchronizing the chopper to the video sync signal from our camera ensures that the traps are either completely on or completely off in each video frame. The diffusion data in Fig. 6 was collected in this manner in less than two hours.

Using optical tweezers to manipulate colloidal particles would not be a useful adjunct to dynamical measurements if the laser illumination altered the system's behavior. Simple calculations show, however, that the time scale for both thermal and viscous relaxation in water on micron length scales is much shorter than a millisecond. Thus, any trap-induced thermal or hydrodynamic perturbation has already died out by the time we start collecting data. It should be noted that for some systems with slow relaxation, such as spheres in polymer solutions, this criterion might not be so easily met.

§ IV.3. Interaction Measurements

Most physical properties of dense colloidal suspensions are either determined or modified by the form of the interaction potential between the individual colloidal spheres. While present models for the pair potential have been used to achieve qualitative agreement between theoretical and observed phase diagrams and rheological properties, detailed measurements are needed to secure quantitative agreement and resolve questions of interpretation surrounding a variety of unexplained experimental phenomena. For example, the dependence of the pairwise interaction on temperature, volume fraction, and boundary conditions is still being investigated. In practical applications, pairwise interaction measurements provide detailed information regarding the state of charge and local chemical environment of particles in suspension.

The effective pairwise interaction potential U_{{\rm eff}}(r) is encoded in the equilibrium distribution of particles in a suspension through the relation

g(r)=\exp\left(-{U_{{\rm eff}}(r)\over k_{B}T}\right), (20)

where

g(r)={\langle\int\rho({\bf x}-{\bf r})\rho({\bf x})d{\bf x}\rangle\over\left[\int\rho({\bf x})d{\bf x}\right]^{2}} (21)

is the two-particle correlation function and angle brackets indicate an average over angles. Static snapshots in principle can yield measurements of U_{{\rm eff}}(r) provided that the suspension's concentration is low enough to avoid many-body contributions(7). This restriction was circumvented by Kepler and Fraden (6) who augmented their imaging measurements of g(r) with molecular dynamics simulations to correct for many-body effects. Direct imaging avoids the problems encountered in efforts to invert light scattering data to extract pair potentials (23), since detector noise contributes little to the error in estimating g(r). Geometrically confining colloid to a single layer to avoid complications from the poor depth resolution of video microscopy, however, introduces wall-mediated interactions (6).

As with the measurements of diffusivity described above, we use blinking optical tweezers to facilitate measurements of the pair interaction in unconfined colloid. For the purposes of elucidating the technique as a typical application of digital video microscopy, we describe interaction measurements on a suspension of polystyrene sulfate spheres of radius \sigma=0.498~\mathrm{\upmu}\mathrm{m}. The suspension in these experiments is strongly deionized and maintained in contact with mixed-bed ion exchange resin in a hermetically sealed glass container. An earlier measurement on smaller spheres was published in reference [5] and a more extensive survey of such measurements will be presented elsewhere.

Rather than measuring the equilibrium pair distribution g(r) directly, we use optical tweezers to create reproducible initial configurations away from which an otherwise isolated pair of particles diffuse. This technique allows us to collect a series of short trajectories from one pair of spheres located far from the sample container's glass walls and far from other particles. The pair of traps is aligned in the microscope's focal plane along a line parallel to the camera's video lines and separated by no less than ten times the typical distance a free sphere can diffuse in 1/60 sec. Under these conditions, out-of-plane diffusion is sufficiently small that the three-dimensional center-to-center separation can be approximated by its projection into the plane. An analysis of interacting Brownian particles' dynamics allows us to extract the equilibrium pair distribution function g(r) from the ensemble of trajectories.

Since the inertial damping time for the system is much smaller than the time scales on which we measure dynamics, we can approximate the master equation for the dynamical pair correlation function g(r,t) by

g(r,t+\tau)=\int P(r,\tau|r^{{\prime}},0)g(r^{{\prime}},t)dr^{{\prime}} (22)

where P(r,\tau|r^{{\prime}},0) is a Markovian propagator (24). The steady-state solution to eqn. (22) is the equilibrium pair distribution function, g(r). After spatially discretizing eqn. (22) the discrete pair correlation function g(r_{i}) is the nontrivial eigenvector of a system of linear equations:

g(r_{i})=\sum _{j}P_{{ij}}g(r_{j}) (23)

where P_{{ij}} is the transition probability matrix for a pair of particles initially separated by distance r_{j} to be separated by distance r_{i} 1/30 sec later and corresponds to P(r,{1\over 30}|r^{{\prime}},0).

In practice, we build up P_{{ij}} by binning pair trajectory data according to the initial and final center-to-center separations in each time step. Each row of the counting data is normalized independently to conserve probability in eqn. (23). A typical example of P_{{ij}} measured in this manner is shown in Fig. 7(a). Details for calculating such matrices are presented elsewhere (5). With a chopper wheel set to turn off the tweezers for six fields out of every twenty, we typically are able to videotape 15,000 fields of data in less than half an hour. To collect data at a range of different pair separations, we adjust the spacing between the traps every few minutes by slightly rotating a gimbal-mounted mirror in the laser projection optics. We programmed our frame grabber to selectively digitize only those frames with the traps off by monitoring each image's background brightness, which decreases noticeably when the laser is chopped. Once P_{{ij}} has been calculated from the trajectory data, g(r) can be calculated by solving eqn. (23). The negative logarithm of g(r) is then an estimate of the interaction potential in units of the thermal energy k_{{\rm B}}T. The pair potential for the propagator matrix shown in Fig. 7(a) appears in Fig. 7(b).
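The numerical steps, binning the trajectory data into P_{{ij}}, extracting its stationary eigenvector, and taking the negative logarithm, can be sketched as follows. The choice of bin edges, the guard against unsampled bins, and the normalization of g(r) by its largest value (appropriate only if g(r) reaches its large-r plateau within the sampled range) are simplifications made for this illustration.

import numpy as np

def pair_potential(r_initial, r_final, bins):
    # r_initial and r_final are the pair separations at the start and end of each time
    # step; bins are the edges of the separation bins r_i.  counts[i, j] tallies steps
    # from bin j to bin i, and normalizing each initial-separation column conserves
    # probability, as in eqn. (23).
    counts, _, _ = np.histogram2d(r_final, r_initial, bins=[bins, bins])
    col_sums = counts.sum(axis=0, keepdims=True)
    P = counts / np.where(col_sums > 0, col_sums, 1.0)
    vals, vecs = np.linalg.eig(P)
    g = np.abs(np.real(vecs[:, np.argmin(np.abs(vals - 1.0))]))   # eigenvector with eigenvalue 1
    g /= g.max()                                                  # crude normalization of g(r)
    centers = 0.5 * (bins[1:] + bins[:-1])
    return centers, g, -np.log(g)                                 # r, g(r), U(r) / k_B T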

The method we have outlined for measuring the effective pair interaction potential rests on very few assumptions regarding the nature of the interaction. We require, for example, non-potential interactions to be negligible to ensure history independence of the propagator. Interactions lacking spherical symmetry such as those between Brownian dipoles or charged ellipsoids also would require a more sophisticated analysis. With these caveats, however, we are free to interpret our results for charged spherical latices within the conventional DLVO theory (25) for colloidal interactions.

The electrostatic part of the DLVO potential resembles a Yukawa interaction (26)

U_{{\rm DLVO}}(r)={{Z^{*}}^{2}e^{2}\over\epsilon}\left[{e^{{\kappa\sigma}}\over 1+\kappa\sigma}\right]^{2}{e^{{-\kappa r}}\over r} (24)

where Z^{*} is the effective charge for spheres of radius \sigma, \kappa^{{-1}} is the Debye-Hückel screening length, and \epsilon is the dielectric coefficient of the suspending fluid. The screening length depends on the concentration n of z-valent counterions and determines the range of the interaction. The charge renormalization theory of Alexander et al. (27) suggests that a sphere's effective charge scales with its radius \sigma as

Z^{*}=C{\sigma\over\lambda _{{\rm B}}}, (25)

Figure 7. Measurement of pair-wise colloidal interaction potentials from digital video data. Top: Distribution of particle separations initially at r(t) which evolved to r(t+\tau) after \tau=33 msec. In the absence of an interparticle interaction, the freely diffusing spheres would form a distribution of points along the dashed diagonal line. Deviations from that line indicate the presence of interactions. The density of points does not reflect the probability of finding particles at a particular separation, but rather the frequency with which initial conditions were set in that region with the blinking optical tweezers. Inset: Grey-scale histogram of the same data set normalized as a propagation matrix, P_{{ij}}. Bottom: Interaction potential in units of the thermal energy as a function of center-to-center separation measured in units of sphere radii. The solid line is a fit to the DLVO theory with corrections for charge renormalization as described in eqns. (24) and (25). Inset: modified semi-logarithmic plot to emphasize the screened-Coulomb nature of the measured interaction.

where \lambda _{{\rm B}}={z^{2}e^{2}\over\epsilon k_{{\rm B}}T} is known as the Bjerrum length, and C is a constant predicted (27) to be about 10. The solid line in Fig. 7(b) is a fit of eqns. (24) and (25) to our measured interaction potential. The extracted screening length \kappa^{{-1}}=320\pm 30 nm corresponds to an electrolyte concentration of about 10^{{-6}} M, which presumably is due to dissolved airborne CO{}_{2}. The fit value of C=6.6\pm 1.0 is consistent with the values predicted by molecular dynamics simulations (28) (C=7) and neutron scattering measurements on micelles (29) (C=6).
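Fitting the measured potential to eqns. (24) and (25) is a two-parameter nonlinear fit in C and \kappa. Written in units of k_{{\rm B}}T, with the Bjerrum length absorbing the factor e^{2}/(\epsilon k_{{\rm B}}T), the model takes the form sketched below; the numerical constants and starting guesses are illustrative, and the arrays r and u stand for the measured curve of Fig. 7(b).

import numpy as np
from scipy.optimize import curve_fit

SIGMA = 0.498          # sphere radius in microns
LAMBDA_B = 7.14e-4     # Bjerrum length of water near room temperature, in microns

def dlvo(r, C, kappa):
    # Screened-Coulomb potential of eqns. (24) and (25) in units of k_B T, with the
    # charge renormalization Z* = C sigma / lambda_B; r and 1/kappa are in microns.
    prefactor = C**2 * SIGMA**2 / LAMBDA_B
    geometry = (np.exp(kappa * SIGMA) / (1.0 + kappa * SIGMA))**2
    return prefactor * geometry * np.exp(-kappa * r) / r

# With r (microns) and u (k_B T) holding the measured curve:
# popt, pcov = curve_fit(dlvo, r, u, p0=[7.0, 1.0 / 0.32])
# C_fit, kappa_fit = popt          # kappa_fit in inverse microns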

We chose to fit our data to eqn. (24) as a test of the charge renormalization theory for spheres of constant surface charge. Fitting to other forms would allow estimation of quantities such as the zeta potential. Consequently, this technique might be considered complementary to electrophoretic surface potential measurements.

Our prototype interaction measurement system reports a reliable interaction curve complete with fits for quantities of interest in roughly 10 hours. Most of this time is spent acquiring and analyzing video frames under software control. Performing more of these operations with specialized yet readily available hardware would reduce the processing time to under 1 hour. This is comparable to the time required for more traditional colloidal characterization measurements. Unlike conventional techniques, moreover, digital video microscopy coupled with blinking optical tweezers can measure the interactions between a particular pair of particles rather than averaging over a sample.

§ V. Summary

The preceding sections describe image analysis methods we have developed to perform quantitative time-resolved imaging studies of colloidal suspensions. This presentation emphasized techniques which offer high accuracy while requiring only general purpose commercially available equipment. The demonstrated spatial resolution of 10 nm in the plane and 150 nm in depth for \sigma=0.3~\mathrm{\upmu}\mathrm{m} radius spheres should suffice for a wide range of applications. The two applications we discussed in detail, measurement of microspheres' self-diffusion coefficients and measurement of charged spheres' pair interaction potential, demonstrate the utility of these methods for studying the microscopic dynamics of colloidal suspensions. This work was supported by the National Science Foundation under Grant No. DMR-9320378.

References

  • (1)
    J. Perrin, Atoms, tr. D. Ll. Hammick (London, J. C. Constable and Co., Ltd., 1920).
  • (2)
    D. H. van Winkle and C. A. Murray, Phys. Rev. A 34, 562 (1986); D. G. Grier and C. A. Murray, J. Chem. Phys. 100, 9088 (1994); J. A. Weiss, D. W. Oxtoby, D. G. Grier, and C. A. Murray, J. Chem. Phys., in press (1995).
  • (3)
    P. Pieranski, L. Strzelecki, and B. Pansu, Phys. Rev. Lett. 50, 900 (1983); C. A. Murray and D. H. van Winkle, Phys. Rev. Lett. 58, 1200 (1987); C. A. Murray, W. O. Sprenger, and R. A. Wenk, Phys. Rev. B 42, 688 (1990); B. Pouligny, R. Malzbender, P. Ryan, and N. A. Clark, Phys. Rev. B 42, 988 (1990); R. E. Kusner, J. A. Mann, J. Kerins, and A. J. Dahm, Phys. Rev. Lett. 73, 3113 (1994). For a review of the field see C. A. Murray, in Bond-Orientational Order in Condensed Matter Systems, ed. K. J. Strandburg (New York, Springer-Verlag, 1992), pp. 137-215.
  • (4)
    A. T. Skjeltorp, J. Magn. Mat. 65, 195 (1987); T. Okubo, J. Chem. Soc., Faraday Trans. 1 84, 3377 (1988); B. R. Jennings and M. Stankiewicz, Proc. Royal Soc. London A 427, 321 (1990); G. Helgesen and A. T. Skjeltorp, Physica A 170, 488 (1991); J. Tang and S. Fraden, Phys. Rev. Lett. 71, 3509 (1993); E. M. Lawrence, M. L. Ivey, G. A. Flores, J. Liu, J. Bibette, and J. Richard, Int. J. Mod. Phys. B 8, 2765 (1995).
  • (5)
    J. C. Crocker and D. G. Grier, Phys. Rev. Lett. 73, 352 (1994).
  • (6)
    G. M. Kepler and S. Fraden, Phys. Rev. Lett. 73, 356 (1994).
  • (7)
    K. Vondermassen, J. Bongers, A. Mueller, and H. Versmold, Langmuir 10, 1351 (1994).
  • (8)
    S. Inoué, Video Microscopy (New York, Plenum, 1986).
  • (9)
    The Interactive Data Language is a product of Research Systems Inc., Boulder, CO. Direct inquiries to info@rsinc.com.
  • (10)
    A. K. Jain, Fundamentals of Image Processing (Englewood Cliffs, NJ, Prentice Hall, 1986).
  • (11)
    R. C. Gonzalez and P. Wintz, Digital Image Processing, 2nd ed. (Reading, MA, Addison-Wesley, 1987).
  • (12)
    W. K. Pratt, Digital Image Processing (New York, Wiley, 1991).
  • (13)
    W. Schaertl and H. Sillescu, J. Colloid Interface Sci. 155, 313 (1993).
  • (14)
    J. A. Hartigan, Clustering Algorithms (New York, Wiley, 1975).
  • (15)
    Milton Kerker, Scattering of Light (New York, Academic, 1969).
  • (16)
    P. N. Pusey and R. J. A. Tough, J. Phys. A 15, 1291 (1982); G. Nägele, M. Medina-Noyola, and J. L. Arauz-Lara, Physica A 149, 123 (1988).
  • (17)
    W. B. Russel, D. A. Saville, and W. R. Schowalter, Colloidal Dispersions (Cambridge, Cambridge University Press, 1989).
  • (18)
    L. P. Faucheux and A. J. Libchaber, Phys. Rev. E 49, 5158 (1994).
  • (19)
    Theo G. M. van de Ven, Colloidal Hydrodynamics, (San Diego, Academic, 1989).
  • (20)
    A. Ashkin, J. M. Dziedzic, J. E. Bjorkholm, and S. Chu, Optics Letters 11, 288 (1986).
  • (21)
    T. T. Perkins, D. E. Smith, and S. Chu, Science 264, 819 (1994); T. T. Perkins, D. E. Smith, and S. Chu, Science 264, 822 (1994); C. Bustamante, private communication (1995).
  • (22)
    L. P. Faucheux, L. S. Bourdieu, P. D. Kaplan, and A. J. Libchaber, Phys. Rev. Lett. 74, 1504 (1995).
  • (23)
    R. Rajagopalan, Langmuir 8, 2898 (1992).
  • (24)
    H. Risken, The Fokker-Planck Equation (Berlin, Springer-Verlag, 1989).
  • (25)
    B. V. Derjaguin and L. Landau, Acta Physicochim. (USSR) 14, 633 (1941); E. J. Verwey and J. Th. G. Overbeek, Theory of the Stability of Lyophobic Colloids (Amsterdam, Elsevier, 1948).
  • (26)
    L. Belloni, J. Chem. Phys. 86, 519 (1986).
  • (27)
    S. Alexander, P. M. Chaikin, P. Grant, G. J. Morales, P. Pincus, and D. Hone, J. Chem. Phys. 80, 5776 (1984).
  • (28)
    M. J. Stevens, M. L. Falk, and M. O. Robbins, preprint (1995).
  • (29)
    S. Bucci, C. Fagotti, V. Degiorgio, and R. Piazza, Langmuir 7, 824 (1991).