Methods of Digital Video Microscopy for Colloidal Studies
Abstract.
We describe a set of image processing algorithms for extracting quantitative data from digitized video microscope images of colloidal suspensions. In a typical application, these direct imaging techniques can locate submicron spheres to within 10 nm in the focal plane and 150 nm in depth. Combining information from a sequence of video images into single-particle trajectories makes possible measurements of quantities of fundamental and practical interest such as diffusion coefficients and pair-wise interaction potentials. The measurements we describe in detail combine the outstanding resolution of digital imaging with video-synchronized optical trapping to obtain highly accurate and reproducible results very rapidly.
§ I. Introduction
Recent developments in digital image processing provide revolutionary new tools for studying colloidal suspensions. Combined with video microscopy, image processing greatly facilitates time-resolved measurements of individual colloidal particles' trajectories. In the 90 years since Perrin's pioneering photographic study of diffusion and Brownian motion(1), quantitative analysis of colloidal images has been used to study phase transitions in three-(2) and two-dimensional systems (3), to probe the effects of external fields on colloidal dynamics(4), and to measure directly the interaction between isolated pairs of colloidal microspheres(5); (6); (7). While both image processing and the ultramicroscopy of colloidal suspensions are well developed fields, applying the former to the latter poses some problems whose resolution has not been discussed in a unified way. A rapidly growing community of researchers will come up against these practical hurdles as technological advances associated with multimedia computing make these techniques more widely available. In this light, we present some practical methods which we have found useful in studying the microscopic structure and dynamics of colloidal systems. While this treatment centers on suspensions of submicron spheres, many of the methods described below can be generalized to suspensions of non-spherical particles.
In section II, we describe typical instrumentation required for acquiring digital video images of colloidal particles. Section III describes in some detail the steps required to convert a digital movie of colloidal particles into an ensemble of single-particle trajectories. We stress those aspects of the analysis which allow us to track particles with spatial resolution much finer than the wavelength of light used to create the images. High-resolution trajectory data makes possible a wide range of quantitative measurements of colloidal processes at the microscopic scale. As practical examples, in Section IV we describe measurements of microspheres' self-diffusion coefficients and of the separation dependence of the pair-wise interaction potential for charge-stabilized colloid.
§ II. Typical Instrumentation
Individual colloidal particles larger than roughly 200 nm can be resolved
with a conventional light microscope(8).
We use an Olympus IMT2 inverted microscope with a 100× N.A. 1.2 oil immersion objective.
The objective lens'
400 nm depth of focus is
comparable to a typical sphere diameter, so that only a single layer of
spheres is in focus at any time.
This is convenient for distinguishing particles
in dense suspensions.
Three-dimensional views can be reconstructed from a series of images
at different focal planes, although multiple light scattering
rapidly reduces contrast with increasing depth.
The limited working distance of high-powered objective lenses (typically around 200 μm, including the thickness of the cover glass) poses the ultimate constraint on viewing in depth.
Scanning laser confocal microscopes offer true three-dimensional
microscopy but are at least an order of magnitude more expensive
than conventional microscopes.
Conventional photomicrographs of colloidal suspensions such as that in Fig. 1(a) can capture thousands of particles with spatial resolution limited by the wavelength of light. An optical scanner then can convert these photographs to digital arrays with a degree of detail limited only by the amount of computer memory available. While the image processing methods discussed below can be applied to photographic data, our main interest is in the quantitative analysis of video images which record both spatial and temporal information.
Standard commercial video cameras produce 30 complete images per second but with poorer spatial resolution than is offered by photographic film. The convenience, flexibility, and economy of video technology, however, encourage its use. The usable portion of a single video image typically consists of 480 horizontal lines of 640 pixels, where each pixel records the average brightness of a discrete area in the original scene.



Of the many cameras available, those based on charge-coupled device (CCD)
technology are superior to older vidicon tube models
which suffer from geometric distortions, nonuniform sensitivity,
and various nonlinearities which vary with time.
Tube cameras also are quite bulky.
Monochrome CCD cameras may be preferable to color models
not only because they are less expensive
but also because they tend to have superior noise figures and
greater sensitivity to subtle brightness variations.
Color information, furthermore, is not used in the techniques we
describe below.
We use an NEC TI-324A CCD camera attached to our microscope's video port.
The addition of a video eyepiece provides a total system magnification of 85 nm per pixel on the CCD.
The choice of system magnification is a trade-off among several figures
of merit including size of field of view, degree of image contrast,
desired tracking precision, and speed of image processing.
In our studies of phase transitions and dynamics in suspensions of monodisperse latex microspheres, such compromises typically dictate selecting the magnification to produce images with apparent radii of a few pixels.
Video images must be converted into digital format before they can be analyzed. Digitizing video frames requires a dedicated frame grabber which typically takes the form of an add-on board for a computer. The frame grabber used in this study is a Data Translation DT-3851A installed in a 486-class personal computer. Frame grabbers such as the DT-3851A convert the analog video stream to digital images in real time, a process which requires more than 12 million analog to digital (A/D) conversions per second. Such high-speed digitization limits most frame grabbers to 8 bits of dynamic range, or 256 gradations of gray scale per pixel. While photographs capture much more subtle gradations, 8 bit resolution suffices if the video signal is adjusted to fill the grabber's dynamic range.
Although frame grabbers can digitize and display images in real time, storing a digital image is time consuming. A single uncompressed gray-scale image takes up about one third of a megabyte. Typical hard disks can store such an image in a quarter second, which is not fast enough to acquire full motion video in real time. Disk arrays and high speed video drives can archive full-screen full-motion digital video in real time and probably will become cost-effective solutions to the video storage problem in the near future. Storing images to fast memory before transferring to disk also is becoming increasingly feasible with the advent of local bus frame grabbers. Real-time hardware compression schemes such as MPEG generally are not appropriate since they achieve their results by throwing out the subtle gradations which we hope to study. Recordable video disks offer another hardware solution, but are prohibitively expensive for routine recording, particularly when only a small percentage of the recorded video is likely to be analyzed. Real-time digital video recording, however, is not the only practical approach for time-resolved digital microscopy.
Video images can be recorded in analog form with commercial video tape recorders. The more advanced formats such as S-VHS or Hi8 faithfully preserve most of the information from the original video signal. Video tape decks with computer interfaces such as the SONY EVO-9650 can be controlled by the same computer which hosts the frame grabber card. A fairly straightforward program then can direct the tape deck to seek out and pause at a particular video frame, have the frame grabber digitize the paused image, and store the result to disk. Repeating this process permits digitizing any sequence of video frames. It should be noted that some low cost frame grabbers have difficulty digitizing pause-mode video signals. While such problems are becoming less common as video technology progresses, they should still be considered when designing a video acquisition system.
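For concreteness, the acquisition loop just described can be sketched in a few lines of Python. The deck and grabber objects below are hypothetical wrappers for the tape deck's computer interface and the frame grabber's vendor library; they are placeholders for whatever control software a particular installation provides, not real APIs.

```python
def digitize_sequence(deck, grabber, first_frame, last_frame, step=1):
    """Digitize a chosen sequence of recorded video frames.

    `deck` and `grabber` are hypothetical device wrappers standing in for
    the tape deck's serial-control interface and the frame grabber driver."""
    for frame in range(first_frame, last_frame + 1, step):
        deck.seek(frame)                      # wind to and pause at the requested frame
        image = grabber.acquire()             # digitize the paused video signal
        image.save(f"frame_{frame:06d}.png")  # archive the digitized image to disk
```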
§ III. Five Stages of Colloidal Particle Tracking
Digital video analysis enables us to extract the trajectories of individual colloidal microspheres from a video tape of their microscope images. The time evolution of the distribution of particles,
$$\rho(\vec{r}, t) = \sum_{i=1}^{N(t)} \delta\bigl(\vec{r} - \vec{r}_i(t)\bigr) \qquad (1)$$
then can be used to calculate quantities of interest, some examples of
which we discuss in later sections.
In eqn. (1), $\vec{r}_i(t)$ is the location of the $i$-th particle in a field of $N(t)$ particles at time $t$.
The software we have developed to extract
from a sequence of digital images consists
of five logical steps:
correcting imperfections in the individual images,
locating candidate particle positions,
refining these positions,
discriminating “false” particles,
and finally linking the time-resolved particle locations into trajectories.
In this section, we discuss our solutions to these interrelated
problems.
The difficulty of measuring $\rho(\vec{r}, t)$ can vary greatly from system to system.
For instance, images of
a dilute suspension whose particles are geometrically
confined at the microscope's focal plane are simpler to process than
pictures of a
dense suspension of colloid moving in three dimensions.
In the latter case, the tracking
algorithm has to deal with the changing appearance
of the spheres as they move in and out of focus.
Furthermore, it has to distinguish marginally focused particles
from noise.
The need to locate a large number of spheres by processing the much larger
number of pixels in each of a stream of images also places stringent efficiency
constraints on the code.
The set of algorithms presented below
reliably processes “easy” and “difficult” images alike, and does
so with minimal intervention in the form of adjustable
tuning parameters.
Our image processing system is implemented in IDL, a programming language optimized for visual data analysis(9). We have found IDL to be more convenient than conventional languages such as C/C++ or Fortran for rapidly developing, testing, and modifying image analysis software. Although IDL is an interpreted language, its performance on typical tasks such as matrix convolutions is comparable to that of compiled languages thanks to the availability of highly optimized analysis modules and array processing primitives.
§ III.1. Image Restoration
Digitized images typically suffer from a range of imperfections including
geometric distortion, nonuniform contrast, and noise.
These all introduce errors into $\rho(\vec{r}, t)$ unless steps are taken to restore the image to its “ideal” state.
Some geometric distortions are caused by defects in the microscope optics,
but most are introduced in later stages of digitization.
Video signals adhering to the RS-170 standard,
for example, consist of rectangular pixels
with a 4:3 aspect ratio.
A circle imaged by a video camera appears uniaxially distorted
into an ellipse when digitized and displayed by a computer, whose
pixels are square.
The analysis routines we describe below
are most easily implemented for images consisting of square pixels.
While many digitizing boards attempt to correct for uniaxial distortion,
they often leave a residual anisotropy of a few percent.
Both uniform and nonuniform geometric distortions can be measured
by creating images of standard grids, identifying features in the images
with features in the standards, and determining how far the image features
are displaced from their ideal locations in an undistorted image.
The algorithms we describe below for locating colloidal spheres
also are useful for locating features in such calibration standards.
Standard image processing texts describe algorithms
for measuring apparent distortions in the calibration grid image and removing
the distortion by spatial warping(10); (11); (12).
Many image processing packages such as IDL include efficient implementations.
Contrast gradients can arise from nonuniform sensitivity
among the camera's pixels.
More significant variation often is due to uneven illumination.
Long wavelength modulation of the background brightness complicates
the design of criteria
capable of locating spheres' images throughout an entire image.
Subtracting off such a background is not difficult if the features of
interest are relatively small and well separated as is frequently
the case for colloidal images.
Under these circumstances, the background is reasonably well modeled by a boxcar average over a region of extent $2w + 1$, where $w$ is an integer larger than a single sphere's apparent radius in pixels, but smaller than an intersphere separation:
$$A_w(x, y) = \frac{1}{(2w+1)^2} \sum_{i,j=-w}^{w} A(x+i,\, y+j) \qquad (2)$$
While long-wavelength contrast variations waste the digital imaging
system's dynamic range, noise actually
destroys information.
Coherent noise from radio frequency interference (RFI)
can be removed with Fourier transform techniques (10); (11); (12)
but is best avoided with proper electrical shielding.
Digitization noise in the CCD camera and the frame grabber, however, is
unavoidable.
Such noise tends to be purely random, with a correlation length $\lambda_n \approx 1$ pixel.
Convolving an image with a Gaussian surface of revolution of half-width $\lambda_n$ strongly suppresses such noise without unduly blurring the image:
$$A_{\lambda_n}(x, y) = \frac{1}{B} \sum_{i,j=-w}^{w} A(x+i,\, y+j)\, \exp\!\left(-\frac{i^2 + j^2}{4\lambda_n^2}\right) \qquad (3)$$
with normalization $B = \left[\sum_{i=-w}^{w} \exp\!\left(-\frac{i^2}{4\lambda_n^2}\right)\right]^2$.
The difference between the
noise-reduced and background images is an estimate of the ideal image.
Since both eqn. (2) and eqn. (3)
can be implemented as convolutions of the image with simple kernels of support $2w + 1$, we can compute both in a single step with the convolution kernel
$$K(i, j) = \frac{1}{K_0}\left[\frac{1}{B}\exp\!\left(-\frac{i^2 + j^2}{4\lambda_n^2}\right) - \frac{1}{(2w+1)^2}\right] \qquad (4)$$
The normalization constant $K_0$ facilitates comparison among images filtered with different values of $w$.
The correlation length of the noise generally is not used as an input parameter; $\lambda_n$ instead is set to unity.
The efficacy of the filter can be judged from the example in
Fig. 1(b).
In practice, the image must be cast from an array of bytes to
a higher precision data format, such as a floating point array,
before convolution.
This scaling, together with the actual convolution operation can
be implemented in hardware with an array processor
such as the Data Translation DT-2878.
Further speed enhancement is realized by decomposing the circularly symmetric two-dimensional convolution kernel into four one-dimensional convolution kernels, so that filtering can be computed in $\mathcal{O}(w)$ operations per pixel rather than $\mathcal{O}(w^2)$.
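A minimal sketch of the combined filter of eqns. (2)–(4) is given below in Python with NumPy and SciPy, standing in for the authors' IDL implementation. The parameter names w and lam_n follow the text, the overall normalization $K_0$ is omitted since a global scale factor does not affect the later steps, and the separable implementation mirrors the four one-dimensional convolutions described above.

```python
import numpy as np
from scipy.ndimage import convolve1d

def bandpass(image, w, lam_n=1.0):
    """Suppress pixel noise (eqn. 3) and subtract the slowly varying
    background (eqn. 2) in a single linear filtering step (eqn. 4),
    implemented as four one-dimensional convolutions."""
    img = image.astype(float)                      # promote from 8-bit before convolving
    i = np.arange(-w, w + 1)

    gauss = np.exp(-i**2 / (4.0 * lam_n**2))       # 1D factor of the Gaussian kernel
    gauss /= gauss.sum()
    boxcar = np.ones(2 * w + 1) / (2.0 * w + 1.0)  # 1D factor of the boxcar kernel

    smooth = convolve1d(convolve1d(img, gauss, axis=0, mode='nearest'),
                        gauss, axis=1, mode='nearest')
    background = convolve1d(convolve1d(img, boxcar, axis=0, mode='nearest'),
                            boxcar, axis=1, mode='nearest')

    filtered = smooth - background                 # difference kernel of eqn. (4), up to K_0
    return np.clip(filtered, 0.0, None)            # negative values carry no particle signal
```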
§ III.2. Locating Particles
We identify
local brightness maxima
within an image as candidate particle locations.
In practice, a pixel is adopted as a candidate if no other pixel within a distance $w$ is brighter.
Extending the comparison beyond a pixel's immediate neighborhood in
this way accounts for the center-to-center
separation of non-overlapping spheres
and so greatly reduces the number of duplicate candidate sites.
Because only the brightest pixels correspond to particle
locations, we further require candidates to be in the upper 30th
percentile of brightness for the entire image.
While not the most computationally
efficient approach, the gray-scale dilation operation
(10); (12) provides a conceptually clear implementation of
the regional maximum selection criterion.
Gray-scale dilation is an elementary morphological operation which sets the value of each pixel $A(x, y)$ to the maximum value found within a distance $w$ of the coordinates $(x, y)$, as is shown in Fig. 1(c).
A pixel in the original image which has the same value in the dilated
image is then a candidate particle location.
We use the same value of $w$ as was used in the filtering step.
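The candidate-selection step can be sketched as follows, again in Python rather than IDL; grey_dilation from scipy.ndimage plays the role of the gray-scale dilation described above, and the brightness cut keeps only the upper 30th percentile of pixels as in the text.

```python
import numpy as np
from scipy.ndimage import grey_dilation

def find_candidates(filtered, w, percentile=70.0):
    """Return (row, column) coordinates of candidate particle locations:
    pixels that equal the brightest value within a distance w and that lie
    in the upper 30 percent of the image's brightness distribution."""
    y, x = np.mgrid[-w:w + 1, -w:w + 1]
    footprint = (x**2 + y**2) <= w**2             # circular neighborhood of radius w

    dilated = grey_dilation(filtered, footprint=footprint)
    threshold = np.percentile(filtered, percentile)

    peaks = (filtered == dilated) & (filtered > threshold)
    rows, cols = np.nonzero(peaks)
    return list(zip(rows, cols))
```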
§ III.3. Refining Location Estimates
In principle, any single-pixel
regional maximum algorithm should be able to locate
particle centroids to within half a pixel.
This is the accuracy estimated by Schaertl and Sillescu (13)
for their particle locating algorithm.
In practice, such an approach suffers from poor noise rejection
and includes false identifications.
It is not difficult, however, to reduce the standard
deviation of the position measurement
to better than 1/10 pixel even with moderate signal noise.
Other information gathered in the process can be used to estimate
the spheres' displacements in the $z$-direction and to reject spurious identifications.
Having already found a locally brightest pixel at $(x, y)$, which presumably is near a sphere's geometric center at $(x_0, y_0)$, we calculate the offset from $(x, y)$ to the brightness-weighted centroid of the pixels in a region of radius $w$ around $(x, y)$:
$$\begin{pmatrix}\epsilon_x \\ \epsilon_y\end{pmatrix} = \frac{1}{m_0} \sum_{i^2 + j^2 \le w^2} \begin{pmatrix} i \\ j \end{pmatrix} A(x+i,\, y+j) \qquad (5)$$
where $m_0$ is the integrated brightness of the sphere's image.
The refined location estimate is then $(x + \epsilon_x,\, y + \epsilon_y)$.
Neglecting the background subtraction performed by the convolution kernel in eqn. (4) would bias $\epsilon_x$ and $\epsilon_y$ toward the center of the fitting region and away from the particle image's centroid.
If either $\epsilon_x$ or $\epsilon_y$ exceeds 0.5, the candidate centroid location can be moved accordingly and the refinement recalculated.
§ III.4. Noise Discrimination and Tracking in Depth
While looping over candidate particle locations to calculate centroid refinements, we calculate other moments of each sphere image's brightness distribution:
$$m_0 = \sum_{i^2 + j^2 \le w^2} A(x+i,\, y+j) \qquad (6)$$
and
$$m_2 = \frac{1}{m_0} \sum_{i^2 + j^2 \le w^2} \left(i^2 + j^2\right) A(x+i,\, y+j) \qquad (7)$$
where $(x, y)$ are the coordinates of the sphere's centroid.
These additional moments are useful for distinguishing spheres
from noise and for estimating their displacements from
the focal plane.
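A sketch of the refinement loop, combining the centroid offset of eqn. (5) with the moments of eqns. (6) and (7), is given below. It assumes the candidate lies at least w pixels from the image edge; for simplicity the moments are evaluated about the candidate pixel, which differs negligibly from the centroid after recentering.

```python
import numpy as np

def refine(filtered, y, x, w, depth=0):
    """Brightness-weighted centroid refinement (eqn. 5) together with the
    integrated brightness m0 (eqn. 6) and radius of gyration m2 (eqn. 7),
    evaluated over the circular mask i^2 + j^2 <= w^2."""
    j, i = np.mgrid[-w:w + 1, -w:w + 1]           # j indexes rows, i indexes columns
    mask = (i**2 + j**2) <= w**2

    patch = filtered[y - w:y + w + 1, x - w:x + w + 1] * mask
    m0 = patch.sum()                              # eqn. (6)
    eps_x = (i * patch).sum() / m0                # eqn. (5)
    eps_y = (j * patch).sum() / m0
    m2 = ((i**2 + j**2) * patch).sum() / m0       # eqn. (7)

    # If the refinement exceeds half a pixel, recenter on the nearest pixel
    # and recalculate, as described in section III.3.
    if (abs(eps_x) > 0.5 or abs(eps_y) > 0.5) and depth < 3:
        return refine(filtered, int(round(y + eps_y)), int(round(x + eps_x)), w, depth + 1)
    return x + eps_x, y + eps_y, m0, m2
```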



Colloidal spheres tend to fall into broad yet well-separated clusters in the $(m_0, m_2)$ plane, as can be seen in Fig. 2.
Non-particle identifications, including colloidal aggregates,
misidentified noise, and imperfections in the optical system,
generally fall well outside the target cluster.
The breadth of the cluster of valid points arises from the
changing appearance of spheres as they move out of the microscope's
focal plane.
The exact nature of the broadening depends on whether spheres
are being imaged in transmitted or reflected light.
In the absence of a convenient formulation for the anticipated distribution of sphere images in the $(m_0, m_2)$ plane, we find that
statistical cluster analysis(14) is
effective for categorizing candidate identifications as either
particles or noise and at distinguishing different classes of particles
in bi- and polydisperse suspensions.
Consequently,
the spatial coordinates of features selected in the
cluster
analysis, such as those shown in Fig. 1(d),
constitute the measured particle locations in the snapshot at time $t$.
The distribution of data in the $(m_0, m_2)$ plane reflects the spheres' positions along the direction normal to the imaging plane.
This dependence is difficult to calculate, but straightforward to
calibrate.
We obtain calibration data by preparing a single layer
sample of each of the monodisperse colloidal suspensions in our study,
either by confining the spheres between parallel glass walls,
or by allowing spheres to aggregate onto a glass substrate.
The first method more closely mimics
the configuration in our investigations.
The second has the advantage
of being easier to prepare.
The calibration sample is mounted on the microscope
and aligned so that the plane containing the particles is parallel
to the focal plane.
An electric motor then is coupled to
the microscope's focusing knob
so that the layer of particles moves through the usable depth of focus
at a rate of about
per second.
When such a focus scan is digitized at 30 frames per second,
each frame is displaced vertically by about
nm
relative to the one before.
Since all the particles in a given frame of a focus scan are at the same displacement from the focal plane, their images form a compact, roughly elliptical cluster in the $(m_0, m_2)$ plane.
The mean and standard deviations of $m_0$ and $m_2$ for each discrete step in $z$ then can be collected into a probability distribution $P(z_k \mid m_0, m_2)$ for a given particle to be near the height $z_k$ of the $k$-th calibration frame, given its descriptors $m_0$ and $m_2$.
This probability distribution is then used to estimate particles' vertical positions through
$$z(m_0, m_2) = \sum_{k} z_k\, P(z_k \mid m_0, m_2) \qquad (8)$$
where the sum runs over the frames $k$ from the focus scan.
Applying eqn. (8) to the original calibration data provides an estimate for the error in $z$ as a function of $z$.
The values of $z$ measured for the $k$-th frame fall in a Gaussian distribution about $z_k$.
We adopt the width of this Gaussian as an estimate of the error in the $z$ location estimate for spheres near $z_k$.
In practice, we find this value to be at best 10 times larger
than the in-plane location error.
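The probability-weighted estimate of eqn. (8) can be sketched as below, assuming the focus scan has been reduced to arrays of mean values and standard deviations of $m_0$ and $m_2$ for each calibration step $z_k$; modeling $P(z_k \mid m_0, m_2)$ as a product of Gaussians in the two descriptors is an illustrative assumption.

```python
import numpy as np

def estimate_z(m0, m2, z_k, m0_mean, m0_std, m2_mean, m2_std):
    """Estimate a sphere's out-of-plane position from its brightness moments
    using a calibration table indexed by the focus-scan step k (eqn. 8)."""
    log_p = (-(m0 - m0_mean)**2 / (2.0 * m0_std**2)
             - (m2 - m2_mean)**2 / (2.0 * m2_std**2))
    p = np.exp(log_p - log_p.max())    # subtract the maximum to avoid underflow
    p /= p.sum()                        # normalize P(z_k | m0, m2) over the scan
    return np.sum(z_k * p)              # probability-weighted average position
```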
§ III.5. Linking Locations into Trajectories
Having located colloidal particles in
a sequence of video images, we match up locations
in each image with corresponding locations in later images to
produce the trajectories in $\rho(\vec{r}, t)$.
This requires determining which particle in a given
image most likely corresponds to one in the preceding
image.
Tracking more than one particle requires care since
any particle can be identified with only one particle
in each of the successive and preceding frames.
Thus, we seek the most probable set of identifications between the particle locations in two consecutive images.
If the particles
are indistinguishable, as for monodisperse colloidal spheres, this
likelihood can be estimated only by proximity in the two images.
The corresponding algorithm for trajectory linking
can be motivated by consideration of the dynamics of
noninteracting Brownian particles.
The probability that a single Brownian particle will diffuse a distance $\delta$ in the plane in time $\tau$ is
$$P(\delta \mid \tau) = \frac{1}{4\pi D \tau}\exp\!\left(-\frac{\delta^2}{4 D \tau}\right) \qquad (9)$$
where $D$ is the particle's self-diffusion coefficient.
For an ensemble of $N$ noninteracting identical particles, the corresponding probability distribution is the product of the single-particle results:
$$P(\{\delta_i\} \mid \tau) = \prod_{i=1}^{N} \frac{1}{4\pi D \tau}\exp\!\left(-\frac{\delta_i^2}{4 D \tau}\right) \qquad (10)$$
The most likely assignment of particle labels from one image to the next is the one which maximizes $P(\{\delta_i\} \mid \tau)$, or, equivalently, minimizes $\sum_{i=1}^{N} \delta_i^2$.
While this criterion is rigorously correct for noninteracting
systems, it also performs well in practice
for interacting spheres provided the time interval between
frames is sufficiently small.
We discuss this restriction in more detail below.
Each label assignment
can be thought of as a bond drawn between a pair of particles in
consecutive frames.
Calculating $\sum_i \delta_i^2$ for all possible combinations of label assignments would require $N!$ computations, which is impractical for all but the most trivial distributions.
The set of all possible label assignments forms a network of bonds connecting the particles in one frame with those in the next.
To reduce the complexity of assigning labels, we select only those bonds shorter than a characteristic length scale $L$ to be possible candidates for the assignment of particle labels between two frames.
This is equivalent to truncating the single-particle probability distribution at $\delta = L$.
If $L$ is somewhat smaller than the typical interparticle spacing in a snapshot, the remaining network usually is reduced to a collection of disconnected subnetworks.
We then can solve for the optimal set of identifications within each of these subnetworks separately.
For small enough values of $L$, most subnetworks contain only single bonds, for which the assignment of labels is trivial.
Trajectory linking proceeds in $\mathcal{O}(N)$ time for such trivial cases.
The least difficult nontrivial subnetworks to resolve consist of bonds linking $m$ particles in one frame to $m$ in the next, with $m > 1$.
The assignment of labels in this case proceeds according to eqn. (10) with no more than $m!$ operations, and often far fewer.
Subnetworks connecting unequal numbers of sites in two frames pose
more of a problem since some labeling assignments then cannot
be resolved.
Such subnetworks are not unusual because particles often wander into
and out of the observable sample volume, particularly near the edges.
To proceed with labeling, we add as many “missing” bonds
as are needed to complete trial labeling assignments.
These missing bonds are assigned the length $L$ for the purpose of evaluating eqn. (10).
The most probable set of identifications therefore requires labeling
some particles as “missing” in individual time steps.
The last known locations of missing particles are retained in case
unassigned particles reappear sufficiently nearby to resume
the trajectory.
This process is repeated for the particle locations in each frame
until $\rho(\vec{r}, t)$ is completely determined.
Trajectories for monodisperse colloidal spheres at a crystal-fluid
interface appear in Fig. 3.


Linking particle distributions into trajectories is only feasible if the typical single-particle displacement in one time step is sufficiently smaller than the typical interparticle spacing.
Otherwise, particle positions will become inextricably confused between snapshots.
The optimal cutoff parameter $L$ lies between these two length scales.
Colloidal particles large enough to image with a conventional light microscope (roughly 200 nm and larger) typically diffuse through water a distance smaller than their diameters in a 1/60 sec video field interval.
Consequently, particle misidentification rarely imposes a practical
limit on particle tracking at video frame rates.
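The sketch below implements a deliberately simplified version of the linking step: candidate bonds longer than the cutoff $L$ are discarded, and the surviving bonds are then assigned greedily, shortest first. The exhaustive minimization of eqn. (10) within each subnetwork and the bookkeeping for particles that appear or disappear are omitted, so this is a stand-in for, not a reproduction of, the procedure described above.

```python
import numpy as np

def link_frames(prev_pts, next_pts, L):
    """Greedily associate particle locations in two consecutive frames.
    Returns (index_in_prev, index_in_next) pairs; particles left unmatched
    are treated as having entered or left the field of view."""
    prev_pts = np.asarray(prev_pts, dtype=float)
    next_pts = np.asarray(next_pts, dtype=float)

    # Pairwise separations between the two frames, truncated at L.
    d = np.linalg.norm(prev_pts[:, None, :] - next_pts[None, :, :], axis=2)
    d[d > L] = np.inf

    links = []
    while np.isfinite(d).any():
        i, j = np.unravel_index(np.argmin(d), d.shape)  # shortest remaining bond
        links.append((i, j))
        d[i, :] = np.inf                                # each location is used once
        d[:, j] = np.inf
    return links
```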
§ III.6. Error Estimates and Optimal Settings
A simple model suffices to gauge the performance of the brightness-weighted centroid estimation. Although scattering of light by submicron dielectric spheres is fairly complicated (15), a typical sphere's image is reasonably well modeled by a Gaussian surface of revolution,
$$A(x, y) = A_0 \exp\!\left[-\frac{(x - x_0)^2 + (y - y_0)^2}{2 a^2}\right] \qquad (11)$$
with apparent radius $a$ centered at $(x_0, y_0)$.
We assume implicitly in eqn. (5) that the center coordinates $(x_0, y_0)$ are registered with the camera's digitizing grid.
This need not be the case.
If the estimating mask is not much broader than the image, then uneven clipping at the edges skews the centroid estimate.
Say the ideal image were offset along one of the grid's axes by a small amount $x_0$ which we wish to estimate.
The corresponding error due to clipping in the displacement estimate is
(12)







The value at each pixel, furthermore, has an associated measurement error due to random noise, whose rms magnitude contributes
(13)
to the error in estimating $x_0$.
The expected average displacement for an ensemble of spheres follows from the random registration of sphere centers with the pixel grid, and we estimate the noise magnitude by measuring the rms variation in background brightness.
The combined error for locating stationary particles appears in Fig. 4 for typical values of the apparent radius and noise level, and has a minimum value somewhat better than 0.05 pixel in each direction for the optimal choice of $w$.
A conservative estimate for the measurement error in our system therefore is $\epsilon$ = 10 nm.
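The trade-off embodied in Fig. 4 can also be explored numerically. The sketch below generates synthetic Gaussian sphere images in the spirit of eqn. (11) at random subpixel offsets, adds noise, and measures the per-coordinate rms error of the centroid estimate as a function of the mask radius w; the image amplitude, apparent radius, and noise level are illustrative assumptions.

```python
import numpy as np

def synthetic_sphere(size, x0, y0, a=2.0, amplitude=200.0, noise=4.0, rng=None):
    """Gaussian model of a sphere image at (x0, y0) plus additive random noise."""
    rng = np.random.default_rng() if rng is None else rng
    y, x = np.mgrid[0:size, 0:size]
    img = amplitude * np.exp(-((x - x0)**2 + (y - y0)**2) / (2.0 * a**2))
    return img + rng.normal(0.0, noise, img.shape)

def centroid(img, y, x, w):
    """Brightness-weighted centroid of eqn. (5) about the candidate pixel (y, x)."""
    j, i = np.mgrid[-w:w + 1, -w:w + 1]
    mask = (i**2 + j**2) <= w**2
    patch = img[y - w:y + w + 1, x - w:x + w + 1] * mask
    m0 = patch.sum()
    return x + (i * patch).sum() / m0, y + (j * patch).sum() / m0

def rms_error(w, trials=500, size=31):
    """Per-coordinate rms tracking error for spheres at random subpixel offsets."""
    rng = np.random.default_rng(0)
    c = size // 2
    sq_err = []
    for _ in range(trials):
        dx, dy = rng.uniform(-0.5, 0.5, 2)        # random registration with the grid
        img = synthetic_sphere(size, c + dx, c + dy, rng=rng)
        x_est, y_est = centroid(img, c, c, w)
        sq_err.append((x_est - (c + dx))**2 + (y_est - (c + dy))**2)
    return np.sqrt(np.mean(sq_err) / 2.0)
```

Scanning w for a fixed apparent radius and noise level reproduces the qualitative behavior discussed above: a mask that is too small clips the image, one that is too large admits unnecessary noise, and the error passes through a minimum at an intermediate value.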
Using video images to make uncorrelated measurements of fluctuating particle locations requires the shutter interval to be considerably shorter than the 1/30 sec interval between consecutive images.
Video cameras such as the NEC TI-324A have adjustable shutters
with exposure times ranging from 1/60 sec down to 1/10000 sec.
Shortening the exposure time, however, reduces
the contrast level
and also may increase the noise level
in some cameras.
The dependence of the relative noise level on adjustable parameters is given by the rule of thumb
(14)
where $M$ is the system magnification.
The choice of magnification thus is constrained by two mutually incompatible considerations: increasing the apparent particle size and reducing the relative noise level.
Studies of ordering in suspensions further require as many spheres as possible to be in the field of view and so place an additional constraint on $M$.
Given such a system of constraints, our system produces images of acceptable quality for exposure times as short as 1 msec.
Interlaced video images pose an additional problem for video microscopists studying rapidly moving particles. A single interlaced frame consists of two fields, one for the odd lines and one for the even. Usually, these two fields are not exposed simultaneously, but rather 1/60 sec apart regardless of the shutter speed. A particle which moves significantly in the period between the two field exposures will produce a jagged image such as that shown in Fig. 5. While some video cameras can be adjusted to produce non-interlaced images, not all video recorders and frame grabbers process such signals correctly. When interlacing poses problems, we analyze the even and odd fields separately, and thereby acquire data at 1/60 sec intervals. Since each field has only half as many lines as a full frame, the tracking accuracy is degraded and differs in the two directions. Whenever possible, we arrange our experiments so that interesting motion occurs along the row direction to exploit its higher spatial resolution.



§ IV. Dynamical Measurements
§ IV.1. Diffusion Coefficients
A Brownian particle's trajectory is parameterized by its self-diffusion coefficient $D$ through the Einstein-Smoluchowski equation
$$\left\langle \Delta r^2(\tau) \right\rangle = 2\, d\, D\, \tau \qquad (15)$$
where $d$ is the number of dimensions of the trajectory data.
The angle brackets indicate a thermodynamic average
over many starting times
for a single particle or
over many particles for an ensemble.
Time-resolved particle trajectories from digital video
microscopy observations therefore provide a simple
means of measuring colloidal particles' self-diffusion coefficients.
As a typical application of the imaging techniques
described above, diffusion coefficient measurements provide
a quantitative consistency check of our particle-locating
and track-reconstruction methods' accuracy.
As an analytical tool, such measurements complement traditional
light scattering techniques by permitting direct measurements on
polydisperse, inhomogeneous,
strongly interacting, or extremely dilute suspensions.
While eqn. (15) can be used to measure $D$ directly, fitting the histogram of particle displacements to the expected Gaussian distribution
$$P(\Delta x;\, \tau) = P_0 \exp\!\left[-\frac{(\Delta x - \delta)^2}{2\,\sigma^2(\tau)}\right] \qquad (16)$$
also affords consistency checks.
The offset $\delta$ reflects secular drift in the sample of particles, perhaps due to flow in the supporting fluid, while $P_0$ is a normalization constant.
Information regarding the particle dynamics appears in the width of the distribution, $\sigma(\tau)$.





Typical diffusion data for a single isolated sphere of radius $a$ appear in Fig. 6(a) together with a least squares fit to eqn. (16).
The ionic strength of this suspension is sufficiently large to suppress long-range electrostatic interactions.
Optical tweezers described in the next section are used to position the test sphere far from the container walls and other spheres to minimize external interactions.
These precautions were used to closely mimic the behavior of
isolated Brownian hard spheres
for this measurement and are not necessary in general.
Indeed, we regularly measure diffusion data from ensembles of
strongly interacting spheres with comparable techniques.
Interpretation of the data in these cases, however, is less
straightforward.
The quality of the fit to eqn. (16)
is a sensitive test of the proper
functioning of the image processing software.
For instance,
if the size of the convolution kernels used
for sub-pixel position refinement is too small or if the image has
an uncorrected background brightness,
the histogram of displacement probabilities, $P(\Delta x;\, \tau)$, shows strong modulation with a wavelength of one pixel.
A strong peak at zero
displacement usually indicates that the software is mistaking
motionless image defects (such as dust on the optics) for actual
particles.
Outliers and shoulders on the histogram usually signify
unreliable particle identifications and could be warnings
of poor image quality or an inappropriate choice
of system parameters.
Steady flow of the suspending medium shifts the distribution uniformly by an amount $\delta$.
Such an effect can be seen in
Fig. 6(a).
Care should be taken to control non-steady flows, which tend to broaden and distort $P(\Delta x;\, \tau)$.
Dynamical data averaged over an ensemble of particles rather than
over an ensemble of trajectory steps for a single particle
can be corrected for the effects of non-steady flows.
The long-time self-diffusion coefficient can be extracted from the time dependence of the distribution function's width through
$$\sigma^2(\tau) = 2 D \tau + \sigma_0^2. \qquad (17)$$
The additive offset $\sigma_0^2$ arises in part from rapid short-time diffusion and in part from measurement errors, which contribute $2\epsilon^2$.
The error in centroid location estimated from the fit in Fig. 6(b) is consistent with the 10 nm resolution quoted in section III.6.
Non-linear evolution of $\sigma^2(\tau)$ can reflect such effects as caging in dense suspensions (16), non-Newtonian behavior in the suspending fluid, or two-dimensional corrections for geometrically confined suspensions.
For the spheres in the example data, the slope of the fit to eqn. (17) shown in Fig. 6(b) yields the measured self-diffusion coefficient.
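Given a measured trajectory, the width-versus-lag-time fit of eqn. (17) reduces to a few lines; the sketch below assumes a single Cartesian coordinate sampled at uniform intervals dt, so that the variance of the displacements grows as $2D\tau$ plus a constant offset.

```python
import numpy as np

def diffusion_coefficient(x, dt, max_lag=10):
    """Fit the variance of one-dimensional displacements to
    sigma^2(tau) = 2*D*tau + offset (eqn. 17); x holds particle positions
    sampled every dt seconds. Returns the estimates of D and the offset."""
    lags = np.arange(1, max_lag + 1)
    variances = [np.var(x[lag:] - x[:-lag]) for lag in lags]  # np.var subtracts the
                                                              # mean, removing drift
    slope, offset = np.polyfit(lags * dt, variances, 1)       # linear fit in tau
    return slope / 2.0, offset
```

Because displacements computed at overlapping starting times are correlated, a simple least-squares fit of this kind gives optimistic error bars; averaging over many independent trajectory segments, as in the blinking-tweezer measurements described below, avoids the problem.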
The self-diffusion coefficient for an isolated Brownian sphere is given by the Stokes-Einstein equation,
$$D_0 = \frac{k_B T}{6 \pi \eta a} \qquad (18)$$
where $\eta$ is the viscosity of the suspending fluid and $a$ is the sphere's radius.
Evaluating eqn. (18) with the viscosity of water at the temperature of the example experiment gives a value of $D_0$ in good agreement with the measured value.
In addition to providing values of $D$,
diffusion measurements on tracer particles also can be
used to measure suspension
properties at very small length scales such as
the local viscosity
of the suspending medium.
A variety of hydrodynamic effects tend to reduce the observed diffusion coefficient below the Stokes-Einstein value.
Hydrodynamic coupling between the sphere and a flat wall (such as the microscope cover slip) a distance $h$ away is both predicted (17) and measured (18) to reduce the lateral diffusivity to approximately
$$D(h) \approx D_0\left[1 - \frac{9}{16}\frac{a}{h} + \frac{1}{8}\left(\frac{a}{h}\right)^{3}\right] \qquad (19)$$
for $h \gg a$.
Similarly, coupling between a highly
charged particle and its surrounding counterions
can reduce the particle's diffusivity by as much as 10 percent when
the particle's dimensions are comparable to the Debye-Hückel
screening length (19).
The double-layer effect was minimized in the example data
in Fig. 6 by ensuring
that the screening length was much
shorter than the sphere radius.
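A short numerical check of eqns. (18) and (19) is given below; the sphere radius, temperature, and wall separation are illustrative assumptions rather than the values used in the experiments.

```python
import math

k_B = 1.380649e-23      # Boltzmann constant, J/K
T = 298.0               # assumed temperature, K
eta = 0.89e-3           # viscosity of water near 25 C, Pa s
a = 0.5e-6              # assumed sphere radius, m (illustrative only)

D0 = k_B * T / (6.0 * math.pi * eta * a)    # Stokes-Einstein value, eqn. (18)

def lateral_diffusivity(D0, a, h):
    """Leading-order reduction of the in-plane diffusivity a distance h from a wall, eqn. (19)."""
    x = a / h
    return D0 * (1.0 - (9.0 / 16.0) * x + (1.0 / 8.0) * x**3)

print(f"D0 = {D0:.2e} m^2/s")                                         # about 4.9e-13 m^2/s here
print(f"D(h = 5a)/D0 = {lateral_diffusivity(D0, a, 5 * a) / D0:.2f}")  # about 0.89
```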
Extracting local-scale information is greatly simplified if extraneous coupling to the system's walls and neighboring particles can be minimized by restricting observations to dilute suspensions far from walls. Measurements at high dilution can become prohibitively time consuming, however, since particles readily diffuse out of the observation volume and away from experimentally desirable configurations. Optical trapping provides a means to reproduce useful arrangements of particles and thereby to take maximum advantage of the accuracy and resolution offered by digital video microscopy.
§ IV.2. Blinking Optical Tweezers
Optical tweezers (20) exploit optical gradient forces to trap dielectric particles in three dimensions. An impinging beam of light induces an electric dipole moment in a dielectric particle. This dipole senses gradients in the field intensity and is drawn to the brightest region. Tightly focusing a laser beam with a high numerical aperture lens creates a local intensity maximum which can attract a particle strongly enough to overcome both radiation pressure and thermal forces. The same high numerical aperture objective lens used to image particles also can be used to form optical tweezers. Since the focal point can be arranged to lie in the microscope's focal plane, trapped particles are guaranteed to be in focus for the imaging system.
Our dual optical tweezer system is powered by a Spectra Diode Labs 100 mW diode laser operating at 780 nm. We introduce laser light into the objective lens with a dichroic beam splitter which facilitates simultaneous trapping and imaging. Varying the angle at which the collimated beam enters the back aperture of the objective lens moves the trapping point across the focal plane. This allows precise placement of one or more optical tweezers in the field of view. Projecting the conjugate plane to the objective lens' back aperture out of the microscope with a Keplerian telescope allows us to steer each trap with an external gimbal mounted mirror. Moving optical tweezers have been used in video microscopy studies of DNA and other polymers tethered to dielectric spheres (21). They also have been used to mimic a temporally varying spatially anisotropic potential for imaging studies of directed diffusion (22).
We perform dynamical measurements by locating our tweezers conveniently in the field of view and chopping the laser beam several times a second. Particles are free to diffuse while the laser is interrupted and are drawn back to the starting point in three dimensions when the traps reform. By collecting data only when the traps are off, we readily amass a large sample of dynamical data for isolated freely moving particles without having to chase them or refocus the microscope. We also can ensure that particles never wander close to walls or to other particles and thereby control systematic effects in our measurements. Synchronizing the chopper to the video sync signal from our camera ensures that the traps are either completely on or completely off in each video frame. The diffusion data in Fig. 6 was collected in this manner in less than two hours.
Using optical tweezers to manipulate colloidal particles would not be a useful adjunct to dynamical measurements if the laser illumination altered the system's behavior. Simple calculations show, however, that the time scale for both thermal and viscous relaxation in water on micron length scales is much shorter than a millisecond. Thus, any trap-induced thermal or hydrodynamic perturbation has already died out by the time we start collecting data. It should be noted that for some systems with slow relaxation, such as spheres in polymer solutions, this criterion might not be so easily met.
§ IV.3. Interaction Measurements
Most physical properties of dense colloidal suspensions are either determined or modified by the form of the interaction potential between the individual colloidal spheres. While present models for the pair potential have been used to achieve qualitative agreement between theoretical and observed phase diagrams and rheological properties, detailed measurements are needed to secure quantitative agreement and resolve questions of interpretation surrounding a variety of unexplained experimental phenomena. For example, the dependence of the pairwise interaction on temperature, volume fraction, and boundary conditions is still being investigated. In practical applications, pairwise interaction measurements provide detailed information regarding the state of charge and local chemical environment of particles in suspension.
The effective pairwise interaction potential $U(r)$ is encoded in the equilibrium distribution of particles in a suspension through the relation
$$\frac{U(r)}{k_B T} = -\ln g(r) \qquad (20)$$
where
$$g(r) = \frac{1}{n^2}\left\langle \sum_{i \neq j} \delta(\vec{r}_i)\, \delta(\vec{r}_j - \vec{r}) \right\rangle \qquad (21)$$
is the two-particle correlation function, $n$ is the mean number density, and angle brackets indicate an average over angles.
Static snapshots in principle can yield measurements of $g(r)$ provided that the suspension's concentration is low enough to avoid many-body contributions(7).
This restriction was circumvented by Kepler and Fraden (6), who augmented their imaging measurements of $g(r)$ with molecular dynamics simulations to correct for many-body effects.
Direct imaging avoids the problems encountered in efforts to invert light scattering data to extract pair potentials (23), since detector noise contributes little to the error in estimating $g(r)$.
Geometrically confining colloid to a single layer to
avoid complications from the poor depth resolution of
video microscopy, however, introduces wall-mediated interactions (6).
As with the measurements of diffusivity described above,
we use blinking optical tweezers to facilitate measurements of the pair
interaction in unconfined colloid.
For the purposes of elucidating the technique as a typical application
of digital video microscopy, we describe interaction measurements on a
suspension of polystyrene sulfate spheres of radius $a$.
The suspension in these experiments is strongly deionized and maintained
in contact with mixed-bed ion exchange resin in a hermetically sealed
glass container.
An earlier measurement on smaller spheres was published in reference
[5] and a more extensive survey of such measurements
will be presented elsewhere.
Rather than measuring the equilibrium pair distribution
directly, we use optical tweezers to create reproducible initial
configurations away from which an otherwise isolated
pair of particles diffuse.
This technique allows us to collect a series of short trajectories
from one pair of spheres located far from the sample container's
glass walls and far from other particles.
The pair of traps is aligned in the microscope's focal plane
along a line parallel to the camera's video lines
and separated by no less than ten times the typical distance
a free sphere can diffuse in 1/60 sec.
Under these conditions, out-of-plane diffusion is sufficiently small that the three-dimensional center-to-center separation $r$ can be approximated by its projection into the focal plane.
An analysis of interacting Brownian particles' dynamics allows us to extract the equilibrium pair distribution function from the ensemble of trajectories.
Since the inertial damping time for the system is much smaller than the time scales on which we measure dynamics, we can approximate the master equation for the dynamical pair correlation function by
$$\rho(r, t + \tau) = \int P(r \mid r', \tau)\, \rho(r', t)\, dr' \qquad (22)$$
where $P(r \mid r', \tau)$ is a Markovian propagator (24).
The steady-state solution to eqn. (22) is the equilibrium pair distribution function, $g(r)$.
After spatially discretizing eqn. (22), the discrete pair correlation function $g(r_j)$ is the nontrivial eigenvector of a system of linear equations:
$$g(r_i) = \sum_{j} P_{ij}\, g(r_j) \qquad (23)$$
where $P_{ij}$ is the transition probability matrix for a pair of particles initially separated by distance $r_j$ to be separated by distance $r_i$ 1/30 sec later, and corresponds to the discretized propagator of eqn. (22).
In practice, we build up $P_{ij}$ by binning pair trajectory data according to the initial and final center-to-center separations in each time step.
Each row of the counting data is normalized independently to conserve probability in eqn. (23).
A typical example of $P_{ij}$ measured in this manner is shown in Fig. 7(a).
Details for calculating such matrices are presented elsewhere (5).
With a chopper wheel set to turn off the tweezers
for six fields out of every twenty, we typically are able to
videotape 15,000 fields of data in less than half an hour.
To collect data at
a range of different pair separations, we adjust the spacing between the traps
every few minutes by slightly rotating a gimbal-mounted
mirror in the laser projection optics.
We programmed our frame grabber to selectively digitize
only those frames with the traps off by
monitoring each image's background brightness,
which decreases noticeably when the laser is chopped.
Once $P_{ij}$ has been calculated from the trajectory data, $g(r)$ can be calculated by solving eqn. (22).
The negative logarithm of $g(r)$ is then an estimate of the interaction potential in units of the thermal energy $k_B T$.
The pair potential for the propagator matrix shown in
Fig. 7(a) appears in Fig. 7(b).
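The reduction from binned trajectory data to an interaction potential can be sketched as follows. The row normalization, the unit-eigenvalue eigenvector of eqn. (23), and the logarithm of eqn. (20) follow the text; the binning convention is an illustrative choice, and geometric factors associated with binning the scalar separation are ignored here.

```python
import numpy as np

def pair_potential(r_initial, r_final, bins):
    """Estimate g(r) and U(r)/k_BT from pair-separation data.

    r_initial, r_final: separations at the start and end of each time step.
    bins: separation bin edges. Returns bin centers, g(r), and U(r)/k_BT
    (the latter only up to an additive constant)."""
    counts, _, _ = np.histogram2d(r_initial, r_final, bins=[bins, bins])

    # Normalize each starting-separation row so probability is conserved.
    row_sums = counts.sum(axis=1, keepdims=True)
    T = np.divide(counts, row_sums, out=np.zeros_like(counts), where=row_sums > 0)

    # Stationary distribution: eigenvector of the transition matrix with
    # eigenvalue one, the discrete analogue of eqn. (23).
    vals, vecs = np.linalg.eig(T.T)
    g = np.abs(np.real(vecs[:, np.argmin(np.abs(vals - 1.0))]))
    g /= g.max()                      # overall scale is arbitrary

    centers = 0.5 * (bins[:-1] + bins[1:])
    with np.errstate(divide='ignore'):
        u_over_kT = -np.log(g)        # eqn. (20), up to an additive constant
    return centers, g, u_over_kT
```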
The method we have outlined for measuring the effective pair interaction potential rests on very few assumptions regarding the nature of the interaction. We require, for example, non-potential interactions to be negligible to ensure history independence of the propagator. Interactions lacking spherical symmetry such as those between Brownian dipoles or charged ellipsoids also would require a more sophisticated analysis. With these caveats, however, we are free to interpret our results for charged spherical latices within the conventional DLVO theory (25) for colloidal interactions.
The electrostatic part of the DLVO potential resembles a Yukawa interaction (26)
$$U(r) = \frac{Z^{*2} e^2}{\epsilon}\left(\frac{e^{\kappa a}}{1 + \kappa a}\right)^2 \frac{e^{-\kappa r}}{r} \qquad (24)$$
where $Z^{*}$ is the effective charge for spheres of radius $a$, $\kappa^{-1}$ is the Debye-Hückel screening length, and $\epsilon$ is the dielectric coefficient of the suspending fluid.
The screening length depends on the concentration of $z$-valent counterions and determines the range of the interaction.
The charge renormalization theory of Alexander et al. (27) suggests that a sphere's effective charge scales with its radius,
$$Z^{*} = C\, \frac{a}{\lambda_B} \qquad (25)$$





where $\lambda_B = e^2 / (\epsilon k_B T)$ is known as the Bjerrum length, and $C$ is a constant predicted (27) to be about 10.
The solid line in Fig. 7(b)
is a fit of eqns. (24) and (25)
to our measured interaction potential.
The extracted screening length $\kappa^{-1}$ corresponds to an electrolyte concentration which presumably is due to dissolved airborne CO$_2$.
The fit value of $C$ is consistent with the values predicted by molecular dynamics simulations (28) and measured by neutron scattering on micelles (29).
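Fitting the measured potential to eqns. (24) and (25) requires only a standard nonlinear least-squares routine. In the sketch below the arrays r and u_over_kT stand for the measured curve of Fig. 7(b), which is not reproduced here; expressing the prefactor through the Bjerrum length keeps everything in units of the thermal energy, and the sphere radius is an assumed placeholder.

```python
import numpy as np
from scipy.optimize import curve_fit

lambda_B = 0.72e-3    # Bjerrum length of water near room temperature, in microns
a = 0.5               # assumed sphere radius in microns (placeholder value)

def dlvo(r, C, kappa):
    """Screened-Coulomb potential of eqns. (24) and (25) in units of k_BT,
    with effective charge Z* = C * a / lambda_B and inverse screening length kappa."""
    Z = C * a / lambda_B
    return (Z**2 * lambda_B
            * (np.exp(kappa * a) / (1.0 + kappa * a))**2
            * np.exp(-kappa * r) / r)

# r (microns) and u_over_kT would hold the measured interaction curve:
# popt, pcov = curve_fit(dlvo, r, u_over_kT, p0=[10.0, 1.0 / 0.3])
# C_fit, kappa_fit = popt   # 1/kappa_fit is the fitted screening length
```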
We chose to fit our data to eqn. (24) as a test of the charge renormalization theory for spheres of constant surface charge. Fitting to other forms would allow estimation of quantities such as the zeta potential. Consequently, this technique might be considered complementary to electrophoretic surface potential measurements.
Our prototype interaction measurement system reports a reliable interaction curve complete with fits for quantities of interest in roughly 10 hours. Most of this time is spent acquiring and analyzing video frames under software control. Performing more of these operations with specialized yet readily available hardware would reduce the processing time to under 1 hour. This is comparable to the time required for more traditional colloidal characterization measurements. Unlike conventional techniques, moreover, digital video microscopy coupled with blinking optical tweezers can measure the interactions between a particular pair of particles rather than averaging over a sample.
§ V. Summary
The preceding sections describe image analysis methods we have developed
to perform quantitative time-resolved imaging studies of colloidal suspensions.
This presentation emphasized techniques which offer high accuracy
while requiring only general purpose commercially available equipment.
The demonstrated spatial resolution of 10 nm in the plane and 150 nm in depth for submicron spheres should suffice for a wide range of applications.
The two applications we discussed in detail, measurement of
microspheres' self-diffusion coefficients and measurement of
charged spheres' pair interaction potential, demonstrate the utility
of these methods for studying the microscopic dynamics of colloidal
suspensions.
This work was supported by the National Science Foundation under
Grant No. DMR-9320378.
References
(1) J. Perrin, Atoms, tr. D. Ll. Hammick (London, J. C. Constable and Co., Ltd., 1920).
(2) D. H. van Winkle and C. A. Murray, Phys. Rev. A 34, 562 (1986); D. G. Grier and C. A. Murray, J. Chem. Phys. 100, 9088 (1994); J. A. Weiss, D. W. Oxtoby, D. G. Grier, and C. A. Murray, J. Chem. Phys., in press (1995).
(3) P. Pieranski, L. Strzlecki, and B. Pansu, Phys. Rev. Lett. 50, 900 (1983); C. A. Murray and D. H. van Winkle, Phys. Rev. Lett. 58, 1200 (1987); C. A. Murray, W. O. Sprenger, and R. A. Wenk, Phys. Rev. B 42, 688 (1990); B. Poulingy, R. Malzbender, P. Ryan, and N. A. Clark, Phys. Rev. B 42, 988 (1990); R. E. Kusner, J. A. Mann, J. Kerins, and A. J. Dahm, Phys. Rev. Lett. 73, 3113 (1994). For a review of the field see C. A. Murray, in Bond-Orientational Order in Condensed Matter Systems, ed. K. J. Strandburg (New York, Springer-Verlag, 1992), pp. 137-215.
(4) A. T. Skjeltorp, J. Magn. Mat. 65, 195 (1987); T. Okubo, J. Chem. Soc., Faraday Trans. 1 84, 3377 (1988); B. R. Jennings and M. Stankiewicz, Proc. Royal Soc. London A 427, 321 (1990); G. Helgesen and A. T. Skjeltorp, Physica A 170, 488 (1991); J. Tang and S. Fraden, Phys. Rev. Lett. 71, 3509 (1993); E. M. Lawrence, M. L. Ivey, G. A. Flores, J. Liu, J. Bibette, and J. Richard, Int. J. Mod. Phys. B 8, 2765 (1995).
(5) J. C. Crocker and D. G. Grier, Phys. Rev. Lett. 73, 352 (1994).
(6) G. M. Kepler and S. Fraden, Phys. Rev. Lett. 73, 356 (1994).
(7) K. Vondermassen, J. Bongers, A. Mueller, and H. Versmold, Langmuir 10, 1351 (1994).
(8) S. Inoué, Video Microscopy (New York, Plenum, 1986).
(9) The Interactive Data Language is a product of Research Systems Inc., Boulder, CO. Direct inquiries to info@rsinc.com.
(10) A. K. Jain, Fundamentals of Image Processing (Englewood Cliffs, NJ, Prentice Hall, 1986).
(11) R. C. Gonzalez and P. Wintz, Digital Image Processing, 2nd ed. (Reading, MA, Addison-Wesley, 1987).
(12) W. K. Pratt, Digital Image Processing (New York, Wiley, 1991).
(13) W. Schaertl and H. Sillescu, J. Colloid Interface Sci. 155, 313 (1993).
(14) J. A. Hartigan, Clustering Algorithms (New York, Wiley, 1975).
(15) M. Kerker, Scattering of Light (New York, Academic, 1969).
(16) P. N. Pusey and R. J. A. Tough, J. Phys. A 15, 1291 (1982); G. Nägele, M. Medina-Noyola, and J. L. Arauz-Lara, Physica A 149, 123 (1988).
(17) W. B. Russel, D. A. Saville, and W. R. Schowalter, Colloidal Dispersions (Cambridge, Cambridge University Press, 1989).
(18) L. P. Faucheux and A. J. Libchaber, Phys. Rev. E 49, 5158 (1994).
(19) T. G. M. van de Ven, Colloidal Hydrodynamics (San Diego, Academic, 1989).
(20) A. Ashkin, J. M. Dziedzic, J. E. Bjorkholm, and S. Chu, Optics Letters 11, 288 (1986).
(21) T. T. Perkins, D. E. Smith, and S. Chu, Science 264, 819 (1994); T. T. Perkins, D. E. Smith, and S. Chu, Science 264, 822 (1994); C. Bustamante, private communication (1995).
(22) L. P. Faucheux, L. S. Bourdieu, P. D. Kaplan, and A. J. Libchaber, Phys. Rev. Lett. 74, 1504 (1995).
(23) R. Rajagopalan, Langmuir 8, 2898 (1992).
(24) H. Risken, The Fokker-Planck Equation (Berlin, Springer-Verlag, 1989).
(25) B. V. Derjaguin and L. Landau, Acta Physicochim. (USSR) 14, 633 (1941); E. J. Verwey and J. Th. G. Overbeek, Theory of the Stability of Lyophobic Colloids (Amsterdam, Elsevier, 1948).
(26) L. Belloni, J. Chem. Phys. 86, 519 (1986).
(27) S. Alexander, P. M. Chaikin, P. Grant, G. J. Morales, P. Pincus, and D. Hone, J. Chem. Phys. 80, 5776 (1984).
(28) M. J. Stevens, M. L. Falk, and M. O. Robbins, preprint (1995).
(29) S. Bucci, C. Fagotti, V. Degiorgio, and R. Piazza, Langmuir 7, 824 (1991).