Digitized images typically suffer from a range of imperfections including
geometric distortion, nonuniform contrast, and noise.
All of these introduce errors into subsequent quantitative analysis unless
steps are taken to restore the image to its ``ideal'' state.
Some geometric distortions are caused by defects in the microscope optics,
but most are introduced in later stages of digitization.
Video signals adhering to the RS-170 standard,
for example, consist of rectangular pixels
with a 4:3 aspect ratio.
A circle imaged by a video camera therefore appears uniaxially distorted
into an ellipse when digitized and displayed by a computer whose
pixels are square.
The analysis routines we describe below
are most easily implemented for images consisting of square pixels.
While many digitizing boards attempt to correct for uniaxial distortion,
they often leave a residual anisotropy of a few percent.
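As an illustrative sketch (not part of the original procedure), the residual uniaxial distortion can be removed by resampling one axis so that the pixels become effectively square. The function below uses nearest-neighbor sampling for brevity; a production pipeline would use bilinear or spline interpolation, and the name `rescale_rows` is our own.

```python
# Sketch: correcting uniaxial distortion by stretching each image row
# horizontally by a scale factor. Nearest-neighbor interpolation only.

def rescale_rows(image, scale):
    """Stretch each row of `image` (a list of lists of pixel values)
    horizontally by `scale`, sampling the nearest source pixel."""
    old_width = len(image[0])
    new_width = int(round(old_width * scale))
    out = []
    for row in image:
        # For each output pixel, pick the nearest source pixel (clamped).
        out.append([row[min(old_width - 1, int(x / scale))]
                    for x in range(new_width)])
    return out

# A 3x3 image stretched by 4/3 horizontally becomes 3x4.
img = [[1, 2, 3],
       [4, 5, 6],
       [7, 8, 9]]
corrected = rescale_rows(img, 4 / 3)
```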
Both uniform and nonuniform geometric distortions can be measured
by creating images of standard grids, identifying features in the images
with features in the standards, and determining how far the image features
are displaced from their ideal locations in an undistorted image.
The algorithms we describe below for locating colloidal spheres
also are useful for locating features in such calibration standards.
Standard image processing texts describe algorithms
for measuring apparent distortions in the calibration grid image and removing
the distortion by spatial warping [10, 11, 12].
Many image processing packages such as IDL include efficient implementations.
Contrast gradients can arise from nonuniform sensitivity among the camera's
pixels; more significant variation often is due to uneven illumination.
Long-wavelength modulation of the background brightness complicates the
design of criteria capable of locating spheres' images throughout an entire
image.
Subtracting off such a background is not difficult if the features of
interest are relatively small and well separated, as is frequently the case
for colloidal images.
Under these circumstances, the background is reasonably well modeled by a
boxcar average over a region of extent $2w+1$, where $w$ is an integer larger
than a single sphere's apparent radius in pixels, but smaller than an
intersphere separation:
\begin{equation}
A_w(x,y) = \frac{1}{(2w+1)^2} \sum_{i,j=-w}^{w} A(x+i, y+j).
\end{equation}
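The boxcar background estimate described above can be sketched directly in Python. The edge-handling choice (clamping indices at the borders) and the function name are our own; the text does not specify a boundary treatment.

```python
def boxcar_background(image, w):
    """Estimate the background of `image` (a 2-D list of pixel values) as
    the mean over a (2w+1) x (2w+1) neighborhood centered on each pixel.
    Border pixels reuse the nearest valid value (clamped indices)."""
    ny, nx = len(image), len(image[0])
    norm = (2 * w + 1) ** 2
    bg = [[0.0] * nx for _ in range(ny)]
    for y in range(ny):
        for x in range(nx):
            total = 0.0
            for j in range(-w, w + 1):
                for i in range(-w, w + 1):
                    # Clamp indices at the image borders.
                    yy = min(ny - 1, max(0, y + j))
                    xx = min(nx - 1, max(0, x + i))
                    total += image[yy][xx]
            bg[y][x] = total / norm
    return bg

# A uniform image yields a uniform background estimate.
flat = [[10.0] * 5 for _ in range(5)]
bg = boxcar_background(flat, 1)
```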
While long-wavelength contrast variations waste the digital imaging
system's dynamic range, noise actually
destroys information.
Coherent noise from radio frequency interference (RFI)
can be removed with Fourier transform techniques [10, 11, 12]
but is best avoided with proper electrical shielding.
Digitization noise in the CCD camera and the frame grabber, however, is
unavoidable.
Such noise tends to be purely random, with a correlation length of about one
pixel.
Convolving an image $A(x,y)$ with a Gaussian surface of revolution of
half width $\lambda_n$ strongly suppresses
such noise without unduly blurring the image:
\begin{equation}
A_{\lambda_n}(x,y) = \frac{1}{B} \sum_{i,j=-w}^{w}
A(x+i, y+j) \, \exp\left( -\frac{i^2 + j^2}{4\lambda_n^2} \right),
\end{equation}
with normalization
$B = \left[ \sum_{i=-w}^{w} \exp\left( -\frac{i^2}{4\lambda_n^2} \right) \right]^2$.
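A direct transcription of this Gaussian smoothing step follows, again with clamped borders as our own assumption; `lam` stands for the Gaussian half width and `w` for the half extent of the support.

```python
import math

def gaussian_smooth(image, w, lam=1.0):
    """Convolve `image` (a 2-D list) with a Gaussian of half width `lam`
    over a (2w+1) x (2w+1) support, normalized by B, the squared sum of
    the 1-D Gaussian weights."""
    weights = [math.exp(-i * i / (4 * lam * lam)) for i in range(-w, w + 1)]
    b = sum(weights) ** 2                       # normalization B
    ny, nx = len(image), len(image[0])
    out = [[0.0] * nx for _ in range(ny)]
    for y in range(ny):
        for x in range(nx):
            total = 0.0
            for j in range(-w, w + 1):
                for i in range(-w, w + 1):
                    yy = min(ny - 1, max(0, y + j))
                    xx = min(nx - 1, max(0, x + i))
                    total += image[yy][xx] * weights[j + w] * weights[i + w]
            out[y][x] = total / b
    return out

# Because the kernel is normalized, a constant image is unchanged.
flat = [[5.0] * 5 for _ in range(5)]
smooth = gaussian_smooth(flat, 2)
```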
The difference between the noise-reduced and background images is an estimate
of the ideal image.
Since both eqn. (2) and eqn. (3) can be implemented as convolutions of the
image $A(x,y)$ with simple kernels of support $2w+1$, we can compute both in
a single step with the convolution kernel
\begin{equation}
K(i,j) = \frac{1}{K_0} \left[ \frac{1}{B}
\exp\left( -\frac{i^2 + j^2}{4\lambda_n^2} \right)
- \frac{1}{(2w+1)^2} \right],
\end{equation}
where the normalization constant
\begin{equation}
K_0 = \frac{1}{B} \left[ \sum_{i=-w}^{w}
\exp\left( -\frac{i^2}{2\lambda_n^2} \right) \right]^2
- \frac{B}{(2w+1)^2}
\end{equation}
facilitates comparison among images filtered with different values of $w$.
The correlation length of the noise generally is not used as an input
parameter, with $\lambda_n$ instead being set to unity.
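The combined kernel can be tabulated as below. This is a sketch under the assumption that the kernel takes the standard Gaussian-minus-boxcar form with the normalizations written here as `b` and `k0`; the function name is our own. A useful sanity check is that the kernel entries sum to zero, so a uniform background vanishes under convolution.

```python
import math

def combined_kernel(w, lam=1.0):
    """Tabulate a (2w+1) x (2w+1) kernel that performs Gaussian smoothing
    and boxcar background subtraction in one convolution: a normalized
    Gaussian term minus a flat term, scaled by an overall constant K0."""
    g = [math.exp(-i * i / (4 * lam * lam)) for i in range(-w, w + 1)]
    b = sum(g) ** 2                                   # Gaussian normalization B
    s = sum(math.exp(-i * i / (2 * lam * lam)) for i in range(-w, w + 1))
    k0 = s * s / b - b / (2 * w + 1) ** 2             # overall normalization K0
    n = 2 * w + 1
    return [[(g[i] * g[j] / b - 1.0 / n ** 2) / k0
             for i in range(n)] for j in range(n)]

k = combined_kernel(2)
total = sum(sum(row) for row in k)   # entries sum to ~0: flat regions vanish
```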
The efficacy of the filter can be judged from the example in
Fig. 1(b).
In practice, the image A(x,y) must be cast from an array of bytes to
a higher precision data format, such as a floating point array,
before convolution.
This scaling, together with the actual convolution operation, can
be implemented in hardware with an array processor
such as the Data Translation DT-2878.
Further speed enhancement is realized by decomposing the circularly
symmetric two-dimensional
convolution kernel $K(i,j)$ into four one-dimensional convolution kernels,
so that filtering can be computed in $O(w)$ operations per pixel rather
than $O(w^2)$.
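The separability trick can be illustrated with the Gaussian part of the filter, which factors exactly into a horizontal and a vertical 1-D pass. The sketch below (our own helper names, clamped borders, pure Python) applies the same 1-D kernel along rows and then columns, so each pixel costs $O(w)$ operations per pass instead of $O(w^2)$ for the full 2-D sum.

```python
import math

def conv1d_rows(image, kernel):
    """Convolve each row of `image` (a 2-D list) with a 1-D kernel,
    clamping indices at the edges."""
    w = len(kernel) // 2
    ny, nx = len(image), len(image[0])
    out = [[0.0] * nx for _ in range(ny)]
    for y in range(ny):
        for x in range(nx):
            out[y][x] = sum(kernel[i + w] * image[y][min(nx - 1, max(0, x + i))]
                            for i in range(-w, w + 1))
    return out

def transpose(image):
    return [list(col) for col in zip(*image)]

def separable_gaussian(image, w, lam=1.0):
    """Apply 2-D Gaussian smoothing as two 1-D passes (rows, then columns),
    needing O(w) operations per pixel instead of O(w^2)."""
    g = [math.exp(-i * i / (4 * lam * lam)) for i in range(-w, w + 1)]
    s = sum(g)
    g = [v / s for v in g]       # normalize so a flat image is preserved
    rows = conv1d_rows(image, g)
    return transpose(conv1d_rows(transpose(rows), g))

# A constant image passes through unchanged.
flat = [[3.0] * 4 for _ in range(4)]
smooth = separable_gaussian(flat, 1)
```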