Patents
Publication number: US 20010038717 A1
Publication type: Application
Application number: US 09/771,343
Publication date: Nov 8, 2001
Filing date: Jan 26, 2001
Priority date: Jan 27, 2000
Also published as: CA2397817A1, CA2397817C, DE60142678D1, EP1254430A2, EP1254430B1, US7228003, US20050008207, US20070201760, WO2001055964A2, WO2001055964A3, WO2001055964A9
Inventors: Carl Brown
Original Assignee: Brown Carl S.
Flat-field, panel flattening, and panel connecting methods
US 20010038717 A1
Abstract
A plurality of panels are assembled into a single image. Each of the panels may have different intensities throughout the panel, as well as non-uniformities between panels. The panels are modified using flat-field calibration, panel flattening, and panel connecting techniques. These techniques correct for non-uniformities and provide a cleaner, single image.
Claims(17)
What is claimed is:
1. A method of flat-field calibrating an image comprising:
obtaining a plurality of images;
performing linear regression on the plurality of images to obtain a gain and an offset; and
determining the desired image using the gain and the offset.
2. The method of claim 1, further comprising obtaining a plurality of images ranging from dark current to full-well.
3. The method of claim 1, further comprising performing linear regression on each pixel of the plurality of images.
4. The method of claim 1, further comprising calculating the desired image using the equation:
Desired_image=(Measured_image−offset_map)/gain_map.
5. The method of claim 1, further comprising moving a calibration slide while obtaining the plurality of images.
6. A method of reducing offset map noise comprising:
obtaining a plurality of images;
obtaining the average dark current of the plurality of images; and
determining the desired image using the gain and the average dark current.
7. The method of claim 6, further comprising obtaining a plurality of images ranging from dark current to full-well.
8. The method of claim 6, further comprising calculating the desired image using the equation:
Desired_image=(Measured_image−average_dark_current)/gain_map.
9. The method of claim 6, further comprising averaging multiple frames to determine the desired image.
10. A method of reducing field curvature in an image comprising:
obtaining an average curvature map of a plurality of image panels; and
dividing each panel by the curvature map.
11. The method of claim 10, further comprising normalizing the curvature map by the average intensity of the curvature map.
12. The method of claim 10, further comprising smoothing the curvature map.
13. The method of claim 10, further comprising using only pixels above a background intensity to obtain the average curvature map.
14. The method of claim 10, further comprising reducing noise in the image by curve-fitting the image pixels.
15. A method of reducing discontinuities between adjacent panels in an image comprising:
comparing a border of each panel on all sides to generate border intensity scaling values; and
scaling a boundary of each panel to a point approximately midway between a current panel and an adjacent panel.
16. The method of claim 15, further comprising scaling the boundary of each panel using an inverse square weighting.
17. The method of claim 15, further comprising scaling the boundary of each panel using an inverse weighting.
Description
CROSS-REFERENCE TO RELATED APPLICATIONS

[0001] This application claims benefit of U.S. Provisional Application No. 60/178,476, filed Jan. 27, 2000.

TECHNICAL FIELD

[0002] This invention relates to image analysis, and more particularly to correcting for non-uniformities among several panels of a single image.

BACKGROUND

[0003] Biomedical research has made rapid progress based on sequential processing of biological samples. Sequential processing techniques have resulted in important discoveries in a variety of biologically related fields, including, among others, genetics, biochemistry, immunology and enzymology. Historically, sequential processing involved the study of one or two biologically relevant molecules at the same time. These original sequential processing methods, however, were quite slow and tedious. Study of the required number of samples (up to tens of thousands) was time consuming and costly.

[0004] A breakthrough in the sequential processing of biological specimens occurred with the development of techniques of parallel processing of the biological specimens, using fluorescent marking. A plurality of samples are arranged in arrays, referred to herein as microarrays, of rows and columns into a field, on a substrate slide or similar member. The specimens on the slide are then biochemically processed in parallel. The specimen molecules are fluorescently marked as a result of interaction between the specimen molecule and other biological material. Such techniques enable the processing of a large number of specimens very quickly.

[0005] Some applications for imaging require two apparently contradictory attributes: high resolution and high content. The resolution requirement is driven by the need to have detail in the image that exceeds by at least 2× the information content of the object being imaged (the so-called Nyquist limit). The content requirement is driven by the need to have information over a large area. One method that addresses both needs is to acquire a plurality of individual images with high spatial resolution (panels) and to collect these panels over adjacent areas so as to encompass the large desired area. The multiple panels can then be assembled into a single large image based on the relative location of the optics and the sample when each panel was collected. When assembling the plurality of panels into a single montage, a number of steps may be taken to correct for intensity non-uniformities within each panel (known herein as flat-field calibration and panel flattening) as well as non-uniformities in the panel-to-panel intensities.

DESCRIPTION OF DRAWINGS

[0006] These and other features and advantages of the invention will become more apparent upon reading the following detailed description and upon reference to the accompanying drawings.

[0007] FIG. 1 is a flat-field calibration map showing the overall curvature and offset maps according to one embodiment of the present invention.

[0008] FIG. 2 is a close-up view of a 20×20 region of the inverse gain map and offset map of FIG. 1.

[0009] FIG. 3 illustrates an image before and after applying curvature flattening according to one embodiment of the present invention.

DETAILED DESCRIPTION

[0010] To create a large image, a plurality of smaller images are collected by a detector and assembled into a single large image. Each of the plurality of smaller images collected by the detector may be affected by a combination of the non-uniform optics and detector response. In the case of the optics, illumination vignetting and collection vignetting introduce a substantial intensity curvature to the images collected by the detector. Non-uniform detector response comes in the form of gain and offset differences among all the detector elements.

[0011] To correct for these errors, a series of images are acquired that range from dark current (no exposure) to near full-well. Linear regression of each pixel in the detector yields a slope (gain) and intercept (offset). That is, for each pixel the following equation is solved for m and b:

Measured_image=Desired_image*m+b

[0012] Flat-field calibration is then accomplished with the following calculation (again for each pixel):

Desired_image=(Measured_image−offset_map)/gain_map

[0013] where m has been replaced with "gain_map" and b with "offset_map".

[0014] The gain and offset maps correct for the illumination optics, collection optics, and detector non-uniformity at the same time.
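The regression and correction of paragraphs [0011] through [0014] can be sketched as follows. This is an illustrative sketch, not the patent's implementation; it assumes NumPy and uses a known relative exposure level for each calibration frame as a stand-in for the desired intensity, with all function and variable names invented for illustration.

```python
import numpy as np

def fit_flat_field_maps(frames, levels):
    """Per-pixel linear regression of Measured_image = Desired_image*m + b.

    frames: stack of measured calibration images, shape (n, h, w),
            ranging from dark current to near full-well.
    levels: relative desired intensity for each frame, shape (n,).
    Returns (gain_map, offset_map), each of shape (h, w).
    """
    x = np.asarray(levels, dtype=float)
    y = np.asarray(frames, dtype=float)
    x_mean = x.mean()
    y_mean = y.mean(axis=0)
    # Least-squares slope (gain) and intercept (offset), vectorized
    # over every pixel of the detector at once.
    var = ((x - x_mean) ** 2).sum()
    cov = ((x - x_mean)[:, None, None] * (y - y_mean)).sum(axis=0)
    gain_map = cov / var
    offset_map = y_mean - gain_map * x_mean
    return gain_map, offset_map

def flat_field(measured, gain_map, offset_map):
    """Desired_image = (Measured_image - offset_map) / gain_map."""
    return (measured - offset_map) / gain_map
```

Because the fit is vectorized across the whole detector, one call recovers the gain and offset of every pixel simultaneously.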

[0015] Flat-field calibration maps that correct the image field curvature and offset problem do so at the expense of adding noise to the image. Both maps contain measurement noise that is then passed on to the calibrated image. The gain map contains noise that is mostly photon counting noise (“shot noise”), whereas the offset map is dominated by the electronic read-noise of the CCD camera.

[0016] To correct for the offset map noise, the average dark current image (no exposure) may be used instead of the linear regression result. That is, the offset_map used to flat-field images is the average of many dark current images rather than the intercept calculated by the linear regression. Experience has shown that the intercept is inherently noisy (the intercept is measured at the low signal-to-noise part of the camera range). Use of the calculated offset map reduces the sensitivity of the instrument by increasing the baseline noise. The offset map shown in FIGS. 1 and 2 is the average dark current. The calculated intercept would have about double the noise of the average dark current.

[0017] Averaging multiple frames for each measurement improves the signal-to-noise of the data and reduces the noise in the resulting gain and offset maps (in the event that the calculated offset map is used for flat-fielding).
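A minimal sketch of this offset-map substitution, assuming NumPy; the function names and frame shapes are invented for illustration:

```python
import numpy as np

def average_dark_current(dark_frames):
    """Offset map taken as the mean of many zero-exposure frames,
    in place of the noisier regression intercept."""
    return np.asarray(dark_frames, dtype=float).mean(axis=0)

def flat_field_dark(measured, gain_map, avg_dark):
    """Desired_image = (Measured_image - average_dark_current) / gain_map."""
    return (measured - avg_dark) / gain_map
```

Averaging n frames reduces the per-pixel noise of the offset map by roughly a factor of sqrt(n), which is the mechanism behind the frame-averaging refinement above.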

[0018] Another technique is to smooth the gain map with a low-pass filter.

[0019] Perfectly uniform flat-field calibration slides are nearly impossible to fabricate. Non-uniform fluorescence is typical even with very carefully prepared slides. However, moving the calibration slide during camera exposure averages out the non-uniform fluorescent response of the slide, so flat-field calibration maps can be generated from significantly lower-quality calibration slides.

[0020]FIGS. 1 and 2 illustrate flat-field calibration maps made from uniformly fluorescent calibration slides. The gain map 105, 205 contains approximately 0.3% noise whereas the offset map 110, 210 contributes 1.24 counts (gain correction is multiplicative, offset is additive).

[0021] Although flat-field calibration is an effective technique, the technique introduces noise. Cleaning the flat-field calibration maps could yield substantial improvements in image quality. In particular, further reduction of offset map noise would improve low-end sensitivity. The CCD camera used to collect the maps above has about 1.77 counts of read-noise. Adding the offset map noise (in quadrature) yields about 2.2 counts of baseline noise, a 24% increase.

[0022] Another problem is that the intensity curvature of the panels creates a visible artifact. FIG. 3 illustrates an image 300 without any curvature correction. A combination of illumination vignetting and collection vignetting leads to more brightness or higher collection efficiency, respectively, in the center of the field-of-view. Even when flat-fielding techniques have been applied to the panels, a variety of factors contribute to a residual curvature. For instance, lamp fluctuation and camera bias instability change the general intensity level of the acquired image and affect the standard flat-fielding calculation, which is:

flat_image=(acquired_image−offset_map)/gain_map.

[0023] Small errors in the offset map cause the gain map (which is usually curved) to introduce a field curvature. The more curvature that exists in the acquired image, the greater the potential for residual curvature.

[0024] Because the intensity curvature is typically consistent from one panel to the next, averaging the intensity profile of each panel gives an average curvature map. Dividing each panel by the curvature map is then a way to flatten the intensity curvature that is consistent among all panels. Normalizing the curvature map by the average intensity, or similar value, of the curvature map allows the calculation to be performed without altering the net intensity scale of the image.

[0025] One example of how to average the intensity profile of each panel is to perform the following procedure for each pixel in each panel. First, if the pixel in the current panel is not signal (i.e., it is background), apply the following equations:

accumulator_map=accumulator_map+pixel_intensity

accumulation_counter_map=accumulation_counter_map+1

[0026] Second, for all pixels within the accumulator_map, calculate the curvature map using the following technique:

[0027] If accumulation_counter_map is greater than 0

Curvature_map=accumulator_map/accumulation_counter_map

[0028] Otherwise

Curvature_map=average of neighboring curvature values

[0029] This creates a curvature flattening map that is defined as:

Curvature_flattener=1/curvature_map

[0030] The procedure may be refined in several manners. First, the curvature map may be smoothed to reduce the sensitivity to noise and spurious signals in the average curvature image. Second, only the pixels from each panel that are not significantly above the background intensity may be averaged. A histogram of each panel is used to distinguish background areas (desired) from image signals (undesired). A map of the number of pixels added to each point in the curvature map is then required to calculate the average, since not all panels contribute information to each pixel in the curvature map. Pixels that contain no information can be synthesized from the average of neighboring pixels. Third, the curvature map may be curve-fitted using a weighting scheme that emphasizes relatively low intensity values; curve-fitting would be useful for reducing noise. The goal of curve-fitting is to measure only the background curvature and reduce the influence of the image signal. Other refinements include averaging many small panels, which reduces sensitivity to image signal corruption, and over-scanning the desired image area to provide more panels for averaging, including panels that contain only the background intensity curvature.
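The accumulation procedure of paragraphs [0025] through [0029] might be sketched as below. This is one illustrative reading, not the patent's code, with one simplification flagged in a comment: pixels lacking any background contribution are filled from the global average rather than from neighboring curvature values.

```python
import numpy as np

def curvature_map(panels, background_threshold):
    """Average the per-pixel background intensity over all panels,
    then normalize by the map's mean so that flattening preserves
    the net intensity scale of the image."""
    panels = np.asarray(panels, dtype=float)
    is_background = panels <= background_threshold   # the "not signal" test
    accumulator = np.where(is_background, panels, 0.0).sum(axis=0)
    counter = is_background.sum(axis=0)
    cmap = accumulator / np.maximum(counter, 1)
    # Simplification: pixels with no background contribution are filled
    # with the global mean instead of neighboring curvature values.
    cmap = np.where(counter > 0, cmap, cmap[counter > 0].mean())
    return cmap / cmap.mean()

def flatten_panel(panel, cmap):
    """Divide out the shared intensity curvature; the map's reciprocal
    is the Curvature_flattener of paragraph [0029]."""
    return panel / cmap
```

Because the map is normalized to a mean of 1, dividing each panel by it removes the shared curvature without rescaling the overall image intensity.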

[0031] Another problem with combining a plurality of small images to form one large image is that small discontinuities between adjacent panels become visible. Intensity differences of 1-2 counts are readily detected by the human eye, even in the presence of 1-2 counts of random noise and when important information is much more intense. The remaining discontinuities create a visible stitching artifact. Examples of the discontinuities may be seen in the image 300 of FIG. 3.

[0032] To correct this problem, a panel edge connection technique is performed. In this technique, the border of each panel is compared with all neighbors to the left, right, top, and bottom. This comparison generates border intensity scaling values for the entire boundary of each panel. The boundary may then be scaled so that the result is half way between the boundary of the current panel and the adjacent panel. The intensities are then connected at the half-way point between the adjacent border intensities. The boundary scaling may be applied to each pixel in the panel based on the distance from the four boundaries. A weighted combination of the scaling factors is used such that a continuous intensity ramp is applied from one boundary to the next. (In the middle of the image, the scaling factor should be the average of the left, right, top, and bottom scaling factors.) Some examples of the weighting methods include inverse square weighting and inverse weighting. These techniques may be implemented using the following formulas:

[0033] Inverse square weighting:

Left_weight=1/(i+1)^2

Right_weight=1/(nx−i+1)^2

Bottom_weight=1/(j+1)^2

Top_weight=1/(ny−j+1)^2

[0034] Inverse weighting:

Left_weight=1/(i+1)

Right_weight=1/(nx−i+1)

Bottom_weight=1/(j+1)

Top_weight=1/(ny−j+1)

Total_weight=Left_weight+Right_weight+Top_weight+Bottom_weight

[0035] Scaling Factors:

Left_scale(j)=(1/2)*[Left_border(j)+Right_border_of_left_panel(j)]/Left_border(j)

Right_scale(j)=(1/2)*[Right_border(j)+Left_border_of_right_panel(j)]/Right_border(j)

Top_scale(i)=(1/2)*[Top_border(i)+Bottom_border_of_upper_panel(i)]/Top_border(i)

Bottom_scale(i)=(1/2)*[Bottom_border(i)+Top_border_of_lower_panel(i)]/Bottom_border(i)

Pixel(i,j) intensity scaling factor=[Left_scale(j)*Left_weight+Right_scale(j)*Right_weight+Bottom_scale(i)*Bottom_weight+Top_scale(i)*Top_weight]/Total_weight

[0036] Definitions:

[0037] nx: number of pixel columns

[0038] ny: number of pixel rows

[0039] i: column number (0-based)

[0040] j: row number (0-based)
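Under the convention that row 0 is the bottom edge (matching Bottom_weight = 1/(j+1) above), the inverse-weighting connection step could be sketched as follows. This is an illustrative sketch, not the patent's implementation; `connect_panel` and its arguments are invented names, and each neighbor is represented only by its facing border.

```python
import numpy as np

def connect_panel(panel, left_nb, right_nb, bottom_nb, top_nb):
    """Scale a panel toward its neighbors' border intensities using the
    inverse (1/d) weighting of paragraph [0034]. Each *_nb argument is
    the facing border of the adjacent panel (1-D array), or None where
    no neighbor exists."""
    ny, nx = panel.shape
    i = np.arange(nx)  # column index, 0-based
    j = np.arange(ny)  # row index, 0-based

    def border_scale(border, neighbor):
        # Midpoint rule: scale this border halfway toward the neighbor's.
        if neighbor is None:
            return np.ones_like(border)
        return 0.5 * (border + neighbor) / border

    left_scale = border_scale(panel[:, 0], left_nb)      # per row j
    right_scale = border_scale(panel[:, -1], right_nb)   # per row j
    bottom_scale = border_scale(panel[0, :], bottom_nb)  # per column i
    top_scale = border_scale(panel[-1, :], top_nb)       # per column i

    # Inverse-distance weights from each pixel to the four edges.
    lw, rw = 1.0 / (i + 1), 1.0 / (nx - i + 1)
    bw, tw = 1.0 / (j + 1), 1.0 / (ny - j + 1)

    total = lw[None, :] + rw[None, :] + bw[:, None] + tw[:, None]
    factor = (left_scale[:, None] * lw[None, :]
              + right_scale[:, None] * rw[None, :]
              + bottom_scale[None, :] * bw[:, None]
              + top_scale[None, :] * tw[:, None]) / total
    return panel * factor
```

Each pixel's scaling factor is a weighted blend of the four border scales, producing the continuous intensity ramp described in paragraph [0032].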

[0041] Both connection and curvature flattening are important for panels with significant background intensity. An image having curvature flattening is shown in FIG. 3. Further refinements include median filtering the boundary scaling values to reduce sensitivity to outliers. Misalignment of the panels causes miscalculation of the scaling factors. The miscalculation is significant when bright (or dark) spots do not overlap along the borders of adjacent panels. Additionally, smoothing of the median filtered boundary scaling values may be used to remove spikes caused by alignment problems. Finally, the boundary scaling values may be curve-fit to find the general trend and avoid noise and misalignment.

[0042] Numerous variations and modifications of the invention will become readily apparent to those skilled in the art. Accordingly, the invention may be embodied in other specific forms without departing from its spirit or essential characteristics.

Referenced by
Citing Patent | Filing date | Publication date | Applicant | Title
US7733357 * | Jan 13, 2006 | Jun 8, 2010 | Hewlett-Packard Development Company, L.P. | Display system
US7864369 | Dec 14, 2007 | Jan 4, 2011 | Dmetrix, Inc. | Large-area imaging by concatenation with array microscope
Classifications
U.S. Classification: 382/284, 382/274, 382/268, 382/294, 382/298, 382/275
International Classification: G06T5/00, G06T1/00, G06T3/00, G06T5/50, G01N21/64, G06K9/00, G06T5/40, G06K9/42, G06T3/40, G06K9/40, G06K9/36
Cooperative Classification: G06T5/002, G06T2200/32, H04N5/3415, G06T2207/10064, H04N5/3572, G06T2207/30072, G06T2207/10056
European Classification: G06T5/00D, G06T7/00B2, H04N5/357A, H04N5/341A, G06T5/40
Legal Events
Date | Code | Event | Description
Sep 11, 2008 | AS | Assignment | Owner name: APPLIED PRECISION, INC., WASHINGTON. Free format text: ASSIGNMENT OF ASSIGNORS INTEREST; ASSIGNOR: APPLIED PRECISION, LLC; REEL/FRAME: 021517/0889. Effective date: 20080429
Feb 28, 2002 | AS | Assignment | Owner name: APPLIED PRECISION, LLC, WASHINGTON. Free format text: ASSIGNMENT OF ASSIGNORS INTEREST; ASSIGNOR: APPLIED PRECISION HOLDINGS, LLC; REEL/FRAME: 012676/0600. Effective date: 20020117
Feb 26, 2002 | AS | Assignment | Owner name: APPLIED PRECISION HOLDINGS, LLC, WASHINGTON. Free format text: ASSIGNMENT OF ASSIGNORS INTEREST; ASSIGNOR: APPLIED PRECISION, INC.; REEL/FRAME: 012653/0607. Effective date: 20020117
Jun 15, 2001 | AS | Assignment | Owner name: APPLIED PRECISION, INC., WASHINGTON. Free format text: ASSIGNMENT OF ASSIGNORS INTEREST; ASSIGNOR: BROWN, CARL S.; REEL/FRAME: 011895/0229. Effective date: 20010524