WO2004032058A2 - Methods and systems for correcting image misalignment - Google Patents

Methods and systems for correcting image misalignment

Info

Publication number
WO2004032058A2
Authority
WO
WIPO (PCT)
Prior art keywords
images
image
misalignment
tissue sample
correction
Application number
PCT/US2003/030711
Other languages
French (fr)
Other versions
WO2004032058A3 (en)
Inventor
Thomas Clune
Philippe Schmid
Jiang Chunsheng
Original Assignee
Medispectra, Inc.
Application filed by Medispectra, Inc. filed Critical Medispectra, Inc.
Priority to AU2003277051A priority Critical patent/AU2003277051A1/en
Priority to CA002500539A priority patent/CA2500539A1/en
Priority to EP03799315A priority patent/EP1554694A2/en
Publication of WO2004032058A2 publication Critical patent/WO2004032058A2/en
Publication of WO2004032058A3 publication Critical patent/WO2004032058A3/en

Classifications

    • A61B 1/000094: Operational features of endoscopes characterised by electronic signal processing of image signals during a use of endoscope, extracting biological structures
    • A61B 1/0008: Insertion part of the endoscope body characterised by distal tip features
    • A61B 1/00142: Endoscopes with means for preventing contamination, e.g. by using a sanitary sheath
    • A61B 1/043: Endoscopes combined with photographic or television appliances, for fluorescence imaging
    • A61B 5/0071: Measuring for diagnostic purposes using light, by measuring fluorescence emission
    • A61B 5/0084: Measuring for diagnostic purposes using light, adapted for introduction into the body, e.g. by catheters
    • A61B 5/7257: Details of waveform analysis characterised by using transforms, using Fourier transforms
    • A61M 11/00: Sprayers or atomisers specially adapted for therapeutic purposes
    • G06T 7/262: Analysis of motion using transform domain methods, e.g. Fourier domain methods
    • G06T 7/37: Determination of transform parameters for the alignment of images (image registration) using transform domain methods
    • G06T 2207/10068: Endoscopic image
    • G06T 2207/20056: Discrete and fast Fourier transform [DFT, FFT]
    • G06T 2207/30004: Biomedical image processing

Definitions

  • This invention relates generally to image processing. More particularly, the invention relates to correcting image misalignment, where the misalignment is due at least in part to sample movement.
  • Accurate analysis of the sequence of images may require that the images be adjusted prior to analysis to compensate for misalignment caused at least in part by patient movement.
  • One known image stabilization method accounts for certain gross movements of a video camera, in particular, certain vibrations caused by the operator of a handheld camcorder.
  • However, that method does not compensate for misalignment caused by movement of a sample. For example, such a method could not be used to adequately correct an image misalignment caused by the small-scale movement of a patient during a diagnostic procedure.
  • Another image stabilization method is based on detecting the physical movement of the camera itself. See U.S. Patent No. 5,253,071 to MacKay, which describes the use of a gimbaled ring assembly that moves as a camera is physically jittered. These types of methods cannot be used to correct misalignments caused by the movement of a sample.
  • the invention provides methods of correcting misalignments between sequential images of a sample.
  • the invention is particularly useful for correcting image misalignment due to movement of the sample between images and/or during image acquisition.
  • the invention also allows for real-time, dynamic image alignment for improved optical diagnosis and assessment.
  • the invention comprises determining an x-displacement and a y-displacement corresponding to a misalignment between two images of a tissue sample, where the misalignment is caused by a shift in the position of the sample with respect to the image frame field.
  • an embodiment of the invention makes it possible to correct for small image misalignments caused by unavoidable patient motion, such as motion due to breathing. It has been discovered that validating misalignment corrections improves the accuracy of diagnostic procedures that use data from sequential images, particularly where the misalignments are small and the need for accuracy is great.
  • methods of the invention comprise validating misalignment corrections by splitting individual images into smaller subimages, determining displacement between these subimages, and comparing the subimage displacements to the overall image displacement.
  • validation may comprise adjusting two images according to a misalignment correction, then determining displacement between corresponding subimages and comparing these displacements with a threshold maximum value.
  • Both misalignment correction determination and validation may be performed such that an accurate adjustment is made for a misalignment before an entire sequence of images is obtained. This allows, for example, "on the fly" adjustment of a camera while a diagnostic exam is in progress. Thus, corrections may be determined, validated, and accurately adjusted for as misalignments occur, reducing the need for retakes and providing immediate feedback as to whether an examination is erroneous.
  • Automatic adjustment may be accomplished by adjusting aspects of the optical interrogation of the sample using a misalignment correction value. Adjustments may be performed, for example, by adjusting aspects of transmission and/or reception of electromagnetic energy associated with the sample.
  • This may include, for example, transmitting a correction signal to a galvanometer system or a voice coil to "null out" a misalignment by adjusting the position of a mirror or other component of the camera obtaining the images according to the correction signal.
  • adjustments may be performed by electronically adjusting an aspect of an image, for example, the frame and/or bounds of an image, according to a misalignment correction value, or by performing any other appropriate adjustment procedure.
  • acetic acid is applied to cervical tissue in order to whiten the tissue in a way that allows enhanced optical discrimination between normal tissue and certain kinds of diseased tissue.
  • the acetowhitening technique, as well as other diagnostic techniques, and the analysis of images and spectral data obtained during acetowhitening tests are described in co-owned U.S. patent application Serial No. 10/099,881, filed March 15, 2002, and co-owned U.S. patent application entitled, "Method and Apparatus for Identifying Spectral Artifacts," identified by Attorney Docket Number MDS-033, filed September 13, 2002, both of which are hereby incorporated by reference.
  • a typical misalignment between two images is less than about 0.55-mm within a two-dimensional, 480 x 500 pixel image frame field covering an area of approximately 25-mm x 25-mm.
  • These dimensions provide an example of the relative scale of misalignment versus image size. In some instances it is only necessary to compensate for misalignments of less than about one millimeter within the exemplary image frame field defined above. In other cases, it is necessary to compensate for misalignments of less than about 0.3-mm within the exemplary image frame field above. Also, the dimensions represented by the image frame field, the number of pixels of the image frame field, and/or the pixel resolution may differ from the values shown above.
  • a misalignment correction determination may be inaccurate, for example, due to any one or a combination of the following: non-translational sample motion such as rotational motion, local deformation, and/or warping; changing features of a sample such as whitening of tissue; and image recording problems such as focus adjustment, missing images, blurred or distorted images, low signal-to-noise ratio, and computational artifacts.
  • Validation procedures of the invention identify such inaccuracies.
  • the methods of validation may be conducted "on-the-fly" in concert with the methods of determining misalignment corrections in order to improve accuracy and to reduce the time required to conduct a given test.
  • an embodiment provides for automatically adjusting an optical signal detection device, such as a camera.
  • a camera may be adjusted "on-the-fly" to compensate for misalignments as images are obtained. This improves accuracy and reduces the time required to conduct a given test.
  • the optical signal detection device comprises a camera, a spectrometer, or any other device which detects optical signals.
  • the optical signal may be emitted by the sample, diffusely reflected by the sample, transmitted through the sample, or otherwise conveyed from the sample.
  • the optical signal comprises light of wavelength falling in a range between about 190-nm and about 1100-nm.
  • One embodiment comprises obtaining one or more of the following from one or more regions of the tissue sample: fluorescence spectral data, reflectance spectral data, and video images.
  • Methods comprise analysis of a sample of human tissue, such as cervical tissue. Methods of the invention also include analysis of other types of tissue, such as non-cervical tissue and/or nonhuman tissue.
  • methods comprise analysis of one or more of the following types of tissue: colorectal, gastroesophageal, urinary bladder, lung, skin, and any other tissue type comprising epithelial cells.
  • a common source of misalignment is movement of a sample.
  • Methods comprise the steps of: obtaining a plurality of sequential images of a sample using an optical signal detection device; determining a correction for a misalignment between two or more of the sequential images, where the misalignment is due at least in part to a movement of the sample; and compensating for the misalignment by automatically adjusting the optical signal detection device.
  • the two or more sequential images may be consecutive, or they may be nonconsecutive.
  • a misalignment correction is identified between a first image and a second image, where the second image is subsequent to the first image.
  • the first image and second image may be either consecutive or nonconsecutive.
  • Identifying a misalignment correction may involve data filtering. For example, some methods comprise filtering a subset of data from a first image of a plurality of sequential images. A variety of data filtering techniques may be used. In one embodiment, Laplacian of Gaussian filtering is performed. Identifying a misalignment may comprise preprocessing a subset of data from the first image prior to filtering. For example, color intensities may be converted to gray scale before filtering. In some embodiments, filtering comprises frequency domain filtering and/or discrete convolution in the space domain.
  • computing a cross correlation comprises computing a product represented by Fi(u,v) F*j(u,v), where Fi(u,v) is a Fourier transform of data derived from a subset of data from a first image, i, of the plurality of sequential images, F*j(u,v) is a complex conjugate of a Fourier transform of data derived from a subset of data from a second image, j, of the plurality of sequential images, and u and v are frequency domain variables.
  • the computing of the cross correlation additionally comprises computing an inverse Fourier transform of the product represented by Fi(u,v) F*j(u,v).
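  • As a concrete illustration of the two preceding bullets, the following minimal sketch computes the cross-correlation surface with NumPy. The function name and the use of NumPy are illustrative assumptions, not code from the patent.

```python
import numpy as np

def cross_correlation_surface(g_i: np.ndarray, g_j: np.ndarray) -> np.ndarray:
    """Inverse Fourier transform of Fi(u,v) * conj(Fj(u,v)) for two
    equally sized, filtered image portions (e.g., 256 x 256).
    Peaks in the returned surface correspond to candidate (x, y)
    shifts between the two images."""
    F_i = np.fft.fft2(g_i)               # Fourier transform of image i data
    F_j = np.fft.fft2(g_j)               # Fourier transform of image j data
    product = F_i * np.conj(F_j)         # Fi(u,v) * conj(Fj(u,v))
    return np.fft.ifft2(product).real    # back to the space domain
```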
  • a method of the invention comprises validating a correction for a misalignment determined between a first image and a second image.
  • Validating a misalignment correction comprises defining one or more validation cells within a bounded image plane; computing for each validation cell a measure of displacement between two (or more) images bound by the image plane using data from the two images corresponding to each validation cell; and validating a correction for misalignment between the two images by comparing the validation cell displacements with the correction.
  • each validation cell comprises a subset of the bounded image plane.
  • the two (or more) images may be consecutive images.
  • the validating step includes eliminating from consideration one or more measures of displacement for corresponding validation cells.
  • identifying validation cells that are likely to contribute to an erroneous validation result comprises calculating a sum squared gradient for at least one validation cell.
  • Methods of the invention comprise obtaining a plurality of sequential images of the sample during an application of a chemical agent to the sample.
  • the chemical agent comprises at least one of the following: acetic acid, formic acid, propionic acid, butyric acid, Lugol's iodine, Schiller's iodine, methylene blue, toluidine blue, indigo carmine, indocyanine green, and fluorescein.
  • Some embodiments comprise obtaining sequential images of the sample during an acetowhitening test.
  • the movement of the sample is relative to the optical signal detection device and comprises at least one of the following: translational motion, rotational motion, warping, and local deformation.
  • One or more of the sequential images comprise measurements of an optical signal from the sample.
  • the optical signal comprises, for example, visible light, fluoresced light, and/or another form of electromagnetic radiation.
  • Methods of the invention comprise determining a correction for misalignment between each of a plurality of pairs of images. Such methods comprise the steps of: obtaining a set of sequential images of a sample using an optical signal detection device; and determining a correction for a misalignment between each of a plurality of pairs of the sequential images, where at least one of the misalignments is due at least in part to a movement of the sample. The correction may then be used to compensate for each of the misalignments by automatically adjusting the optical signal detection device.
  • the obtaining step and the determining step may be performed alternately or concurrently, for example.
  • One embodiment comprises determining a correction for a misalignment between a pair of the sequential images less than about 2 seconds after obtaining the latter of the pair of the sequential images. In another embodiment, this takes less than about one second.
  • the invention is directed to a method of determining a correction for a misalignment that includes validating the correction.
  • Methods comprise the steps of: obtaining a plurality of sequential images of a sample using an optical signal detection device; determining a correction for a misalignment between at least two of the sequential images; and validating the correction for misalignment between two of the images.
  • An embodiment further comprises compensating for the misalignment by automatically adjusting the optical signal detection device according to the correction determined.
  • determining a misalignment correction between two images and validating the correction is performed in less than about one second.
  • Methods of the invention comprise compensating for a misalignment by determining a correction for a misalignment between a pair of images, validating the misalignment, and automatically realigning one of the pair of images.
  • the realignment may be performed during the acquisition of the images, or afterwards.
  • Figure 1A represents a 480 x 500 pixel image from a sequence of images of in vivo human cervix tissue and shows a 256 x 256 pixel portion of the image from which data is used in determining a correction for a misalignment between two images from a sequence of images of the tissue according to an illustrative embodiment of the invention.
  • Figure 1B depicts the image represented in Figure 1A and shows a 128 x 128 pixel portion of the image, made up of 16 individual 32 x 32 pixel validation cells, from which data is used in performing a validation of the misalignment correction determination according to an illustrative embodiment of the invention.
  • Figure 2A is a schematic flow diagram depicting steps in a method of determining a correction for a misalignment between two images due at least in part to the movement of a sample according to an illustrative embodiment of the invention.
  • Figure 2B is a schematic flow diagram depicting steps in a version of the method shown in Figure 2A of determining a correction for a misalignment between two images due at least in part to the movement of a sample according to an illustrative embodiment of the invention.
  • Figure 2C is a schematic flow diagram depicting steps in a version of the method shown in Figure 2A of determining a correction for a misalignment between two images due at least in part to the movement of a sample according to an illustrative embodiment of the invention.
  • Figure 3 depicts a subset of adjusted images from a sequence of images of a tissue with an overlay of gridlines showing the validation cells used in validating the determinations of misalignment correction between the images according to an illustrative embodiment of the invention.
  • Figure 4A depicts a sample image after application of a 9-pixel size (9 x 9) Laplacian of Gaussian filter (LoG 9 filter) on an exemplary image from a sequence of images of tissue according to an illustrative embodiment of the invention.
  • Figure 4B depicts the application of both a feathering technique and a Laplacian of Gaussian filter on the exemplary unfiltered image used in Figure 4A to account for border processing effects according to an illustrative embodiment of the invention.
  • Figure 5A depicts a sample image after application of a LoG 9 filter on an exemplary image from a sequence of images of tissue according to an illustrative embodiment of the invention.
  • Figure 5B depicts the application of both a Hamming window technique and a LoG 9 filter on the exemplary unfiltered image used in Figure 5A to account for border processing effects according to an illustrative embodiment of the invention.
  • Figure 6 depicts the determination of a correction for misalignment between two images using methods including the application of LoG filters of various sizes, as well as the application of a Hamming window technique and a feathering technique according to illustrative embodiments of the invention.
  • the invention provides methods of determining a correction for a misalignment between images in a sequence due to movement of a sample. These methods are useful, for example, in the preparation of a sequence of images for analysis, as in medical diagnostics.
  • methods of the invention comprise applying an agent to a tissue in order to change its optical properties in a way that is indicative of the physiological state of the tissue.
  • the rate and manner in which the tissue changes are important in the characterization of the tissue.
  • Certain embodiments of the invention comprise automated and semi-automated analysis of diagnostic procedures that have traditionally required analysis by trained medical personnel. Diagnostic procedures which use automatic image-based tissue analysis provide results having increased sensitivity and/or specificity. See, e.g., co-owned U.S. patent application Serial No. 10/099,881, filed March 15, 2002, and co-owned U.S. patent application entitled, "Method and Apparatus for Identifying Spectral Artifacts," identified by Attorney Docket Number MDS-033, filed September 13, 2002, both of which are incorporated herein by reference.
  • in vivo tissue may spatially shift within the image frame field from one image to the next due to movement of the patient. Accurate diagnosis requires that this movement be taken into account in the automated analysis of the tissue sample.
  • spatial shift correction made at the time images are obtained is more accurate than correction made after all the images are obtained, since "on-the-fly" corrections compensate for smaller shifts occurring over shorter periods of time, rather than larger, more cumulative shifts occurring over longer periods of time.
  • a sample moves while a sequence of images is obtained, the procedure may have to be repeated. For example, this may be because the shift between consecutive images is too large to be accurately compensated for, or because a region of interest moves outside of a usable portion of the frame captured by the optical signal detection device. It is often preferable to compensate for misalignments resulting from sample movement during the collection of images rather than wait until the entire sequence of images has been obtained before compensating for misalignments. Stepwise adjustment of an optical signal detection device throughout image capture reduces the cumulative effect of sample movement. If adjustment is made only after an entire sequence is obtained, it may not be possible to accurately compensate for some types of sample movement. On-the-fly, stepwise compensation for misalignment reduces the need for retakes.
  • On-the-fly compensation may also obviate the need to obtain an entire sequence of images before making the decision to abort a failed procedure, particularly when coupled with on-the-fly, stepwise validation of the misalignment correction determination. For example, if the validation procedure detects that a misalignment correction determination is either too large for adequate compensation to be made or is invalid, the procedure may be aborted before obtaining the entire sequence of images. It can be immediately determined whether or not the obtained data is useable. Retakes may be performed during the same patient visit; no follow-up visit to repeat an erroneous test is required. A diagnostic test invalidated by excessive movement of the patient may be aborted before obtaining the entire sequence of images.
  • a determination of misalignment correction is expressed as a translational displacement in two dimensions, x and y.
  • x and y represent Cartesian coordinates indicating displacement on the image frame field plane.
  • corrections for misalignment are expressed in terms of non-Cartesian coordinate systems, such as bipolar, spherical, and cylindrical coordinate systems, among others. Alternatives to Cartesian-coordinate systems may be useful, for example, where the image frame field is non-planar.
  • Some types of sample motion may result in an invalid misalignment correction determination, since it may be impossible to express certain instances of these types of sample motion in terms of a translational displacement, for example, in the two Cartesian coordinates x and y. It is noted, however, that in some embodiments, rotational motion, warping, local deformation, and/or other kinds of non-translational motion are acceptably accounted for by a correction expressed in terms of a translational displacement.
  • The changing features of the tissue, as in acetowhitening, may also affect the determination of a misalignment correction.
  • Image recording problems such as focus adjustment, missing images, blurred or distorted images, low signal-to-noise ratio (i.e., glare), and computational artifacts may also affect the determination of a misalignment correction.
  • a validation step includes determining whether an individual correction for misalignment is erroneous, as well as determining whether to abort or continue the test in progress.
  • validation comprises splitting at least a portion of each of a pair of images into smaller, corresponding units (subimages), determining for each of these smaller units a measure of the displacement that occurs within the unit between the two images, and comparing the unit displacements to the overall displacement between the two images.
  • the method of validation takes into account the fact that features of a tissue sample may change during the capture of a sequence of images. For example, the optical intensity of certain regions of tissue change during an acetowhitening test. Therefore, in preferred embodiments, validation of a misalignment correction determination is performed using a pair of consecutive images. In this way, the difference between the corresponding validation cells of the two consecutive images is less affected by gradual tissue whitening changes, as compared with images obtained further apart in time. In some embodiments, validation is performed using pairs of nonconsecutive images taken within a relatively short period of time, compared with the time in which the overall sequence of images is obtained. In other embodiments, validation comprises the use of any two images in the sequence of images.
  • a determination of misalignment correction between two images may be inadequate if significant portions of the images are featureless or have low signal-to-noise ratio (i.e. are affected by glare).
  • validation using cells containing significant portions which are featureless or which have low signal-to-noise ratio may result in the erroneous invalidation of valid misalignment correction determinations in cases where the featureless portion of the overall image is small enough so that it does not adversely affect the misalignment correction determination.
  • analysis of featureless validation cells may produce meaningless correlation coefficients.
  • One embodiment comprises identifying one or more featureless cells and eliminating them from consideration in the validation of a misalignment correction determination, thereby preventing rejection of a good misalignment correction.
  • a determination of misalignment correction may be erroneous due to a computational artifact of data filtering at the image borders. For example, in one exemplary embodiment, an image with large intensity differences between the upper and lower borders and/or the left and right borders of the image frame field undergoes Laplacian of Gaussian frequency domain filtering.
  • Since Laplacian of Gaussian frequency domain filtering corresponds to cyclic convolution in the space-time domain, these intensity differences (discontinuities) yield a large gradient value at the image border and cause the overall misalignment correction determination to be erroneous, since changes between the two images due to spatial shift are dwarfed by the edge effects.
  • Certain embodiments employ pre-multiplication of image data by a Hamming window to remove or reduce this "wraparound error."
  • Preferred embodiments employ image-blending techniques such as feathering, to smooth any border discontinuity, while requiring only a minimal amount of additional processing time.
  • Figure 1A represents a 480 x 500 pixel image 102 from a sequence of images of in vivo human cervix tissue and shows a 256 x 256 pixel portion 104 of the image from which data is used in identifying a misalignment correction between two images from a sequence of images of the tissue, according to an illustrative embodiment of the invention.
  • Preferred embodiments comprise illuminating the tissue using either or both a white light source and a UV light source.
  • the image 102 of Figure 1A has a pixel resolution of about 0.054-mm.
  • the embodiments described herein show images with pixel resolutions of about 0.0547-mm to about 0.0537-mm. Other embodiments have pixel resolutions outside this range.
  • the images of a sequence have an average pixel resolution of between about 0.044-mm and about 0.064-mm.
  • the central 256 x 256 pixels 104 of the image 102 are chosen for use in motion tracking. Other embodiments use regions of different sizes for motion tracking, and these regions are not necessarily located in the center of the image frame field.
  • the method of motion tracking determines an x-displacement and a y-displacement corresponding to the translational shift (misalignment) between the 256 x 256 central portions 104 of two images in the sequence of images.
  • validation comprises splitting an image into smaller units (called cells), determining displacements of these cells, and comparing the cell displacements to the overall displacement.
  • Figure 1B depicts the image represented in Figure 1A and shows a 128 x 128 pixel portion 154 of the image, made up of 16 individual 32 x 32 pixel validation cells 156, from which data is used in performing a validation of the misalignment correction, according to an illustrative embodiment of the invention.
  • Figure 2A, Figure 2B, and Figure 2C depict steps in illustrative embodiment methods of determining a misalignment correction between two images of a sequence, and methods of validating that determination.
  • Steps 202 and 204 of Figure 2A depict steps of developing data from an initial image with which data from a subsequent image are compared in order to determine a misalignment correction between the subsequent image and the initial image.
  • An initial image "o" is preprocessed 202, then filtered 204 to obtain a matrix of values, for example, optical intensities, representing a portion of the initial image.
  • preprocessing includes transforming the three RGB color components into a single intensity component.
  • An exemplary intensity component is CCIR 601, shown in Equation (1):
  • I = 0.299R + 0.587G + 0.114B   (1)
  • where I is the CCIR 601 "gray scale" intensity component, expressed in terms of red (R), green (G), and blue (B) intensities.
  • CCIR 601 intensity may be used, for example, as a measure of the "whiteness" of a particular pixel in an image from an acetowhitening test. Different expressions for intensity may be used, and the choice may be geared to the specific type of diagnostic test conducted. In an alternative embodiment, a measure of radiant power as determined by a spectrometer may be used in place of the intensity component of Equation (1).
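  • For illustration, Equation (1) is a one-line operation in NumPy; this sketch assumes an H x W x 3 floating-point RGB array and is not code from the patent.

```python
import numpy as np

def ccir601_intensity(rgb: np.ndarray) -> np.ndarray:
    """Equation (1): I = 0.299R + 0.587G + 0.114B, applied per pixel
    to an H x W x 3 RGB array, returning an H x W gray-scale array."""
    return 0.299 * rgb[..., 0] + 0.587 * rgb[..., 1] + 0.114 * rgb[..., 2]
```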
  • Some embodiments comprise obtaining multiple types of optical signals simultaneously or contemporaneously; for example, some embodiments comprise obtaining a combination of two or more of the following signals: fluorescence spectra, reflectance (backscatter) spectra, and a video signal.
  • Step 202 of Figure 2A is illustrated in blocks 240, 242, and 244 of Figure 2B, where block 240 represents the initial color image, "o", in the sequence, block 242 represents conversion of color data to gray scale using Equation 1, and block 244 represents the image of block 240 after conversion to gray scale.
  • Step 204 of Figure 2A represents filtering a 256 x 256 portion of the initial image, for example, a portion analogous to the 256 x 256 central portion 104 of the image 102 of Figure 1A, using Laplacian of Gaussian filtering.
  • Other filtering techniques are used in other embodiments.
  • Preferred embodiments employ Laplacian of Gaussian filtering, which combines the Laplacian second derivative approximation with the Gaussian smoothing filter to reduce the high frequency noise components prior to differentiation.
  • This filtering step may be performed by discrete convolution in the space domain, or by frequency domain filtering.
  • the Laplacian of Gaussian (LoG) filter may be expressed in terms of x and y coordinates (centered on zero) as shown in Equation (2):
  • LoG(x,y) = -(1/(πσ⁴)) · [1 - (x² + y²)/(2σ²)] · exp(-(x² + y²)/(2σ²))   (2)
  • where x and y are space coordinates and σ is the Gaussian standard deviation.
  • an approximation to the LoG function is used.
  • approximation kernels of size 9 x 9, 21 x 21, and 31 x 31 are used.
  • Other embodiments employ different kernel approximations and/or different values of Gaussian standard deviation.
  • the LoG filter size may be chosen so that invalid scans are failed and valid scans are passed with a minimum of error.
  • use of a larger filter size is better at reducing large structured noise and is more sensitive to larger image features and larger motion, while use of a smaller filter size is more sensitive to smaller features and smaller motion.
  • One embodiment of the invention comprises using more than one filter size, adjusting to coordinate with the kind of motion being tracked and the features being imaged.
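  • The sketch below builds a discrete LoG kernel per Equation (2) and applies it by discrete convolution in the space domain. The pairing of kernel size with a particular sigma is an illustrative assumption; the patent's exact standard deviations are not reproduced in this extraction.

```python
import numpy as np
from scipy.ndimage import convolve

def log_kernel(size: int, sigma: float) -> np.ndarray:
    """Discrete Laplacian-of-Gaussian kernel of odd side length `size`,
    sampled from Equation (2) and shifted to zero mean so that flat
    image regions filter to zero."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    r2 = x**2 + y**2
    k = -(1.0 / (np.pi * sigma**4)) * (1 - r2 / (2 * sigma**2)) * np.exp(-r2 / (2 * sigma**2))
    return k - k.mean()

# Illustrative use: a LoG 9 filter applied to a gray-scale image.
gray = np.random.rand(256, 256)          # stand-in for real image data
filtered = convolve(gray, log_kernel(9, 1.4))
```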
  • Step 204 of Figure 2A is illustrated in Figure 2B in blocks 244, 246, and 248, where block 244 represents data from the initial image in the sequence after conversion to gray scale intensity, block 246 represents the application of the LoG filter, and block 248 represents the 256 x 256 matrix of data values, Go(x,y), which is the "gold standard" by which other images are compared in validating misalignment correction determinations in this embodiment.
  • preferred embodiments validate a misalignment correction determination by comparing a given image to its preceding image in the sequence, rather than to the initial "gold standard" image.
  • While Figure 2A, Figure 2B, and Figure 2C show application of the LoG filter as a discrete convolution in the space domain, resulting in a standard expressed in space coordinates, other preferred embodiments comprise applying the LoG filter in the frequency domain.
  • the LoG filter is preferably zero padded to the image size.
  • Steps 206 and 208 of Figure 2A represent preprocessing an image "i", for example, by converting RGB values to gray scale intensity as discussed above, and performing LoG filtering to obtain Gi(x,y), a matrix of values from image "i" which is compared with that of another image in the sequence in order to determine a misalignment correction between the two images.
  • Steps 206 and 208 of Figure 2A are illustrated in Figure 2B in blocks 250, 252, 254, 256, and 258, where fi(x,y) in block 250 is the raw image data from image "i", block 252 represents conversion of the fi(x,y) data to gray scale intensities as shown in block 254, and block 256 represents application of the LoG filter on the data of block 254 to produce the data of block 258, Gi(x,y).
  • steps 212 and 214 of Figure 2A represent preprocessing an image "j", for example, by converting RGB values to gray scale intensity as discussed above, and performing LoG filtering to obtain Gj(x,y), a matrix of values from image "j" which is compared with image "i" in order to determine a measure of misalignment between the two images.
  • image "j" is subsequent to image "i” in the sequence.
  • "i" and “j” are consecutive images.
  • Steps 212 and 214 of Figure 2A are illustrated in Figure 2B in blocks 264, 266, 268, 270, and 272, where "j" is "i+1", the image consecutive to image "i" in the sequence.
  • block 264 is the raw "i+1" image data
  • block 266 represents conversion of the "i+1” data to gray scale intensities as shown in block 268,
  • block 270 represents application of the LoG filter on the data of block 268 to produce the data of block 272, Gi+1(x,y).
  • Steps 210 and 216 of Figure 2A represent applying a Fourier transform, for example, a Fast Fourier Transform (FFT), using Gi(x,y) and Gj(x,y), respectively, to obtain Fi(u,v) and Fj(u,v), which are matrices of values in the frequency domain corresponding to data from images "i" and "j", respectively.
  • Steps 210 and 216 of Figure 2A are illustrated in Figure 2B by blocks 258, 260, 262, 272, 274, and 276, where "j" is "i+1", the image consecutive to image "i” in the sequence.
  • block 258 represents the LoG filtered data, Gi(x,y), corresponding to image "i”
  • block 260 represents taking the Fast Fourier Transform of Gi(x,y) to obtain Fi(u,v), shown in block 262.
  • block 272 is the LoG filtered data, Gi+1(x,y), corresponding to image "i+1"
  • block 274 represents taking the Fast Fourier Transform of Gi+1(x,y) to obtain Fi+1(u,v), shown in block 276.
  • Step 218 of Figure 2A represents computing the cross correlation Fi(u,v) F*j(u,v), where Fi(u,v) is the Fourier transform of data from image "i", F*j(u,v) is the complex conjugate of the Fourier transform of data from image "j", and u and v are frequency domain variables.
  • the cross-correlation of two signals of length N1 and N2 provides N1+N2-1 values; therefore, to avoid aliasing problems due to under-sampling, the two signals should be padded with zeros up to N1+N2-1 samples.
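  • As a sketch of this zero-padding step, NumPy's FFT can pad each axis to N1 + N2 - 1 through its `s` argument; this is an implementation assumption rather than the patent's code.

```python
import numpy as np

def padded_cross_correlation(g_i: np.ndarray, g_j: np.ndarray) -> np.ndarray:
    """Zero-pad both signals to N1 + N2 - 1 samples per axis before
    transforming, avoiding the aliasing (wrap-around) noted above."""
    shape = [n1 + n2 - 1 for n1, n2 in zip(g_i.shape, g_j.shape)]
    F_i = np.fft.fft2(g_i, s=shape)      # fft2 zero-pads up to `shape`
    F_j = np.fft.fft2(g_j, s=shape)
    return np.fft.ifft2(F_i * np.conj(F_j)).real
```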
  • Step 218 of Figure 2A is represented in Figure 2B by blocks 262, 276, and 278.
  • Block 278 of Figure 2B represents computing the cross correlation, Fi(u,v) F*i+1(u,v), using Fi(u,v), the Fourier transform of data from image "i", and F*i+1(u,v), the complex conjugate of the Fourier transform of data from image "i+1".
  • Step 220 of Figure 2A represents computing the inverse Fourier transform of the cross-correlation computed in step 218.
  • Step 220 of Figure 2A is represented in Figure 2B by block 280.
  • the resulting inverse Fourier transform maps how well the 256 x 256 portions of images "i" and "j" match up with each other given various combinations of x- and y-shifts.
  • the normalized correlation coefficient closest to 1.0 corresponds to the x-shift and y-shift position providing the best match, and is determined from the resulting inverse Fourier transform.
  • correlation coefficients are normalized by dividing matrix values by a scalar computed as the product of the square root of the (0,0) value of the auto-correlation of each image. In this way, variations in overall brightness between the two images have a more limited effect on the correlation coefficient, so that the actual movement within the image frame field between the two images is better reflected in the misalignment determination.
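  • A sketch of the peak search and normalization described above follows. Dividing by the square roots of the (0,0) auto-correlation values is equivalent to dividing by each image's summed squared intensity; the names and the index-unwrapping convention are illustrative assumptions.

```python
import numpy as np

def find_shift(g_i: np.ndarray, g_j: np.ndarray):
    """Return (dx, dy) and the peak normalized correlation coefficient."""
    corr = np.fft.ifft2(np.fft.fft2(g_i) * np.conj(np.fft.fft2(g_j))).real
    # The (0,0) value of each image's auto-correlation equals its sum of
    # squared intensities, giving the normalizing scalar:
    corr /= np.sqrt((g_i**2).sum()) * np.sqrt((g_j**2).sum())
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    h, w = corr.shape
    dy = dy - h if dy > h // 2 else dy   # unwrap cyclic indices into
    dx = dx - w if dx > w // 2 else dx   # signed displacements
    return dx, dy, corr.max()
```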
  • Step 222 of Figure 2A represents determining misalignment values dx, dy, d, sum(dx), sum(dy), and Sum(dj), where dx is the computed displacement between the two images "i" and "j" in the x-direction, dy is the computed displacement between the two images in the y-direction, d is the square root of the sum dx² + dy² and represents an overall displacement between the two images, sum(dx) is the cumulative x-displacement between the current image "j" and the first image in the sequence "o", sum(dy) is the cumulative y-displacement between the current image "j" and the first image in the sequence "o", and Sum(dj) is the cumulative displacement, d, between the current image "j" and the first image in the sequence "o".
  • Step 222 of Figure 2A is represented in Figure 2B by blocks 282, 284, and 286.
  • Blocks 284 and 286 represent finding the maximum value in the data of block 282 in order to calculate dx, dy, d, sum(dx), sum(dy), and Sum(di+1) as described above, where image "j" in Figure 2A is "i+1" in Figure 2B, the image consecutive to image "i".
  • Steps 224, 226, and 228 of Figure 2A represent one method of validating the misalignment correction determined for image "j" in step 222 of Figure 2A.
  • This method of validating misalignment correction is represented in blocks 287, 289, 291, 296, 297, and 298 of Figure 2C.
  • Another method of validating a misalignment correction is represented in steps 230, 232, and 234 of Figure 2A; and this method is represented in blocks 288, 290, 292, 293, 294, and 295 of Figure 2B.
  • Figure 2C is a schematic flow diagram depicting steps in a version of the methods shown in Figure 2A of determining a correction for a misalignment between two images in which validation is performed using data from two consecutive images.
  • Preferred embodiments comprise using consecutive or near-consecutive images to validate a misalignment correction determination, as in Figure 2C.
  • Other embodiments comprise using the initial image to validate a misalignment correction determination for a given image, as in Figure 2B.
  • step 224 represents realigning Gj(x,y), the LoG-filtered data from image "j", to match up with Gi(x,y), the LoG-filtered data from image "i", using the misalignment values dx and dy determined in step 222.
  • image "j" is consecutive to image "i" in the sequence of images.
  • step 230 represents realigning Gj(x,y), the LoG-filtered data from image "j", to match up with Go(x,y), the LoG-filtered "gold standard" data from the initial image "o", using the displacement values sum(dx) and sum(dy) determined in step 222.
  • Step 230 of Figure 2A is represented in block 288 of Figure 2B.
  • Step 226 of Figure 2A represents comparing corresponding validation cells from Gj(x,y) and Gi(x,y) by computing correlation coefficients for each cell.
  • a 128 x 128 pixel central portion of the realigned Gi+1(x,y) is selected, and the corresponding 128 x 128 pixel central portion of Gi(x,y) is selected, as shown in blocks 289 and 291 of Figure 2C.
  • An exemplary 128 x 128 pixel validation region 154 is shown in Figure 1B.
  • the embodiment comprises computing a correlation coefficient for each of 16 validation cells.
  • An exemplary validation cell from each of the realigned Gi+1(x,y) matrix 291 and the Gi(x,y) matrix 289 is shown in blocks 297 and 296 of Figure 2C.
  • the validation cells are as depicted in the 32 x 32 pixel divisions 156 of the 128 x 128 pixel validation region 154 of Figure 1B. Different embodiments use different numbers and/or different sizes of validation cells.
  • Correlation coefficients are computed for each of the 16 cells, as shown in block 298 of Figure 2C. Each correlation coefficient is a normalized cross-correlation coefficient as shown in Equation (5):
  • c'(m,n) = Σp Σq I1[p,q] · I2[p,q] / sqrt( (Σp Σq I1[p,q]²) · (Σp Σq I2[p,q]²) )   (5)
  • c'(m,n) is the normalized cross-correlation coefficient for the validation cell (m,n)
  • m is an integer 1 to 4 corresponding to the column of the validation cell whose correlation coefficient is being calculated
  • n is an integer 1 to 4 corresponding to the row of the validation cell whose correlation coefficient is being calculated
  • p and q are matrix element markers
  • I1[p,q] are elements of the cell in column m and row n of the 128 x 128 portion of the realigned image shown in block 291 of Figure 2C
  • I2[p,q] are elements of the cell in column m and row n of the 128 x 128 portion of Gi(x,y) shown in block 289 of Figure 2C.
  • Equation (5) is similar to an auto-correlation in the sense that a subsequent image is realigned with a prior image based on the determined misalignment correction so that, ideally, the aligned images appear to be identical.
  • a low value of c'(m,n) indicates a mismatching between two corresponding cells.
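  • For illustration, the per-cell computation can be sketched as follows: split the two aligned 128 x 128 validation regions into a 4 x 4 grid of 32 x 32 cells and evaluate Equation (5) in each. The helper name and loop structure are assumptions, not the patent's code.

```python
import numpy as np

def cell_correlations(region1: np.ndarray, region2: np.ndarray, cell: int = 32) -> np.ndarray:
    """Normalized cross-correlation coefficient c'(m,n), per Equation (5),
    for each cell of two aligned validation regions (e.g., 128 x 128
    regions split into a 4 x 4 grid of 32 x 32 cells)."""
    rows, cols = region1.shape[0] // cell, region1.shape[1] // cell
    c = np.zeros((rows, cols))
    for n in range(rows):
        for m in range(cols):
            i1 = region1[n*cell:(n+1)*cell, m*cell:(m+1)*cell]
            i2 = region2[n*cell:(n+1)*cell, m*cell:(m+1)*cell]
            denom = np.sqrt((i1**2).sum() * (i2**2).sum())
            c[n, m] = (i1 * i2).sum() / denom if denom > 0 else 0.0
    return c
```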
  • the misalignment correction determination is then either validated or rejected based on the values of the 16 correlation coefficients computed in step 298 of Figure 2C. For example, each correlation coefficient may be compared against a threshold maximum value. This corresponds to step 228 of Figure 2A.
  • Step 232 of Figure 2A represents comparing corresponding validation cells from Gj(x,y) and Go(x,y) by computing correlation coefficients for each cell.
  • a 128 x 128 pixel central portion of the realigned Gj(x,y) is selected, and the corresponding 128 x 128 pixel central portion of Go(x,y) is selected, as shown in blocks 292 and 290 of Figure 2B.
  • An exemplary 128 x 128 pixel validation region 154 is shown in Figure 1B.
  • the embodiment comprises computing a correlation coefficient for each of the 16 validation cells.
  • An exemplary validation cell from each of the realigned Gi+1(x,y) matrix 292 and the Go(x,y) matrix 290 is shown in blocks 294 and 293 of Figure 2B.
  • the validation cells are as depicted in the 32 x 32 pixel divisions 156 of the 128 x 128 pixel validation region 154 of Figure 1B. Different embodiments use different numbers of and/or different sizes of validation cells. Correlation coefficients are computed for each of the 16 cells, as shown in block 295 of Figure 2B.
  • Each correlation coefficient is a normalized "auto"-correlation coefficient as shown in Equation (5) above, where I1[p,q] are elements of the cell in column m and row n of the 128 x 128 portion of the realigned subsequent image shown in block 292 of Figure 2B, and I2[p,q] are elements of the cell in column m and row n of the 128 x 128 portion of Go(x,y) shown in block 290 of Figure 2B.
  • a low value of c'(m,n) indicates a mismatching between two corresponding cells.
  • the misalignment determination is then either validated or rejected based on the values of the 16 correlation coefficients computed in block 295 of Figure 2B. This corresponds to step 234 of Figure 2A.
  • determinations of misalignment correction and validation of these determinations as shown in each of Figure 2A, Figure 2B, and Figure 2C are performed using a plurality of the images in a given sequence.
  • determinations of misalignment correction and validations thereof are performed while images are being obtained, so that an examination in which a given sequence of images is obtained may be aborted before all the images are obtained.
  • a misalignment correction is determined, validated, and compensated for by adjusting the optical signal detection device obtaining the images.
  • an adjustment of the optical signal detection device is made after each of a plurality of images are obtained.
  • an adjustment, if required by the misalignment correction determination, is made after every image subsequent to the first image (except the last image), and prior to the next consecutive image.
  • a cervical tissue scan comprising a sequence of 13 images is performed using on-the-fly misalignment correction determination, validation, and camera adjustment, such that the scan is completed in about 12 seconds.
  • Other embodiments comprise obtaining sequences of any number of images in more or less time than indicated here.
  • Each of steps 228 and 234 of the embodiment of Figure 2A represents applying a validation algorithm to determine at least the following: (1) whether the misalignment correction can be made, for example, by adjusting the optical signal detection device, and (2) whether the misalignment correction determined is valid.
  • the validation algorithm determines that a misalignment correction cannot be executed during an acetowhitening exam conducted on cervical tissue in time to provide sufficiently aligned subsequent images, if either of conditions (a) or (b) is met, as follows: (a) di, the displacement between the current image "i" and the immediately preceding image "i-1", is greater than 0.55-mm, or (b) Sum(di), the total displacement between the current image and the first image in the sequence, "o", is greater than 2.5-mm. If either of these conditions is met, the exam in progress is aborted, and another exam must be performed.
  • Other embodiments may comprise the use of different validation rules.
  • validation is performed for each determination of misalignment correction by counting how many of the correlation coefficients c'(m,n) shown in Equation (5), corresponding to the 16 validation cells, are less than 0.5. If this number is greater than 1, the exam in progress is aborted.
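  • Taken together, the displacement thresholds and the cell-count rule amount to a small abort test, sketched below with the values quoted above (0.55-mm step, 2.5-mm cumulative, more than one cell with c' < 0.5); the function name is illustrative.

```python
import numpy as np

def scan_should_abort(d_i: float, sum_d: float, cell_corrs: np.ndarray) -> bool:
    """Abort if the step displacement exceeds 0.55 mm, the cumulative
    displacement exceeds 2.5 mm, or more than one validation cell has a
    correlation coefficient below 0.5."""
    if d_i > 0.55 or sum_d > 2.5:
        return True
    return int((cell_corrs < 0.5).sum()) > 1
```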
  • Other embodiments may comprise the use of different validation rules. Gradual changes in image features, such as acetowhitening of tissue or changes in glare, cause discrepancies which are reflected in the correlation coefficients of the validation cells, but which do not represent a spatial shift.
  • the validation is performed as shown in Figure 2C, where validation cells of consecutive images are used to calculate the correlation coefficients.
  • Figure 3 depicts a subset of adjusted, filtered images 302, 306, 310, 314, 318, 322 from a sequence of images of a tissue with an overlay of gridlines showing the validation cells used in validating the determinations of misalignment correction between the images, according to an illustrative embodiment of the invention.
  • the number of validation cells with correlation coefficient below 0.5 for the misalignment-corrected images of Figure 3 is 0, 1, 0, 0, and 1 for images 306, 310, 314, 318, and 322, respectively. Since none of the images have more than one coefficient below 0.5, this sequence is successful and is not aborted. This is a good result in the example of Figure 3, since there is no significant tissue movement occurring between the misalignment-corrected images. There is only a gradually changing glare, seen to move within the validation region 304, 308, 312, 316, 320, 324 of each image.
  • By contrast, if the validation were instead performed against the initial image, as in Figure 2B, the number of validation cells with correlation coefficient below 0.5 for the misalignment-corrected images of Figure 3 is 3, 4, 5, 5, and 6 for images 306, 310, 314, 318, and 322, respectively. This is not a good result in this example, since the exam would be erroneously aborted, due only to gradual changes in glare or whitening of tissue, not uncompensated movement of the tissue sample.
  • validation cells that are featureless or have low signal-to-noise ratio are eliminated from consideration. These cells can produce meaningless correlation coefficients. Featureless cells in a preferred embodiment are identified and eliminated from consideration by examining the deviation of the sum squared gradient of a given validation cell from the mean of the sum squared gradient of all cells; a cell whose sum squared gradient falls sufficiently far below the mean is treated as featureless, as in the following exemplary rule.
  • For such a cell, c'(m,n) is set to 1.0 for the given validation cell, so the cell does not count against validation of the misalignment correction determination in the rubrics of either step 228 or step 234 of Figure 2A, since a correlation coefficient of 1.0 represents a perfect match.
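  • The exemplary rule itself is not reproduced in this extraction; the sketch below uses a mean-minus-k-standard-deviations threshold on the sum squared gradient as an illustrative stand-in for it.

```python
import numpy as np

def featureless_cells(cells: list, k: float = 1.0) -> np.ndarray:
    """Flag validation cells whose sum squared gradient falls well below
    the mean over all cells; flagged cells have c'(m,n) treated as 1.0
    so they cannot veto an otherwise valid correction. The threshold
    (mean - k * std) is an assumption, not the patent's exact rule."""
    def ssg(cell: np.ndarray) -> float:
        gy, gx = np.gradient(cell.astype(float))   # finite-difference gradients
        return float((gx**2 + gy**2).sum())
    values = np.array([ssg(c) for c in cells])
    return values < values.mean() - k * values.std()
```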
  • LoG filtering may result in "wraparound error."
  • a preferred embodiment employs an image blending technique such as "feathering" to smooth border discontinuities, while requiring only a minimal amount of additional processing time.
  • FIG. 4A depicts a sample image 402 after application of a 9-pixel size [9 x 9] Laplacian of Gaussian filter (LoG 9 filter) on an exemplary image from a sequence of images of tissue, according to an illustrative embodiment of the invention.
  • the filtered intensity values are erroneous at the top edge 404, the bottom edge 406, the right edge 410, and the left edge 408 of the image 402. Since LoG frequency domain filtering corresponds to cyclic convolution in the space-time domain, intensity discontinuities between the top and bottom edges of an image and between the right and left edges of an image result in erroneous gradient approximations.
  • Feathering comprises removal of border discontinuities prior to application of a filter.
  • feathering is performed on an image before LoG filtering, for example, between steps 206 and 208 in Figure 2A.
  • feathering is preferably performed prior to both Fourier transformation and LoG filtering.
  • an illustrative feathering algorithm is as follows:
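  • The feathering equations themselves are not reproduced in this extraction. As a sketch of the idea under that caveat, one plausible implementation tapers the outer d pixels toward zero with a linear ramp, so that opposite borders meet smoothly under cyclic convolution while interior pixels are untouched.

```python
import numpy as np

def feather(img: np.ndarray, d: int = 20) -> np.ndarray:
    """Taper the outer `d` pixels of each border with a linear ramp.
    Pixels farther than `d` from every edge are left unchanged, matching
    the behavior described for Figure 4B. The linear ramp is an
    illustrative choice, not the patent's exact blending function."""
    def ramp(n: int) -> np.ndarray:
        edge_dist = np.minimum(np.arange(n), np.arange(n)[::-1])
        return np.minimum(1.0, edge_dist / d)
    h, w = img.shape
    return img * np.outer(ramp(h), ramp(w))
```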
  • FIG. 4B depicts the application of both a feathering technique and a LoG filter on the same unfiltered image used in Figure 4A.
  • the feathering is performed to account for border processing effects, according to an illustrative embodiment of the invention.
  • a feathering distance, d, of 20 pixels was used.
  • Other embodiments use other values of d.
  • the filtered image 420 of Figure 4B does not display uncharacteristically large or small gradient intensity values at the top edge 424, bottom edge 426, right edge 430, or left edge 428, since discontinuities are smoothed prior to LoG filtering. Also, there is minimal contrast suppression of image detail at the borders. Pixels outside the feathering distance, d, are not affected. The use of feathering here results in more accurate determinations of misalignment correction between two images in a sequence of images.
  • another method of border smoothing is multiplication of the unfiltered image data by a Hamming window. In some embodiments, a Hamming window function is multiplied with the image data before Fourier transformation so that the border pixels are gradually attenuated to remove discontinuities.
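A minimal sketch of this Hamming-window alternative, assuming a separable 2-D window built from NumPy's one-dimensional Hamming function (names are illustrative):

```python
import numpy as np

def apply_hamming_window(image):
    """Multiply image data by a separable 2-D Hamming window.

    Border pixels are strongly attenuated (the window falls to 0.08 at the
    edges), so the top/bottom and left/right discontinuities that produce
    wraparound error are suppressed before the Fourier transform.
    """
    rows, cols = image.shape
    window = np.outer(np.hamming(rows), np.hamming(cols))
    return image.astype(float) * window
```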
  • Figure 5A is identical to Figure 4A and depicts the application of a LoG 9 filter on an exemplary image from a sequence of images of tissue according to an illustrative embodiment of the invention.
  • the filtered intensity values are erroneous at the top edge 404, the bottom edge 406, the right edge 410, and the left edge 408 of the image 402.
  • Figure 5B depicts the application of both a Hamming window and a LoG 9 filter on the same unfiltered image used in Figure 5A.
  • Hamming windowing is performed to account for border processing effects, according to an illustrative embodiment of the invention.
  • Each of the edges 524, 526, 528, 530 of the image 520 of Figure 5B no longer shows the extreme filtered intensity values seen at the edges 404, 406, 408, 410 of the image 402 of Figure 5A.
  • application of the feathering technique is preferred over application of Hamming windowing.
  • a skilled artisan may employ other methods of smoothing border discontinuities.
  • Figure 6 depicts the determination of a misalignment correction between two images using methods including the application of LoG filters of various sizes, as well as the application of a Hamming window technique and a feathering technique, according to illustrative embodiments of the invention.
  • Image 602 and image 604 at the top of Figure 6 are consecutive images from a sequence of images of cervix tissue obtained during a diagnostic exam, each with a pixel resolution of about 0.054-mm.
  • Figure 6 depicts the application of four different image filtering algorithms: (1) Hamming window with LoG 9 filtering, (2) feathering with LoG 9 filtering, (3) feathering with LoG 21 filtering, and (4) feathering with LoG 31 filtering.
  • Each of these algorithms is implemented as part of a misalignment correction determination and validation technique as illustrated in Figure 2A and Figure 2C, and values of dx and dy between images 602 and 604 of Figure 6 are determined using each of the four filtering algorithms.
  • for image 602, each of the four different image filtering algorithms (1)-(4) listed above is applied, resulting in images 606, 610, 614, and 618, respectively, each having 256 x 256 pixels.
  • the four different image filtering algorithms are also applied for image 604, resulting in images 608, 612, 616, and 620, respectively, each having 256 x 256 pixels.
  • Values of (dx, dy) determined using Hamming + LoG 9 filtering are (-7, 0), expressed in pixels.
  • Values of (dx, dy) determined using feathering + LoG 9 filtering are (-2, -10).
  • Values of (dx, dy) determined using feathering + LoG 21 filtering are (-1, -9).
  • Values of (dx, dy) determined using feathering + LoG 31 filtering are (0, -8).
  • a false positive occurs when a good misalignment correction determination is improperly rejected as a failure by a given validation rule.
  • the classification of a validation result as a "true positive" or a "false positive" is made by visual inspection of the pair of sequential images. In preferred embodiments, whenever true failures occur, the scan should be aborted.
  • situations where true failures occur in certain embodiments include image pairs between which there is one or more of the following: a large non-translational deformation such as warping or tilting; a large jump for which motion tracking cannot compute a correct translational displacement; rotation greater than about 3 degrees; situations in which a target laser is left on; video system failure such as blur, dark scan lines, or frame shifting; cases where the image is too dark and noisy, in shadow; cases where a vaginal speculum (or other obstruction) blocks about half the image; other obstructions such as sudden bleeding.
  • a set of validation rules is chosen such that true positives are maximized and false positives are minimized. Sensitivity and specificity can be adjusted by adjusting choice of filtering algorithms and/or choice of validation rules.
  • Table 1 shows the number of true positives (true failures) and false positives (false failures) determined by a validation rule as depicted in Figure 2A and Figure 2C where validation is determined using consecutive images.
  • Table 1 shows various combinations of filtering algorithms and validation rules. The four filtering algorithms used are (1) Hamming windowing with LoG 9 filtering, (2) feathering with LoG 9 filtering, (3) feathering with LoG 21 filtering, and (4) feathering with LoG 31 filtering.
  • the threshold values, c'(m,n), correspond to the normalized "auto"-correlation coefficient of Equation (5) that must be met or exceeded in order for a validation cell to "pass" in an embodiment.
  • the "Number Threshold” column indicates the maximum number of "failed” validation cells, out of the 16 total cells, that are allowed for a misalignment correction determination to be accepted in an embodiment. If more than this number of validation cells fail, then the misalignment correction determination is rejected.
  • Table 1. True positives and false positives of validation determinations for embodiments using various combinations of filtering algorithms and validation rules.
  • Preferred embodiments for the determination and validation of misalignment corrections between 256 x 256 pixel portions of images of cervical tissue with pixel resolution of about 0.054-mm employ one or more of the following: (1) use of feathering for image border processing, (2) application of LoG 21 filter, (3) elimination of validation cells with low signal-to-noise ratio, and (4) use of consecutive images for validation.
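As referenced in the discussion of Table 1 above, a validation rule combining a coefficient threshold with a number threshold might be sketched as follows; the function name, NumPy usage, and the default values (from the 0.5 coefficient threshold and the more-than-one-cell abort rule described earlier) are illustrative assumptions:

```python
import numpy as np

def passes_validation(cell_coeffs, coeff_threshold=0.5, number_threshold=1):
    """Accept or reject a misalignment correction from per-cell coefficients.

    cell_coeffs holds the 16 normalized correlation coefficients c'(m,n),
    one per 32 x 32 validation cell. A cell "fails" when its coefficient is
    below coeff_threshold; the correction is rejected (and the scan may be
    aborted) when more than number_threshold cells fail.
    """
    failed = int(np.sum(np.asarray(cell_coeffs) < coeff_threshold))
    return failed <= number_threshold
```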

Abstract

The invention provides methods of determining a correction for a misalignment between at least two images in a sequence of images due at least in part to sample movement. The methods are applied, for example, in the processing and analysis of a sequence of images of biological tissue in a diagnostic procedure. The invention also provides methods of validating the correction for a misalignment between at least two images in a sequence of images of a sample. The methods may be applied in deciding whether a correction for misalignment accurately accounts for sample motion.

Description

METHODS AND SYSTEMS FOR CORRECTING IMAGE MISALIGNMENT
Prior Applications
[0001] The present application claims the benefit of U.S. Patent Application Serial Number 10/273,511, filed October 18, 2002, and U.S. Provisional Patent Application Serial Number 60/414,767, filed on September 30, 2002.
Field of the Invention
[0002] This invention relates generally to image processing. More particularly, the invention relates to correcting image misalignment, where the misalignment is due at least in part to sample movement.
Background of the Invention
[0003] In modern medical practice, it is useful to analyze a sequence of images of in vivo tissue obtained throughout the course of a diagnostic medical procedure. For example, in screening for some forms of cervical cancer, a chemical agent is applied to cervical tissue and the optical response of the tissue is captured in a sequence of colposcopic images. The tissue is characterized by analyzing the time-dependent response of the tissue, as recorded in the sequence of images. During this type of diagnostic procedure, the tissue may move while images are being taken, resulting in a spatial shift of the tissue within the image frame field. The tissue movement may be caused by the natural movement of the patient during the procedure, which can occur even though the patient attempts to remain completely still. Accurate analysis of the sequence of images may require that the images be adjusted prior to analysis to compensate for misalignment caused at least in part by patient movement.

[0004] There is currently a method of stabilizing an electronic image by generating a motion vector which represents the amount and direction of motion occurring between consecutive frames of a video signal. See U.S. Patent No. 5,289,274 to Kondo. However, this method accounts for certain gross movements of a video camera, in particular, certain vibrations caused by the operator of a handheld camcorder. The method does not compensate for misalignment caused by movement of a sample. For example, such a method could not be used to adequately correct an image misalignment caused by the small-scale movement of a patient during a diagnostic procedure.
[0005] Another image stabilization method is based on detecting the physical movement of the camera itself. See U.S. Patent No. 5,253,071 to MacKay, which describes the use of a gimbaled ring assembly that moves as a camera is physically jittered. These types of methods cannot be used to correct misalignments caused by the movement of a sample.
Summary of the Invention
[0006] The invention provides methods of correcting misalignments between sequential images of a sample. The invention is particularly useful for correcting image misalignment due to movement of the sample between images and/or during image acquisition. The invention also allows for real-time, dynamic image alignment for improved optical diagnosis and assessment.

[0007] In a preferred embodiment, the invention comprises determining an x-displacement and a y-displacement corresponding to a misalignment between two images of a tissue sample, where the misalignment is caused by a shift in the position of the sample with respect to the image frame field. For example, in obtaining a sequence of images of an in-situ tissue sample, an embodiment of the invention makes it possible to correct for small image misalignments caused by unavoidable patient motion, such as motion due to breathing. It has been discovered that validating misalignment corrections improves the accuracy of diagnostic procedures that use data from sequential images, particularly where the misalignments are small and the need for accuracy is great. Thus, methods of the invention comprise validating misalignment corrections by splitting individual images into smaller subimages, determining displacement between these subimages, and comparing the subimage displacements to the overall image displacement. Alternatively, validation may comprise adjusting two images according to a misalignment correction, then determining displacement between corresponding subimages and comparing these displacements with a threshold maximum value.
[0008] It has also been discovered that application of a chemical contrast agent, such as acetic acid, prior to or during acquisition of a sequence of tissue images enhances the detection of small-scale image misalignment by increasing intra-image contrast of the tissue images. The enhanced contrast of the tissue features recorded in the images allows for more accurate motion correction determination, since enhanced features may serve as landmarks in determining values of displacement.
[0009] Both misalignment correction determination and validation may be performed such that an accurate adjustment is made for a misalignment before an entire sequence of images is obtained. This allows, for example, "on the fly" adjustment of a camera while a diagnostic exam is in progress. Thus, corrections may be determined, validated, and accurately adjusted for as misalignments occur, reducing the need for retakes and providing immediate feedback as to whether an examination is erroneous. Automatic adjustment may be accomplished by adjusting aspects of the optical interrogation of the sample using a misalignment correction value. Adjustments may be performed, for example, by adjusting aspects of transmission and/or reception of electromagnetic energy associated with the sample. This may include, for example, transmitting a correction signal to a galvanometer system or a voice coil to "null out" a misalignment by adjusting the position of a mirror or other component of the camera obtaining the images according to the correction signal. Alternatively, or additionally, adjustments may be performed by electronically adjusting an aspect of an image, for example, the frame and/or bounds of an image, according to a misalignment correction value, or by performing any other appropriate adjustment procedure.

[0010] Applications of methods of the invention include the processing and analysis of a sequence of images of biological tissue. For example, chemical agents are often applied to tissue prior to optical measurement in order to elucidate physiological properties of the tissue. In one embodiment, acetic acid is applied to cervical tissue in order to whiten the tissue in a way that allows enhanced optical discrimination between normal tissue and certain kinds of diseased tissue. The acetowhitening technique, as well as other diagnostic techniques, and the analysis of images and spectral data obtained during acetowhitening tests are described in co-owned U.S. patent application Serial No. 10/099,881, filed March 15, 2002, and co-owned U.S. patent application entitled, "Method and Apparatus for Identifying Spectral Artifacts," identified by Attorney Docket Number MDS-033, filed September 13, 2002, both of which are hereby incorporated by reference.

[0011] A typical misalignment between two images is less than about 0.55-mm within a two-dimensional, 480 x 500 pixel image frame field covering an area of approximately 25-mm x 25-mm. These dimensions provide an example of the relative scale of misalignment versus image size. In some instances it is only necessary to compensate for misalignments of less than about one millimeter within the exemplary image frame field defined above. In other cases, it is necessary to compensate for misalignments of less than about 0.3-mm within the exemplary image frame field above. Also, the dimensions represented by the image frame field, the number of pixels of the image frame field, and/or the pixel resolution may differ from the values shown above.

[0012] A misalignment correction determination may be inaccurate, for example, due to any one or a combination of the following: non-translational sample motion such as rotational motion, local deformation, and/or warping; changing features of a sample such as whitening of tissue; and image recording problems such as focus adjustment, missing images, blurred or distorted images, low signal-to-noise ratio, and computational artifacts. Validation procedures of the invention identify such inaccuracies.
The methods of validation may be conducted "on-the-fly" in concert with the methods of determining misalignment corrections in order to improve accuracy and to reduce the time required to conduct a given test.
[0013] Once an image misalignment is detected, an embodiment provides for automatically adjusting an optical signal detection device, such as a camera. For example, a camera may be adjusted "on-the-fly" to compensate for misalignments as images are obtained. This improves accuracy and reduces the time required to conduct a given test.
[0014] The optical signal detection device comprises a camera, a spectrometer, or any other device which detects optical signals. The optical signal may be emitted by the sample, diffusely reflected by the sample, transmitted through the sample, or otherwise conveyed from the sample. The optical signal comprises light of wavelength falling in a range between about 190-nm and about 1100-nm. One embodiment comprises obtaining one or more of the following from one or more regions of the tissue sample: fluorescence spectral data, reflectance spectral data, and video images. [0015] Methods comprise analysis of a sample of human tissue, such as cervical tissue. Methods of the invention also include analysis of other types of tissue, such as non-cervical tissue and/or nonhuman tissue. For example, methods comprise analysis of one or more of the following types of tissue: colorectal, gastroesophageal, urinary bladder, lung, skin, and any other tissue type comprising epithelial cells. [0016] A common source of misalignment is movement of a sample. Methods comprise the steps of: obtaining a plurality of sequential images of a sample using an optical signal detection device; determining a correction for a misalignment between two or more of the sequential images, where the misalignment is due at least in part to a movement of the sample; and compensating for the misalignment by automatically adjusting the optical signal detection device.
[0017] The two or more sequential images may be consecutive, or they may be nonconsecutive. In one embodiment, a misalignment correction is identified between a first image and a second image, where the second image is subsequent to the first image. The first image and second image may be either consecutive or nonconsecutive. [0018] Identifying a misalignment correction may involve data filtering. For example, some methods comprise filtering a subset of data from a first image of a plurality of sequential images. A variety of data filtering techniques may be used. In one embodiment, Laplacian of Gaussian filtering is performed. Identifying a misalignment may comprise preprocessing a subset of data from the first image prior to filtering. For example, color intensities may be converted to gray scale before filtering. In some embodiments, filtering comprises frequency domain filtering and/or discrete convolution in the space domain.
[0019] In order to identify a correction for a misalignment, preferred embodiments comprise computing a cross correlation using data from each of two of the plurality of sequential images. In some embodiments, computing a cross correlation comprises computing a product represented by Fi(u,v) F*j(u,v), where Fi(u,v) is a Fourier transform of data derived from a subset of data from a first image, i, of the plurality of sequential images, F*j(u,v) is a complex conjugate of a Fourier transform of data derived from a subset of data from a second image, j, of the plurality of sequential images, and u and v are frequency domain variables. In preferred embodiments, the computing of the cross correlation additionally comprises computing an inverse Fourier transform of the product represented by Fi(u,v) F*j(u,v).
[0020] A method of the invention comprises validating a correction for a misalignment determined between a first image and a second image. Validating a misalignment correction comprises defining one or more validation cells within a bounded image plane; computing for each validation cell a measure of displacement between two (or more) images bound by the image plane using data from the two images corresponding to each validation cell; and validating a correction for misalignment between the two images by comparing the validation cell displacements with the correction. Preferably, each validation cell comprises a subset of the bounded image plane. The two (or more) images may be consecutive images. In some embodiments, the validating step includes eliminating from consideration one or more measures of displacement for corresponding validation cells. For example, measures of displacement from validation cells determined to be likely to contribute to an erroneous validation result are eliminated in some embodiments. In some embodiments, identifying validation cells that are likely to contribute to an erroneous validation result comprises calculating a sum squared gradient for at least one validation cell.
[0021] Methods of the invention comprise obtaining a plurality of sequential images of the sample during an application of a chemical agent to the sample. For example, the chemical agent comprises at least one of the following: acetic acid, formic acid, propionic acid, butyric acid, Lugol's iodine, Schiller's iodine, methylene blue, toluidine blue, indigo carmine, indocyanine green, and fluorescein. Some embodiments comprise obtaining sequential images of the sample during an acetowhitening test.

[0022] In preferred embodiments, the movement of the sample is relative to the optical signal detection device and comprises at least one of the following: translational motion, rotational motion, warping, and local deformation.
[0023] One or more of the sequential images comprise measurements of an optical signal from the sample. The optical signal comprises, for example, visible light, fluoresced light, and/or another form of electromagnetic radiation.
[0024] Methods of the invention comprise determining a correction for misalignment between each of a plurality of pairs of images. Such methods comprise the steps of: obtaining a set of sequential images of a sample using an optical signal detection device; and determining a correction for a misalignment between each of a plurality of pairs of the sequential images, where at least one of the misalignments is due at least in part to a movement of the sample. The correction may then be used to compensate for each of the misalignments by automatically adjusting the optical signal detection device.
[0025] The obtaining step and the determining step may be performed alternately or concurrently, for example. One embodiment comprises determining a correction for a misalignment between a pair of the sequential images less than about 2 seconds after obtaining the latter of the pair of the sequential images. In another embodiment, this takes less than about one second.
[0026] In another aspect, the invention is directed to a method of determining a correction for a misalignment that includes validating the correction. Methods comprise the steps of: obtaining a plurality of sequential images of a sample using an optical signal detection device; determining a correction for a misalignment between at least two of the sequential images; and validating the correction for misalignment between two of the images. An embodiment further comprises compensating for the misalignment by automatically adjusting the optical signal detection device according to the correction determined. In one embodiment, determining a misalignment correction between two images and validating the correction is performed in less than about one second.
[0027] Methods of the invention comprise compensating for a misalignment by determining a correction for a misalignment between a pair of images, validating the misalignment, and automatically realigning one of the pair of images. The realignment may be performed during the acquisition of the images, or afterwards.
Brief Description of the Drawings
[0028] The objects and features of the invention can be better understood with reference to the drawings described below, and the claims. The drawings are not necessarily to scale, emphasis instead generally being placed upon illustrating the principles of the invention. In the drawings, like numerals are used to indicate like parts throughout the various views.

[0029] Figure 1A represents a 480 x 500 pixel image from a sequence of images of in vivo human cervix tissue and shows a 256 x 256 pixel portion of the image from which data is used in determining a correction for a misalignment between two images from a sequence of images of the tissue according to an illustrative embodiment of the invention.
[0030] Figure 1B depicts the image represented in Figure 1A and shows a 128 x 128 pixel portion of the image, made up of 16 individual 32 x 32 pixel validation cells, from which data is used in performing a validation of the misalignment correction determination according to an illustrative embodiment of the invention.

[0031] Figure 2A is a schematic flow diagram depicting steps in a method of determining a correction for a misalignment between two images due at least in part to the movement of a sample according to an illustrative embodiment of the invention.

[0032] Figure 2B is a schematic flow diagram depicting steps in a version of the method shown in Figure 2A of determining a correction for a misalignment between two images due at least in part to the movement of a sample according to an illustrative embodiment of the invention.

[0033] Figure 2C is a schematic flow diagram depicting steps in a version of the method shown in Figure 2A of determining a correction for a misalignment between two images due at least in part to the movement of a sample according to an illustrative embodiment of the invention.
[0034] Figure 3 depicts a subset of adjusted images from a sequence of images of a tissue with an overlay of gridlines showing the validation cells used in validating the determinations of misalignment correction between the images according to an illustrative embodiment of the invention.
[0035] Figure 4A depicts a sample image after application of a 9-pixel size (9 x 9) Laplacian of Gaussian filter (LoG 9 filter) on an exemplary image from a sequence of images of tissue according to an illustrative embodiment of the invention.

[0036] Figure 4B depicts the application of both a feathering technique and a Laplacian of Gaussian filter on the exemplary unfiltered image used in Figure 4A to account for border processing effects according to an illustrative embodiment of the invention.

[0037] Figure 5A depicts a sample image after application of a LoG 9 filter on an exemplary image from a sequence of images of tissue according to an illustrative embodiment of the invention.
[0038] Figure 5B depicts the application of both a Hamming window technique and a LoG 9 filter on the exemplary unfiltered image used in Figure 5A to account for border processing effects according to an illustrative embodiment of the invention.

[0039] Figure 6 depicts the determination of a correction for misalignment between two images using methods including the application of LoG filters of various sizes, as well as the application of a Hamming window technique and a feathering technique according to illustrative embodiments of the invention.
Description of the Illustrative Embodiment
[0040] In general, the invention provides methods of determining a correction for a misalignment between images in a sequence due to movement of a sample. These methods are useful, for example, in the preparation of a sequence of images for analysis, as in medical diagnostics.
[0041] In some diagnostic procedures, methods of the invention comprise applying an agent to a tissue in order to change its optical properties in a way that is indicative of the physiological state of the tissue. The rate and manner in which the tissue changes are important in the characterization of the tissue.
[0042] Certain embodiments of the invention comprise automated and semi-automated analysis of diagnostic procedures that have traditionally required analysis by trained medical personnel. Diagnostic procedures which use automatic image-based tissue analysis provide results having increased sensitivity and/or specificity. See, e.g., co-owned U.S. patent application Serial No. 10/099,881, filed March 15, 2002, and co-owned U.S. patent application entitled, "Method and Apparatus for Identifying Spectral Artifacts," identified by Attorney Docket Number MDS-033, filed September 13, 2002, both of which are incorporated herein by reference.
[0043] In order to facilitate such automatic analysis, it is often necessary to adjust for misalignments caused by sample movement that occurs during the diagnostic procedure. For example, during a given procedure, in vivo tissue may spatially shift within the image frame field from one image to the next due to movement of the patient. Accurate diagnosis requires that this movement be taken into account in the automated analysis of the tissue sample. In some exemplary embodiments, spatial shift correction made at the time images are obtained is more accurate than correction made after all the images are obtained, since "on-the-fly" corrections compensate for smaller shifts occurring over shorter periods of time, rather than larger, more cumulative shifts occurring over longer periods of time.
[0044] If a sample moves while a sequence of images is obtained, the procedure may have to be repeated. For example, this may be because the shift between consecutive images is too large to be accurately compensated for, or because a region of interest moves outside of a usable portion of the frame captured by the optical signal detection device. It is often preferable to compensate for misalignments resulting from sample movement during the collection of images rather than wait until the entire sequence of images has been obtained before compensating for misalignments. Stepwise adjustment of an optical signal detection device throughout image capture reduces the cumulative effect of sample movement. If adjustment is made only after an entire sequence is obtained, it may not be possible to accurately compensate for some types of sample movement. On-the-fly, stepwise compensation for misalignment reduces the need for retakes.
[0045] On-the-fly compensation may also obviate the need to obtain an entire sequence of images before making the decision to abort a failed procedure, particularly when coupled with on-the-fly, stepwise validation of the misalignment correction determination. For example, if the validation procedure detects that a misalignment correction determination is either too large for adequate compensation to be made or is invalid, the procedure may be aborted before obtaining the entire sequence of images. It can be immediately determined whether or not the obtained data is useable. Retakes may be performed during the same patient visit; no follow-up visit to repeat an erroneous test is required. A diagnostic test invalidated by excessive movement of the patient may be aborted before obtaining the entire sequence of images.
[0046] In preferred embodiments, a determination of misalignment correction is expressed as a translational displacement in two dimensions, x and y. Here, x and y represent Cartesian coordinates indicating displacement on the image frame field plane. In other embodiments, corrections for misalignment are expressed in terms of non-Cartesian coordinate systems, such as biradical, spherical, and cylindrical coordinate systems, among others. Alternatives to Cartesian-coordinate systems may be useful, for example, where the image frame field is non-planar.

[0047] Some types of sample motion, including rotational motion, warping, and local deformation, may result in an invalid misalignment correction determination, since it may be impossible to express certain instances of these types of sample motion in terms of a translational displacement, for example, in the two Cartesian coordinates x and y. It is noted, however, that in some embodiments, rotational motion, warping, local deformation, and/or other kinds of non-translational motion are acceptably accounted for by a correction expressed in terms of a translational displacement. The changing features of the tissue, as in acetowhitening, may also affect the determination of a misalignment correction. Image recording problems such as focus adjustment, missing images, blurred or distorted images, low signal-to-noise ratio (i.e. caused by glare), and computational artifacts may affect the correction determination as well. Therefore, validation of a determined correction is often required. In some embodiments, a validation step includes determining whether an individual correction for misalignment is erroneous, as well as determining whether to abort or continue the test in progress. Generally, validation comprises splitting at least a portion of each of a pair of images into smaller, corresponding units (subimages), determining for each of these smaller units a measure of the displacement that occurs within the unit between the two images, and comparing the unit displacements to the overall displacement between the two images.
[0048] In certain embodiments, the method of validation takes into account the fact that features of a tissue sample may change during the capture of a sequence of images. For example, the optical intensity of certain regions of tissue change during an acetowhitening test. Therefore, in preferred embodiments, validation of a misalignment correction determination is performed using a pair of consecutive images. In this way, the difference between the corresponding validation cells of the two consecutive images is less affected by gradual tissue whitening changes, as compared with images obtained further apart in time. In some embodiments, validation is performed using pairs of nonconsecutive images taken within a relatively short period of time, compared with the time in which the overall sequence of images is obtained. In other embodiments, validation comprises the use of any two images in the sequence of images.
[0049] In some exemplary embodiments, a determination of misalignment correction between two images may be inadequate if significant portions of the images are featureless or have low signal-to-noise ratio (i.e. are affected by glare). Similarly, validation using cells containing significant portions which are featureless or which have low signal-to-noise ratio may result in the erroneous invalidation of valid misalignment correction determinations in cases where the featureless portion of the overall image is small enough so that it does not adversely affect the misalignment correction determination. For example, analysis of featureless validation cells may produce meaningless correlation coefficients. One embodiment comprises identifying one or more featureless cells and eliminating them from consideration in the validation of a misalignment correction determination, thereby preventing rejection of a good misalignment correction.

[0050] A determination of misalignment correction may be erroneous due to a computational artifact of data filtering at the image borders. For example, in one exemplary embodiment, an image with large intensity differences between the upper and lower borders and/or the left and right borders of the image frame field undergoes Laplacian of Gaussian frequency domain filtering. Since Laplacian of Gaussian frequency domain filtering corresponds to cyclic convolution in the space-time domain, these intensity differences (discontinuities) yield a large gradient value at the image border, and cause the overall misalignment correction determination to be erroneous, since changes between the two images due to spatial shift are dwarfed by the edge effects. Certain embodiments employ pre-multiplication of image data by a Hamming window to remove or reduce this "wraparound error." Preferred embodiments employ image-blending techniques such as feathering to smooth any border discontinuity, while requiring only a minimal amount of additional processing time.
[0051] Figure 1A represents a 480 x 500 pixel image 102 from a sequence of images of in vivo human cervix tissue and shows a 256 x 256 pixel portion 104 of the image from which data is used in identifying a misalignment correction between two images from a sequence of images of the tissue, according to an illustrative embodiment of the invention. Preferred embodiments comprise illuminating the tissue using either or both a white light source and a UV light source. The image 102 of Figure 1A has a pixel resolution of about 0.054-mm. The embodiments described herein show images with pixel resolutions of about 0.0547-mm to about 0.0537-mm. Other embodiments have pixel resolutions outside this range. In some embodiments, the images of a sequence have an average pixel resolution of between about 0.044-mm and about 0.064-mm. In the embodiment of Figure 1A, the central 256 x 256 pixels 104 of the image 102 are chosen for use in motion tracking. Other embodiments use regions of different sizes for motion tracking, and these regions are not necessarily located in the center of the image frame field. In the embodiment of Figure 1A, the method of motion tracking determines an x-displacement and a y-displacement corresponding to the translational shift (misalignment) between the 256 x 256 central portions 104 of two images in the sequence of images.
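As an illustrative sketch of selecting such a central tracking region (the function name and NumPy usage are assumptions, not part of the patent):

```python
import numpy as np

def central_portion(image, size=256):
    """Extract the central size x size region of an image for motion tracking."""
    r0 = (image.shape[0] - size) // 2
    c0 = (image.shape[1] - size) // 2
    return image[r0:r0 + size, c0:c0 + size]
```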
[0052] The determination of misalignment correction may be erroneous for any number of various reasons, including but not limited to non-translational sample motion (i.e. rotational motion, local deformation, and/or warping), changing features of a sample (i.e. whitening of tissue), and image recording problems such as focus adjustment, missing images, blurred or distorted images, low signal-to-noise ratio, and computational artifacts. Therefore, in preferred embodiments, validation comprises splitting an image into smaller units (called cells), determining displacements of these cells, and comparing the cell displacements to the overall displacement. Figure 1B depicts the image represented in Figure 1A and shows a 128 x 128 pixel portion 154 of the image, made up of 16 individual 32 x 32 pixel validation cells 156, from which data is used in performing a validation of the misalignment correction, according to an illustrative embodiment of the invention.

[0053] Figure 2A, Figure 2B, and Figure 2C depict steps in illustrative methods of determining a misalignment correction between two images of a sequence, and of validating that determination. Steps 202 and 204 of Figure 2A depict steps of developing data from an initial image with which data from a subsequent image are compared in order to determine a misalignment correction between the subsequent image and the initial image. An initial image "o" is preprocessed 202, then filtered 204 to obtain a matrix of values, for example, optical intensities, representing a portion of the initial image. In one embodiment, preprocessing includes transforming the three RGB color components into a single intensity component. An exemplary intensity component is CCIR 601, shown in Equation 1:
I = 0.299R + 0.587G + 0.114B    (1)

where I is the CCIR 601 "gray scale" intensity component, expressed in terms of red (R), green (G), and blue (B) intensities. CCIR 601 intensity may be used, for example, as a measure of the "whiteness" of a particular pixel in an image from an acetowhitening test. Different expressions for intensity may be used, and the choice may be geared to the specific type of diagnostic test conducted. In an alternative embodiment, a measure of radiant power as determined by a spectrometer may be used in place of the intensity component of Equation (1). Some embodiments comprise obtaining multiple types of optical signals simultaneously or contemporaneously; for example, some embodiments comprise obtaining a combination of two or more of the following signals: fluorescence spectra, reflectance (backscatter) spectra, and a video signal. Step 202 of Figure 2A is illustrated in blocks 240, 242, and 244 of Figure 2B, where block 240 represents the initial color image, "o", in the sequence, block 242 represents conversion of color data to gray scale using Equation 1, and block 244 represents the image of block 240 after conversion to gray scale.
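A minimal sketch of the gray-scale conversion of Equation (1), assuming NumPy arrays with a trailing RGB axis (the function name is illustrative):

```python
import numpy as np

def ccir601_intensity(rgb):
    """Collapse an RGB image of shape (..., 3) to CCIR 601 gray scale."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    return 0.299 * r + 0.587 * g + 0.114 * b   # Equation (1)
```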
[0054] Step 204 of Figure 2A represents filtering a 256 x 256 portion of the initial image, for example, a portion analogous to the 256 x 256 central portion 104 of the image 102 of Figure 1A, using Laplacian of Gaussian filtering. Other filtering techniques are used in other embodiments. Preferred embodiments employ Laplacian of Gaussian filtering, which combines the Laplacian second derivative approximation with the Gaussian smoothing filter to reduce the high frequency noise components prior to differentiation. This filtering step may be performed by discrete convolution in the space domain, or by frequency domain filtering. The Laplacian of Gaussian (LoG) filter may be expressed in terms of x and y coordinates (centered on zero) as shown in Equation (2):
LoG(x,y) = -(1/(πσ⁴)) [1 - (x² + y²)/(2σ²)] exp(-(x² + y²)/(2σ²))    (2)
where x and y are space coordinates and σ is the Gaussian standard deviation. In one preferred embodiment, an approximation to the LoG function is used. In the embodiments described herein, approximation kernels of size 9 x 9, 21 x 21, and 31 x 31 are used. The Gaussian standard deviation σ is chosen in certain preferred embodiments as shown in Equation (3):

σ = LoG filter size / 8.49    (3)

where LoG filter size corresponds to the size of the discrete kernel approximation to the LoG function (i.e. 9, 21, and 31 for the approximation kernels used herein). Other embodiments employ different kernel approximations and/or different values of Gaussian standard deviation.

[0055] The LoG filter size may be chosen so that invalid scans are failed and valid scans are passed with a minimum of error. Generally, use of a larger filter size is better at reducing large structured noise and is more sensitive to larger image features and larger motion, while use of a smaller filter size is more sensitive to smaller features and smaller motion. One embodiment of the invention comprises using more than one filter size, adjusting to coordinate with the kind of motion being tracked and the features being imaged.

[0056] Step 204 of Figure 2A is illustrated in Figure 2B in blocks 244, 246, and 248, where block 244 represents data from the initial image in the sequence after conversion to gray scale intensity, block 246 represents the application of the LoG filter, and block 248 represents the 256 x 256 matrix of data values, G0(x,y), which is the "gold standard" by which other images are compared in validating misalignment correction determinations in this embodiment. As detailed in Figure 2C, preferred embodiments validate a misalignment correction determination by comparing a given image to its preceding image in the sequence, not by comparing a given image to the initial image in the sequence as shown in Figure 2B. Although Figure 2A, Figure 2B, and Figure 2C show application of the LoG filter as a discrete convolution in the space domain, resulting in a standard expressed in space coordinates, other preferred embodiments comprise applying the LoG filter in the frequency domain. In either case, the LoG filter is preferably zero padded to the image size.
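The kernel construction and space-domain convolution might be sketched as follows; the zero-mean adjustment, the SciPy boundary handling, and the names are assumptions of this sketch rather than details specified by the patent:

```python
import numpy as np
from scipy.signal import convolve2d

def log_kernel(size):
    """Discrete Laplacian-of-Gaussian kernel, sigma = size / 8.49 (Equation (3))."""
    sigma = size / 8.49
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    r2 = (x ** 2 + y ** 2) / (2.0 * sigma ** 2)
    kernel = -(1.0 / (np.pi * sigma ** 4)) * (1.0 - r2) * np.exp(-r2)
    return kernel - kernel.mean()   # zero mean: a constant region filters to zero

def log_filter(gray, size=21):
    """Space-domain LoG filtering of a gray-scale image by discrete convolution."""
    return convolve2d(gray, log_kernel(size), mode="same", boundary="symm")
```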
[0057] Steps 206 and 208 of Figure 2A represent preprocessing an image "i", for example, by converting RGB values to gray scale intensity as discussed above, and performing LoG filtering to obtain Gi(x,y), a matrix of values from image "i" which is compared with that of another image in the sequence in order to determine a misalignment correction between the two images. Steps 206 and 208 of Figure 2A are illustrated in Figure 2B in blocks 250, 252, 254, 256, and 258, where fi(x,y) in block 250 is the raw image data from image "i", block 252 represents conversion of the fi(x,y) data to gray scale intensities as shown in block 254, and block 256 represents application of the LoG filter on the data of block 254 to produce the data of block 258, Gi(x,y).
[0058] Similarly, steps 212 and 214 of Figure 2A represent preprocessing an image "j", for example, by converting RGB values to gray scale intensity as discussed above, and performing LoG filtering to obtain Gj(x,y), a matrix of values from image "j" which is compared with image "i" in order to determine a measure of misalignment between the two images. In some preferred embodiments, image "j" is subsequent to image "i" in the sequence. In some preferred embodiments, "i" and "j" are consecutive images. Steps 212 and 214 of Figure 2A are illustrated in Figure 2B in blocks 264, 266, 268, 270, and 272, where "j" is "i+1", the image consecutive to image "i" in the sequence. In Figure 2B, block 264 is the raw "i+1" image data, block 266 represents conversion of the "i+1" data to gray scale intensities as shown in block 268, and block 270 represents application of the LoG filter on the data of block 268 to produce the data of block 272, Gi+1(x,y).
[0059] Steps 210 and 216 of Figure 2A represent applying a Fourier transform, for example, a Fast Fourier Transform (FFT), using Gi(x,y) and Gj(x,y), respectively, to obtain Fi(u,v) and Fj(u,v), which are matrices of values in the frequency domain corresponding to data from images "i" and "j", respectively. Steps 210 and 216 of Figure 2A are illustrated in Figure 2B by blocks 258, 260, 262, 272, 274, and 276, where "j" is "i+1", the image consecutive to image "i" in the sequence. In Figure 2B, block 258 represents the LoG filtered data, Gi(x,y), corresponding to image "i", and block 260 represents taking the Fast Fourier Transform of Gi(x,y) to obtain Fi(u,v), shown in block 262. Similarly, in Figure 2B block 272 is the LoG filtered data, Gi+1(x,y), corresponding to image "i+1", and block 274 represents taking the Fast Fourier Transform of Gi+1(x,y) to obtain Fi+1(u,v), shown in block 276.

[0060] Step 218 of Figure 2A represents computing the cross correlation Fi(u,v) F*j(u,v), where Fi(u,v) is the Fourier transform of data from image "i", F*j(u,v) is the complex conjugate of the Fourier transform of data from image "j", and u and v are frequency domain variables. The cross-correlation of two signals of length N1 and N2 provides N1+N2-1 values; therefore, to avoid aliasing problems due to under-sampling, the two signals should be padded with zeros up to N1+N2-1 samples. Step 218 of Figure 2A is represented in Figure 2B by blocks 262, 276, and 278. Block 278 of Figure 2B represents computing the cross correlation, Fi(u,v) F*i+1(u,v), using Fi(u,v), the Fourier transform of data from image "i", and F*i+1(u,v), the complex conjugate of the Fourier transform of data from image "i+1". The cross-correlation may also be expressed as c(k,l) in Equation (4):

c(k,l) = Σp Σq I1(p,q) I2(p-k, q-l)    (4)

where variables (k,l) can be thought of as the shifts in each of the x- and y-directions which are being tested in a variety of combinations to determine the best measure of misalignment between two images I1 and I2, and where p and q are matrix element markers.
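A sketch of the zero-padded, frequency-domain cross-correlation of step 218, together with the inverse transform of step 220 described in the next paragraph (NumPy usage and names are illustrative):

```python
import numpy as np

def cross_correlation_surface(g_i, g_j):
    """Cross-correlate two LoG-filtered images via the frequency domain.

    Both inputs are zero padded to N1 + N2 - 1 samples per axis to avoid
    cyclic aliasing; the surface is the inverse FFT of Fi(u,v) * conj(Fj(u,v)).
    """
    rows = g_i.shape[0] + g_j.shape[0] - 1
    cols = g_i.shape[1] + g_j.shape[1] - 1
    f_i = np.fft.fft2(g_i, s=(rows, cols))   # fft2 zero pads up to shape s
    f_j = np.fft.fft2(g_j, s=(rows, cols))
    return np.real(np.fft.ifft2(f_i * np.conj(f_j)))
```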
[0061] Step 220 of Figure 2A represents computing the inverse Fourier transform of the cross-correlation computed in step 218. Step 220 of Figure 2A is represented in Figure 2B by block 280. The resulting inverse Fourier transform maps how well the 256 x 256 portions of images "i" and "j" match up with each other given various combinations of x- and y-shifts. Generally, the normalized correlation coefficient closest to 1.0 corresponds to the x-shift and y-shift position providing the best match, and is determined from the resulting inverse Fourier transform. In a preferred embodiment, correlation coefficients are normalized by dividing matrix values by a scalar computed as the product of the square root of the (0,0) value of the auto-correlation of each image. In this way, variations in overall brightness between the two images have a more limited effect on the correlation coefficient, so that the actual movement within the image frame field between the two images is better reflected in the misalignment determination.

[0062] Step 222 of Figure 2A represents determining misalignment values dx, dy, d, sum(dx), sum(dy), and Sum(dj), where dx is the computed displacement between the two images "i" and "j" in the x-direction, dy is the computed displacement between the two images in the y-direction, d is the square root of the sum dx² + dy² and represents an overall displacement between the two images, sum(dx) is the cumulative x-displacement between the current image "j" and the first image in the sequence "o", sum(dy) is the cumulative y-displacement between the current image "j" and the first image in the sequence "o", and Sum(dj) is the cumulative displacement, d, between the current image "j" and the first image in the sequence "o". Step 222 of Figure 2A is represented in Figure 2B by blocks 282, 284, and 286. Blocks 284 and 286 represent finding the maximum value in the data of block 282 in order to calculate dx, dy, d, sum(dx), sum(dy), and Sum(di+1) as described above, where image "j" in Figure 2A is "i+1" in Figure 2B, the image consecutive to image "i".
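Continuing the sketch, the peak of the normalized correlation surface can be converted to the displacements dx and dy of step 222; the normalization follows the description above (the square root of the product of the (0,0) auto-correlation values of the two images), while the peak-index unwrapping is an implementation assumption:

```python
import numpy as np

def displacement_from_surface(c, g_i, g_j):
    """Locate the correlation peak and convert it to (dx, dy).

    The (0,0) value of an image's auto-correlation is its sum of squares,
    so dividing by sqrt(sum(g_i**2) * sum(g_j**2)) normalizes the surface.
    Peak indices past the midpoint are unwrapped to negative shifts.
    """
    c_norm = c / np.sqrt(np.sum(g_i ** 2) * np.sum(g_j ** 2))
    dy, dx = np.unravel_index(np.argmax(c_norm), c_norm.shape)
    if dy > c.shape[0] // 2:
        dy -= c.shape[0]
    if dx > c.shape[1] // 2:
        dx -= c.shape[1]
    return int(dx), int(dy), float(np.max(c_norm))
```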
[0063] Steps 224, 226, and 228 of Figure 2A represent one method of validating the misalignment correction determined for image "j" in step 222 of Figure 2A. This method of validating misalignment correction is represented in blocks 287, 289, 291, 296, 297, and 298 of Figure 2C. Another method of validating a misalignment correction is represented in steps 230, 232, and 234 of Figure 2A; and this method is represented in blocks 288, 290, 292, 293, 294, and 295 of Figure 2B. Figure 2C is a schematic flow diagram depicting steps in a version of the methods shown in Figure 2A of determining a correction for a misalignment between two images in which validation is performed using data from two consecutive images. Preferred embodiments comprise using consecutive or near-consecutive images to validate a misalignment correction determination, as in Figure 2C. Other embodiments comprise using the initial image to validate a misalignment correction determination for a given image, as in Figure 2B.

[0064] In Figure 2A, step 224 represents realigning Gj(x,y), the LoG-filtered data from image "j", to match up with Gi(x,y), the LoG-filtered data from image "i", using the misalignment values dx and dy determined in step 222. In preferred embodiments, image "j" is consecutive to image "i" in the sequence of images. Here, image "j" is image "i+1" such that Gi(x,y) is aligned with Gi+1(x,y) as shown in block 287 of Figure 2C. Similarly, in Figure 2A, step 230 represents realigning Gj(x,y), the LoG-filtered data from image "j", to match up with G0(x,y), the LoG-filtered "gold standard" data from the initial image "o", using the displacement values sum(dx) and sum(dy) determined in step 222. Step 230 of Figure 2A is represented in block 288 of Figure 2B.
[0065] Step 226 of Figure 2A represents comparing corresponding validation cells from Gj(x,y) and Gi(x,y) by computing correlation coefficients for each cell. This is represented schematically in Figure 2C by blocks 289, 291, 296, 297, and 298 for the case where j = i+1. First, a 128 x 128 pixel central portion of the realigned Gi+1(x,y) is selected, and the corresponding 128 x 128 pixel central portion of Gi(x,y) is selected, as shown in blocks 289 and 291 of Figure 2C. An exemplary 128 x 128 pixel validation region 154 is shown in Figure 1B. Then, the embodiment comprises computing a correlation coefficient for each of 16 validation cells. An exemplary validation cell from each of the realigned Gi+1(x,y) matrix 291 and Gi(x,y) matrix 289 is shown in blocks 297 and 296 of Figure 2C. The validation cells are as depicted in the 32 x 32 pixel divisions 156 of the 128 x 128 pixel validation region 154 of Figure 1B. Different embodiments use different numbers and/or different sizes of validation cells. Correlation coefficients are computed for each of the 16 cells, as shown in block 298 of Figure 2C. Each correlation coefficient is a normalized cross-correlation coefficient as shown in Equation (5):
c'(m,n) = [Σp Σq I1[p,q] I2[p,q]] / sqrt[(Σp Σq I1²[p,q]) (Σp Σq I2²[p,q])]    (5)
where c'(m,n) is the normalized cross-correlation coefficient for the validation cell (m,n), m is an integer 1 to 4 corresponding to the column of the validation cell whose correlation coefficient is being calculated, n is an integer 1 to 4 corresponding to the row of the validation cell whose correlation coefficient is being calculated, p and q are matrix element markers, I1[p,q] are elements of the cell in column m and row n of the 128 x 128 portion of the realigned image shown in block 291 of Figure 2C, and I2[p,q] are elements of the cell in column m and row n of the 128 x 128 portion of Gi(x,y) shown in block 289 of Figure 2C. Here, p = 1 to 32 and q = 1 to 32, and the sums shown in Equation (5) are performed over p and q. The cross-correlation coefficient of Equation (5) is similar to an auto-correlation in the sense that a subsequent image is realigned with a prior image based on the determined misalignment correction so that, ideally, the aligned images appear to be identical. A low value of c'(m,n) indicates a mismatching between two corresponding cells. The misalignment correction determination is then either validated or rejected based on the values of the 16 correlation coefficients computed in block 298 of Figure 2C. For example, each correlation coefficient may be compared against a minimum threshold value. This corresponds to step 228 of Figure 2A.
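A sketch of the per-cell coefficients of Equation (5), assuming the two corresponding 128 x 128 validation regions are supplied as NumPy arrays (names are illustrative):

```python
import numpy as np

def cell_correlations(aligned, reference, cell=32):
    """Normalized cross-correlation c'(m,n) of Equation (5) for each cell.

    Both inputs are the corresponding 128 x 128 central validation regions,
    split into a 4 x 4 grid of 32 x 32 cells.
    """
    grid = aligned.shape[0] // cell
    coeffs = np.empty((grid, grid))
    for n in range(grid):           # row index of the validation cell
        for m in range(grid):       # column index of the validation cell
            i1 = aligned[n * cell:(n + 1) * cell, m * cell:(m + 1) * cell]
            i2 = reference[n * cell:(n + 1) * cell, m * cell:(m + 1) * cell]
            denom = np.sqrt(np.sum(i1 ** 2) * np.sum(i2 ** 2))
            coeffs[n, m] = np.sum(i1 * i2) / denom if denom > 0 else 0.0
    return coeffs
```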
[0066] Step 232 of Figure 2A represents comparing corresponding validation cells from Gj(x,y) and G0(x,y) by computing correlation coefficients for each cell. This is represented schematically in Figure 2B by blocks 290, 292, 293, 294, and 295 for the case where j = i+1. First, a 128 x 128 pixel central portion of the realigned
Gi+1(x,y)
is selected, and the corresponding 128 x 128 pixel central portion of G0(x,y) is selected, as shown in blocks 292 and 290 of Figure 2B. An exemplary 128 x 128 pixel validation region 154 is shown in Figure 1B. Then, the embodiment comprises computing a correlation coefficient for each of the 16 validation cells. An exemplary validation cell from each of the realigned Gi+1(x,y) matrix 292 and G0(x,y) matrix 290 is shown in blocks 294 and 293 of Figure 2B. The validation cells are as depicted in the 32 x 32 pixel divisions 156 of the 128 x 128 pixel validation region 154 of Figure 1B. Different embodiments use different numbers of and/or different sizes of validation cells. Correlation coefficients are computed for each of the 16 cells, as shown in block 295 of Figure 2B. Each correlation coefficient is a normalized "auto"-correlation coefficient as shown in Equation (5) above, where I1[p,q] are elements of the cell in column m and row n of the 128 x 128 portion of the realigned subsequent image shown in block 292 of Figure 2B, and I2[p,q] are elements of the cell in column m and row n of the 128 x 128 portion of G0(x,y) shown in block 290 of Figure 2B. A low value of c'(m,n) indicates a mismatching between two corresponding cells. The misalignment determination is then either validated or rejected based on the values of the 16 correlation coefficients computed in block 295 of Figure 2B. This corresponds to step 234 of Figure 2A.
[0067] In an illustrative embodiment, determinations of misalignment correction and validation of these determinations as shown in each of Figure 2A, Figure 2B, and Figure 2C are performed using a plurality of the images in a given sequence. In preferred embodiments, determinations of misalignment correction and validations thereof are performed while images are being obtained, so that an examination in which a given sequence of images is obtained may be aborted before all the images are obtained. In some embodiments, a misalignment correction is determined, validated, and compensated for by adjusting the optical signal detection device obtaining the images. In certain embodiments, an adjustment of the optical signal detection device is made after each of a plurality of images is obtained. In certain embodiments, an adjustment, if required by the misalignment correction determination, is made after every image subsequent to the first image (except the last image), and prior to the next consecutive image. In one embodiment, a cervical tissue scan comprising a sequence of 13 images is performed using on-the-fly misalignment correction determination, validation, and camera adjustment, such that the scan is completed in about 12 seconds. Other embodiments comprise obtaining sequences of any number of images in more or less time than indicated here.
[0068] Each of steps 228 and 234 of the embodiment of Figure 2A represents applying a validation algorithm to determine at least the following: (1) whether the misalignment correction can be made, for example, by adjusting the optical signal detection device, and (2) whether the misalignment correction determined is valid. In an exemplary embodiment, the validation algorithm determines that a misalignment correction cannot be executed during an acetowhitening exam conducted on cervical tissue in time to provide sufficiently aligned subsequent images, if either of conditions (a) or (b) is met, as follows: (a) di, the displacement between the current image "i" and the immediately preceding image "i-1", is greater than 0.55-mm; or (b) Sum(di), the total displacement between the current image and the first image in the sequence, "o", is greater than 2.5-mm. If either of these conditions is met, the exam in progress is aborted, and another exam must be performed. Other embodiments may comprise the use of different validation rules.
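The abort conditions (a) and (b) might be sketched as follows, assuming displacements measured in pixels at the approximately 0.054-mm pixel resolution quoted earlier (the names and the unit conversion are assumptions of this sketch):

```python
PIXEL_MM = 0.054   # approximate pixel resolution assumed for the conversion

def should_abort(d_pixels, cumulative_d_pixels):
    """Abort rule sketched from conditions (a) and (b) of paragraph [0068].

    (a) displacement from the immediately preceding image exceeds 0.55 mm;
    (b) cumulative displacement from the first image exceeds 2.5 mm.
    """
    return (d_pixels * PIXEL_MM > 0.55) or (cumulative_d_pixels * PIXEL_MM > 2.5)
```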
[0069] In the exemplary embodiment above, validation is performed for each determination of misalignment correction by counting how many of the correlation coefficients c'(m,n) shown in Equation (5), corresponding to the 16 validation cells, are less than 0.5. If this number is greater than 1, the exam in progress is aborted. Other embodiments may comprise the use of different validation rules. Gradual changes in image features, such as acetowhitening of tissue or changes in glare, cause discrepancies which are reflected in the correlation coefficients of the validation cells but which do not represent a spatial shift. Thus, in preferred embodiments, the validation is performed as shown in Figure 2C, where validation cells of consecutive images are used to calculate the correlation coefficients. In other embodiments, the validation is performed as shown in Figure 2B, where validation cells of a current image, "i", and an initial image of the sequence, "o", are used to calculate the correlation coefficients of Equation (5).

[0070] Figure 3 depicts a subset of adjusted, filtered images 302, 306, 310, 314, 318, 322 from a sequence of images of a tissue, with an overlay of gridlines showing the validation cells used in validating the determinations of misalignment correction between the images, according to an illustrative embodiment of the invention. By performing validation according to Figure 2C, using consecutive images to calculate the correlation coefficients of Equation (5), the number of validation cells with a correlation coefficient below 0.5 for the misalignment-corrected images of Figure 3 is 0, 1, 0, 0, and 1 for images 306, 310, 314, 318, and 322, respectively. Since none of the images has more than one coefficient below 0.5, this sequence is successful and is not aborted. This is a good result in the example of Figure 3, since no significant tissue movement occurs between the misalignment-corrected images; there is only a gradually changing glare, seen to move within the validation region 304, 308, 312, 316, 320, 324 of each image. In an embodiment in which validation is performed as in Figure 2B, the number of validation cells with a correlation coefficient below 0.5 for the misalignment-corrected images of Figure 3 is 3, 4, 5, 5, and 6 for images 306, 310, 314, 318, and 322, respectively. This is not a good result in this example, since the exam would be erroneously aborted due only to gradual changes in glare or whitening of tissue, not uncompensated movement of the tissue sample.

[0071] In a preferred embodiment, validation cells that are featureless or have a low signal-to-noise ratio are eliminated from consideration, since such cells can produce meaningless correlation coefficients. Featureless cells in a preferred embodiment are identified and eliminated from consideration by examining the deviation of the sum squared gradient of a given validation cell from the mean of the sum squared gradient of all cells, as shown in the following exemplary rule:
Rule: If ssgl(m,n) < Mean[ssg(m,n)] - STD[ssg(m,n)], then set c'l(m,n) = 1.0,
where c'l(m,n) is the correlation coefficient of the given validation cell "l"; ssgl(m,n) = Σ Σ Il[p,q]², m = 1 to 4, n = 1 to 4; Il[p,q] is the matrix of values of the given validation cell "l", p = 1 to 32, q = 1 to 32; the summations Σ Σ are performed over the pixel indices p and q; Mean[ssg(m,n)] is the mean of the sum squared gradient over all 16 validation cells; and STD[ssg(m,n)] is the standard deviation of the sum squared gradients of the validation cells about that mean. By setting c'l(m,n) = 1.0 for the given validation cell, the cell does not count against validation of the misalignment correction determination in the rubrics of either step 228 or step 234 of Figure 2A, since a correlation coefficient of 1.0 represents a perfect match.

[0072] If an image has large intensity differences between the upper and lower borders and/or the left and right borders of the image frame field, LoG filtering may result in "wraparound error." A preferred embodiment employs an image blending technique such as "feathering" to smooth border discontinuities while requiring only a minimal amount of additional processing time.
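The featureless-cell rule of paragraph [0071] might be sketched as follows. Interpreting STD[ssg(m,n)] as the standard deviation of the sum squared gradients across all 16 cells is an assumption of this sketch, as is the 4 x 4 nested-list layout of the inputs.

```python
import numpy as np

def suppress_featureless_cells(cells, corr):
    """cells: 4 x 4 grid of 32 x 32 LoG-filtered cell arrays;
    corr: the matching 4 x 4 grid of correlation coefficients.
    Cells whose sum squared gradient falls more than one standard
    deviation below the mean get their coefficient forced to 1.0,
    so they never count as validation failures."""
    ssg = np.array([[np.sum(c.astype(float) ** 2) for c in row]
                    for row in cells])
    corr = np.array(corr, dtype=float)
    corr[ssg < ssg.mean() - ssg.std()] = 1.0
    return corr
```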
[0073] Figure 4A depicts a sample image 402 after application of a 9-pixel [9 x 9] Laplacian of Gaussian filter (LoG 9 filter) on an exemplary image from a sequence of images of tissue, according to an illustrative embodiment of the invention. The filtered intensity values are erroneous at the top edge 404, the bottom edge 406, the right edge 410, and the left edge 408 of the image 402. Since LoG frequency domain filtering corresponds to cyclic convolution in the spatial domain, intensity discontinuities between the top and bottom edges of an image and between the right and left edges of an image result in erroneous gradient approximations. These erroneous gradient approximations can be seen in the dark stripe on the right edge 410 and bottom edge 406 of the image 402, as well as the light stripe on the top edge 404 and the left edge 408 of the image 402. This often results in a misalignment correction determination that is too small, since changes between the images due to spatial shift are dwarfed by the edge effects. A preferred embodiment uses a "feathering" technique to smooth border discontinuities and reduce "wraparound error."

[0074] Feathering comprises removal of border discontinuities prior to application of a filter. In preferred embodiments, feathering is performed on an image before LoG filtering, for example, between steps 206 and 208 in Figure 2A. In embodiments where LoG filtering is performed in the frequency domain (subsequent to Fourier transformation), feathering is preferably performed prior to both Fourier transformation and LoG filtering. For one-dimensional image intensity functions I1(x) and I2(x) that are discontinuous at x = x0, an illustrative feathering algorithm is as follows:
I1'(x) = I1(x) · f((x - x0)/d + 0.5)   and   I2'(x) = I2(x) · (1 - f((x - x0)/d + 0.5)),

f(x) = 0 for x < 0;   f(x) = 3x² - 2x³ for 0 ≤ x ≤ 1;   f(x) = 1 for x > 1,   (6)
where I1'(x) and I2'(x) are the intensity functions I1(x) and I2(x) after applying the feathering algorithm of Equation (6), and d is the chosen feathering distance. The feathering distance, d, adjusts the tradeoff between removing wraparound error and suppressing image content.

[0075] Figure 4B depicts the application of both a feathering technique and a LoG filter on the same unfiltered image used in Figure 4A. The feathering is performed to account for border processing effects, according to an illustrative embodiment of the invention. Here, a feathering distance, d, of 20 pixels was used; other embodiments use other values of d. The filtered image 420 of Figure 4B does not display uncharacteristically large or small gradient intensity values at the top edge 424, bottom edge 426, right edge 430, or left edge 428, since discontinuities are smoothed prior to LoG filtering. Also, there is minimal contrast suppression of image detail at the borders, and pixels outside the feathering distance, d, are not affected. The use of feathering here results in more accurate determinations of misalignment correction between two images in a sequence of images.

[0076] Another method of border smoothing is multiplication of the unfiltered image data by a Hamming window. In some embodiments, a Hamming window function is applied to the image data before Fourier transformation so that the border pixels are gradually modified to remove discontinuities. However, application of the Hamming window suppresses image intensity as well as gradient information near the border of an image.

[0077] Figure 5A is identical to Figure 4A and depicts the application of a LoG 9 filter on an exemplary image from a sequence of images of tissue, according to an illustrative embodiment of the invention. The filtered intensity values are erroneous at the top edge 404, the bottom edge 406, the right edge 410, and the left edge 408 of the image 402.
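A one-dimensional sketch of the feathering of Equation (6), assuming numpy; extending the same smoothstep weighting along each image axis near the borders (as in the two-dimensional case of Figure 4B) is left implicit here, and the function names are ours.

```python
import numpy as np

def smoothstep(x: np.ndarray) -> np.ndarray:
    """f(x) of Equation (6): 0 below 0, 3x^2 - 2x^3 on [0, 1], 1 above 1."""
    x = np.clip(x, 0.0, 1.0)
    return 3 * x ** 2 - 2 * x ** 3

def feather_pair(i1: np.ndarray, i2: np.ndarray, x0: float, d: float):
    """Blend two 1-D intensity profiles discontinuous at x = x0 over a
    feathering distance d, returning I1'(x) and I2'(x)."""
    x = np.arange(i1.size, dtype=float)
    w = smoothstep((x - x0) / d + 0.5)
    return i1 * w, i2 * (1.0 - w)
```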
[0078] Figure 5B depicts the application of both a Hamming window and a LoG 9 filter on the same unfiltered image used in Figure 5A; the Hamming windowing is performed to account for border processing effects, according to an illustrative embodiment of the invention. The edges 524, 526, 528, 530 of the image 520 of Figure 5B no longer show the extreme filtered intensity values seen at the edges 404, 406, 408, 410 of the image 402 of Figure 5A. However, there is a greater suppression of image detail in Figure 5B than in Figure 4B. Thus, for this particular embodiment, application of the feathering technique is preferred over application of Hamming windowing.

[0079] A skilled artisan knows other methods of smoothing border discontinuities. Another embodiment comprises removing cyclic convolution artifacts by zero padding the image prior to frequency domain filtering, to ensure that image data at one edge does not affect the filtering output at the opposite edge. This technique adds computational complexity and may increase processing time.

[0080] Figure 6 depicts the determination of a misalignment correction between two images using methods including the application of LoG filters of various sizes, as well as the application of a Hamming window technique and a feathering technique, according to illustrative embodiments of the invention. Image 602 and image 604 at the top of Figure 6 are consecutive images from a sequence of images of cervix tissue obtained during a diagnostic exam, each with a pixel resolution of about 0.054 mm. Figure 6 depicts the application of four different image filtering algorithms: (1) Hamming window with LoG 9 filtering, (2) feathering with LoG 9 filtering, (3) feathering with LoG 21 filtering, and (4) feathering with LoG 31 filtering. Each of these algorithms is implemented as part of a misalignment correction determination and validation technique as illustrated in Figure 2A and Figure 2C, and values of dx and dy between images 602 and 604 of Figure 6 are determined using each of the four filtering algorithms. For image 602, each of the four image filtering algorithms (1)-(4) listed above is applied, resulting in images 606, 610, 614, and 618, respectively, each having 256 x 256 pixels. The four image filtering algorithms are likewise applied to image 604, resulting in images 608, 612, 616, and 620, respectively, each having 256 x 256 pixels. The values of (dx, dy), expressed in pixels, are (-7, 0) using Hamming windowing with LoG 9 filtering, (-2, -10) using feathering with LoG 9 filtering, (-1, -9) using feathering with LoG 21 filtering, and (0, -8) using feathering with LoG 31 filtering. All of the displacement values determined using feathering are close in this embodiment and agree well with visually-verified displacement. However, in this example, the displacement values determined using Hamming windowing differ from those obtained using the other three filtering methods and result in a misalignment correction that does not agree well with visually-verified displacement. Thus, for this example, feathering works best, since it does not suppress as much useful image data.
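The displacement estimate compared in Figure 6 — feather the borders, apply a LoG filter, then locate the cross-correlation peak via Fourier transforms — might be sketched as follows. The separable two-dimensional feathering, the mapping of a LoG kernel size such as LoG 21 to a Gaussian sigma, and the use of scipy.ndimage.gaussian_laplace are assumptions of this sketch, not details taken from the patent.

```python
import numpy as np
from scipy.ndimage import gaussian_laplace

def feather_borders(img: np.ndarray, d: int = 20) -> np.ndarray:
    """Apply the smoothstep of Equation (6) along both axes within a
    border band of width d (requires image dimensions > 2 * d)."""
    t = (np.arange(d) + 0.5) / d
    w = 3 * t ** 2 - 2 * t ** 3
    row = np.ones(img.shape[0]); row[:d] = w; row[-d:] = w[::-1]
    col = np.ones(img.shape[1]); col[:d] = w; col[-d:] = w[::-1]
    return img * row[:, None] * col[None, :]

def estimate_shift(img_a: np.ndarray, img_b: np.ndarray, sigma: float = 3.0):
    """Return an integer-pixel (dx, dy) estimate between two frames from
    the peak of their frequency-domain cross-correlation."""
    a = gaussian_laplace(feather_borders(img_a.astype(float)), sigma)
    b = gaussian_laplace(feather_borders(img_b.astype(float)), sigma)
    xcorr = np.fft.ifft2(np.fft.fft2(a) * np.conj(np.fft.fft2(b))).real
    peak = np.unravel_index(np.argmax(xcorr), xcorr.shape)
    dy, dx = [p if p <= s // 2 else p - s    # wrap indices to signed shifts
              for p, s in zip(peak, xcorr.shape)]
    return dx, dy
```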
[0081] The effect of the filtering algorithm employed, as well as of the choice of validation rules, is examined by applying combinations of the various filtering algorithms and validation rules to pairs of sequential images of tissue and determining the number of "true positives" and "false positives" identified. A true positive occurs when a bad misalignment correction determination is properly rejected by a given validation rule; a false positive occurs when a good misalignment correction determination is improperly rejected as a failure by a given validation rule. The classification of a validation result as a "true positive" or a "false positive" is made by visual inspection of the pair of sequential images. In preferred embodiments, whenever true failures occur, the scan should be aborted. Some examples of situations where true failures occur in certain embodiments include image pairs between which there is one or more of the following: a large non-translational deformation, such as warping or tilting; a large jump for which motion tracking cannot compute a correct translational displacement; rotation greater than about 3 degrees; situations in which a target laser is left on; video system failure, such as blur, dark scan lines, or frame shifting; cases where the image is too dark and noisy, or in shadow; cases where a vaginal speculum (or other obstruction) blocks about half the image; and other obstructions, such as sudden bleeding.
[0082] In one embodiment, a set of validation rules is chosen such that true positives are maximized and false positives are minimized; sensitivity and specificity can be adjusted by adjusting the choice of filtering algorithms and/or the choice of validation rules. Table 1 shows the number of true positives (true failures) and false positives (false failures) determined by a validation rule as depicted in Figure 2A and Figure 2C, where validation is determined using consecutive images, for various combinations of filtering algorithms and validation rules. The four filtering algorithms used are (1) Hamming windowing with LoG 9 filtering, (2) feathering with LoG 9 filtering, (3) feathering with LoG 21 filtering, and (4) feathering with LoG 31 filtering. The values c'(m,n) correspond to the normalized "auto"-correlation coefficient of Equation (5) whose value must be met or exceeded in order for a validation cell to "pass" in an embodiment. The "Number Threshold" column indicates the maximum number of "failed" validation cells, out of the 16 total cells, that are allowed for a misalignment correction determination to be accepted in an embodiment; if more than this number of validation cells fail, then the misalignment correction determination is rejected.

Table 1: True positives and false positives of validation determinations for embodiments using various combinations of filtering algorithms and validation rules.
(Table 1 appears only as an image, imgf000034_0001, in the original publication; its numerical entries are not reproducible here.)
[0083] For the given set of cervical image pairs on which the methods shown in Table 1 were applied, feathering performs better than Hamming windowing, since there are more true positives and fewer false positives. Among the different LoG filter sizes, LoG 21 and LoG 31 perform better than LoG 9 for both tracking and validation here; the LoG 21 filter is more sensitive to rotation and deformation than the LoG 31 filter for these examples. Preferred embodiments for the determination and validation of misalignment corrections between 256 x 256 pixel portions of images of cervical tissue with a pixel resolution of about 0.054 mm employ one or more of the following: (1) use of feathering for image border processing, (2) application of a LoG 21 filter, (3) elimination of validation cells with a low signal-to-noise ratio, and (4) use of consecutive images for validation.
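Gathered as an illustrative configuration (the field names below are ours, not the patent's), the preferred choices of paragraph [0083] read:

```python
PREFERRED = {
    "border_processing": "feathering",    # (1) smooth borders before filtering
    "filter": "LoG 21",                   # (2) 21-pixel Laplacian of Gaussian
    "drop_low_snr_cells": True,           # (3) sum-squared-gradient rule
    "validation_pairs": "consecutive",    # (4) Figure 2C-style validation
}
```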
Equivalents

[0084] While the invention has been particularly shown and described with reference to specific preferred embodiments, it should be understood by those skilled in the art that various changes in form and detail may be made therein without departing from the spirit and scope of the invention as defined by the appended claims.

[0085] What is claimed is:

Claims

1. A method of compensating for image misalignment, the method comprising the steps of: obtaining a sequence of images of a tissue sample; and correcting for misalignment between at least two of the images, said misalignment being due at least in part to movement of the tissue sample.
2. The method of claim 1, wherein the correcting step is performed in real time.
3. The method of claim 1, wherein the correcting step comprises adjusting an optical signal detection device used to obtain the sequence of images.
4. The method of claim 3, wherein the correcting step comprises adjusting a position of a component of the optical signal detection device.
5. The method of claim 4, wherein the component comprises a mirror.
6. The method of claim 1, wherein the tissue sample is an in-situ tissue sample and wherein the misalignment is due at least in part to patient motion.
7. The method of claim 1, further comprising the step of applying a contrast agent to the tissue sample.
8. The method of claim 1, wherein the correcting step comprises electronically adjusting at least one of the images.
9. The method of claim 1, wherein the at least two images are consecutive images.
10. The method of claim 1, wherein the correcting step comprises the step of filtering a subset of data from a first image of the sequence of images.
11. The method of claim 10, wherein the correcting step comprises the step of preprocessing the subset of data prior to the filtering.
12. The method of claim 10, wherein the filtering step comprises at least one of frequency domain filtering and discrete convolution in the space domain.
13. The method of claim 10, wherein the filtering step comprises Laplacian of Gaussian filtering.
14. The method of claim 10, wherein the filtering step comprises using a feathering technique.
15. The method of claim 10, wherein the filtering step comprises using a Hamming window.
16. The method of claim 1, wherein the correcting step comprises computing a cross correlation using data from two of the images.
17. The method of claim 16, wherein the computing of the cross correlation comprises computing a product represented by Fi(u,v) · Fj*(u,v), where Fi(u,v) is a Fourier transform of data derived from a subset of data from a first image, i, of the sequence of images, Fj*(u,v) is a complex conjugate of a Fourier transform of data derived from a subset of data from a second image, j, of the sequence of images, and u and v are frequency domain variables.
18. The method of claim 17, wherein the computing of the cross correlation comprises computing an inverse Fourier transform of the product.
19. The method of claim 1, wherein the tissue sample comprises cervical tissue.
20. The method of claim 1, wherein the tissue sample comprises at least one member of the group consisting of colorectal tissue, gastroesophageal tissue, urinary bladder tissue, lung tissue, and skin tissue.
21. The method of claim 1, wherein the tissue sample comprises epithelial cells.
22. The method of claim 1, wherein the obtaining step comprises obtaining the sequence of images of the tissue sample during application of a chemical agent to the tissue sample.
23. The method of claim 1, wherein the obtaining step comprises obtaining the sequence of images of the tissue sample after application of a chemical agent to the tissue sample.
24. The method of claim 23, wherein the chemical agent is selected from the group consisting of acetic acid, formic acid, propionic acid, and butyric acid.
25. The method of claim 23, wherein the chemical agent is selected from the group consisting of Lugol's iodine, Schiller's iodine, methylene blue, toluidine blue, indigo carmine, indocyanine green, and fluorescein.
26. The method of claim 1, wherein the obtaining step comprises obtaining the sequence of images of the tissue sample during an acetowhitening test.
27. The method of claim 1, wherein the movement of the tissue sample is relative to an optical signal detection device and comprises at least one member of the group consisting of translational motion, rotational motion, warping, and local deformation.
28. The method of claim 1, wherein one or more images of the sequence of images comprise measurements of an optical signal from the tissue sample.
29. The method of claim 28, wherein the optical signal comprises visible light.
30. The method of claim 28, wherein the optical signal comprises fluorescent light.
31. The method of claim 28, wherein the optical signal is emitted by the tissue sample.
32. The method of claim 28, wherein the optical signal is reflected by the tissue sample.
33. The method of claim 28, wherein the optical signal is transmitted through the tissue sample.
34. A method of validating a correction for an image misalignment, the method comprising the steps of: adjusting at least one of two or more images using a correction for an image misalignment between the two or more images; defining one or more validation cells, each of which includes a common area of the two or more adjusted images; computing for each of the one or more validation cells a measure of displacement between the two or more adjusted images using data from the two or more adjusted images corresponding to each of the one or more validation cells; and validating the correction for the image misalignment by comparing at least one of the measures of displacement with a threshold value.
35. A method of validating a correction for an image misalignment, the method comprising the steps of: defining one or more validation cells within a bounded image plane; computing for each of the one or more validation cells a measure of displacement between two or more images bound by the image plane using data from the two or more images corresponding to each of the one or more validation cells; validating a correction for an image misalignment between the two or more images by comparing at least one of the measures of displacement with the correction.
36. The method of claim 34, wherein the images are images of an in-situ tissue sample, and wherein the image misalignment is due at least in part to patient motion.
37. The method of claim 34, wherein the images are images of an in-situ tissue sample that has been treated with a contrast agent.
38. The method of claim 34, wherein the one or more validation cells comprise a subset of a bounded image plane common to the two or more images.
39. The method of claim 34, wherein the two or more images are consecutive images.
40. The method of claim 38, wherein the one or more validation cells comprise a central portion of the bounded image plane.
41. The method of claim 38, wherein the bounded image plane has an area about four times larger than the total area of the one or more validation cells.
42. The method of claim 34, wherein the validating step comprises eliminating from consideration one or more of the measures of displacement for one or more of the one or more validation cells.
43. The method of claim 42, wherein the eliminating of the one or more measures of displacement comprises calculating a sum squared gradient for at least one of the one or more validation cells.
44. A method of compensating for an image misalignment, the method comprising the steps of: obtaining a set of sequential images of a tissue sample; and correcting for a misalignment between each of a plurality of pairs of the sequential images, the misalignment due at least in part to movement of the tissue sample.
45. The method of claim 44, wherein the tissue sample is an in-situ tissue sample and wherein the misalignment is due at least in part to patient motion.
46. The method of claim 44, further comprising the step of applying a contrast agent to the sample.
47. The method of claim 44, wherein the obtaining step and the correcting step are performed alternately.
48. The method of claim 44, wherein the obtaining step and the correcting step are performed substantially concurrently.
49. The method of claim 44, wherein the correcting step comprises determining a correction for a misalignment between a pair of the sequential images less than about 2 seconds after the obtaining of the latter of the pair of the sequential images.
50. The method of claim 44, wherein the correcting step comprises determining a correction for a misalignment between a pair of the sequential images less than about one second after the obtaining of the latter of the pair of the sequential images.
51. A method of validating a correction for an image misalignment, the method comprising the steps of: obtaining a plurality of sequential images of a sample using an optical signal detection device; determining a correction for a misalignment between at least two of the sequential images, the misalignment due at least in part to a movement of the sample; and validating the correction between at least a first image and a second image of the plurality of sequential images.
52. The method of claim 51, wherein the sample is an in-situ tissue sample and wherein the misalignment is due at least in part to patient motion.
53. The method of claim 51, further comprising the step of applying a contrast agent to the sample.
54. The method of claim 51, wherein the determination of a correction for a misalignment between a first and a second image and the validation of said correction are performed in less than about one second.
55. The method of claim 51, further comprising the step of: adjusting the optical signal detection device using the correction.
56. A method of dynamically compensating for image misalignment, the method comprising the steps of: obtaining a sequence of images of a tissue sample; and correcting in real time for misalignment between at least two of the images, the misalignment due at least in part to movement of the tissue sample.

Family Cites Families (159)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US3013467A (en) 1957-11-07 1961-12-19 Minsky Marvin Microscopy apparatus
US3632865A (en) 1969-12-23 1972-01-04 Bell Telephone Labor Inc Predictive video encoding using measured subject velocity
US3809072A (en) 1971-10-07 1974-05-07 Med General Inc Sterile sheath apparatus for fiber optic illuminator with compatible lens
IT985204B (en) 1972-05-26 1974-11-30 Adelman Stuart Lee IMPROVEMENT IN ENDOSCOPES AND THE LIKE
US3890462A (en) 1974-04-17 1975-06-17 Bell Telephone Labor Inc Speed and direction indicator for video systems
US3963019A (en) 1974-11-25 1976-06-15 Quandt Robert S Ocular testing method and apparatus
US4017192A (en) 1975-02-06 1977-04-12 Neotec Corporation Optical analysis of biomedical specimens
US4071020A (en) 1976-06-03 1978-01-31 Xienta, Inc. Apparatus and methods for performing in-vivo measurements of enzyme activity
GB1595422A (en) 1977-04-28 1981-08-12 Nat Res Dev Scaning microscopes
FR2430754A1 (en) 1978-07-13 1980-02-08 Groux Jean ULTRAVIOLET ENDOSCOPE
US4218703A (en) 1979-03-16 1980-08-19 Bell Telephone Laboratories, Incorporated Technique for estimation of displacement and/or velocity of objects in video scenes
US4357075A (en) 1979-07-02 1982-11-02 Hunter Thomas M Confocal reflector system
US4349510A (en) 1979-07-24 1982-09-14 Seppo Kolehmainen Method and apparatus for measurement of samples by luminescence
US4254421A (en) 1979-12-05 1981-03-03 Communications Satellite Corporation Integrated confocal electromagnetic wave lens and feed antenna system
DE2951459C2 (en) 1979-12-20 1984-03-29 Heimann Gmbh, 6200 Wiesbaden Optical arrangement for a smoke detector based on the light scattering principle
US4515165A (en) 1980-02-04 1985-05-07 Energy Conversion Devices, Inc. Apparatus and method for detecting tumors
ATE23752T1 (en) 1980-08-21 1986-12-15 Oriel Scient Ltd OPTICAL ANALYZER.
US4396579A (en) 1981-08-06 1983-08-02 Miles Laboratories, Inc. Luminescence detection device
AU556742B2 (en) 1982-02-01 1986-11-20 Sony Corporation Digital tape jitter compensation
US5139025A (en) 1983-10-14 1992-08-18 Somanetics Corporation Method and apparatus for in vivo optical spectroscopic examination
SE455736B (en) 1984-03-15 1988-08-01 Sarastro Ab PROCEDURE KIT AND MICROPHOTOMETRATION AND ADDITIONAL IMAGE COMPOSITION
US4662360A (en) 1984-10-23 1987-05-05 Intelligent Medical Systems, Inc. Disposable speculum
US5179936A (en) * 1984-10-23 1993-01-19 Intelligent Medical Systems, Inc. Disposable speculum with membrane bonding ring
US4646722A (en) 1984-12-10 1987-03-03 Opielab, Inc. Protective endoscope sheath and method of installing same
US4803049A (en) 1984-12-12 1989-02-07 The Regents Of The University Of California pH-sensitive optrode
US5199431A (en) 1985-03-22 1993-04-06 Massachusetts Institute Of Technology Optical needle for spectroscopic diagnosis
US5042494A (en) 1985-11-13 1991-08-27 Alfano Robert R Method and apparatus for detecting cancerous tissue using luminescence excitation spectra
US4930516B1 (en) 1985-11-13 1998-08-04 Laser Diagnostic Instr Inc Method for detecting cancerous tissue using visible native luminescence
GB8529889D0 (en) 1985-12-04 1986-01-15 Cardiff Energy & Resources Luminometer construction
JPS62138819A (en) 1985-12-13 1987-06-22 Hitachi Ltd Scanning type laser microscope
JPS62247232A (en) 1986-04-21 1987-10-28 Agency Of Ind Science & Technol Fluorescence measuring apparatus
US4852955A (en) 1986-09-16 1989-08-01 Laser Precision Corporation Microscope for use in modular FTIR spectrometer system
US5011243A (en) 1986-09-16 1991-04-30 Laser Precision Corporation Reflectance infrared microscope having high radiation throughput
US4741326A (en) 1986-10-01 1988-05-03 Fujinon, Inc. Endoscope disposable sheath
US4891829A (en) 1986-11-19 1990-01-02 Exxon Research And Engineering Company Method and apparatus for utilizing an electro-optic detector in a microtomography system
NL8603108A (en) 1986-12-08 1988-07-01 Philips Nv MICROSALE.
NZ223988A (en) 1987-03-24 1990-11-27 Commw Scient Ind Res Org Optical distance measuring
US5235457A (en) 1987-09-24 1993-08-10 Washington University Kit for converting a standard microscope into a single aperture confocal scanning epi-illumination microscope
US4945478A (en) 1987-11-06 1990-07-31 Center For Innovative Technology Noninvasive medical imaging system and method for the identification and 3-D display of atherosclerosis and the like
US4800571A (en) 1988-01-11 1989-01-24 Tektronix, Inc. Timing jitter measurement display
US4844617A (en) 1988-01-20 1989-07-04 Tencor Instruments Confocal measuring microscope with automatic focusing
FR2626383B1 (en) 1988-01-27 1991-10-25 Commissariat Energie Atomique EXTENDED FIELD SCAN AND DEPTH CONFOCAL OPTICAL MICROSCOPY AND DEVICES FOR CARRYING OUT THE METHOD
US4997242A (en) 1988-03-07 1991-03-05 Medical Research Council Achromatic scanning system
US5032720A (en) 1988-04-21 1991-07-16 White John G Confocal imaging system
US4877033A (en) 1988-05-04 1989-10-31 Seitz Jr H Michael Disposable needle guide and examination sheath for transvaginal ultrasound procedures
DE8808299U1 (en) 1988-06-25 1989-07-20 Effner Gmbh, 1000 Berlin, De
EP0393165B2 (en) 1988-07-13 2007-07-25 Optiscan Pty Ltd Scanning confocal endoscope
CA1325537C (en) 1988-08-01 1993-12-28 Timothy Peter Dabbs Confocal microscope
US5036853A (en) 1988-08-26 1991-08-06 Polartechnics Ltd. Physiological probe
US4972258A (en) 1989-07-31 1990-11-20 E. I. Du Pont De Nemours And Company Scanning laser microscope system and methods of use
US5101825A (en) 1988-10-28 1992-04-07 Blackbox, Inc. Method for noninvasive intermittent and/or continuous hemoglobin, arterial oxygen content, and hematocrit determination
US5205291A (en) 1988-11-08 1993-04-27 Health Research, Inc. In vivo fluorescence photometer
US4937526A (en) * 1988-11-23 1990-06-26 Mayo Foundation For Medical Education And Research Adaptive method for reducing motion and flow artifacts in NMR images
US5022757A (en) 1989-01-23 1991-06-11 Modell Mark D Heterodyne system and method for sensing a target substance
US4878485A (en) 1989-02-03 1989-11-07 Adair Edwin Lloyd Rigid video endoscope with heat sterilizable sheath
US5003979A (en) 1989-02-21 1991-04-02 University Of Virginia System and method for the noninvasive identification and display of breast lesions and the like
US5201318A (en) 1989-04-24 1993-04-13 Rava Richard P Contour mapping of spectral diagnostics
JPH0378720A (en) 1989-08-22 1991-04-03 Nikon Corp Confocal laser scanning microscope
US5267179A (en) 1989-08-30 1993-11-30 The United States Of America As Represented By The United States Department Of Energy Ferroelectric optical image comparator
US5065008A (en) 1989-10-18 1991-11-12 Fuji Photo Film Co., Ltd. Scanning microscope and scanning mechanism for the same
DE8912757U1 (en) 1989-10-27 1989-12-07 Fa. Carl Zeiss, 7920 Heidenheim, De
US4979498A (en) 1989-10-30 1990-12-25 Machida Incorporated Video cervicoscope system
US5034613A (en) 1989-11-14 1991-07-23 Cornell Research Foundation, Inc. Two-photon laser microscopy
US5257617A (en) 1989-12-25 1993-11-02 Asahi Kogaku Kogyo Kabushiki Kaisha Sheathed endoscope and sheath therefor
US5028802A (en) 1990-01-11 1991-07-02 Eye Research Institute Of Retina Foundation Imaging apparatus and methods utilizing scannable microlaser source
US5091652A (en) 1990-01-12 1992-02-25 The Regents Of The University Of California Laser excited confocal microscope fluorescence scanner and method
US5274240A (en) 1990-01-12 1993-12-28 The Regents Of The University Of California Capillary array confocal fluorescence scanner and method
EP0448931B1 (en) * 1990-01-26 1996-04-03 Canon Kabushiki Kaisha Method for measuring a specimen by the use of fluorescence light
JPH0742401Y2 (en) 1990-02-01 1995-10-04 株式会社町田製作所 Endoscope cover
JPH03101903U (en) 1990-02-01 1991-10-23
US5074306A (en) 1990-02-22 1991-12-24 The General Hospital Corporation Measurement of burn depth in skin
US5083220A (en) 1990-03-22 1992-01-21 Tandem Scanning Corporation Scanning disks for use in tandem scanning reflected light microscopes and other optical systems
JP2613118B2 (en) 1990-04-10 1997-05-21 富士写真フイルム株式会社 Confocal scanning microscope
US5048946A (en) 1990-05-15 1991-09-17 Phoenix Laser Systems, Inc. Spectral division of reflected light in complex optical diagnostic and therapeutic systems
GB9014263D0 (en) 1990-06-27 1990-08-15 Dixon Arthur E Apparatus and method for spatially- and spectrally- resolvedmeasurements
GB9016587D0 (en) 1990-07-27 1990-09-12 Isis Innovation Infra-red scanning microscopy
US5239178A (en) 1990-11-10 1993-08-24 Carl Zeiss Optical device with an illuminating grid and detector grid arranged confocally to an object
US5168157A (en) 1990-11-20 1992-12-01 Fuji Photo Film Co., Ltd. Scanning microscope with means for detecting a first and second polarized light beams along first and second optical receiving paths
US5193525A (en) 1990-11-30 1993-03-16 Vision Sciences Antiglare tip in a sheath for an endoscope
US5720293A (en) * 1991-01-29 1998-02-24 Baxter International Inc. Diagnostic catheter with memory
JP3103894B2 (en) 1991-02-06 2000-10-30 ソニー株式会社 Apparatus and method for correcting camera shake of video data
US5261410A (en) 1991-02-07 1993-11-16 Alfano Robert R Method for determining if a tissue is a malignant tumor tissue, a benign tumor tissue, or a normal or benign tissue using Raman spectroscopy
US5162641A (en) 1991-02-19 1992-11-10 Phoenix Laser Systems, Inc. System and method for detecting, correcting and measuring depth movement of target tissue in a laser surgical system
US5303026A (en) * 1991-02-26 1994-04-12 The Regents Of The University Of California Los Alamos National Laboratory Apparatus and method for spectroscopic analysis of scattering media
US5260578A (en) 1991-04-10 1993-11-09 Mayo Foundation For Medical Education And Research Confocal imaging system for visible and ultraviolet light
WO1992021201A1 (en) * 1991-05-24 1992-11-26 British Broadcasting Corporation Video image processing
JP2975719B2 (en) 1991-05-29 1999-11-10 オリンパス光学工業株式会社 Confocal optics
US5201908A (en) 1991-06-10 1993-04-13 Endomedical Technologies, Inc. Sheath for protecting endoscope from contamination
US5313567A (en) * 1991-06-13 1994-05-17 At&T Bell Laboratories Arrangement for determining and displaying volumetric data in an imaging system
US5237984A (en) 1991-06-24 1993-08-24 Xomed-Treace Inc. Sheath for endoscope
US5203328A (en) 1991-07-17 1993-04-20 Georgia Tech Research Corporation Apparatus and methods for quantitatively measuring molecular changes in the ocular lens
US5162941A (en) 1991-07-23 1992-11-10 The Board Of Governors Of Wayne State University Confocal microscope
JPH0527177A (en) 1991-07-25 1993-02-05 Fuji Photo Film Co Ltd Scanning type microscope
JP3082346B2 (en) 1991-09-12 2000-08-28 株式会社ニコン Fluorescence confocal microscope
US5383874A (en) * 1991-11-08 1995-01-24 Ep Technologies, Inc. Systems for identifying catheters and monitoring their use
US5253071A (en) 1991-12-20 1993-10-12 Sony Corporation Of America Method and apparatus for stabilizing an image produced in a video camera
US5398685A (en) * 1992-01-10 1995-03-21 Wilk; Peter J. Endoscopic diagnostic system and associated method
US5284149A (en) 1992-01-23 1994-02-08 Dhadwal Harbans S Method and apparatus for determining the physical characteristics of ocular tissue
US5248876A (en) 1992-04-21 1993-09-28 International Business Machines Corporation Tandem linear scanning confocal imaging system with focal volumes at different heights
GB9213978D0 (en) * 1992-07-01 1992-08-12 Skidmore Robert Medical devices
US5609560A (en) * 1992-08-19 1997-03-11 Olympus Optical Co., Ltd. Medical operation device control system for controlling a operation devices accessed respectively by ID codes
US5306902A (en) * 1992-09-01 1994-04-26 International Business Machines Corporation Confocal method and apparatus for focusing in projection lithography
US5402768A (en) * 1992-09-01 1995-04-04 Adair; Edwin L. Endoscope with reusable core and disposable sheath with passageways
US5704892A (en) * 1992-09-01 1998-01-06 Adair; Edwin L. Endoscope with reusable core and disposable sheath with passageways
US5285490A (en) 1992-10-22 1994-02-08 Eastman Kodak Company Imaging combination for detecting soft tissue anomalies
US5294799A (en) 1993-02-01 1994-03-15 Aslund Nils R D Apparatus for quantitative imaging of multiple fluorophores
US5421339A (en) * 1993-05-12 1995-06-06 Board Of Regents, The University Of Texas System Diagnosis of dysplasia using laser induced fluoroescence
US5596992A (en) * 1993-06-30 1997-01-28 Sandia Corporation Multivariate classification of infrared spectra of cell and tissue samples
US5496259A (en) * 1993-09-13 1996-03-05 Welch Allyn, Inc. Sterile protective sheath and drape for video laparoscope and method of use
US5412563A (en) * 1993-09-16 1995-05-02 General Electric Company Gradient image segmentation method
US5406939A (en) * 1994-02-14 1995-04-18 Bala; Harry Endoscope sheath
US5493444A (en) * 1994-04-28 1996-02-20 The United States Of America As Represented By The Secretary Of The Air Force Photorefractive two-beam coupling nonlinear joint transform correlator
US5599717A (en) * 1994-09-02 1997-02-04 Martin Marietta Energy Systems, Inc. Advanced synchronous luminescence system
US5829444A (en) * 1994-09-15 1998-11-03 Visualization Technology, Inc. Position tracking and imaging system for use in medical applications
JP3732865B2 (en) * 1995-01-18 2006-01-11 ペンタックス株式会社 Endoscope device
US5894340A (en) * 1995-02-17 1999-04-13 The Regents Of The University Of California Method for quantifying optical properties of the human lens
JP3490817B2 (en) * 1995-03-13 2004-01-26 ペンタックス株式会社 Endoscope tip
US5735276A (en) * 1995-03-21 1998-04-07 Lemelson; Jerome Method and apparatus for scanning and evaluating matter
US5612540A (en) * 1995-03-31 1997-03-18 Board Of Regents, The University Of Texas Systems Optical method for the detection of cervical neoplasias using fluorescence spectroscopy
US5690106A (en) * 1995-06-30 1997-11-25 Siemens Corporate Research, Inc. Flexible image registration for rotational angiography
US5713364A (en) * 1995-08-01 1998-02-03 Medispectra, Inc. Spectral volume microprobe analysis of materials
US5730701A (en) * 1995-09-12 1998-03-24 Olympus Optical Co., Ltd. Endoscope
US6040139A (en) * 1995-09-19 2000-03-21 Bova; G. Steven Laser cell purification system
JPH0998938A (en) * 1995-10-04 1997-04-15 Fuji Photo Optical Co Ltd Protector of insertion part of endoscope
US5865726A (en) * 1996-03-27 1999-02-02 Asahi Kogaku Kogyo Kabushiki Kaisha Front end structure of side-view type endoscope
US5717209A (en) * 1996-04-29 1998-02-10 Petrometrix Ltd. System for remote transmission of spectral information through communication optical fibers for real-time on-line hydrocarbons process analysis by near infra red spectroscopy
US5860913A (en) * 1996-05-16 1999-01-19 Olympus Optical Co., Ltd. Endoscope whose distal cover can be freely detachably attached to main distal part thereof with high positioning precision
AU6184196A (en) * 1996-06-26 1998-01-14 Morphometrix Technologies Inc. Confocal ultrasonic imaging system
US5685822A (en) * 1996-08-08 1997-11-11 Vision-Sciences, Inc. Endoscope with sheath retaining device
CA2192036A1 (en) * 1996-12-04 1998-06-04 Harvey Lui Fluorescence scope system for dermatologic diagnosis
US6847490B1 (en) * 1997-01-13 2005-01-25 Medispectra, Inc. Optical probe accessory device for use in vivo diagnostic procedures
JP3654325B2 (en) * 1997-02-13 2005-06-02 富士写真フイルム株式会社 Fluorescence detection device
US5855551A (en) * 1997-03-17 1999-01-05 Polartechnics Limited Integral sheathing apparatus for tissue recognition probes
FR2763721B1 (en) * 1997-05-21 1999-08-06 Inst Nat Rech Inf Automat ELECTRONIC IMAGE PROCESSING DEVICE FOR DETECTING DIMENSIONAL VARIATIONS
WO1999020313A1 (en) * 1997-10-20 1999-04-29 The Board Of Regents, The University Of Texas System Acetic acid as a contrast agent in reflectance confocal imaging of tissue
WO1999047041A1 (en) * 1998-03-19 1999-09-23 Board Of Regents, The University Of Texas System Fiber-optic confocal imaging apparatus and methods of use
WO1999056156A1 (en) * 1998-04-24 1999-11-04 Case Western Reserve University Geometric distortion correction in magnetic resonance imaging
US6004270A (en) * 1998-06-24 1999-12-21 Ecton, Inc. Ultrasound system for contrast agent imaging and quantification in echocardiography using template image for image alignment
US6205235B1 (en) * 1998-07-23 2001-03-20 David Roberts Method and apparatus for the non-invasive imaging of anatomic tissue structures
US6377842B1 (en) * 1998-09-22 2002-04-23 Aurora Optics, Inc. Method for quantitative measurement of fluorescent and phosphorescent drugs within tissue utilizing a fiber optic probe
US6169817B1 (en) * 1998-11-04 2001-01-02 University Of Rochester System and method for 4D reconstruction and visualization
US6193660B1 (en) * 1999-03-31 2001-02-27 Acuson Corporation Medical diagnostic ultrasound system and method for region of interest determination
US6292683B1 (en) * 1999-05-18 2001-09-18 General Electric Company Method and apparatus for tracking motion in MR images
US6697666B1 (en) * 1999-06-22 2004-02-24 Board Of Regents, The University Of Texas System Apparatus for the characterization of tissue of epithelial lined viscus
US6208887B1 (en) * 1999-06-24 2001-03-27 Richard H. Clarke Catheter-delivered low resolution Raman scattering analyzing system for detecting lesions
US6717668B2 (en) * 2000-03-07 2004-04-06 Chemimage Corporation Simultaneous imaging and spectroscopy apparatus
GR1004180B (en) * 2000-03-28 2003-03-11 ����������� ����� ��������� (����) Method and system for characterization and mapping of tissue lesions
JP4191884B2 (en) * 2000-08-18 2008-12-03 ジーイー・メディカル・システムズ・グローバル・テクノロジー・カンパニー・エルエルシー Image processing method, image processing apparatus, and image photographing apparatus
US6718055B1 (en) * 2000-12-05 2004-04-06 Koninklijke Philips Electronics, N.V. Temporal and spatial correction for perfusion quantification system
US6839661B2 (en) * 2000-12-15 2005-01-04 Medispectra, Inc. System for normalizing spectra
USD453964S1 (en) * 2001-02-09 2002-02-26 Medispectra, Inc. Sheath for cervical optical probe
USD453832S1 (en) * 2001-02-09 2002-02-19 Medispectra, Inc. Sheath for cervical optical probe
USD453962S1 (en) * 2001-02-09 2002-02-26 Medispectra, Inc. Sheath for cervical optical probe
USD453963S1 (en) * 2001-02-09 2002-02-26 Medispectra, Inc. Sheath for cervical optical probe
US7282723B2 (en) * 2002-07-09 2007-10-16 Medispectra, Inc. Methods and apparatus for processing spectral data for use in tissue characterization
US6933154B2 (en) * 2002-07-09 2005-08-23 Medispectra, Inc. Optimal windows for obtaining optical data for characterization of tissue samples
US6818903B2 (en) * 2002-07-09 2004-11-16 Medispectra, Inc. Method and apparatus for identifying spectral artifacts
US6768918B2 (en) * 2002-07-10 2004-07-27 Medispectra, Inc. Fluorescent fiberoptic probe for tissue health discrimination and method of use thereof
US7103401B2 (en) * 2002-07-10 2006-09-05 Medispectra, Inc. Colonic polyp discrimination by tissue fluorescence and fiberoptic probe

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4558462A (en) * 1982-09-02 1985-12-10 Hitachi Medical Corporation Apparatus for correcting image distortions automatically by inter-image processing
US4641352A (en) * 1984-07-12 1987-02-03 Paul Fenster Misregistration correction
WO2000057361A1 (en) * 1999-03-19 2000-09-28 Isis Innovation Limited Method and apparatus for image processing
US20020007122A1 (en) * 1999-12-15 2002-01-17 Howard Kaufman Methods of diagnosing disease
US20020127735A1 (en) * 1999-12-15 2002-09-12 Howard Kaufman Methods of monitoring effects of chemical agents on a sample

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Ko C-C et al.: "Multiresolution registration of coronary artery image sequences", International Journal of Medical Informatics, Elsevier Scientific Publishers, Shannon, IR, vol. 44, no. 2, April 1997, pages 93-104, XP004262353, ISSN: 1386-5056 *
Noble J A et al.: "Automated, nonrigid alignment of clinical myocardial contrast echocardiography image sequences: comparison with manual alignment", Ultrasound in Medicine and Biology, New York, NY, US, vol. 28, no. 1, January 2002, pages 115-123, XP004342782, ISSN: 0301-5629 *

Also Published As

Publication number Publication date
US20070147705A1 (en) 2007-06-28
CA2500539A1 (en) 2004-04-15
US20030095721A1 (en) 2003-05-22
WO2004032058A3 (en) 2004-06-17
US7406215B2 (en) 2008-07-29
AU2003277051A1 (en) 2004-04-23
US7187810B2 (en) 2007-03-06
EP1554694A2 (en) 2005-07-20

Similar Documents

Publication Publication Date Title
US7406215B2 (en) Methods and systems for correcting image misalignment
US8401258B2 (en) Method to provide automated quality feedback to imaging devices to achieve standardized imaging data
US7469160B2 (en) Methods and apparatus for evaluating image focus
US7282723B2 (en) Methods and apparatus for processing spectral data for use in tissue characterization
CN107788948B (en) Diagnosis support device, image processing method for diagnosis support device, and storage medium storing program
US7136518B2 (en) Methods and apparatus for displaying diagnostic data
US7309867B2 (en) Methods and apparatus for characterization of tissue samples
US20110110567A1 (en) Methods and Apparatus for Visually Enhancing Images
US7459696B2 (en) Methods and apparatus for calibrating spectral data
Marrugo et al. Retinal image restoration by means of blind deconvolution
WO2013187206A1 (en) Image processing device, image processing method, and image processing program
US20040209237A1 (en) Methods and apparatus for characterization of tissue samples
US20040208390A1 (en) Methods and apparatus for processing image data for use in tissue characterization
WO2006062013A1 (en) Image processing device, image processing method, and image processing program
JP2009168572A (en) Image processing apparatus and image processing program
WO2016170656A1 (en) Image processing device, image processing method and image processing program
AU2003259095A2 (en) Methods and apparatus for characterization of tissue samples
Köhler et al. Multi-frame super-resolution with quality self-assessment for retinal fundus videos
DE112015002614T5 (en) Image processing device, image processing method and image processing program
CN107529962B (en) Image processing apparatus, image processing method, and recording medium
CN112348771B (en) Imaging consistency evaluation method based on wavelet transformation
Kubecka et al. Improving quality of autofluorescence images using non-rigid image registration
CN117176868A (en) High-precision scanning imaging system
Sakuma et al. Automated detection of changes in sequential color ocular fundus images
CN117562487A (en) Method for compensating uniformity of fluorescence signal in fluorescence endoscope system

Legal Events

Date Code Title Description
AK Designated states; Kind code of ref document: A2; Designated state(s): AE AG AL AM AT AU AZ BA BB BG BR BY BZ CA CH CN CO CR CU CZ DE DK DM DZ EC EE EG ES FI GB GD GE GH GM HR HU ID IL IN IS JP KE KG KP KR KZ LC LK LR LS LT LU LV MA MD MG MK MN MW MX MZ NI NO NZ OM PG PH PL PT RO RU SC SD SE SG SK SL SY TJ TM TN TR TT TZ UA UG UZ VC VN YU ZA ZM ZW
AL Designated countries for regional patents; Kind code of ref document: A2; Designated state(s): GH GM KE LS MW MZ SD SL SZ TZ UG ZM ZW AM AZ BY KG KZ MD RU TJ TM AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HU IE IT LU MC NL PT RO SE SI SK TR BF BJ CF CG CI CM GA GN GQ GW ML MR NE SN TD TG
121 EP: the EPO has been informed by WIPO that EP was designated in this application
WWE WIPO information: entry into national phase; Ref document number: 2500539; Country of ref document: CA
WWE WIPO information: entry into national phase; Ref document number: 2003799315; Country of ref document: EP
WWE WIPO information: entry into national phase; Ref document number: 2003277051; Country of ref document: AU
WWP WIPO information: published in national office; Ref document number: 2003799315; Country of ref document: EP
NENP Non-entry into the national phase; Ref country code: JP
WWW WIPO information: withdrawn in national office; Country of ref document: JP
WWW WIPO information: withdrawn in national office; Ref document number: 2003799315; Country of ref document: EP