WO2006065741A2 - Method and apparatus for enhancing a digital image - Google Patents

Method and apparatus for enhancing a digital image

Info

Publication number
WO2006065741A2
Authority
WO
WIPO (PCT)
Prior art keywords
digital
image
distribution
band
numbers
Prior art date
Application number
PCT/US2005/044917
Other languages
French (fr)
Other versions
WO2006065741A3 (en)
Inventor
Chris Padwick
Jack F. Paris
Original Assignee
Digitalglobe, Inc.
Priority date
Filing date
Publication date
Application filed by Digitalglobe, Inc. filed Critical Digitalglobe, Inc.
Priority to JP2007546809A (published as JP2008527766A)
Priority to CA002591351A (published as CA2591351A1)
Priority to EP05853757A (published as EP1828960A2)
Publication of WO2006065741A2
Publication of WO2006065741A3
Priority to IL183894A (published as IL183894A0)

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 - Image enhancement or restoration
    • G06T5/50 - Image enhancement or restoration by the use of more than one image, e.g. averaging, subtraction
    • G06T5/90
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 - Scenes; Scene-specific elements
    • G06V20/10 - Terrestrial scenes
    • G06V20/13 - Satellite images
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/10 - Image acquisition modality
    • G06T2207/10032 - Satellite or aerial image; Remote sensing
    • G06T2207/10036 - Multispectral image; Hyperspectral image
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/30 - Subject of image; Context of image processing
    • G06T2207/30181 - Earth observation

Definitions

  • Referring to Fig. 1, an illustration of a satellite 100 orbiting a planet 104 is described.
  • In referring to the earth, reference is made to any celestial body of which it may be desirable to acquire images or other remote sensing information.
  • In referring to a satellite, reference is made to any spacecraft, satellite, and/or aircraft capable of acquiring images or other remote sensing information.
  • the system described herein may also be applied to other imaging systems, including imaging systems located on the earth or in space that acquire images of other celestial bodies. It is also noted that none of the drawing figures contained herein are drawn to scale, and that such figures are for the purposes of discussion and illustration only.
  • the satellite 100 orbits the earth 104 following orbital path 108.
  • An imaging system aboard the satellite 100 is capable of acquiring an image 112 that includes a portion of the surface of the earth 104.
  • the image 112 is comprised of a plurality of pixels.
  • the satellite 100 may collect images 112 in a number of spectral bands.
  • the imaging system aboard satellite 100 collects four bands of electromagnetic energy, namely, a red band, a green band, a blue band, and a near infrared band. Each band is collected by a separate imaging sensor that is adapted to collect electromagnetic radiation. Data from the sensors is collected, processed, and images are produced.
  • Each band consists of digital numbers (DNs) on an 8-bit or 11-bit radiometric brightness scale.
  • the DNs from each band are processed to generate an image that is useful for the application required by a user.
  • Images collected from the satellite 100 may be used in a number of applications, including both commercial and non-commercial applications.
  • the satellite 100 includes a number of systems, including power/positioning systems 124, a transmit/receive system 128, and an imaging system 132.
  • Such a satellite and its associated power/positioning systems 124 are well known in the art, and therefore are not described in detail herein; it is sufficient to say that the satellite 100 receives power and may be positioned to collect desired images and transmit/receive data to/from a ground location and/or other satellite systems.
  • the imaging system 132 in this embodiment, includes four imaging sensors that collect electromagnetic energy received at the sensor within a band of electromagnetic energy.
  • the imaging system 132 includes a blue sensor 140, a green sensor 144, a red sensor 148, and a near-infrared (NIR) sensor 152.
  • Each of these sensors 140 - 152 collects electromagnetic energy received at the sensor that falls within a preset energy band.
  • the imaging sensors 140 - 152 include charge coupled device (CCD) arrays and associated optics to collect electromagnetic energy and focus the energy at the CCD arrays.
  • the CCD arrays are configured to collect energy from a specific energy band by means of optical filters.
  • the sensors 140 - 152 also include electronics to sample the CCD arrays and output a digital number (DN) that is proportional to the amount of energy collected at the CCD array.
  • Each CCD array includes a number of pixels, and the imaging system operates as a pushbroom imaging system. Thus, a plurality of DNs for each pixel are output from the imaging system to the transmit/receive system 128.
  • the satellite 100 transmits and receives data from a ground station 160.
  • the ground station 160 of this embodiment includes a transmit/receive system 164, a data storage system 168, a control system 172, and a communication system 176.
  • a number of ground stations 160 exist and are able to communicate with the satellite 100 throughout different portions of the satellite 100 orbit.
  • the transmit/receive system 164 is used to send and receive data to and from the satellite 100.
  • the data storage system 168 may be used to store image data collected by the imaging system 132 and sent from the satellite 100 to the ground station 160.
  • the control system 172, in one embodiment, is used for satellite control and transmits/receives control information through the transmit/receive system 164 to/from the satellite 100.
  • the communication system 176 is used for communications between the ground station 160 and one or more data centers 180.
  • the data center 180 includes a communication system 184, a data storage system 188, and an image processing system 192.
  • the image processing system 192 processes the data from the imaging system 132 and provides a digital image to one or more user(s) 196. The operation of the image processing system 192 will be described in greater detail below.
  • the image data received from the satellite 100 at the ground station 160 may be sent from the ground station 160 to a user 196 directly.
  • the image data may be processed by the user using one or more techniques described herein to accommodate the user's needs.
  • the satellite 100 receives radiation from the earth 104.
  • the radiation received at the satellite 100 has three components.
  • L_direct 200 is the surface-reflected, attenuated solar radiation;
  • L_upwelling 204 is the up-scattered path radiance;
  • L_downwelling 208 is the down-scattered "sky" radiance that has been reflected by the surface into the sensor.
  • the total radiation L_total received at the satellite 100 can be expressed as follows: L_total = L_direct + L_upwelling + L_downwelling
  • the atmospheric scattering contributions, which affect the L_upwelling 204 and L_downwelling 208 terms, are generally governed by Rayleigh and Mie scattering.
  • Rayleigh scattering is caused by the various molecular species in the atmosphere and is proportional to λ^-4. This is a selective scattering process that affects shorter-wavelength radiances more than longer-wavelength radiances. Because of the strong wavelength dependence of Rayleigh scattering, blue light (shorter wavelength) is scattered much more than red light (longer wavelength). This scattering mechanism generally causes the path radiance signal to be much higher in the blue channel 140 (Fig. 2) than in the near IR channel 152 (Fig. 2).
  • Mie scattering is often called "aerosol" scattering. Similar to Rayleigh scattering, Mie scattering is also wavelength dependent (roughly proportional to λ^-1). Mie scattering is caused by scattering off large atmospheric constituents like smoke and haze. Both of these scattering mechanisms contribute to the atmospheric scattering of electromagnetic radiation illustrated in Fig. 2.
  • the image is collected.
  • electromagnetic radiation is collected at an imaging system aboard a satellite, with the imaging system providing data related to the total radiance received at certain bands associated with certain sensors.
  • the data provided by the imaging system is used to produce a digital image.
  • This data, at block 224, is spectrally corrected.
  • the color perceived by a user viewing an image produced from a satellite imaging system may be different than the color of an object that would be observed from a location relatively close to the object.
  • the spectral correction of block 224 adjusts the spectral balance of an image based on known sensor information and also further adjusts the spectral balance of an image in order to partially compensate for the atmospheric scattering of the light as it passes through the atmosphere.
  • the contrast of the image is stretched.
  • the contrast stretch, in an embodiment, is used to provide additional perceived contrast between features within the image.
  • the image is delivered to one or more user(s) and/or application(s), referred to as a receiver.
  • the image(s) are transmitted to the receiver by conveying the images over the Internet.
  • an image is conveyed in a compressed format.
  • the receiver is able to display an image of the earth location having a visually acceptable color and contrast. It is also possible to convey the image(s) to the receiver in other ways. For instance, the image(s) can be recorded on a magnetic disk, CD, tape, solid-state memory, or other recording medium and shipped to the receiver. It is also possible to simply produce a hard copy of an image and then ship the hardcopy to the receiver. The hard copy can also be faxed, scanned, or otherwise electronically sent to the receiver.
  • spectral information is collected.
  • spectral information is collected, in an embodiment, using CCD detectors within an imaging system.
  • the imaging system 132 comprises sensors 140 - 152 that comprise CCD detectors for each spectral band.
  • the sensors 140 - 152 collect spectral information in their respective bands.
  • the radiometric dynamic range of each image, in one embodiment, is 11 bits, although any dynamic range may be used.
  • the digital number produced from each element within a CCD detector has a range from 1 to 2047, with zeros used for blackfill.
  • CCD detectors collect spectral information and output the digital number associated with the amount of spectral radiation received at the detector. Numerous methods exist for the collection and sampling of the spectral information within a CCD detector to produce the digital number, the details of which are well understood in the art.
  • radiometric correction coefficients associated with the sensor are applied to the spectral information, as noted at block 240.
  • each CCD detector has a set of coefficients, called radiometric correction coefficients, which represent an approximate, though not always linear, response between the amount of spectral radiance received at the detector and the DN output of the detector.
  • the response of each detector, even though the detectors may be manufactured at the same time using the same technique, commonly differs from detector to detector. If left uncorrected, the resulting image would contain bands and stripes corresponding to the different responses of the detectors.
  • The process of correcting the response of each detector is termed "radiometric correction".
  • a set of linear calibration coefficients (corresponding to a gain and a dark offset) is measured for each detector and then applied to the image data to provide consistency between detectors, as sketched below.
  • the calibration coefficients are estimated on a regular basis on-orbit using standard calibration techniques that are well known.
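A minimal sketch of this per-detector correction (Python). The gain and dark-offset arrays, and the exact functional form gain × (raw − offset), are assumptions for illustration; the patent does not give the coefficients or the precise equation:

```python
import numpy as np

def radiometric_correction(raw_dn, gain, dark_offset):
    """Per-detector linear calibration: one gain and one dark offset per CCD
    detector (array columns), applied as corrected = gain * (raw - offset)."""
    return gain[np.newaxis, :] * (raw_dn.astype(np.float64) - dark_offset[np.newaxis, :])
```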
  • the next step in the spectral correction of imaging data is the conversion of the radiometrically corrected data to spectral radiance, as noted at block 244.
  • imaging data is collected in several bands, and this conversion is applied to each spectral band as follows: L_λ,TOA(band) = K(band) × DN(band) / Δλ(band), where:
  • L_λ,TOA(band) is the TOA spectral radiance of the band;
  • DN(band) is the radiometrically corrected Digital Number (DN) of the band;
  • K(band) is the band-integrated radiance conversion factor (W m^-2 sr^-1 DN^-1); and
  • Δλ(band) is the effective bandwidth (μm) of the band.
  • the K factors are factors that are provided for the imaging system and are described in more detail below. Once the correction listed in this equation is performed, the image data represents what the sensor measures, within radiometric error estimates.
  • L_λ,TOA is referred to as the Top Of Atmosphere (TOA) spectral radiance.
  • the K factors are band-dependent factors that convert the input corrected DN data to band-integrated radiance (in units of W m^-2 sr^-1).
  • When the K factor is divided by the effective bandwidth of the spectral channel, the result, which converts input DNs to TOA spectral radiance, is termed the kappa factor.
  • the kappa factor, since it is derived from the original K factor, is also band dependent.
  • Table A-1: Spectral Radiance Conversion Factors for the spectral bands.
  • each kappa factor is converted to a dimensionless ratio, and these ratios are applied to the DN data. This process preserves the relative spectral balances among the bands and preserves, and may in some cases enhance, the radiometric range of the adjusted DN data.
  • the transformed factors are termed "adjusted kappa" factors (see the sketch following Table A-2).
  • Table A-2: Adjusted kappa factors for the spectral bands.
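The conversion and normalization can be sketched as follows (Python). The K factors and bandwidths shown are placeholders, not the values from Tables A-1/A-2, and normalizing against the largest kappa is an assumption; the patent states only that the adjusted kappa is a dimensionless ratio:

```python
# Placeholder per-band constants; the real values come from Tables A-1/A-2.
K_FACTOR  = {"blue": 0.016, "green": 0.014, "red": 0.013, "nir": 0.0155}  # W m^-2 sr^-1 DN^-1
BANDWIDTH = {"blue": 0.068, "green": 0.099, "red": 0.071, "nir": 0.114}   # effective bandwidth, um

def kappa(band):
    """Kappa factor: converts a radiometrically corrected DN to TOA spectral radiance."""
    return K_FACTOR[band] / BANDWIDTH[band]

def adjusted_kappa(band):
    """Dimensionless 'adjusted kappa' ratio (assumed normalization)."""
    k_ref = max(kappa(b) for b in K_FACTOR)
    return kappa(band) / k_ref

def adjust_dn(dn, band):
    """Scale DNs so the relative spectral balance among bands is preserved."""
    return dn * adjusted_kappa(band)
```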
  • a path radiance is estimated by noting the "dark target" DN value in each spectral band. This embodiment thus estimates the path radiance in each band. This DN then becomes a DN offset value. This offset is an estimate of the total amount of radiation entering the sensor that is not reflected into the sensor by the Earth's surface. An estimation of the DN associated with path radiance for each of the spectral bands of imaging data (blue, green, red, and NIR) is generated. At block 252, the DN offset value is applied to L_λ,TOA to generate a path-corrected TOA spectral radiance.
  • the path-corrected TOA spectral radiances of the scene in a given spectral band are obtained by subtracting the path-radiance value (the DN offset value) for that band from the total spectral radiances for that band.
  • spectral properties based on TOA path-corrected spectral radiances are similar to spectral properties based on the spectral radiances of surface materials observed just above the surface. The only property that changes significantly from surface to TOA is the overall intensity of the three bands.
  • the dark target DN value is estimated automatically for each band by identifying the 0.1% point in the cumulative distribution histogram for the band.
  • This low-end cutoff represents the first DN in the band where an appreciable signal from surface objects is measured, and it is an estimate of the path radiance contribution to the band. It should be noted that this low-end cutoff at the 0.1% point in the cumulative distribution histogram is merely one example of estimating the dark target DN value. Other points in the distribution may be selected, the dark target DN value may be manually selected by a user, or the point in the distribution may be dynamically adjusted based on the scene content and distribution histogram. The low-end cutoff point is computed with respect to the adjusted DN values of this embodiment. Subtracting out this adjusted DN value from each pixel is equivalent to subtracting out the path radiance contribution from the overall TOA spectral radiance.
  • the DN associated with the low-end cutoff is set as the dark target DN value, also referred to as DN_path(band).
  • the Path Radiance Corrected (PRC) DN can be computed as follows:
  • DN_path_ADJ(band) = DN_path(band) × Kappa_ADJ(band)
  • DN_PRC(band, pixel) = DN_ADJ(band, pixel) - DN_path_ADJ(band)
  • the value of DN_path_ADJ(band) is determined for each spectral band of imaging data. For example, if four bands of imagery are collected, a value is produced for DN_path_ADJ(band_1), DN_path_ADJ(band_2), DN_path_ADJ(band_3), and DN_path_ADJ(band_4), representing the blue-light, green-light, red-light, and NIR bands, respectively.
  • the corresponding path radiances for each band are computed as:
  • L_λ,TOA(band_1) = DN_path_ADJ(band_1) × K_Factor(band_1) / Δλ(band_1)
  • L_λ,TOA(band_2) = DN_path_ADJ(band_2) × K_Factor(band_2) / Δλ(band_2)
  • L_λ,TOA(band_3) = DN_path_ADJ(band_3) × K_Factor(band_3) / Δλ(band_3)
  • L_λ,TOA(band_4) = DN_path_ADJ(band_4) × K_Factor(band_4) / Δλ(band_4)
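A sketch of the dark-target estimate and the PRC computation for one band (Python; uses the 0.1% cumulative-histogram point and the blackfill conventions described above):

```python
import numpy as np

def dark_target_dn(dn_adj, cutoff=0.001):
    """DN at the 0.1% point of the band's cumulative histogram; an estimate
    of the path-radiance contribution (blackfill zeros excluded)."""
    valid = np.sort(dn_adj[dn_adj > 0].ravel())
    return float(valid[int(cutoff * (valid.size - 1))])

def path_radiance_correct(dn_adj, dn_path_adj):
    """Subtract the band's path-radiance DN; results are floored at 1 because
    DN 0 is reserved for blackfill."""
    return np.maximum(dn_adj - dn_path_adj, 1)
```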
  • each L_λ,TOA(band) value can be converted to an apparent reflectance factor at the top of the atmosphere (RFTOA), as seen by an observer (RFTOA_path_band).
  • This conversion requires: the spectral irradiance of the sun at TOA (SISUNTOA); the elevation angle, in degrees, of the sun (SUNELEV); and the earth-sun distance, in Astronomical Units (A.U.) (D).
  • SISUNTOA is a known constant for each spectral band.
  • SUNELEV and D are the same for each spectral band; SUNELEV varies from place to place on the earth and from date to date.
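The extract lists the inputs without spelling out the conversion; the standard remote-sensing relation, assumed in the sketch below, is RFTOA = π × L × D^2 / (SISUNTOA × sin(SUNELEV)):

```python
import math

def rftoa(l_toa, sisuntoa, sunelev_deg, d_au):
    """Apparent TOA reflectance factor from TOA spectral radiance (assumed
    standard form; sin of the sun elevation equals cos of the solar zenith)."""
    return (math.pi * l_toa * d_au ** 2) / (sisuntoa * math.sin(math.radians(sunelev_deg)))
```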
  • the ratio of RFTOA_path_4 divided by RFTOA_path_3, called RFRATIO43, is related to the ratio of DN_path_4 divided by DN_path_3, called DNRATIO43, in a fixed way, as follows:
  • DNRATIO43 = 0.927316 × RFRATIO43
  • Normally, RFRATIO43 is 0.8 or less. If RFRATIO43 is greater than 0.8, then, in one embodiment, it is assumed that the RFTOA_path_4 value is too high, and that no totally absorbing body was present in the scene as seen in the NIR band. In this case, in one embodiment, an estimate is selected as the value of RFTOA_path_4 as:
  • RFTOA_path_4_better = RFTOA_path_3 × 0.8
  • In terms of DN_path_ADJ values, DNRATIO43 is normally 0.741853 or less in an embodiment. If DNRATIO43 is greater than this value, an estimate is selected as the value of DN_path_ADJ(band_4) as:
  • DN_path_ADJ_better(band_4) = DN_path_ADJ(band_3) × 0.741853
  • path radiance values are not allowed to exceed an absolute maximum value.
  • a value of 400 DNs (11-bit input data) is set as the maximum path radiance value for every band in one embodiment, although a different value may be used so long as the spectral transformation is acceptable.
  • path-radiance DNs are not the same for all bands and tend to be higher for blue light and green light than for red light and NIR. Nevertheless, under cloudy conditions, actual scene DNs are generally quite high (above 1500). In such cases, minor differences among path-radiance DN estimates have relatively small effects on the perceived color of the transformed imagery.
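These two safeguards can be sketched together (Python; function and parameter names are illustrative):

```python
def cap_path_dn(dn_path_adj_3, dn_path_adj_4, ratio_max=0.741853, abs_max=400):
    """Cap the NIR path-radiance DN so DNRATIO43 stays at or below 0.741853,
    then enforce the absolute 400-DN ceiling used for 11-bit input data."""
    capped = min(dn_path_adj_4, dn_path_adj_3 * ratio_max)
    return min(capped, abs_max)
```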
  • a Look Up Table (LUT) is computed for each band corresponding to the DN_PRC.
  • the DN_PRC values are generated on a band-specific basis using the above equations.
  • the DN_orig value in the equations is simply in the integer range of the input data, such as 1 - 2047 for 11-bit data and 1 - 255 for 8-bit data.
  • intermediate computations are kept in floating point format to avoid round off and overflow errors during the calculation.
  • the results of the LUT are not allowed to exceed the allowed range of the output DN values. This can be accomplished by testing the DN value for values that fall below 1 or above the maximum DN output, and resetting the value to 1 or to the maximum accordingly.
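A sketch of the per-band LUT build (Python). It assumes DN_PRC(dn) = Kappa_ADJ × dn − DN_path_ADJ, as assembled from the equations above; intermediates stay in floating point and the result is clipped to the allowed output range:

```python
import numpy as np

def build_prc_lut(kappa_adj, dn_path_adj, max_dn=2047):
    """LUT mapping every original DN to its path-radiance-corrected DN."""
    dn_orig = np.arange(max_dn + 1, dtype=np.float64)
    dn_prc = dn_orig * kappa_adj - dn_path_adj
    lut = np.clip(np.rint(dn_prc), 1, max_dn).astype(np.uint16)
    lut[0] = 0  # DN 0 is reserved for blackfill
    return lut
```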
  • Fig. 6 illustrates the operational steps for enhancing contrast for one embodiment of the present invention.
  • the contrast enhancement technique of this embodiment differs from many COTS contrast stretches in that it is a color-hue-and-color-saturation preserving contrast stretch.
  • the color preservation property of this stretch is beneficial since the spectral correction operations previously described are designed to correct the imagery in terms of hue and saturation.
  • the desire in making contrast improvements is to increase only the image's brightness (intensity) while not affecting the color hue or color saturation.
  • the lower percentile limit is set to 1% of the cumulative distribution of the image histogram
  • the upper percentile limit is set to 99% of the cumulative distribution of the image histogram. Any percentile cutoff can be used for this determination, such as 2%, which corresponds to the 98% point in the cumulative histogram. However, a 2% cutoff may cut off too much of the upper end of the DN histogram, resulting in significant loss of detail over bright objects.
  • a 0.5 percentile cutoff may be used, which corresponds to the 0.5% and 99.5% points in the image histogram.
  • any value for the percentile cutoff may be used and, in one embodiment, the contrast enhancement algorithm is implemented with these parameters as configurable values that may be configured by a user.
  • the DN corresponding to the upper and lower percentile limits for each band are identified.
  • the cutoff values of the upper DN (hi_dn_ADJ) and lower DN (lo_dn_ADJ) are selected at block 268. The selection of hi_dn_ADJ is made by first identifying the DN corresponding to the high percentile cutoff for each band in the image; the largest of these values is selected at block 268.
  • the selection of lo_dn_ADJ is made by identifying the lowest DN corresponding to the lower percentile cutoff across all bands in the image.
  • the band used in the above determinations is the band that contains hi_dn_ADJ and the band that contains lo_dn_ADJ, respectively.
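A sketch of this cross-band cutoff selection (Python; the percentile parameters are configurable as described above):

```python
import numpy as np

def stretch_cutoffs(bands_prc, lo_pct=1.0, hi_pct=99.0):
    """Global cutoffs for the stretch: the largest high-percentile DN and the
    smallest low-percentile DN over all bands (blackfill zeros excluded)."""
    hi_dn = max(float(np.percentile(b[b > 0], hi_pct)) for b in bands_prc)
    lo_dn = min(float(np.percentile(b[b > 0], lo_pct)) for b in bands_prc)
    return lo_dn, hi_dn
```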
  • the average target brightness value for the image is identified.
  • the value of the average target brightness has a significant impact on the visual appearance of the image, and is often open to subjective interpretation.
  • the value of the average target brightness is configurable by a user.
  • the value of the average target brightness is preset to a value that is likely to produce a visually acceptable image.
  • the median of the modified DN values for each band in the image is computed. This median is computed by determining the median value of the cumulative image histogram for each band.
  • the average of the medians from each band is computed. In one embodiment it is computed according to the following equation: avg_median_dn = (median_dn(band_1) + ... + median_dn(band_N)) / N. From this average and the following quantities, a gamma factor is derived:
  • G is the gamma factor;
  • avg_median_dn is the average of the median modified DN values for each band in the image;
  • max_dn is the maximum DN value of the image output (i.e., 255 for 8-bit products or 2047 for 11-bit products); and
  • trgt_dn is the DN value of the target brightness in the image.
  • the value of gamma is limited to ensure that an exaggerated stretch is not applied to the imagery.
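The extract does not preserve the gamma equation itself. One plausible form, assumed in the sketch below, picks G so that the normalized average median maps to the normalized target brightness, then limits it as described; the limiting bounds are placeholders:

```python
import math

def gamma_factor(avg_median_dn, trgt_dn, max_dn=2047, g_min=0.5, g_max=2.0):
    """Assumed gamma: (avg_median_dn / max_dn) ** G == trgt_dn / max_dn,
    limited so an exaggerated stretch is not applied to the imagery."""
    g = math.log(trgt_dn / max_dn) / math.log(avg_median_dn / max_dn)
    return min(max(g, g_min), g_max)
```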
  • the stretch is applied, as noted at block 296.
  • the stretch equation, in an embodiment, is:
  • DN_stretch(band, pixel) = ((DN_PRC(band, pixel) - lo_dn_ADJ) / (hi_dn_ADJ - lo_dn_ADJ))^G × hi_value, where:
  • DN_PRC is the TOA path radiance corrected DN;
  • lo_dn_ADJ is the lowest cutoff from all the bands;
  • hi_dn_ADJ is the highest cutoff from all the bands;
  • G is the gamma factor; and
  • hi_value is the maximum DN desired in the output image.
  • the stretch equation is applied to each pixel in each band to produce a stretched DN value for the pixel. This technique increases the intensity of each channel without materially changing the hue and saturation of the imagery.
  • the resulting image product maintains the correct spectral balance of the imagery while providing an acceptable level of contrast enhancement.
  • the result is a visually appealing image that requires no manipulation for proper display.
  • the results of the algorithm generally do not depend on scene content, and interpretation of the product requires little or no experimentation or "guessing" by the remote sensing analyst.
  • a lookup table is used to determine a path radiance corrected DN for each pixel in a band.
  • the original pixel values (for each band) are read in from the original image. These DNs are used as an index into the DN_PRC lookup tables to retrieve the value of DN_PRC in the stretch equation. The stretch is then applied and the pixel value is written out to the image.
  • the value of the DN for each pixel is evaluated to verify that the DN value is not outside of certain limits following the application of the stretch equation. For example, if DN_PRC - lo_dn_ADJ is less than 0, then DN_stretch is set to 1. In this embodiment, the DN value of zero is reserved for blackfill. Similarly, if DN_PRC is greater than hi_dn_ADJ, then DN_stretch is set to hi_value.
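A sketch of the per-band application with those limit checks (Python; 8-bit output assumed):

```python
import numpy as np

def apply_stretch(dn_prc, lo_dn, hi_dn, g, hi_value=255):
    """Hue- and saturation-preserving stretch for one band; out-of-range
    results are reset to 1 or hi_value, and 0 stays reserved for blackfill."""
    norm = (dn_prc.astype(np.float64) - lo_dn) / (hi_dn - lo_dn)
    out = np.where(norm <= 0.0, 1.0, np.power(np.clip(norm, 0.0, 1.0), g) * hi_value)
    out = np.clip(out, 1.0, hi_value)
    out[dn_prc == 0] = 0  # blackfill stays 0
    return np.rint(out).astype(np.uint8)
```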
  • spectral compensation may be generated for known imaging sensor non-linearities.
  • each band of imagery has an associated imaging sensor that receives radiance from the particular band of interest.
  • the sensor may have a non-linear response, and provide DN values that reach a maximum limit when the amount of radiance received at the sensor is above a threshold limit.
  • the output of three spectral bands is illustrated as a function of radiance received at the sensor associated with the respective spectral band.
  • a first band has a linear response throughout the full range of DNs, as indicated by line 310.
  • a second band also has a linear response throughout the full range of DNs, as indicated by line 314.
  • a third band has a non-linearity, as indicated by line 318.
  • the third band has a linear response up to a certain amount of radiance received, and the output of the sensor is non-linear beyond this point. This point is referred to herein as a roll-off point, and is indicated at point 322 in Fig. 7.
  • the third sensor reaches a maximum output at point 326, after which the output of the sensor is the same DN regardless of any additional radiance received at the sensor.
  • the output of the nonlinear sensor is corrected in one embodiment using a technique referred to as "spectral feathering."
  • spectral feathering is performed according to the flow chart illustration of Fig. 8.
  • At block 330, it is determined whether there is a known non-linearity of a sensor band. If there is no known non-linearity, no compensation is required, and the operations are complete, as indicated at block 334. If there is a known non-linearity, it is determined if the pixel within the band has a DN value that is greater than the DN value associated with the roll-off point, as noted at block 338. If the pixel value is less than the roll-off value, no compensation is required and the operations are done as noted at block 334.
  • the DN value of the pixel is compensated based on the pixel DN values of the other bands, as indicated at block 342.
  • the compensated pixel value is computed based on an average of the remaining bands DN values.
  • the compensated pixel DN value is output for the non-linear band, as indicated at block 346.
  • a spectral band (denoted by "band") has a known non-linearity.
  • For a particular pixel (denoted by "pixel"), it is determined if the DN value of the pixel in the band is greater than the rolloff value for the band (denoted by "band_rolloff"). If so, the pixel requires spectral compensation. In this case, first, the amount of compensation required to adjust the pixel is determined as follows:
  • fraction = (input_dn(pixel, band) - band_rolloff) / (band_lmt - band_rolloff), where:
  • band_lmt is the DN value beyond which the sensor becomes non-linear;
  • band_rolloff is the DN corresponding to the beginning of the non-linear behavior; and
  • input_dn(pixel, band) indicates the DN value associated with the given pixel in the band that requires compensation.
  • the fraction is limited to values between zero and one; a value of zero indicates no compensation, while a value of one indicates full compensation.
  • pixel values from adjacent spectral bands are used.
  • the average output value of the adjacent spectral bands is computed according to the following equation:
  • ave_dn_out = (output_dn(pixel, band+1) + output_dn(pixel, band-1)) / 2, where:
  • output_dn(pixel, band+1) indicates the output DN value of the spectrally adjacent band of longer wavelength than the band to be compensated; and
  • output_dn(pixel, band-1) indicates the output DN value of the spectrally adjacent band of shorter wavelength than the band to be compensated.
  • the compensated output DN is then computed as a blend: output_dn(pixel, band) = (1 - fraction) × output_value(pixel, band) + fraction × ave_dn_out, where output_value(pixel, band) is the uncompensated DN corresponding to the pixel in the band that requires compensation, fraction is computed as above, and ave_dn_out is computed as above.
  • This technique is referred to as "spectral feathering".
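A sketch of the whole per-pixel feathering step (Python; the linear blend matches the zero-to-one compensation fraction described above):

```python
def spectral_feather(input_dn, out_uncomp, out_shorter, out_longer,
                     band_rolloff, band_lmt):
    """Blend a non-linear band's output toward the average of its spectral
    neighbors in proportion to how far past the roll-off the input DN sits."""
    if input_dn <= band_rolloff:
        return out_uncomp  # below roll-off: no compensation needed
    fraction = min(max((input_dn - band_rolloff) / (band_lmt - band_rolloff), 0.0), 1.0)
    ave_dn_out = (out_shorter + out_longer) / 2.0
    return (1.0 - fraction) * out_uncomp + fraction * ave_dn_out
```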

Abstract

A system and method processes original digital numbers (DNs) provided by a satellite imaging system (100) to produce a set of spectrally balanced and contrast enhanced multispectral images. Spectral balancing is achieved based on physical characteristics of sensors of the imaging system as well as compensation for atmospheric effects. The DNs in the multispectral bands may be processed using a relatively small amount of the processing resources otherwise required to produce such images. Such images may be processed completely automatically and provide relatively easy visual interpretation. Each image pixel may be, for example, in an 8-bit or 16-bit format, and the image may be displayed and/or printed (196) without applying any additional color correction and/or contrast stretches.

Description

METHOD AND APPARATUS FOR ENHANCING A DIGITAL IMAGE
FIELD OF THE INVENTION
[Para 1] The present invention is directed to enhancing digital images, and, more specifically, to enhancing color and contrast in a multispectral digital image obtained from a remote sensing platform.
BACKGROUND
[Para 2] In a generalized way, a digital image is a set of one or more two-dimensional numeric arrays in which each array element, or pixel, represents an apparent brightness measured by an imaging sensor. The brightness value at each pixel is often represented by an integer number called a digital number (DN). Digital images are commonly generated by remote imaging systems that collect imagery in the visible and infrared regions of the electromagnetic spectrum. Images collected from such systems are used in numerous applications by both commercial and government customers.
[Para 3] When collecting digital imagery, specific bands of electromagnetic energy from the area being imaged are commonly collected at several imaging sensors. For example, in many imaging systems, several spectral bands of electromagnetic energy are collected at imaging sensors, such as a red light band, a green light band, a blue light band, and a near infrared band. Imaging systems may also include other spectral bands within the visible bands and/or within the middle infrared (a.k.a., shortwave infrared) bands. An image generated from such a system is referred to as a multispectral digital image. In such a case, a set of DN values exists for each line and column position in the multispectral digital image, with one DN value allocated to each spectral band. Each DN represents the relative brightness of an image in the associated spectral band at the associated pixel location in the image. When generating a multispectral digital image, data from the imaging sensors is collected, processed, and images are produced. Images are commonly provided to customers as a multispectral image file containing imagery from each of the spectral bands. Each band includes DNs on, for example, an 8-bit or 11-bit radiometric brightness scale representing the radiance collected at the respective sensor for an area of the scene imaged. Several methods exist for processing data from each band to generate an image that is useful for the application required by a user. The data is processed in order to provide an image that has accurate color and contrast for features within the image.
[Para 4] The DN values generated from a particular imaging sensor have a limited range that varies from image source to image source according to the associated bit depth. Commonly used bit depths are 8 bits and 11 bits, resulting in DNs that range from 0 to 255 and 0 to 2047, respectively. Digital images are generally stored in raster files or as raster arrays in computer memory, and since rasters use bit depths that are simple powers of base 2, image DNs may be stored in rasters having 1, 2, 4, 8, or 16 bits, with 8-bit and 16-bit being most common. It is common practice to reserve special DN values to represent non-existent image data (e.g., 0, 255, and/or 2047). The corresponding pixels are called blackfill. Actual image data will then have DNs between 1 and 254 or between 1 and 2046.
[Para 5] Digital images, following collection by an imaging system, are commonly enhanced. Enhancement, in the context of a digital image, is a process whereby source-image DNs are transformed into new DNs that have added value in terms of their subsequent use. Commonly, the data from each band of imagery is enhanced based on known sensor and atmospheric characteristics in order to provide an adjusted color for each band. The image is then contrast stretched, in order to provide enhanced visual contrast between features within the image. Commonly, when performing the contrast stretch, the average radiance from each pixel within a particular band is placed in a histogram, and the distribution of the histogram is stretched to the full range of DNs available for the pixels. For example, if each band includes DNs on an 8-bit radiometric brightness scale, this represents a range of DNs between 0 and 255. The DNs from a scene may then be adjusted to use this full range of possible DN values, and/or the DNs from a scene may be adjusted to obtain a distribution of DNs that is centered about the mid-point of possible DN values. Generally speaking, the visual quality of the contrast stretch achieved using normal contrast stretch algorithms is highly dependent on scene content. Many contrast stretch algorithms change the color content of the imagery, resulting in questionable color content in the scene. For example, if the distribution of DNs is not centered within the range of possible DN values, such a contrast stretch can skew the DNs, resulting in a color offset; in an image containing structures, a house may appear as being the wrong color. In addition, it is often difficult to decide what stretch to apply to a given image. A user often balances the trade-offs between acceptable contrast and acceptable color balance when choosing a Commercial Off The Shelf (COTS) stretch to apply to a given image. While such a stretch may be acceptable in applications where users are accustomed to color distortion, it may result in customer dissatisfaction in applications where users are not accustomed to such a color skew. For example, an employee of a firm specializing in the analysis of digital imagery may be accustomed to such a color skew, while a private individual seeking to purchase a satellite image of an earth location of interest to them may find such a color skew unacceptable.
[Para 6] Other methods for enhancing contrast in digital images may be used that preserve the color of the image. While such methods preserve color, they are generally quite computer intensive and require significant amounts of additional processing as compared with a contrast stretch as described above. For example, due to inadequacies in COTS stretch algorithms, images may be stretched manually by manipulating image histograms to achieve the desired result. This can be a very time-consuming, labor-intensive process. Another method of performing a color preserving contrast stretch follows three steps. First, a processing system converts RGB data to a Hue Intensity Saturation (HIS) color space. Next, a contrast stretch is applied to the I (Intensity) channel within the HIS color space. Finally, the modified HIS is converted back to the RGB color space. By adjusting the Intensity channel in the HIS color space, the brightness of the image is enhanced while maintaining the hue and saturation. The image in the RGB color space thus has enhanced contrast while maintaining color balance. This technique is reliable; however, it requires significant additional processing as compared to a contrast stretch performed on RGB data as previously described. A major drawback to this type of stretch involves the significant amounts of computer processing time involved in converting from RGB to HIS space and back.
SUMMARY OF THE INVENTION
[Para 7] The present invention provides a system and method to process original DNs provided by a satellite imaging system to produce a set of color balanced and contrast enhanced images. The present invention enhances a multispectral digital image in a fully-automatic, systematic, and universal way, allows for the production of optimized enhanced digital image products that are cost effective and profitable, and also produces a set of quantitative surface-reflectance estimates that can be further processed by other automatic algorithms to yield specific kinds of information about the objects that have been imaged. The set of color balanced and contrast enhanced images is referred to herein as Dynamic Range Adjusted (DRA) images. The DNs in the multispectral bands may be processed using a relatively small amount of the processing resources otherwise required to produce such images. Such DRA products provide relatively easy visual interpretation, and may include images in which each pixel is in an 8-bit format, such as four-band 8-bit or 24-bit RGB. The image may be displayed and/or printed without applying any additional contrast stretches.
[Para 8] In one embodiment, a method is provided for producing an enhanced digital image, the method comprising: receiving a plurality of pixels of imaging data from an imaging sensor, each of the pixels of imaging data comprising a digital number; processing the digital number of each of the pixels to determine a distribution of imaging data for the plurality of pixels, and determining the spectral radiance at top of atmosphere (TOA) of each of the plurality of pixels based on the distribution. The receiving step, in one embodiment, comprises receiving a plurality of bands of pixels of imaging data from a plurality of imaging sensors. The plurality of imaging sensors may include a blue band imaging sensor, a green band imaging sensor, a red band imaging sensor, and a near-infrared band imaging sensor. In another embodiment, the receiving step comprises receiving a plurality of top-of-atmosphere pixels of imaging data from the imaging sensors, the top-of-atmosphere pixels comprising a top-of-atmosphere digital number, and adjusting the top-of-atmosphere digital numbers based on known sensor characteristics of the imaging sensors. The step of determining a spectral radiance may be performed independently of scene content of the digital image, and may be performed on a physical basis of the imaging data, rather than a purely statistical basis.
[Para 9] When processing the digital number, a lower distribution cutoff of said distribution of imaging data may be determined. The lower distribution cutoff, in an embodiment, is set at 0.1% of the cumulative distribution. In another embodiment, the lower distribution cutoff is based on a point in the cumulative distribution at which the digital numbers are indicative of the spectral radiance of the atmospheric path, hereinafter referred to as "path radiance." When the lower distribution cutoff is determined, the method may further include subtracting a digital number associated with the lower distribution cutoff from each of the plurality of pixels of imaging data. In another embodiment, the method includes determining the value of a digital number associated with the lower distribution cutoff, and subtracting the digital number associated with the lower distribution cutoff from each of the plurality of pixels of imaging data when the digital number associated with the lower distribution cutoff is less than a predetermined maximum value. When the lower distribution cutoff is greater than the predetermined maximum value, the predetermined maximum value is subtracted from each of the plurality of pixels of imaging data.
[Para 10] In another embodiment, the contrast of a digital image is also enhanced by determining a median value of the input DNs, determining a target brightness value for the enhanced digital image, and adjusting each DN to generate an output DN, the median value of the output DNs being substantially equal to the target brightness value. When multiple bands of imaging data are present, the DNs of each band of imaging data are received from the plurality of imaging sensors. The median value of the received DNs is determined by processing the DNs of each of the bands of imaging data to determine a distribution of DNs for each of the plurality of bands of imaging data, determining a median value of each of the distributions of DNs, and computing an average of the median values.
[Para 11] In another embodiment, the present invention provides a satellite image comprising a plurality of pixels each having a value determined by: determining a magnitude of spectral radiance of each of a plurality of digital numbers of raw data pixels; processing the magnitudes of spectral radiance to determine a distribution of magnitudes of spectral radiance; and calculating the value of each of the plurality of pixels based on the distribution. The step of determining a magnitude of spectral radiance may include receiving a plurality of bands of pixels of imaging data from a plurality of imaging sensors, each pixel having a digital number representing the spectral radiance received by the associated imaging sensor, and determining a magnitude of spectral radiance for each of the digital numbers for each of the bands of pixels.
[Para 12] In another embodiment, the magnitude of spectral radiance is compensated for at least a portion of the digital numbers for any given band of the bands of pixels based on a known non-linear response of the imaging sensor associated with the spectral band. The compensated portion of the digital numbers comprises those digital numbers that are greater than a digital number associated with a known response roll-off for the imaging sensor. When performing the compensation, it is first determined that a digital number for a given pixel of any given band is greater than a predetermined digital number. Second, the digital numbers associated with the pixel from the remaining bands are determined, and a compensated digital number based on the digital numbers from the other bands is computed.
[Para 13] In yet another embodiment, the present invention provides a method for transporting a color enhanced image towards an interested entity. The method comprises: conveying, over a portion of a computer network, an image that includes a plurality of pixels each having a value that has been determined based on a distribution of spectral radiance values associated with the plurality of pixels.
[Para 14] Still a further embodiment of the present invention provides a method for producing an enhanced digital image, comprising: receiving a plurality of pixels of imaging data from an imaging sensor; determining a magnitude of spectral radiance of each of the plurality of pixels of imaging data; processing the magnitudes of spectral radiance to determine a distribution of magnitudes of spectral radiance for the plurality of pixels; adjusting the spectral radiance of each of the plurality of pixels based on the distribution to produce a path-radiance-adjusted spectral radiance value; processing the adjusted spectral radiance value for each of the plurality of pixels to determine an adjusted distribution for the plurality of pixels; assessing the adjusted distribution to determine a range of adjusted values and a median value of the range of adjusted values; and secondly adjusting the value of at least a subset of the plurality of pixels to create a second adjusted distribution wherein the median of the second adjusted distribution corresponds with a target median and the range of the second adjusted distribution corresponds with a target range.
BRIEF DESCRIPTION OF THE DRAWINGS
[Para 15] Fig. 1 is a diagrammatic illustration of a satellite in an earth orbit obtaining an image of the earth;
[Para 16] Fig. 2 is a block diagram representation of a system of an embodiment of the present invention;
[Para 17] Fig. 3 is a diagrammatic illustration of light scattering through the earth atmosphere;
[Para 18] Fig. 4 is a flow chart illustration of the operational steps for collecting, processing and delivering a digital image for an embodiment of the present invention;
[Para 19] Fig. 5 is a flow chart illustration of the operational steps for color correction of a digital image for an embodiment of the present invention;
[Para 20] Fig. 6 is a flow chart illustration of the operational steps for contrast enhancement of a digital image for an embodiment of the present invention;
[Para 21] Fig. 7 is a diagrammatic illustration of linear and non-linear sensor responses for different imaging sensors; and
[Para 22] Fig. 8 is a flow chart illustration of the operational steps for compensation of a known sensor non-linearity for an embodiment of the present invention.
DETAILED DESCRIPTION
[Para 23] The present invention provides a digital image that may be displayed with no further manipulation on the part of the customer or end user of the digital image. The present invention may also provide a set of quantitative surface-reflectance estimates that can be further processed by other automatic algorithms to yield specific kinds of information about the objects that have been imaged. When used to provide a digital image, a digital image collected at a satellite is spectrally balanced and contrast stretched on a physical basis rather than a subjective one. A customer may use the image without additional tools, expertise, or time to perform complex digital image processing manipulations. In addition to serving the basic commercial customer, the present invention may be useful to the remote sensing expert as an aid in his or her applications. The present invention also provides an acceptable contrast stretch and spectral balance over a very wide variety of scene content. The algorithm uses known physical characteristics of the imaging sensors to estimate the actual spectral radiance measured by the satellite. A first order correction for major atmospheric effects is then applied. Finally, a contrast stretch is applied to enhance the visual contrast of the image without disturbing the spectral balance. In this fashion, the dynamic range adjustment (DRA) algorithm automatically provides a visually appealing image based on standard human color perception principles.
[Para 24] Having generally described the process for producing an image, various embodiments of the present invention are described in greater detail. Referring to Fig. 1, an illustration of a satellite 100 orbiting a planet 104 is described. At the outset, it is noted that, when referring to the earth herein, reference is made to any celestial body of which it may be desirable to acquire images or other remote sensing information. Furthermore, when referring to a satellite herein, reference is made to any spacecraft, satellite, and/or aircraft capable of acquiring images or other remote sensing information. Furthermore, the system described herein may also be applied to other imaging systems, including imaging systems located on the earth or in space that acquire images of other celestial bodies. It is also noted that none of the drawing figures contained herein are drawn to scale, and that such figures are for the purposes of discussion and illustration only.
[Para 25] As illustrated in Fig. 1, the satellite 100 orbits the earth 104 following orbital path 108. An imaging system aboard the satellite 100 is capable of acquiring an image 112 that includes a portion of the surface of the earth 104. The image 112 is comprised of a plurality of pixels. Furthermore, the satellite 100 may collect images 112 in a number of spectral bands. In one embodiment, the imaging system aboard satellite 100 collects four bands of electromagnetic energy, namely, a red band, a green band, a blue band, and a near infrared band. Each band is collected by a separate imaging sensor that is adapted to collect electromagnetic radiation. Data from the sensors is collected and processed, and images are produced. Each band consists of digital numbers (DNs) on an 8-bit or 11-bit radiometric brightness scale. The DNs from each band are processed to generate an image that is useful for the application required by a user. Images collected from the satellite 100 may be used in a number of applications, including both commercial and non-commercial applications.
[Para 26] Referring now to Fig. 2, a block diagram representation of an image collection and distribution system 120 is described. In this embodiment, the satellite 100 includes a number of systems, including power/positioning systems 124, a transmit/receive system 128, and an imaging system 132. Such a satellite and associated systems 124 are well known in the art and therefore are not described in detail herein; it is sufficient to say that the satellite 100 receives power and may be positioned to collect desired images and transmit/receive data to/from a ground location and/or other satellite systems. The imaging system 132, in this embodiment, includes four imaging sensors, each of which collects electromagnetic energy received at the sensor within a band of electromagnetic energy. In this embodiment, the imaging system 132 includes a blue sensor 140, a green sensor 144, a red sensor 148, and a near-infrared (NIR) sensor 152. Each of these sensors 140 - 152 collects electromagnetic energy falling within preset energy bands that is received at the sensor. The imaging sensors 140 - 152, in this embodiment, include charge coupled device (CCD) arrays and associated optics to collect electromagnetic energy and focus the energy at the CCD arrays. The CCD arrays are configured to collect energy from a specific energy band by means of optical filters. The sensors 140 - 152 also include electronics to sample the CCD arrays and output a digital number (DN) that is proportional to the amount of energy collected at the CCD array. Each CCD array includes a number of pixels, and the imaging system operates as a pushbroom imaging system. Thus, a plurality of DNs for each pixel are output from the imaging system to the transmit/receive system 128.
[Para 27] The satellite 100 transmits and receives data from a ground station 160. The ground station 160 of this embodiment includes a transmit/receive system 164, a data storage system 168, a control system 172, and a communication system 176. In one embodiment, a number of ground stations 160 exist and are able to communicate with the satellite 100 throughout different portions of the satellite 100 orbit. The transmit/receive system 164 is used to send and receive data to and from the satellite 100. The data storage system 168 may be used to store image data collected by the imaging system 132 and sent from the satellite 100 to the ground station 160. The control system 172, in one embodiment, is used for satellite control and transmits/receives control information through the transmit/receive system 164 to/from the satellite 100. The communication system 176 is used for communications between the ground station 160 and one or more data centers 180. The data center 180 includes a communication system 184, a data storage system 188, and an image processing system 192. The image processing system 192 processes the data from the imaging system 132 and provides a digital image to one or more user(s) 196. The operation of the image processing system 192 will be described in greater detail below. Alternatively, the image data received from the satellite 100 at the ground station 160 may be sent from the ground station 160 to a user 196 directly. The image data may be processed by the user using one or more techniques described herein to accommodate the user's needs.
[Para 28] Referring now to Fig. 3, an illustration of an imaging system collecting sensing data is now described. The satellite 100, as illustrated in Fig. 3, receives radiation from the earth 104. The radiation received at the satellite 100 has three components. L_direct 200 is the surface-reflected attenuated solar radiation, L_upwelling 204 is the up-scattered path radiance, and L_downwelling 208 is the down-scattered "sky" radiance that has been reflected by the surface into the sensor. Thus, the total radiation, L_total, received at the satellite 100 can be expressed as follows:

L_total(λ) = L_direct(λ) + L_upwelling(λ) + L_downwelling(λ)   (W m⁻² sr⁻¹ μm⁻¹)

where λ indicates a dependence on wavelength. The three terms in this equation are the result of a complex radiative transfer process in the atmosphere as well as reflection by surface materials. Note that the L_upwelling 204 and L_downwelling 208 rays may undergo multiple scattering events before entering the sensor. Furthermore, atmospheric interactions may also include absorption of radiance.
[Para 29] The atmospheric scattering contributions, which affect the L_upwelling 204 and L_downwelling 208 terms, are generally governed by Rayleigh and Mie scattering. Rayleigh scattering is caused by the various molecular species in the atmosphere and is proportional to λ⁻⁴. This is a selective scattering process that affects shorter wavelength radiances more than longer wavelength radiances. Because of the strong wavelength dependence of Rayleigh scattering, blue light (shorter wavelength) is scattered much more than red light (longer wavelength). This scattering mechanism generally causes the path radiance signal to be much higher in the blue channel 140 (Fig. 2) than in the near IR channel 152 (Fig. 2). Mie scattering is often called "aerosol" scattering. Similar to Rayleigh scattering, Mie scattering is also wavelength dependent (roughly proportional to λ⁻¹). Mie scattering is caused by scattering off large atmospheric constituents such as smoke and haze. Both of these scattering mechanisms contribute to the atmospheric scattering of electromagnetic radiation illustrated in Fig. 3.
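To make the wavelength dependence concrete, the following sketch compares relative Rayleigh and Mie scattering strengths across four bands; the band-center wavelengths are illustrative assumptions, not values from any sensor specification. It shows, for example, blue light Rayleigh-scattering roughly 3.6 times more strongly than red light:

    # Illustrative scattering-strength comparison, normalized to the red band.
    # The band-center wavelengths (in micrometers) are assumed nominal values.
    band_centers_um = {"blue": 0.48, "green": 0.55, "red": 0.66, "NIR": 0.83}

    for name, lam in band_centers_um.items():
        rayleigh = (lam / band_centers_um["red"]) ** -4.0  # molecular, ~lambda^-4
        mie = (lam / band_centers_um["red"]) ** -1.0       # aerosol, ~lambda^-1
        print(f"{name:>5}: Rayleigh x{rayleigh:0.2f}, Mie x{mie:0.2f}")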
[Para 30] Referring now to Fig. 4, the operational steps performed in image collection, processing, and delivery are now described for an embodiment of the invention. Initially, as indicated at block 220, the image is collected. As discussed above with respect to Fig. 2, in an embodiment electromagnetic radiation is collected at an imaging system aboard a satellite, with the imaging system providing data related to the total radiance received at certain bands associated with certain sensors. The data provided by the imaging system is used to produce a digital image. This data, at block 224, is spectrally corrected. As discussed above, the color perceived by a user viewing an image produced from a satellite imaging system may be different than the color of an object that would be observed from a location relatively close to the object. This difference in perceived color is due to the reflected light received at the satellite imaging system being scattered through the atmosphere. The spectral correction of block 224, in an embodiment, adjusts the spectral balance of an image based on known sensor information and also further adjusts the spectral balance of an image in order to partially compensate for the atmospheric scattering of the light as it passes through the atmosphere. At block 228, the contrast of the image is stretched. The contrast stretch, in an embodiment, is used to provide additional perceived contrast between features within the image. At block 232, the image is delivered to one or more user(s) and/or application(s), referred to as a receiver. In one embodiment, the image(s) are transmitted to the receiver by conveying the images over the Internet. Typically, an image is conveyed in a compressed format. Once received, the receiver is able to display an image of the earth location having a visually acceptable color and contrast. It is also possible to convey the image(s) to the receiver in other ways. For instance, the image(s) can be recorded on a magnetic disk, CD, tape, solid-state memory, or other recording medium and shipped to the receiver. It is also possible to simply produce a hard copy of an image and then ship the hardcopy to the receiver. The hard copy can also be faxed, scanned, or otherwise electronically sent to the receiver.
[Para 31] Referring now to Fig. 5, the operational steps for performing spectral correction are described for an embodiment of the present invention. Initially, at block 236, spectral information is collected. As noted previously, spectral information is collected, in an embodiment, using CCD detectors within an imaging system. Referring again to Fig. 2, the imaging system 132 comprises sensors 140 - 152 that include CCD detectors for each spectral band. The sensors 140 - 152 collect spectral information in their respective bands. The radiometric dynamic range of each image, in one embodiment, is 11 bits, although any dynamic range may be used. In this embodiment, the digital number produced from each element within a CCD detector has a range from 1 to 2047, with zeros used for backfill. As is understood, CCD detectors collect spectral information and output the digital number associated with the amount of spectral radiation received at the detector. Numerous methods exist for the collection and sampling of the spectral information within a CCD detector to produce the digital number, the details of which are well understood in the art.
[Para 32] Referring again to Fig. 5, following the collection of spectral information, radiometric correction coefficients associated with the sensor are applied to the spectral information, as noted at block 240. Within the imaging sensor, each CCD detector has a set of coefficients, called radiometric correction coefficients, which represent an approximate (though not always linear) response between the amount of spectral radiance received at the detector and the DN output of the detector. The response of each detector, even though the detectors may be manufactured at the same time using the same technique, commonly differs from detector to detector. If left uncorrected, the resulting image would contain bands and stripes corresponding to the different responses of the individual detectors. By applying the radiometric correction coefficients to the output of the detector, the response of each detector is corrected. The process of correcting the response of each detector is termed "radiometric correction". In this process, a set of linear calibration coefficients (corresponding to a gain and a dark offset) are measured for each detector and then applied to the image data to provide consistency between detectors. In one embodiment, the calibration coefficients are estimated on a regular basis on-orbit using standard calibration techniques that are well known.
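A minimal sketch of such a correction, assuming a simple linear gain/dark-offset model applied column-wise to pushbroom imagery (the model form, argument layout, and coefficient conventions here are illustrative assumptions; the actual on-orbit calibration procedure is sensor-specific):

    import numpy as np

    def radiometric_correction(raw_dn, gain, dark_offset):
        """Apply per-detector linear calibration to one band of pushbroom imagery.

        raw_dn      : 2D array (lines x detectors) of raw DNs for one band.
        gain        : 1D array with one gain per detector (per column).
        dark_offset : 1D array with one dark offset per detector.
        """
        # Each detector's coefficients are broadcast down its column, removing
        # the band/stripe artifacts caused by detector-to-detector response
        # differences.
        return gain[np.newaxis, :] * (raw_dn.astype(np.float64)
                                      - dark_offset[np.newaxis, :])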
[Para 33] Following the radiometric correction, the next step in the spectral correction of imaging data is the conversion of the radiometrically corrected data to spectral radiance, as noted at block 244. As mentioned previously, imaging data is collected in several bands. This conversion is applied to each spectral band as follows:
L_λ,TOA(band) = DN(band) × K(band) / Δ(band)   (W m⁻² sr⁻¹ μm⁻¹)

where L_λ,TOA(band) is the spectral radiance of the band, DN(band) is the radiometrically corrected Digital Number (DN) of the band, K(band) is the band integrated radiance conversion factor (W m⁻² sr⁻¹ DN⁻¹), and Δ(band) is the effective bandwidth (μm) of the band. The K factors are factors that are provided for the imaging system and are described in more detail below. Once the correction listed in this equation is performed, the image data represents what the sensor measures, within radiometric error estimates. L_λ,TOA is referred to as the Top Of Atmosphere (TOA) spectral radiance.
[Para 34] In one embodiment, the K factors are band-dependent factors that convert the input corrected DN data to band integrated radiance (in units of W m⁻² sr⁻¹). When the K factor is divided by the effective bandwidth of the spectral channel, the result, which converts input DNs to TOA spectral radiance, is termed the kappa factor. The kappa factor, since it is derived from the original K factor, is also band-dependent.
[Para 35] An example of K factors, effective bandwidths of the sensor channels, and kappa factors is listed in Table A-I. The table image is not reproduced here; the kappa factors of this example (also quoted in Para 44 below) are:

Band 1 (blue light): kappa = 0.2359 W m⁻² sr⁻¹ μm⁻¹ DN⁻¹
Band 2 (green light): kappa = 0.1453 W m⁻² sr⁻¹ μm⁻¹ DN⁻¹
Band 3 (red light): kappa = 0.1785 W m⁻² sr⁻¹ μm⁻¹ DN⁻¹
Band 4 (NIR): kappa = 0.1353 W m⁻² sr⁻¹ μm⁻¹ DN⁻¹

Table A-I: Spectral Radiance Conversion Factors for spectral bands.
[Para 36] The conversion of the radiometrically corrected DNs to L_λ,TOA may thus also be performed by taking the product of kappa and the DN. The conversion to L_λ,TOA is straightforward; however, because of the small magnitude of the kappa coefficients (Table A-I), the direct application of the kappa coefficients of this example to the DN data collapses the dynamic numeric range of the imagery data into a smaller range. In the example of Table A-I, an original DN of 2047 would convert to 482.9 (blue band), which, when handled as integer numbers, is equivalent to a loss of radiometric resolution if the data is kept in integer format.
[Para 37] In order to provide enhanced results, in one embodiment, each kappa factor is converted to a dimensionless ratio, and these ratios are applied to the DN data. This process preserves the relative spectral balances among the bands and preserves, and may in some cases enhance, the radiometric range of the adjusted DN data. The transformed factors are termed "adjusted kappa" factors and are computed as follows:

Kappa_AVE = (1 / num_bands) × Σ_band Kappa(band)

Kappa_ADJ(band) = Kappa(band) / Kappa_AVE
[Para 38] Adjusted kappa factors using the example of Table A-I for each band are listed in Table A-2. Note that they are non-dimensional.

Band 1 (blue light): Kappa_ADJ = 1.358
Band 2 (green light): Kappa_ADJ = 0.836
Band 3 (red light): Kappa_ADJ = 1.027
Band 4 (NIR): Kappa_ADJ = 0.779

Table A-2: Adjusted kappa factors for the spectral bands.
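As a check on these numbers, a short sketch computing the adjusted kappa factors from the kappa values quoted in Para 44 below (the band ordering blue, green, red, NIR is an assumption for illustration):

    import numpy as np

    # Kappa factors (W m^-2 sr^-1 um^-1 DN^-1) for the blue, green, red,
    # and NIR bands, taken from the example values quoted in Para 44.
    kappa = np.array([0.2359, 0.1453, 0.1785, 0.1353])

    kappa_ave = kappa.mean()        # average kappa over the bands
    kappa_adj = kappa / kappa_ave   # dimensionless adjusted kappa factors

    print(kappa_adj)        # approx. [1.358, 0.836, 1.027, 0.779]
    print(2047 * kappa[0])  # blue-band check from Para 36: ~482.9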
[Para 39] Following the conversion to spectral radiance, the interaction of solar radiation with the atmosphere is taken into account. The atmospheric scattering contributions, which affect the L_upwelling and L_downwelling terms, as discussed above, are generally governed by Rayleigh and Mie scattering. In one embodiment, a full atmospheric correction may take into account all three terms: L_upwelling, L_downwelling, and L_direct. This is a relatively complex transformation, requiring an atmospheric radiative transfer model application such as MODTRAN. These programs provide a full radiative transfer model capable of modeling the interactions of radiant energy with the Earth's atmosphere to a high degree of accuracy if the physical state of the atmosphere is well known. This process is generally very time consuming owing to the complex nature of the desired correction, and the accuracy is dependent on exact knowledge of the atmospheric constituents for each scene.
[Para 40] In the embodiment of Fig. 5, as noted at block 248, a path radiance is estimated by noting the "dark target" DN value in each spectral band. This embodiment thus estimates the path radiance in each band. This DN then becomes a DN offset value. This offset is an estimate of the total amount of radiation entering the sensor that is not reflected into the sensor by the Earth's surface. An estimation of the DN associated with path radiance for each of the spectral bands of imaging data (blue, green, red, and NIR) is generated. At block 252, the DN offset value is applied to L_λ,TOA to generate a path-corrected TOA spectral radiance. The path-corrected TOA spectral radiances of the scene in a given spectral band are obtained by subtracting the path-radiance value (the DN offset value) for that band from the total spectral radiances for that band. In this embodiment, spectral properties based on TOA path-corrected spectral radiances are similar to spectral properties based on the spectral radiances of surface materials observed just above the surface. The only property that changes significantly from surface to TOA is the overall intensity of the three bands. In other words, if an observer at TOA could see path-corrected spectral radiances of an object on the Earth, he or she would see colors similar to those seen from close range at the surface of the Earth (on a cloudless, relatively haze-free day with the same solar irradiance geometry).
[Para 41] In one embodiment, the dark target DN value is estimated automatically for each band by identifying the 0.1% point in the cumulative distribution histogram for the band. This low-end cutoff represents the first DN in the band where an appreciable signal from surface objects is measured, and it is an estimate of the path radiance contribution to the band. It should be noted that this low-end cutoff at the 0.1% point in the cumulative distribution histogram is merely one example of estimating the dark target DN value. Other points in the distribution may be selected, the dark target DN value may be manually selected by a user, or the point in the distribution may be dynamically adjusted based on the scene content and distribution histogram. The low-end cutoff point is computed with respect to the adjusted DN values of this embodiment. Subtracting out this adjusted DN value from each pixel is equivalent to subtracting out the path radiance contribution from the overall TOA spectral radiance.
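A sketch of this estimate for a single band, assuming integer DNs with zero reserved for backfill (the array handling and the exclusion of backfill pixels are illustrative assumptions):

    import numpy as np

    def dark_target_dn(band_dn, cutoff=0.001, max_dn=2047):
        """Estimate the path-radiance DN of one band as the DN at the 0.1%
        point of the band's cumulative distribution histogram."""
        valid = band_dn[band_dn > 0]                  # DN 0 is backfill, excluded
        hist = np.bincount(valid, minlength=max_dn + 1)
        cdf = np.cumsum(hist) / valid.size            # cumulative distribution
        return int(np.searchsorted(cdf, cutoff))      # first DN reaching 0.1%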
[Para 42] Once the low-end cutoff point in the image cumulative distribution histogram is identified, the DN associated with the low-end cutoff is set as the dark target DN value, also referred to as DN_path(band). The Path Radiance Corrected (PRC) DN can be computed as follows:

DN_path_ADJ(band) = DN_path(band) × Kappa_ADJ(band)

DN_PRC(band, pixel) = DN_ADJ(band, pixel) - DN_path_ADJ(band)
In one embodiment, the value of DN_path_ADJ(band) is determined for each spectral band of imaging data. For example, if four bands of imagery are collected, a value is produced for DN_path_ADJ(band_1), DN_path_ADJ(band_2), DN_path_ADJ(band_3), and DN_path_ADJ(band_4), representing the blue-light, green-light, red-light, and NIR bands, respectively.

[Para 43] Each of these separately estimated DN_path_ADJ(band) values is also related, in a direct linear way, to L_λ,TOA through the straightforward application of the band-specific K_Factor(band) and Δ(band) coefficients. That is,

L_λ,TOA(band_1) = DN_path_ADJ(band_1) × K_Factor(band_1) / Δ(band_1)
L_λ,TOA(band_2) = DN_path_ADJ(band_2) × K_Factor(band_2) / Δ(band_2)
L_λ,TOA(band_3) = DN_path_ADJ(band_3) × K_Factor(band_3) / Δ(band_3)
L_λ,TOA(band_4) = DN_path_ADJ(band_4) × K_Factor(band_4) / Δ(band_4)

As discussed above, the combination K_Factor / Δ is referred to as the kappa factor.

[Para 44] Using the example from Table A-I, values of kappa for each band are:

KAPPA_1 = 0.2359 Watts per square meter per steradian per micrometer per DN
KAPPA_2 = 0.1453 Watts per square meter per steradian per micrometer per DN
KAPPA_3 = 0.1785 Watts per square meter per steradian per micrometer per DN
KAPPA_4 = 0.1353 Watts per square meter per steradian per micrometer per DN

[Para 45] In turn, each L_λ,TOA(band) value can be converted to an apparent reflectance factor at the top of the atmosphere (RFTOA), as seen by an observer (RFTOA_path_band).

[Para 46] To calculate a value of RFTOA from each value of L_λ,TOA, three items of information are required:
The spectral irradiance of the sun at TOA (SISUNTOA);
The elevation angle, in degrees, of the sun (SUNELEV); and
The earth-sun distance, in Astronomical Units (A.U.) (D).

SISUNTOA is a known constant for each spectral band. SUNELEV and D are the same for each spectral band. SUNELEV varies from place to place on the earth and from date to date (day of year, DOY). D varies with DOY only.
[Para 47] With these parameters, the conversion equation from TOA spectral radiance (L_λ,TOA) to RFTOA is as follows:

RFTOA = L_λ,TOA × pi × D × D × 100% / [SISUNTOA × SIN(SUNELEV)]

where pi = 3.1415927....
[Para 48] Values of SISUNTOA for each band are, in one example:

SISUNTOA_1 = 1930.11 Watts per square meter per micrometer
SISUNTOA_2 = 1842.47 Watts per square meter per micrometer
SISUNTOA_3 = 1561.61 Watts per square meter per micrometer
SISUNTOA_4 = 1097.64 Watts per square meter per micrometer

[Para 49] In one embodiment, larger errors are associated with the estimation of DN_path_ADJ(band_4) than with the other DN_path_ADJ(band) values. This is due to the fact that in some scenes there are no non-reflecting surface objects in the NIR. But in the other bands, especially in the red-light band, dark (non-reflecting) objects are commonly present in a scene, namely, dense green vegetation. RFTOA generally decreases with wavelength from Band 1 to Band 4. However, due to the various conversion factors (i.e., KAPPA and SISUNTOA), the relative values of DN_path_ADJ(band) will normally not exhibit any such simple pattern of decrease with wavelength. However, since D and SIN(SUNELEV) are the same for all bands (i.e., wavelength does not affect these), the ratio of RFTOA_path_4 divided by RFTOA_path_3, called RFRATIO43, is related to the ratio of DN_path_4 divided by DN_path_3, called DNRATIO43, in a fixed way, as follows:

RFRATIO43 = DNRATIO43 × 1.078381

Inversely,

DNRATIO43 = 0.927316 × RFRATIO43
[Para 50] It is generally expected that RFRATIO43 is 0.8 or less. If RFRATIO43 is greater than 0.8, then, in one embodiment, it is assumed that the RFTOA_path_4 value is too high, and that no totally absorbing body was present in the scene as seen in the NIR band. In this case, in one embodiment, an estimate is selected as the value of RFTOA_path_4 as:
RFTOA_path_4_better = RFTOA_path_4 × 0.8

[Para 51] In terms of DN_path_ADJ values, DNRATIO43 is normally 0.741853 or less in an embodiment. If DNRATIO43 is greater than this value, an estimate is selected as the value of DN_path_ADJ(band_4) as:

DN_path_ADJ_better(band_4) = DN_path_ADJ(band_3) × 0.741853

[Para 52] In a scene with 100% cloud cover, as is often processed by the processing system, the above method identifies a path radiance equivalent (low-end) DN value that is much too high, resulting in an incorrect spectral transformation. To avoid this problem, in one embodiment, path radiance values are not allowed to exceed an absolute maximum value. A value of 400 DNs (11-bit input data) is set as the maximum path radiance value for every band in one embodiment, although a different value may be used so long as the spectral transformation is acceptable. Actual values of path-radiance DNs are not the same for all bands and tend to be higher for blue light and green light than for red light and NIR. Nevertheless, under cloudy conditions, actual scene DNs are generally quite high (above 1500). In such cases, minor differences among path-radiance DN estimates have relatively small effects on the perceived color of the transformed imagery.
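The reflectance conversion of Para 47 and the two safeguards just described might be sketched as follows (the dictionary layout, band keys, and function boundaries are illustrative assumptions; the constants 0.741853 and 400 are the values quoted in the text):

    import math

    def rftoa(l_toa, sisuntoa, sun_elev_deg, d_au):
        """Apparent TOA reflectance factor (%) from TOA spectral radiance (Para 47)."""
        return (l_toa * math.pi * d_au * d_au * 100.0
                / (sisuntoa * math.sin(math.radians(sun_elev_deg))))

    def constrain_path_dn(dn_path_adj, max_path_dn=400.0, nir_ratio=0.741853):
        """Apply the NIR-ratio and cloud-cover safeguards to the adjusted
        path-radiance DN estimates (one entry per band)."""
        # Band 4 (NIR) often lacks a truly dark target in the scene: cap its
        # estimate at 0.741853 times the red-band (band 3) estimate.
        limit = dn_path_adj["band_3"] * nir_ratio
        if dn_path_adj["band_4"] > limit:
            dn_path_adj["band_4"] = limit

        # Under heavy cloud cover the low-end DN is far too high: cap every
        # band at an absolute maximum (400 DNs for 11-bit input data).
        for band in dn_path_adj:
            dn_path_adj[band] = min(dn_path_adj[band], max_path_dn)
        return dn_path_adj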
[Para 53] In one embodiment, a Look Up Table (LUT) is computed for each band corresponding to the DN_PRC. The DN_PRC values are generated on a band specific basis using the above equations. The DN_orig value in the equations is simply in the integer range of the input data, such as 1 - 2047 for 11-bit data and 1 - 255 for 8-bit data. In this embodiment, intermediate computations are kept in floating point format to avoid round-off and overflow errors during the calculation. Also, the results of the LUT are not allowed to exceed the allowed range of the output DN values. This can be accomplished by testing the DN value for values that fall below 1 or above the maximum DN output, and resetting the value to 1 or to the maximum accordingly.
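One way such a lookup table might be built for a single band (a sketch assuming 11-bit input with DN 0 reserved for backfill; the clipping convention follows the text, and the rest is illustrative):

    import numpy as np

    def build_prc_lut(kappa_adj, dn_path_adj, max_dn=2047):
        """Build a band-specific lookup table mapping original DN -> DN_PRC.

        kappa_adj   : adjusted (dimensionless) kappa factor for the band.
        dn_path_adj : adjusted path-radiance DN estimate for the band.
        """
        dn_orig = np.arange(max_dn + 1, dtype=np.float64)  # float to avoid round-off
        lut = dn_orig * kappa_adj - dn_path_adj            # DN_ADJ minus path DN
        lut = np.clip(lut, 1.0, max_dn)                    # stay in the output range
        lut[0] = 0.0                                       # preserve backfill
        return lut

A band's pixels can then be corrected by simple indexing, e.g. dn_prc = lut[band_dn].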
[Para 54] Once the data has been converted to TOA path-radiance corrected spectral radiances, the visual contrast is enhanced to facilitate visual interpretation of the imagery. Fig. 6 illustrates the operational steps for enhancing contrast for one embodiment of the present invention. The contrast enhancement technique of this embodiment differs from many commercial off-the-shelf (COTS) contrast stretches in that it is a color-hue-and-color-saturation preserving contrast stretch. The color preservation property of this stretch is beneficial since the spectral correction operations previously described are designed to correct the imagery in terms of hue and saturation. Thus, the desire in making contrast improvements is to increase only the brightness (intensity) of the image while not affecting the color-hue or color-saturation.
[Para 55] Referring to Fig. 6, initially, a determination is made, as noted at block 260, of the upper and lower percentile limits of the image histogram for each band. In one embodiment, the lower percentile limit is set to 1% of the cumulative distribution of the image histogram, and the upper percentile limit is set to 99% of the cumulative distribution of the image histogram. Any percentile cutoff can be used for this determination, such as 2%, which corresponds to the 98% point in the cumulative histogram. However, a 2% cutoff may cut off too much of the upper end of the DN histogram, resulting in significant loss of detail over bright objects. In such a situation, a 0.5 percentile cutoff may be used, which corresponds to the 0.5% and 99.5% points in the image histogram. As will be understood, any value for the percentile cutoff may be used and, in one embodiment, the contrast enhancement algorithm is implemented with these parameters as configurable values that may be configured by a user.

[Para 56] At block 264, the DNs corresponding to the upper and lower percentile limits for each band are identified. The cutoff values of the upper DN (hi_dn) and lower DN (lo_dn) are selected at block 268. The selection of hi_dn is made by identifying the highest DN corresponding to the high percentile cutoff for each band in the image; the largest of these values is selected at block 268. Similarly, the selection of lo_dn is made by identifying the lowest DN corresponding to the lower percentile cutoff for all bands in the image. The values of hi_dn and lo_dn are selected, in this embodiment, according to the following equations:

hi_dn = MAX(hi_dn(band))
lo_dn = MIN(lo_dn(band))
Once the values of hi_dn and lo_dn are identified, they are then converted to adjusted DNs using the following equations:

hi_dn_ADJ = hi_dn × Kappa(band) / Kappa_AVE
lo_dn_ADJ = lo_dn × Kappa(band) / Kappa_AVE

The value of the band used in the above equations is the band that contains hi_dn and the band that contains lo_dn, respectively.
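A sketch of this cutoff selection, assuming the per-band DN arrays and kappa factors are held in dictionaries keyed by band name (np.percentile stands in for the histogram-based computation described; the 1%/99% defaults follow the embodiment above):

    import numpy as np

    def stretch_cutoffs(bands_dn, kappa, lo_pct=1.0, hi_pct=99.0):
        """Select the global hi_dn/lo_dn cutoffs and convert them to adjusted DNs."""
        kappa_ave = sum(kappa.values()) / len(kappa)

        # Lowest low-percentile DN over all bands, and the band that supplied it.
        lo_band, lo_dn = min(
            ((b, np.percentile(dn[dn > 0], lo_pct)) for b, dn in bands_dn.items()),
            key=lambda item: item[1])
        # Highest high-percentile DN over all bands, and the band that supplied it.
        hi_band, hi_dn = max(
            ((b, np.percentile(dn[dn > 0], hi_pct)) for b, dn in bands_dn.items()),
            key=lambda item: item[1])

        # Convert with the kappa of the band that contains each cutoff.
        lo_dn_adj = lo_dn * kappa[lo_band] / kappa_ave
        hi_dn_adj = hi_dn * kappa[hi_band] / kappa_ave
        return lo_dn_adj, hi_dn_adj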
[Para 57] At block 272, the average target brightness value for the image is identified. The value of the average target brightness has a significant impact on the visual appearance of the image, and is often open to subjective interpretation. Thus, in an embodiment, the value of the average target brightness is configurable by a user. In other embodiments, the value of the average target brightness is preset to a value that is likely to produce a visually acceptable image. At block 276, the median of the modified DN values for each band in the image is computed. This median is computed by determining the median value of the cumulative image histogram for each band. At block 280, the average of the medians from each band is computed. In one embodiment, the average of the medians from each band is computed according to the following equation:
avg_median_dn = (1 / nbands) × Σ_band median(band)

where median(band) indicates the median value of the original image for the given band. Following the computation of the average of the medians for each band, a gamma factor (G) is computed, as noted at block 284. The value of G is determined by using the average target brightness value for the image and computing the value of G needed to bring the overall median brightness of the image up to the target brightness value. In one embodiment, G is computed based on the desired target brightness and the average brightness according to the following equation:

G = ln(avg_median_dn / max_dn) / ln(trgt_dn / max_dn)

where avg_median_dn is the average of the median modified DN values for each band in the image; max_dn is the maximum DN value of the image output (i.e., 255 for 8-bit products or 2047 for 11-bit products); and trgt_dn is the DN value of the target brightness in the image.
[Para 58] In the embodiment of Fig. 6, the value of gamma is limited to ensure that an exaggerated stretch is not applied to the imagery. At block 288, it is determined if the value of gamma is outside of these limits. If the value of gamma is outside the limits, the value for gamma is set at the appropriate high or low limit, as noted at block 292. In one embodiment, gamma values are set to have a limit of no greater than 2.0 and no less than 0.5. In other embodiments, these limits are implemented as run-time configurable values. Following the determination of the gamma value, the stretch is applied, as noted at block 296. The stretch equation, in an embodiment, is:
DN_stretch(band, pixel) = [(DN_PRC(band, pixel) - lo_dn_ADJ) / (hi_dn_ADJ - lo_dn_ADJ)]^(1/G) × hi_value

where DN_PRC is the TOA path-radiance corrected DN, lo_dn_ADJ is the lowest cutoff from all the bands, hi_dn_ADJ is the highest cutoff from all the bands, G is the gamma factor, and hi_value is the maximum DN desired in the output image. The stretch equation is applied to each pixel in each band to produce a stretched DN value for the pixel. This technique increases the intensity of each channel without materially changing the hue and saturation of the imagery. The resulting image product maintains the correct spectral balance of the imagery while providing an acceptable level of contrast enhancement. The result is a visually appealing image that requires no manipulation for proper display. The results of the algorithm generally do not depend on scene content, and interpretation of the product requires little or no experimentation or "guessing" by the remote sensing analyst.
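Putting the gamma computation and the stretch together (a sketch: the 1/G exponent convention matches the reconstruction of the equations above but is an assumption, and the 0.5 and 2.0 gamma limits are the example values from Para 58):

    import numpy as np

    def apply_stretch(dn_prc, lo_dn_adj, hi_dn_adj, avg_median_dn,
                      trgt_dn, max_dn=2047, hi_value=2047):
        """Color-preserving contrast stretch for one band of DN_PRC values."""
        # Gamma chosen so the scaled median brightness maps to the target.
        g = np.log(avg_median_dn / max_dn) / np.log(trgt_dn / max_dn)
        g = float(np.clip(g, 0.5, 2.0))  # limit gamma; avoids exaggerated stretches

        scaled = (dn_prc - lo_dn_adj) / (hi_dn_adj - lo_dn_adj)
        scaled = np.clip(scaled, 0.0, 1.0)         # guard the range before the power
        stretched = scaled ** (1.0 / g) * hi_value

        # DN 0 is reserved for blackfill; out-of-range results pin to 1 or
        # hi_value, as described in Para 59.
        return np.clip(np.rint(stretched), 1, hi_value).astype(np.int32)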
[Para 59] In one embodiment, as mentioned above, a lookup table is used to determine a path radiance corrected DN for each pixel in a band. In this embodiment, the original pixel values (for each band) are read in from the original image. These DNs are used as an index into the DN_PRC lookup tables to retrieve the value of DN_PRC in the stretch equation. The stretch is then applied and the pixel value is written out to the image. In one embodiment, the value of the DN for each pixel is evaluated to verify that the DN value is not outside of certain limits following the application of the stretch equation. For example, if DN_PRC - lo_dn_ADJ is less than 0, then DN_stretch is set to 1. In this embodiment, the DN value of zero is reserved for blackfill. Similarly, if DN_PRC is greater than hi_value, then DN_stretch is set to hi_value.
[Para 60] In another embodiment of the present invention, spectral compensation may be generated for known imaging sensor non-linearities. As is known, each band of imagery has an associated imaging sensor that receives radiance from the particular band of interest. In some cases, the sensor may have a non-linear response, and provide DN values that reach a maximum limit when the amount of radiance received at the sensor is above a threshold limit. Referring to Fig. 7, the output of three spectral bands is illustrated as a function of radiance received at the sensor associated with the respective spectral band. In this example, a first band has a linear response throughout the full range of DNs, as indicated by line 310. Similarly, a second band also has a linear response throughout the full range of DNs, as indicated by line 314. However, a third band has a non-linearity, as indicated by line 318. In this example, the third band has a linear response up to a certain amount of radiance received, and the output of the sensor is non-linear beyond this point. This point is referred to herein as a roll-off point, and is indicated at point 322 in Fig. 7. In the example of Fig. 7, the third sensor reaches a maximum output at point 326, after which the output of the sensor is the same DN regardless of any additional radiance received at the sensor. The output of the nonlinear sensor is corrected in one embodiment using a technique referred to as "spectral feathering."
[Para 61] In an embodiment, spectral feathering is performed according to the flow chart illustration of Fig. 8. Initially, at block 330, it is determined whether there is a known non-linearity of a sensor band. If there is no known non-linearity, no compensation is required, and the operations are complete, as indicated at block 334. If there is a known non-linearity, it is determined whether the pixel within the band has a DN value that is greater than the DN value associated with the roll-off point, as noted at block 338. If the pixel value is less than the roll-off value, no compensation is required and the operations are done, as noted at block 334. If the pixel DN value is greater than the roll-off value, the DN value of the pixel is compensated based on the pixel DN values of the other bands, as indicated at block 342. In one embodiment, the compensated pixel value is computed based on an average of the remaining bands' DN values. Following the compensation, the compensated pixel DN value is output for the non-linear band, as indicated at block 346.
[Para 62] In one embodiment, a spectral band (denoted by "band") has a known non-linearity. In this embodiment, for a particular pixel (denoted by "pixel"), it is determined if the DN value of the pixel in the band is greater than the roll-off value for the band (denoted by "band_rolloff"). If so, the pixel requires spectral compensation. In this case, first, the amount of compensation required to adjust the pixel is determined as follows:

fraction = (band_lmt - input_dn(pixel, band)) / (band_lmt - band_rolloff)

where band_lmt is the DN value beyond which the sensor becomes non-linear, band_rolloff is the DN corresponding to the beginning of the non-linear behavior, and input_dn(pixel, band) indicates the DN value associated with the given pixel in the band that requires compensation. The values of fraction are bounded as follows to provide a numerically consistent result:

if (fraction < 0) then fraction = 1
if (fraction > 1) then fraction = 1

Physically, the value of fraction represents how much compensation is to be applied to the pixel. A value of zero indicates no compensation, while a value of one indicates full compensation. In order to compensate the pixel for any known non-linearity of detector response, pixel values from adjacent spectral bands are used. The average output value of the adjacent spectral bands is computed according to the following equation:

ave_dn_out = (output_dn(pixel, band + 1) + output_dn(pixel, band - 1)) / 2.0

where output_dn(pixel, band + 1) indicates the output DN value of the spectrally adjacent band of longer wavelength than the band to be compensated, and output_dn(pixel, band - 1) indicates the output DN value of the spectrally adjacent band of shorter wavelength than the band to be compensated. In the situation where the band to be compensated does not lie spectrally between any sensor bands, two bands of either longer or shorter wavelength than the band to be compensated may be used. The final compensated pixel value is determined as follows:

comp_value(pixel, band) = output_value(pixel, band) × (1 - fraction) + ave_dn_out × fraction

where output_value(pixel, band) is the uncompensated DN corresponding to the pixel in the band that requires compensation, and fraction and ave_dn_out are computed above. In this manner the pixel value is spectrally compensated in a smoothly varying manner using spectral information from adjacent spectral bands. This technique is referred to as "spectral feathering".
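A per-pixel sketch of spectral feathering, implementing the fraction, bounding, and blending equations exactly as printed above (the scalar interface and argument names are illustrative assumptions):

    def feather_pixel(input_dn, out_dn, out_dn_shorter, out_dn_longer,
                      band_rolloff, band_lmt):
        """Compensate one pixel of a band with a known non-linear response."""
        if input_dn <= band_rolloff:
            return out_dn  # still in the linear region: no compensation needed

        # Amount of compensation, per the fraction equation above.
        fraction = (band_lmt - input_dn) / (band_lmt - band_rolloff)
        if fraction < 0 or fraction > 1:
            fraction = 1.0  # bounded as stated in the text

        # Average of the spectrally adjacent bands' output DNs.
        ave_dn_out = (out_dn_longer + out_dn_shorter) / 2.0

        # Blend smoothly between the uncompensated DN and the neighbor average.
        return out_dn * (1.0 - fraction) + ave_dn_out * fraction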
[Para 63] While the invention has been particularly shown and described with reference to a preferred embodiment thereof, it will be understood by those skilled in the art that various other changes in the form and details may be made without departing from the spirit and scope of the invention.

Claims

What is claimed is:
1. A method for producing an enhanced digital image, comprising: receiving a plurality of pixels of imaging data from an imaging system, each of said pixels of imaging data comprising a digital number; processing said digital number of each of said pixels to determine a distribution of digital numbers for said plurality of pixels; and determining a spectral radiance of each of said plurality of pixels based on said distribution.
2. The method for producing an enhanced digital image, as claimed in claim 1, wherein said receiving step comprises: receiving a plurality of bands of digital numbers of imaging data from a plurality of imaging sensors.
3. The method for producing an enhanced digital image, as claimed in claim 2, wherein said plurality of imaging sensors comprises: a blue band imaging sensor; a green band imaging sensor; a red band imaging sensor; and a near-infrared band imaging sensor.
4. The method for producing an enhanced digital image, as claimed in claim 1, wherein said receiving step comprises: receiving a plurality of top-of-atmosphere digital numbers of imaging data from a plurality of imaging sensors of said imaging system; and adjusting said top-of-atmosphere digital numbers for a first imaging sensor of said imaging sensors based on known sensor characteristics of said first imaging sensor.
5. The method for producing an enhanced digital image, as claimed in claim 1, wherein said processing said digital number step comprises: determining a lower distribution cutoff of said distribution of digital numbers.
6. The method for producing an enhanced digital image, as claimed in claim 5, wherein said lower distribution cutoff is set at 0.1% of the cumulative distribution.
7. The method for producing an enhanced digital image, as claimed in claim 5, wherein said lower distribution cutoff is based on a point in the cumulative distribution at which said digital numbers are indicative of path spectral radiance.
8. The method for producing an enhanced digital image, as claimed in claim 5, wherein said determining a spectral radiance step comprises: subtracting a digital number associated with said lower distribution cutoff from each of said plurality of digital numbers.
9. The method for producing an enhanced digital image, as claimed in claim 5, further comprising: determining the value of a digital number associated with said lower distribution cutoff; and subtracting said digital number associated with said lower distribution cutoff from each of said plurality of digital numbers when said digital number associated with said lower distribution cutoff is less than a predetermined maximum value.
10. The method for producing an enhanced digital image, as claimed in claim 9, further comprising: subtracting said predetermined maximum value from each of said plurality of digital numbers when said digital number associated with said lower distribution cutoff is greater than said predetermined maximum value.
11. The method for producing an enhanced digital image, as claimed in claim 1, further comprising: determining a median value of said distribution of digital numbers; determining a target brightness value for said enhanced digital image; and adjusting at least a subset of said digital numbers to generate a second distribution of digital numbers, said second distribution having a median value that is substantially equal to said target brightness value.
12. The method for producing an enhanced digital image, as claimed in claim 11, wherein said receiving a plurality of pixels step comprises: receiving a plurality of bands of pixels of imaging data from a plurality of imaging sensors.
13. The method for producing an enhanced digital image, as claimed in claim 12, wherein said determining a median value of said distribution of imaging data comprises: processing digital numbers of each of said bands of imaging data to determine a distribution of digital numbers for each of said plurality of bands of imaging data; determining a median value of each of said distributions of digital numbers; and computing an average of said median values.
14. The method for producing an enhanced digital image, as claimed in claim 1, wherein said determining a spectral radiance step is performed independently of scene content of the digital image.
15. The method for producing an enhanced digital image, as claimed in claim 1, wherein said determining a spectral radiance step is performed on a physical basis of said imaging data.
16. A satellite image comprising a plurality of pixels each having an output digital number determined by: determining a magnitude of spectral radiance of each of a plurality of input digital numbers; processing said magnitudes of spectral radiance to determine a distribution of input digital numbers; and calculating the output digital number of each of said plurality of pixels based on said distribution.
17. The satellite image, as claimed in claim 16, wherein said determining a magnitude step comprises: receiving a plurality of bands of input digital numbers from a plurality of imaging sensors, each input digital number representing the spectral radiance received by the associated imaging sensor; and determining a magnitude of spectral radiance for each of said input digital numbers for each of said bands.
18. The satellite image, as claimed in claim 17, further comprising: compensating said magnitude of spectral radiance for at least a portion of said digital numbers for any given band of said bands of pixels based on a known non-linear response of the imaging sensor associated with the respective band of pixels.
19. The satellite image, as claimed in claim 18, wherein said portion of said digital numbers are digital numbers which are greater than a digital number associated with a sensor roll-off for said imaging sensor.
20. The satellite image, as claimed in claim 18, wherein said compensating step comprises: firstly determining that a digital number for a pixel of said band is greater than a predetermined digital number; secondly determining digital numbers associated with said pixel from the remaining bands; and calculating a compensated digital number based on digital numbers from said secondly determining step.
21. The satellite image, as claimed in claim 16, wherein said determining a magnitude step comprises: receiving an input top-of-atmosphere digital number associated with each of said pixels of imaging data; and adjusting said input top-of-atmosphere digital numbers based on known imaging sensor characteristics.
22. The satellite image, as claimed in claim 16, wherein said calculating the value step is performed independently of scene content of the digital image.
23. The satellite image, as claimed in claim 16, wherein said calculating the value step is performed on a physical basis of said imaging data.
24. A method for transporting a color enhanced image towards an interested entity, comprising: conveying, over a portion of a computer network, an image that includes a plurality of pixels each having an output digital number that has been determined based on a distribution of spectral radiance values associated with the plurality of input digital numbers.
25. The method, as claimed in claim 24, wherein said output digital numbers have further been based on a median brightness value of the distribution of the input digital numbers.
26. The method, as claimed in claim 25, wherein said image includes a plurality of bands of pixels, each of said bands having a plurality of pixels, each pixel having an output digital number that has been determined based on a distribution of input digital numbers associated with the band of pixels.
27. The method, as claimed in claim 26, wherein a band includes a plurality of digital numbers that have been compensated based on a known non-linearity of an imaging sensor associated with said band.
28. The method, as claimed in claim 24, wherein said output digital numbers are determined by subtracting a path spectral radiance from the value of each of said input digital numbers.
29. The method, as claimed in claim 28, wherein said path spectral radiance is determined based on said distribution of input digital numbers.
30. The method, as claimed in claim 24, wherein said image comprises a plurality of pixels each having an output digital number that has been determined based on a distribution of spectral radiance values associated with the plurality of input digital numbers and that have been contrast enhanced by applying a stretch to the value of each of said plurality of input digital numbers.
31. The method, as claimed in claim 30, wherein said stretch is determined based on a median value of said distribution of input digital numbers, a target brightness value for the digital image, and upper and lower percentile limits for said distribution of input digital numbers.
32. A method for producing an enhanced digital image, comprising: receiving a plurality of digital numbers of imaging data from an imaging system; determining a magnitude of spectral radiance of each of said plurality of digital numbers; processing said magnitudes of spectral radiance to determine a distribution of magnitudes of spectral radiance; firstly adjusting each of said plurality of digital numbers based on said distribution to produce an adjusted value spectral radiance digital number; processing the adjusted value spectral radiance digital number for each of said plurality of digital numbers to determine a first adjusted distribution for said plurality of spectral radiance digital numbers; assessing said first adjusted distribution to determine a range of adjusted values and a median value of said range of adjusted values; and secondly adjusting the value of at least a subset of said plurality of spectral radiance digital numbers to create a second adjusted distribution wherein the median of said second adjusted distribution corresponds with a target median and the range of said second adjusted distribution corresponds with a target range.
33. The method for producing an enhanced digital image, as claimed in claim 32, wherein said receiving step comprises receiving a plurality of bands of digital numbers from a plurality of imaging sensors within said imaging system, each digital number representing the spectral radiance received by the associated imaging sensor; and said determining step comprises determining a magnitude of spectral radiance for each of said digital numbers for each of said bands of pixels.
34. The method for producing an enhanced digital image, as claimed in claim 33, further comprising: compensating said magnitude of spectral radiance for at least a portion of said digital numbers for a first band of said bands of pixels based on a known non-linear response of an imaging sensor of said imaging sensors associated with said first band.
35. The method for producing an enhanced digital image, as claimed in claim 34, wherein said portion of said digital numbers are digital numbers which are greater than a digital number associated with a sensor roll-off for said first imaging sensor.
36. The method for producing an enhanced digital image, as claimed in claim 34, wherein said compensating step comprises: firstly determining that a first digital number for a first pixel of said first band is greater than a predetermined digital number; secondly determining digital numbers associated with said first pixel from the remaining bands; and calculating a compensated first digital number based on digital numbers from said secondly determining step.
37. The method for producing an enhanced digital image, as claimed in claim 32, wherein said firstly adjusting step is performed independently of scene content of the digital image.
38. The method for producing an enhanced digital image, as claimed in claim 32, wherein said firstly adjusting step is performed on a physical basis of said imaging data.
39. The method for producing an enhanced digital image, as claimed in claim 32, wherein said firstly adjusting step is based on a point in the cumulative distribution of magnitudes of spectral radiance at which said digital numbers are indicative of path spectral radiance.
PCT/US2005/044917 2004-12-13 2005-12-13 Method and apparatus for enhancing a digital image WO2006065741A2 (en)

Priority Applications (4)

Application Number Priority Date Filing Date Title
JP2007546809A JP2008527766A (en) 2004-12-13 2005-12-13 Method and apparatus for improving digital images
CA002591351A CA2591351A1 (en) 2004-12-13 2005-12-13 Method and apparatus for enhancing a digital image
EP05853757A EP1828960A2 (en) 2004-12-13 2005-12-13 Method and apparatus for enhancing a digital image
IL183894A IL183894A0 (en) 2004-12-13 2007-06-13 Method and apparatus for enhancing a digital image

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US10/905,042 US20060126959A1 (en) 2004-12-13 2004-12-13 Method and apparatus for enhancing a digital image
US10/905,042 2004-12-13

Publications (2)

Publication Number Publication Date
WO2006065741A2 true WO2006065741A2 (en) 2006-06-22
WO2006065741A3 WO2006065741A3 (en) 2006-12-07

Family

ID=36583941

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2005/044917 WO2006065741A2 (en) 2004-12-13 2005-12-13 Method and apparatus for enhancing a digital image

Country Status (7)

Country Link
US (2) US20060126959A1 (en)
EP (1) EP1828960A2 (en)
JP (1) JP2008527766A (en)
CN (1) CN101120363A (en)
CA (1) CA2591351A1 (en)
IL (1) IL183894A0 (en)
WO (1) WO2006065741A2 (en)

Families Citing this family (36)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
BRPI0512543A (en) * 2004-06-25 2008-03-25 Digitalglobe Inc Method and equipment for determining a location associated with an image
US20060126959A1 (en) * 2004-12-13 2006-06-15 Digitalglobe, Inc. Method and apparatus for enhancing a digital image
US7660430B2 (en) * 2005-05-23 2010-02-09 Digitalglobe, Inc. Method and apparatus for determination of water pervious surfaces
US7917292B1 (en) 2006-10-17 2011-03-29 Jpmorgan Chase Bank, N.A. Systems and methods for flood risk assessment
US8655595B1 (en) 2006-10-17 2014-02-18 Corelogic Solutions, Llc Systems and methods for quantifying flood risk
US8542884B1 (en) 2006-11-17 2013-09-24 Corelogic Solutions, Llc Systems and methods for flood area change detection
US8077927B1 (en) 2006-11-17 2011-12-13 Corelogic Real Estate Solutions, Llc Updating a database with determined change identifiers
US8649567B1 (en) 2006-11-17 2014-02-11 Corelogic Solutions, Llc Displaying a flood change map with change designators
US8538918B1 (en) 2006-12-05 2013-09-17 Corelogic Solutions, Llc Systems and methods for tracking parcel data acquisition
SE531942C2 (en) * 2007-02-01 2009-09-15 Flir Systems Ab Method for image processing of infrared images including contrast enhancing filtering
US8111935B2 (en) * 2007-10-03 2012-02-07 Himax Technologies Limited Image processing methods and image processing apparatus utilizing the same
JP5137893B2 * 2009-04-14 2013-02-06 Mitsubishi Electric Corporation Image processing device
US8111943B2 (en) 2009-04-15 2012-02-07 The United States Of America As Represented By The Administrator Of The National Aeronautics And Space Administration Smart image enhancement process
US9576349B2 (en) * 2010-12-20 2017-02-21 Microsoft Technology Licensing, Llc Techniques for atmospheric and solar correction of aerial images
EP2705463A1 (en) * 2011-05-06 2014-03-12 Ventana Medical Systems, Inc. Method and system for spectral unmixing of tissue images
US8923567B2 (en) * 2011-12-19 2014-12-30 General Electric Company Apparatus and method for predicting solar irradiance variation
JP6233869B2 * 2012-06-07 2017-11-22 NEC Corporation Image processing apparatus, image processing apparatus control method, and program
US9396528B2 (en) * 2013-03-15 2016-07-19 Digitalglobe, Inc. Atmospheric compensation in satellite imagery
JP6282095B2 2013-11-27 2018-02-21 Canon Inc. Image processing apparatus, image processing method, and program
US9378417B2 (en) * 2014-05-30 2016-06-28 Rolta India Ltd Contrast for RGB images in a GIS application
JP6772838B2 * 2014-12-19 2020-10-21 NEC Corporation Image information processing device, image information processing system, image information processing method, and image information processing program
KR20170117603A * 2015-04-23 2017-10-23 The Timken Company Method of manufacturing bearing components
US10489894B2 (en) * 2015-05-28 2019-11-26 Nec Corporation Image processing device, image processing method, and program recording medium
CN105092055B * 2015-08-21 2018-01-16 National Satellite Meteorological Center Calibration method for the solar reflective bands of a meteorological satellite based on cold cloud targets
FR3042882B1 * 2015-10-22 2018-09-21 Thales SYSTEM FOR PROVIDING AN OPERATOR WITH INCREASED VISIBILITY AND ASSOCIATED METHOD
CN106023101B * 2016-05-16 2018-12-18 China Centre For Resources Satellite Data And Application Remote sensing image processing method based on visual fidelity
CN109478309A (en) 2016-07-22 2019-03-15 日本电气株式会社 Image processing equipment, image processing method and recording medium
JP6818463B2 * 2016-08-08 2021-01-20 Canon Inc. Image processing apparatus, image processing method, and program
CN107036629B * 2017-04-20 2020-07-24 Wuhan University On-orbit relative radiometric calibration method and system for a video satellite
CN107507151B * 2017-09-02 2020-09-15 Capital Normal University True-color restoration method and system for multispectral remote sensing images
EP3540774B1 (en) * 2018-03-16 2020-09-30 Teledyne Dalsa B.V. Image sensor and imaging system comprising the same
FR3085207B1 (en) * 2018-08-27 2020-07-24 Centre Nat Etd Spatiales METHOD AND DEVICE FOR MEASURING ATMOSPHERIC PARAMETERS TO ESTIMATE AIR QUALITY AND CLIMATE VARIABLES
CN109710784B * 2018-12-11 2020-08-25 China Siwei Surveying And Mapping Technology Co., Ltd. Rapid spatial visualization method for remote sensing image data based on LERC
US11386649B2 (en) * 2019-11-15 2022-07-12 Maxar Intelligence Inc. Automated concrete/asphalt detection based on sensor time delay
CN112918956A * 2021-02-20 2021-06-08 Lu Weifeng Garbage classification system based on image recognition technology
CN113643323B * 2021-08-20 2023-10-03 China University Of Mining And Technology Target detection system for dust and fog environments in urban underground utility tunnels

Family Cites Families (19)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPS59133667A * 1983-01-20 1984-08-01 Hitachi Ltd Picture correction processing system
US4688092A (en) * 1986-05-06 1987-08-18 Ford Aerospace & Communications Corporation Satellite camera image navigation
US6023291A (en) * 1996-10-16 2000-02-08 Space Systems/Loral, Inc. Satellite camera attitude determination and image navigation by means of earth edge and landmark measurement
US5884226A (en) * 1996-10-25 1999-03-16 The United States Of America As Represented By The Secretary Of The Air Force System and method for modelling moderate resolution atmospheric propagation
JPH11184375A (en) 1997-12-25 1999-07-09 Toyota Motor Corp Apparatus and method for digital map data processing
US6694064B1 (en) * 1999-11-19 2004-02-17 Positive Systems, Inc. Digital aerial image mosaic method and apparatus
US6504502B1 (en) * 2000-01-07 2003-01-07 Hughes Electronics Corporation Method and apparatus for spacecraft antenna beam pointing correction
US20020041328A1 (en) * 2000-03-29 2002-04-11 Astrovision International, Inc. Direct broadcast imaging satellite system apparatus and method for providing real-time, continuous monitoring of earth from geostationary earth orbit and related services
US7019777B2 (en) * 2000-04-21 2006-03-28 Flight Landata, Inc. Multispectral imaging system with spatial resolution enhancement
US6584405B1 (en) * 2000-05-05 2003-06-24 Atmospheric And Environmental Research, Inc. Radiance modeling
WO2002010718A2 (en) 2000-07-27 2002-02-07 Bae Systems Information And Electronic Systems Integration, Inc. Spectral drift and correction technique for hyperspectral imaging systems
US6587575B1 (en) * 2001-02-09 2003-07-01 The United States Of America As Represented By The Secretary Of Agriculture Method and system for contaminant detection during food processing
US6735348B2 (en) * 2001-05-01 2004-05-11 Space Imaging, Llc Apparatuses and methods for mapping image coordinates to ground coordinates
US6834125B2 (en) * 2001-06-25 2004-12-21 Science And Technology Corp. Method of improving a digital image as a function of its dynamic range
US6810153B2 (en) * 2002-03-20 2004-10-26 Hitachi Software Global Technology, Ltd. Method for orthocorrecting satellite-acquired image
US6921898B1 (en) * 2002-06-20 2005-07-26 The United States Of America As Represented By The Secretary Of The Navy Bi-directional reflectance distribution function determination by large scale field measurement
KR100519054B1 * 2002-12-18 2005-10-06 Korea Advanced Institute Of Science And Technology Method of precision correction for geometrically distorted satellite images
US20060041375A1 (en) 2004-08-19 2006-02-23 Geographic Data Technology, Inc. Automated georeferencing of digitized map images
US20060126959A1 (en) 2004-12-13 2006-06-15 Digitalglobe, Inc. Method and apparatus for enhancing a digital image

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6819123B2 (en) * 2001-09-19 2004-11-16 Texas Instruments Incorporated Extraction of interconnect parasitics
US20030152292A1 (en) * 2001-12-17 2003-08-14 Scott Walter S. System, method, and apparatus for satellite remote sensing
US6909815B2 (en) * 2003-01-31 2005-06-21 Spectral Sciences, Inc. Method for performing automated in-scene based atmospheric compensation for multi-and hyperspectral imaging sensors in the solar reflective spectral region
US20040264796A1 (en) * 2003-06-30 2004-12-30 Turner Robert W System and method for generating pan sharpened multispectral imagery

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7936949B2 (en) 2006-12-01 2011-05-03 Harris Corporation Panchromatic modulation of multispectral imagery
US8094960B2 (en) 2008-07-07 2012-01-10 Harris Corporation Spectral calibration of image pairs using atmospheric characterization
WO2010005926A2 (en) * 2008-07-08 2010-01-14 Harris Corporation Processing of remotely acquired imagery
WO2010005926A3 (en) * 2008-07-08 2010-03-18 Harris Corporation Processing of remotely acquired imagery
US8073279B2 (en) 2008-07-08 2011-12-06 Harris Corporation Automated atmospheric characterization of remotely sensed multi-spectral imagery
US8078009B2 (en) 2008-07-08 2011-12-13 Harris Corporation Optical flow registration of panchromatic/multi-spectral image pairs
US8478067B2 (en) 2009-01-27 2013-07-02 Harris Corporation Processing of remotely acquired imaging data including moving objects
US8260086B2 (en) 2009-03-06 2012-09-04 Harris Corporation System and method for fusion of image pairs utilizing atmospheric and solar illumination modeling
RU2621877C1 * 2016-03-18 2017-06-07 Joint Stock Company "Russian Corporation of Rocket-Space Instrument Engineering and Information Systems" (JSC "Russian Space Systems") Method for radiometric correction of images from a multi-element infrared photodetector

Also Published As

Publication number Publication date
US20060126959A1 (en) 2006-06-15
IL183894A0 (en) 2007-10-31
JP2008527766A (en) 2008-07-24
WO2006065741A3 (en) 2006-12-07
US7715651B2 (en) 2010-05-11
US20090219407A1 (en) 2009-09-03
CN101120363A (en) 2008-02-06
CA2591351A1 (en) 2006-06-22
EP1828960A2 (en) 2007-09-05

Similar Documents

Publication Publication Date Title
US7715651B2 (en) Method and apparatus for enhancing a digital image
Gordon et al. Absolute calibration and characterization of the multiband imaging photometer for Spitzer. II. 70 μm imaging
Goslee Analyzing remote sensing data in R: the landsat package
US8094960B2 (en) Spectral calibration of image pairs using atmospheric characterization
Marta Planet imagery product specifications
KR101702187B1 (en) Device and method for calibration of high resolution electro optical satellite
Danaher et al. Bi-directional reflectance distribution function approaches to radiometric calibration of Landsat ETM+ imagery
Meister et al. Corrections to the MODIS Aqua calibration derived from MODIS Aqua ocean color products
Dewitte et al. The geostationary earth radiation budget edition 1 data processing algorithms
Ren et al. A method and results of color calibration for the Chang'e-3 terrain camera and panoramic camera
US20190145819A1 (en) Information processing device, information processing method, and program
Mahanti et al. Inflight calibration of the lunar reconnaissance orbiter camera wide angle camera
Mishra et al. Review of topographic analysis methods for the western Himalaya using AWiFS and MODIS satellite imagery
CN113218874A (en) Method and system for obtaining surface target object reflectivity based on remote sensing image
CN110689505B (en) Scene-based satellite-borne remote sensing instrument self-adaptive correction method and system
Wu et al. Post-launch calibration of GOES Imager visible channel using MODIS data
Schläpfer et al. Evaluation of brefcor BRDF effects correction for HYSPEX, CASI, and APEX imaging spectroscopy data
Pinto et al. Landsats 1–5 Multispectral Scanner System Sensors Radiometric Calibration Update
Schmidt et al. A method for operational calibration of AVHRR reflective time series data
Molada-Teba et al. Towards colour-accurate documentation of anonymous expressions
Statella et al. Cross-calibration of the Rosetta Navigation Camera based on images of the 67P comet nucleus
Honkavaara et al. Radiometric performance of digital image data collection-a comparison of ADS40/DMC/UltraCam and EmergeDSS
Liew et al. "Cloud-free" multi-scene mosaics of SPOT images
Hobbs et al. Marsobot: design and performance characterization of a low-cost, ground-based multispectral camera on an open source rover
Minomura et al. Atmospheric correction of satellite data using multi-wavelength lidar data with MODTRAN3 code

Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A2

Designated state(s): AE AG AL AM AT AU AZ BA BB BG BR BW BY BZ CA CH CN CO CR CU CZ DE DK DM DZ EC EE EG ES FI GB GD GE GH GM HR HU ID IL IN IS JP KE KG KM KN KP KR KZ LC LK LR LS LT LU LV LY MA MD MG MK MN MW MX MZ NA NG NI NO NZ OM PG PH PL PT RO RU SC SD SE SG SK SL SM SY TJ TM TN TR TT TZ UA UG US UZ VC VN YU ZA ZM ZW

AL Designated countries for regional patents

Kind code of ref document: A2

Designated state(s): GM KE LS MW MZ NA SD SL SZ TZ UG ZM ZW AM AZ BY KG KZ MD RU TJ TM AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HU IE IS IT LT LU LV MC NL PL PT RO SE SI SK TR BF BJ CF CG CI CM GA GN GQ GW ML MR NE SN TD TG

121 Ep: the epo has been informed by wipo that ep was designated in this application
WWE Wipo information: entry into national phase

Ref document number: 2591351

Country of ref document: CA

WWE Wipo information: entry into national phase

Ref document number: 2007546809

Country of ref document: JP

Ref document number: 183894

Country of ref document: IL

NENP Non-entry into the national phase

Ref country code: DE

WWE Wipo information: entry into national phase

Ref document number: 2005853757

Country of ref document: EP

WWE Wipo information: entry into national phase

Ref document number: 200580048095.3

Country of ref document: CN

WWP Wipo information: published in national office

Ref document number: 2005853757

Country of ref document: EP