|Publication number||USH741 H|
|Application number||US 06/643,904|
|Publication date||Feb 6, 1990|
|Filing date||Jul 12, 1984|
|Priority date||Jul 12, 1984|
|Inventors||Norman F. Powell, Giora A. Bendor|
|Original Assignee||The United States Of America As Represented By The Secretary Of The Air Force|
The invention described herein may be manufactured and used by or for the Government for governmental purposes without the payment of any royalty thereon.
The present invention relates generally to synthetic aperture radar imaging systems and more specifically to an autofocus image processing system for estimating image displacement relative to a reference frame.
A large class of problems involving image processing arises from the need for an accurate registration capability. This task has been alleviated to some degree by the prior art techniques given in the following patents:
U.S. Pat. No. 4,330,833 issued to Pratt et al on 18 May 1982;
U.S. Pat. No. 4,244,029 issued to Hogan et al on 6 Jan 1981;
U.S. Pat. No. 3,955,046 issued to Ingham et al on 4 May 1976;
U.S. Pat. No. 3,943,277 issued to Everly et al on 9 Mar 1976;
U.S. Pat. No. 4,162,775 issued to Voles on 31 Jul 1979;
U.S. Pat. No. 4,368,456 issued to Forse et al on 11 Jan 1983.
Pratt et al disclose a method and apparatus for digital image processing which operates on dots or "pixels" with an operator matrix having dimensions smaller than a conventional operator. It may be used in the restoration and improvement of photographs or other images taken by satellites or astronauts in outer space and then transmitted to earth. Hogan et al disclose a digital video correlator in which a reference image and a live image are digitized and compared against each other in a shifting network to determine the correlation between the two images. In Ingham et al phase shifts are detected and used to follow a target. Correlation type trackers are also disclosed in the Everly and Voles patents. Forse et al teach an image correlator in which a reference representation is updated by a control processor when the correlation of it with a current representation reaches a peak.
In view of the foregoing discussion it is apparent that in the realm of synthetic aperture radar imaging systems there exists a need for development in the area of accurate imaging, particularly if the amount of available data is sparse. The present invention is directed towards satisfying that need.
The present invention provides a correlation system with subpixel accuracy for sparsely sampled data using a correlator, an interpolator and a displacement estimator. The complex gradient correlator is used to correlate the complex gradient data and obtain a pronounced response from the data from the range cells of the radar. Then the interpolator interpolates the resulting cross-correlation. The displacement estimator receives the interpolation result to yield an accurate estimate of shifts of a fraction of a pixel for sparsely sampled data.
It is an object of the invention to provide a new and improved Feature Referenced Error Correction (FREC) autofocusing system, but its usefulness is by no means limited to that alone. Any displacement estimate for gray scale (detected) data relative to a reference frame can be carried out in the same manner as described in this disclosure. When the data is substantially oversampled (as it may be for a direct photograph of a scene) the increased complexity, due to the need to generate complex gradients, cannot be justified, and as such other more conventional correlation schemes may suffice. Thus for marginally sampled images the discrete complex correlation scheme offers a substantial subpixel accuracy improvement at the expense of somewhat more demanding processing.
It is a principal object of this invention to provide a new and improved correlation system with subpixel accuracy for sparsely sampled data.
These together with other objects, features and advantages of the invention will become more readily apparent from the following detailed description when taken in conjunction with the accompanying drawing wherein like elements are given like reference numerals throughout.
FIG. 1 is an illustration of the use of an autofocus in synthetic aperture radar processing;
FIG. 2 is a functional block diagram of one embodiment of the present invention;
FIG. 3 is an illustration of the Sobel Window;
FIG. 4 is a graph of the discrete complex correlator response in two dimensions;
FIG. 5 is an illustration of the discrete complex correlator response in three dimensions;
FIG. 6A is the response of the complex correlator to real subaperture data for zero shift correlation;
FIG. 6B is the response of the complex correlator to real subaperture data for non zero shift correlation; and
FIG. 7 is a set of charts depicting the interpolation process and its effects on a signal as processed by the three steps of interpolation.
This invention is directed to an image processing arrangement used to estimate image displacement relative to a reference frame, using a correlator, an interpolator, and a displacement estimator. The unique nature of the system is its ability to estimate shifts of a fraction of a pixel for sparsely sampled data. This is accomplished by extracting the complex gradients and correlating them via a discrete complex correlator. The resulting cross-correlation function is then interpolated to yield an accurate estimate of the shift. The invention is particularly adapted for use in the autofocus portion of a synthetic aperture radar imaging system.
Autofocus is a processing technique that extracts information from the partially processed data to yield an estimate of the error phase present in the data. This, in turn, is used to remove the phase errors from the data prior to its final processing.
A number of autofocus techniques have been developed that successfully estimate the error phase, but with various degrees of processing complexity. Techniques that utilize the fully processed image (in complex form), and that require multiple passes to achieve the final focus, have been successful but cumbersome and therefore not practical in a real-time environment. Other variations of the multipass techniques using iterative search have been successful yet suffer from the same non-real-time constraint.
A different approach is the Feature Reference Error Correction (FREC) technique, which is based on the requirement of a single pass and integration with existing Synthetic Aperture Radar (SAR) processing.
FIG. 1 is an illustration of the use of a single pass, feed forward autofocus in use for typical synthetic aperture radar processing. Radar data is input from range processing into the First Stage Fast Fourier Transform (FFT) 101. The output of FFT 101 is the subaperture data which needs to be correlated in a specific manner to yield the error phase which can be removed.
The correlation is done after the data enters the bulk memory 102 by the autofocus 103. The autofocus 103 uses the partially processed data residing in the bulk memory 102 to extract the error phase with the result that the phase errors are removed 104 from the data and sent to the second stage FFT 105 prior to final processing.
The autofocus 103 extracts the error phase by correlating the subaperture data in a specific manner to yield the shifts relative to a reference subframe. These shifts are then reconstructed in a way which regenerates the complete error phase across the full aperture. Starting with data (for a single subframe and a reference subframe) the process involves the following steps:
a. generation of the complex gradient of the detected data;
b. line per line complex correlation with an ensemble average over all range cells;
c. interpolating the data block to obtain the shift estimate; and
d. estimation of the displacement between the two subframes.
The steps followed by the autofocus in extracting the error phase may be accomplished completely by software on a high speed data processor by following the procedure described below, or the steps may be accomplished by the combination of software and the hardware equivalents depicted in FIG. 2.
FIG. 2 is a functional block diagram of one embodiment of the present invention. In FIG. 2, the functions of the autofocus 103 of FIG. 1 are accomplished by the following:
a data processor 200 performs the functions of complex gradient generation 201 and complex correlation 202 (using the process described below);
an interpolator 300 consists of a Fast Fourier Transform (FFT) 301, a Zero Filling Device 302, and an Inverse FFT 303; and
the Displacement Estimator 400 consists of a multiplier 401 and either a Least Square Fit 402 or an integrator. These functional hardware blocks perform the process described below which may also be accomplished entirely in software by a high speed data processor.
After the autofocus 103 receives the output of the first stage azimuth FFT 101, the complex data is linearly detected, noise clipped to yield the predominant signal (gray scale), and its average intensity is estimated. At this point each subframe is transformed (on a line per line basis) into a complex gradient subframe. This is carried out via a Sobel Window as given in FIG. 3.
The Sobel window of FIG. 3 is characterized by the function Fi as defined below in Table 1.
TABLE 1
______________________________________
x = A2 + A4 - (A0 + A6) + 2(A3 - A7)
y = A0 + A2 - (A6 + A4) + 2(A1 - A5)
Ai = (xi^2 + yi^2)^1/2
φi = tan^-1 (yi/xi)
Fi = Ai e^jφi = xi + jyi
______________________________________
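The Table 1 relations can be sketched in code. The following Python fragment is a rough illustration, not the patented implementation; it assumes the 3×3 neighborhood labels A0..A7 run clockwise from the top-left pixel (A0 top-left, A1 top, A2 top-right, A3 right, A4 bottom-right, A5 bottom, A6 bottom-left, A7 left), and all function names are illustrative:

```python
import numpy as np

def complex_gradient(img):
    """Sketch of the Table 1 Sobel-window complex gradient.

    Returns Fi = xi + j*yi for each interior pixel of img, where x and y
    are the horizontal and vertical Sobel responses of Table 1.
    """
    img = np.asarray(img, dtype=float)
    # Eight neighbors of every interior pixel, labeled clockwise
    A0 = img[:-2, :-2]; A1 = img[:-2, 1:-1]; A2 = img[:-2, 2:]
    A7 = img[1:-1, :-2];                     A3 = img[1:-1, 2:]
    A6 = img[2:, :-2];  A5 = img[2:, 1:-1];  A4 = img[2:, 2:]
    x = A2 + A4 - (A0 + A6) + 2 * (A3 - A7)   # horizontal gradient
    y = A0 + A2 - (A6 + A4) + 2 * (A1 - A5)   # vertical gradient
    return x + 1j * y                          # Fi = Ai e^{j phi} = xi + j yi
```

The magnitude Ai and phase φi of Table 1 are recovered as `np.abs` and `np.angle` of the returned array.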
The average power of each subframe (computed in conjunction with the gain distribution across the full aperture) is then used to select a threshold which in turn is used to reject bad correlation lines. Each line which passes the power threshold is correlated (for 5 to 7 shifts about zero shift) and a running (ensemble) average is used to collapse all of the correlation data over all of the range cells utilized. It is this function which must then be processed further in order to estimate accurately the associated displacement.
Once the intensity gradient is generated a complex set of numbers is obtained for each subframe. Conventional correlation techniques applied to the magnitude of the gradient cannot achieve performance superior to that of conventional intensity correlators. However, when a complex correlator is used to correlate the complex gradient data the response is more pronounced and devoid of ambiguities. The complex correlator is given as:

RAB(k) = Re{Σi Ai B*i+k} / [Σi |Ai|^2 Σi |Bi|^2]^1/2
The algorithm computes the cross correlation (more precisely a match index) which is the summation of the gradient vector alignments between scene pairs. Here A is the complex gradient for frame A while B is the complex gradient of the reference frame B. Note that RAB is normalized by the total power and furthermore that when A=B (a match) the resultant due to the match summation of the numerator yields a real positive number. Thus for a perfect match the correlation is positive, while the range for RAB is between -1 and +1. This algorithm has the characteristic of a coherent process in that images that are misaligned produce very low (essentially zero) correlation values. Only when alignment is close does the index have non-zero values. The correlation for correct alignment is the result of coherent summation for intensity gradient pairs which produces a spike-like response which peaks at the correct match position. FIG. 4 indicates intensity gradient vector correlation behavior for a discrete intensity pedestal which produces the array of intensity gradients. It is evident that the correlation function (i.e., autocorrelation) is spiky and very rapidly settles to zero.
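The normalized match index described above can be sketched as follows. This is a hypothetical reading of the correlator, not the patent's exact code; the circular shift and the function signature are assumptions:

```python
import numpy as np

def complex_correlate(A, B, max_shift=3):
    """Sketch of the normalized complex-gradient match index R_AB.

    A, B: 1-D complex gradient lines of equal length (B is the reference).
    Returns R_AB evaluated at integer shifts -max_shift..+max_shift;
    each value lies in [-1, 1], with +1 at a perfect match.
    """
    A = np.asarray(A); B = np.asarray(B)
    # Normalize by the total power of both lines (Cauchy-Schwarz bound)
    norm = np.sqrt(np.sum(np.abs(A) ** 2) * np.sum(np.abs(B) ** 2))
    out = []
    for k in range(-max_shift, max_shift + 1):
        Bs = np.roll(B, k)                      # shifted reference line
        out.append(np.real(np.sum(A * np.conj(Bs))) / norm)
    return np.array(out)
```

In the system this per-line result would be ensemble averaged over all range cells that pass the power threshold, as described above.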
When the technique is applied to two dimensional scenes the resulting function retains its essential characteristics of a unique peak and rapid decorrelation away from the peak. A three dimensional plot of the resulting response is given in FIG. 5 where the sharp peak and the bipolar value of the response function are evident.
It can therefore be concluded that the complex correlator possesses the essential ingredients needed for the FREC processing. The only remaining crucial issue is how to achieve subpixel accuracy for marginally sampled data. This is the subject of the following section.
The complex gradient correlation process described above yields (for realistic data) a very narrow and well defined correlation function (whose positive peak is the only region of interest). This is shown in FIG. 6 for a zero shift correlation (in this case the autocorrelation) and a non-zero shift correlation. It is evident that the correlation function is marginally sampled and as such yields a rather crude estimate of the displacement, which is desired to within 1/100 of a pixel.
The interpolation procedure is shown in FIG. 6 where 5 or 7 points of the correlation functions are considered to be the data. This interpolation procedure is also summarized by the block diagrams of FIG. 2 and consists of: performing an FFT 301, adding trailing zeros 302 (as depicted in FIG. 6), and then performing an Inverse FFT 303. In FIG. 6, the FFT of the new data block is carried out, and zeros are added at its midpoint resulting in a total of 128 points (for this example KOSF=8, i.e., 16×8=128).
By adding trailing zeros, the data block is extended to a convenient binary (power of two) length. The inverse FFT of this modified spectrum is then carried out to yield the interpolated data block. Selecting the maximum point of this interpolated data block, as well as the neighboring 5 points on either side of it, gives a good description of the peak region. At this point a second order LSE fit is carried out on the eleven new data points from which one can easily estimate the local peak (for y = ax^2 + bx + c, the peak is at xp = -b/2a). The true peak is then related to the original data by accounting for the oversampling factor as well as the necessary index changes. This interpolation procedure results in an accurate shift estimate.
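The zero-fill interpolation and quadratic peak fit can be sketched as below. This is an illustrative reconstruction under stated assumptions (the spectrum is split at its midpoint, and the 11-point fit uses the interpolated peak plus 5 neighbors per side); names such as `interpolate_peak` and `kosf` are not from the patent:

```python
import numpy as np

def interpolate_peak(corr, kosf=8):
    """Sketch of zero-fill FFT interpolation plus second-order LSE peak fit.

    corr: coarse real correlation samples (e.g. 16 points about the peak).
    kosf: oversampling factor (KOSF in the text; 16 x 8 = 128 points).
    Returns the estimated peak position in original-sample units.
    """
    corr = np.asarray(corr, dtype=float)
    n = len(corr)
    spec = np.fft.fft(corr)
    # Insert zeros at the spectrum midpoint so the time-domain block is
    # oversampled by kosf without altering its low-frequency content.
    pad = np.zeros(n * (kosf - 1), dtype=complex)
    spec = np.concatenate([spec[:n // 2], pad, spec[n // 2:]])
    dense = np.real(np.fft.ifft(spec)) * kosf     # interpolated data block
    p = int(np.argmax(dense))
    # Second-order LSE fit over the peak and its 5 neighbors on each side
    idx = np.arange(p - 5, p + 6)
    a, b, c = np.polyfit(idx, dense[idx % len(dense)], 2)
    xp = -b / (2 * a)                             # peak of a x^2 + b x + c
    return xp / kosf                              # back to pixel units
```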
The shift estimate from the interpolator is next processed by the Displacement Estimator 400 of FIG. 2. First the shift estimate is converted into a phase rate to obtain a displacement history. One way to accomplish this is simply to multiply the shift estimate by a constant as seen in 401 of FIG. 2. The result in turn can be integrated to yield the desired error phase estimate. An alternative to integration is the least square fit 402, which performs a least square estimation to obtain the estimate of the phase error. Since typically some 32 subapertures are used, a good quality phase estimate is possible.
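One hypothetical sketch of the Displacement Estimator path is given below; the conversion constant `scale`, the choice of a second-order fit, and the function name are all assumptions for illustration, not the patent's specification:

```python
import numpy as np

def phase_error_estimate(shifts, scale=1.0):
    """Sketch of the Displacement Estimator 400.

    Each subaperture shift is scaled into a phase rate (multiplier 401),
    the rates are integrated to a phase history, and a second-order
    least-square fit (402) smooths the result across the subapertures.
    """
    rates = scale * np.asarray(shifts, dtype=float)   # shift -> phase rate
    integrated = np.cumsum(rates)                     # integrator path
    t = np.arange(len(rates))
    coeffs = np.polyfit(t, integrated, 2)             # least-square-fit path
    return np.polyval(coeffs, t)                      # smoothed error phase
```

With the roughly 32 subapertures mentioned in the text, the fit averages down the per-subaperture estimation noise.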
With the accurate phase error obtained by the autofocus 103, using the procedure described above, the data from the synthetic aperture radar next has this error subtracted from it as shown by 104 of FIG. 1. The result is the removal of phase errors from the data prior to its final processing, with the elimination of shifts of a fraction of a pixel for sparsely sampled data.
While the invention has been described in its presently preferred embodiment it is understood that the words which have been used are words of description rather than words of limitation and that changes within the purview of the appended claims may be made without departing from the scope and spirit of the invention in its broader aspects.
|Citing Patent||Filing date||Publication date||Applicant||Title|
|US4978960 *||Dec 27, 1988||Dec 18, 1990||Westinghouse Electric Corp.||Method and system for real aperture radar ground mapping|
|US5021789 *||Jul 2, 1990||Jun 4, 1991||The United States Of America As Represented By The Secretary Of The Air Force||Real-time high resolution autofocus system in digital radar signal processors|
|US5119100 *||Mar 21, 1991||Jun 2, 1992||Selenia Industrie Elettroniche Associates, S.P.A.||Device for improving radar resolution|
|US5164730 *||Oct 28, 1991||Nov 17, 1992||Hughes Aircraft Company||Method and apparatus for determining a cross-range scale factor in inverse synthetic aperture radar systems|
|US5184133 *||Nov 26, 1991||Feb 2, 1993||Texas Instruments Incorporated||ISAR imaging radar system|
|US5191344 *||Nov 27, 1991||Mar 2, 1993||Deutsche Forschungsanstalt Fur Luft- Und Raumfahrt||Method for digital generation of sar images and apparatus for carrying out said method|
|US5281972 *||Sep 24, 1992||Jan 25, 1994||Hughes Aircraft Company||Beam summing apparatus for RCS measurements of large targets|
|US5703970 *||Jun 7, 1995||Dec 30, 1997||Martin Marietta Corporation||Method of and apparatus for improved image correlation|
|US5854602 *||Apr 28, 1997||Dec 29, 1998||Erim International, Inc.||Subaperture high-order autofocus using reverse phase|
|US5861835 *||Nov 10, 1995||Jan 19, 1999||Hellsten; Hans||Method to improve data obtained by a radar|
|US6670907 *||Jan 30, 2002||Dec 30, 2003||Raytheon Company||Efficient phase correction scheme for range migration algorithm|
|US7719684||Jan 9, 2008||May 18, 2010||Lockheed Martin Corporation||Method for enhancing polarimeter systems that use micro-polarizers|
|US7760128 *||May 14, 2008||Jul 20, 2010||Sandia Corporation||Decreasing range resolution of a SAR image to permit correction of motion measurement errors beyond the SAR range resolution|
|US20080297405 *||Apr 7, 2008||Dec 4, 2008||Morrison Jr Robert L||Synthetic Aperture focusing techniques|
|EP0449303A2 *||Mar 28, 1991||Oct 2, 1991||Hughes Aircraft Company||Phase difference auto focusing for synthetic aperture radar imaging|
|EP0449303A3 *||Mar 28, 1991||Aug 11, 1993||Hughes Aircraft Company||Phase difference auto focusing for synthetic aperture radar imaging|
|WO2008086406A3 *||Jan 9, 2008||Sep 18, 2008||Lockheed Corp||Method and system for enhancing polarimetric and/or multi-band images|
|U.S. Classification||342/25.00F, 342/378, 342/196|
|International Classification||G01S13/90, G06T7/20|
|Cooperative Classification||G06T7/262, G01S13/9011|
|European Classification||G01S13/90C, G06T7/20F|
|May 21, 1985||AS||Assignment|
Owner name: UNITED STATES OF AMERICA AS REPRESENTED BY THE SEC
Free format text: ASSIGNMENT OF ASSIGNORS INTEREST.;ASSIGNORS:WESTINGHOUSE ELECTRIC CORPORATION;POWELL, NORMAN F.;BENDOR, GIORA A.;REEL/FRAME:004403/0434;SIGNING DATES FROM 19840531 TO 19840621