CA2253935C - Polynomial filters for higher order correlation and multi-input information fusion - Google Patents

Polynomial filters for higher order correlation and multi-input information fusion

Info

Publication number
CA2253935C
Authority
CA
Canada
Prior art keywords
image data
image
correlation
pattern
filter
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CA002253935A
Other languages
French (fr)
Other versions
CA2253935A1 (en)
Inventor
Abhijit Mahalanobis
Vijaya B. V. K. Kumar
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Raytheon Co
Original Assignee
Raytheon Co
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Raytheon Co filed Critical Raytheon Co
Publication of CA2253935A1 publication Critical patent/CA2253935A1/en
Application granted granted Critical
Publication of CA2253935C publication Critical patent/CA2253935C/en
Anticipated expiration legal-status Critical
Expired - Fee Related legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/20 Image preprocessing
    • G06V10/36 Applying a local operator, i.e. means to operate on image points situated in the vicinity of a given point; Non-linear local filtering operations, e.g. median filtering
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F17/00 Digital computing or data processing equipment or methods, specially adapted for specific functions
    • G06F17/10 Complex mathematical operations
    • G06F17/15 Correlation function computation including computation of convolution operations
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/25 Fusion techniques
    • G06F18/254 Fusion techniques of classification results, e.g. of results related to same input data
    • G06F18/256 Fusion techniques of classification results, e.g. of results related to same input data of results relating to different input data, e.g. multimodal recognition
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/20 Image preprocessing
    • G06V10/255 Detecting or recognising potential candidate objects based on visual cues, e.g. shapes
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G06V10/42 Global feature extraction by analysis of the whole pattern, e.g. using frequency domain transformations or autocorrelation
    • G06V10/431 Frequency domain transformation; Autocorrelation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/88 Image or video recognition using optical means, e.g. reference filters, holographic masks, frequency domain filters or spatial domain filters
    • G06V10/89 Image or video recognition using optical means, e.g. reference filters, holographic masks, frequency domain filters or spatial domain filters using frequency domain filters, e.g. Fourier masks implemented on spatial light modulators
    • G06V10/893 Image or video recognition using optical means, e.g. reference filters, holographic masks, frequency domain filters or spatial domain filters using frequency domain filters, e.g. Fourier masks implemented on spatial light modulators characterised by the kind of filter

Abstract

A method and apparatus for detecting a pattern within an image. Image data (22) is received which is representative of the image. Filter values (70) are determined which substantially optimize a first predetermined criterion (68). The first predetermined criterion (68) is based upon the image data (22). A correlation output (40) is determined which is indicative of the presence of the pattern within the image data (22). The correlation output (40) is based upon the determined filter values (70) and the image data (22) via a non-linear polynomial relationship (78).

Description

POLYNOMIAL FILTERS FOR HIGHER ORDER CORRELATION
AND MULTI-INPUT INFORMATION FUSION
BACKGROUND OF THE INVENTION
1. Field of the Invention
The present invention relates generally to pattern recognition, and more particularly, to correlation filters used in pattern recognition.
2. Description of Related Art
Two-dimensional correlation techniques have used spatial filters (known as correlation filters) to detect, locate and classify targets in observed scenes. A correlation filter should attempt to yield: sharp correlation peaks for targets of interest, high discrimination against unwanted objects, excellent robustness to noise in the input scene, and high tolerance to distortions in the input. A variety of filters addressing these and other aspects have been proposed (for example, see: B.V.K. Vijaya Kumar, "Tutorial Survey of Composite Filter Designs for Optical Correlators," Applied Optics, Vol. 31, pp. 4773-4801, 1992).
Linear filters known as Synthetic Discriminant Function (SDF) filters have been introduced by Hester and Casasent as well as by Caulfield and Maloney (see: C.F. Hester and D. Casasent, "Multivariant Techniques for Multiclass Pattern Recognition," Applied Optics, Vol. 19, pp. 1758-1761, 1980; H.J. Caulfield and W.T. Maloney, "Improved Discrimination in Optical Character Recognition," Applied Optics, Vol. 8, pp. 2354-2356, 1969).
Other correlation filters include the minimum squared error Synthetic Discriminant Function (MSE SDF), where the correlation filter is selected that yields the smallest average squared error between the resulting correlation outputs and a specified shape (see: B.V.K. Vijaya Kumar, A. Mahalanobis, S. Song, S.R.F. Sims and J. Epperson, "Minimum Squared Error Synthetic Discriminant Functions," Optical Engineering, Vol. 31, pp. 915-922, 1992).
Another filter is the maximum average correlation height (MACH) filter that determines and uses the correlation shape yielding the smallest squared error (see: A. Mahalanobis, B.V.K. Vijaya Kumar, S.R.F. Sims and J. Epperson, "Unconstrained Correlation Filters," Applied Optics, Vol. 33, pp. 3751-3759, 1994). However, the MACH filter and other current filters generally perform only linear operations on input image data and consequently are limited in their ability to detect patterns within the input image data. Moreover, the current approaches suffer from an inadequate ability to process information from multiple sensors as well as at different resolution levels.
SUMMARY OF THE INVENTION
The present invention is a method and apparatus for detecting a pattern within an image. Image data is received which is representative of the image. Filter values are determined which substantially optimize a first predetermined criterion. The first predetermined criterion is based upon the image data. A correlation output is generated using a non-linear polynomial relationship based upon the determined filter values and the image data. The correlation output is indicative of the presence of the pattern within the image data.
The present invention provides the following features (but is not limited to them): improved probability of correct target recognition, clutter tolerance, and reduced false alarm rates. The present invention also provides such features as (but is not limited to): detection and recognition of targets with fusion of data from multiple sensors, and the ability to combine optimum correlation filters with multi-resolution information (such as Wavelets and morphological image transforms) for enhanced performance.
Additional advantages and features of the present invention will become apparent from the subsequent description and the appended claims, taken in conjunction with the accompanying drawings in which:
BRIEF DESCRIPTION OF THE DRAWINGS
FIG. 1 is a flow diagram depicting the N-th order polynomial correlation filter;
FIGS. 2a-2b are flow charts depicting the operations involved in the correlation filter;
FIG. 3 is a flow diagram depicting the N-th order polynomial correlation filter for multi-sensor fusion;
FIG. 4 presents perspective views of a sample tank at different viewing angles; and FIG. 5 is a graph depicting peak-to-sidelobe ratio versus frame number.
DESCRIPTION OF THE PREFERRED EMBODIMENTS
Notation Format
The notation employed herein is as follows: images in the space domain are denoted in lower case italics while upper case italics are used to represent the same in the frequency domain. Thus, a two dimensional (2D) image x(m, n) has Fourier transform X(k, l). Vectors are represented by lower case bold characters while matrices are denoted by upper case bold characters. Either x(m, n) or X(k, l) can be expressed as a column vector x by lexicographical scanning. The superscript T denotes the transpose operation, and ' denotes the complex conjugate transpose of vectors and matrices.
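The lexicographic scan is simply a fixed, invertible reordering of pixels. A minimal sketch (not part of the patent; row-major ordering and all names are assumptions):

```python
# Minimal sketch of lexicographic scanning: a 2D image x(m, n) is
# rearranged into a column vector x and recovered again.
import numpy as np

x_2d = np.arange(12).reshape(3, 4)   # a small 2D image x(m, n)
x_vec = x_2d.flatten()               # column vector x by lexicographic (row-major) scan
x_rec = x_vec.reshape(x_2d.shape)    # the scan is invertible
assert np.array_equal(x_2d, x_rec)
```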

Referring to FIG. 1, the correlation filter 20 of the present invention receives input image data 22 from input device 23 in order to detect a pattern within the image data 22. A first order term 24 of image data 22 is associated with a first filter term 26. Successive order terms (28 and 30) of image data 22 are associated with successive filter terms (32 and 34). Ultimately, the Nth order term 36 is associated with filter term hN 38.
Values for the filter terms are determined which substantially optimize a performance criterion that is based upon the image data and a spectral quantity. The spectral quantity represents a spectral feature of the image data 22. For a description of spectral quantities and features, please see: A. Mahalanobis, B.V.K. Vijaya Kumar, and D. Casasent, "Minimum Average Correlation Energy Filters," Applied Optics, Vol. 26, pp. 3633-3640, 1987.
A correlation output $\mathbf{g}_x$ 40 is produced based upon the determined filter values and the image data 22 using a non-linear polynomial relationship. The non-linear polynomial relationship is a feature of the present invention over other approaches -- that is, the present invention treats the output as a non-linear function of the input. In the present invention, the non-linear polynomial relationship of the output is expressed as:
$$\mathbf{g}_x = \mathbf{A}_1\mathbf{x}^1 + \mathbf{A}_2\mathbf{x}^2 + \cdots + \mathbf{A}_N\mathbf{x}^N \qquad (1)$$
where $\mathbf{x}^i$ represents the vector x with each of its elements raised to the power i, and $\mathbf{A}_i$ is a matrix of coefficients associated with the ith term of the polynomial. It should be noted that the output $\mathbf{g}_x$ is also a vector.
Equation (1) is termed the polynomial correlation filter (or PCF). Thus, if x represents the input image in vector notation, then $\mathbf{g}_x$ is a vector which represents the output correlation plane as a polynomial function of x. To ensure that the output is shift invariant, all the coefficient matrices are in a Toeplitz format. For a description of the Toeplitz format, see the following reference: Matrix Computations, Gene H. Golub and Charles F. Van Loan, Johns Hopkins Press, 1989. Each term in the polynomial is computed as a linear shift-invariant filtering operation:
$$\mathbf{A}_i\mathbf{x}^i \equiv h_i(m,n) \otimes x^i(m,n) \qquad (2)$$
that is, filtering $x^i(m,n)$ by $h_i(m,n)$ is equivalent to multiplying $\mathbf{x}^i$ by $\mathbf{A}_i$. The symbol $\otimes$ is used to indicate spatial filtering. The output of the polynomial correlation filter is mathematically expressed as:
$$g_x(m,n) = \sum_{i=1}^{N} h_i(m,n) \otimes x^i(m,n) \qquad (3)$$
The filters $h_i(m,n)$ are determined such that the structure shown in FIG. 1 optimizes a performance criterion of choice. For the preferred embodiment, the Optimal Trade-off (OT) performance criterion is selected (for a discussion of the OT performance criterion, see Ph. Refregier, "Filter Design for Optical Pattern Recognition: Multicriteria Optimization Approach," Optics Letters, Vol. 15, pp. 854-856, 1990). The OT performance criterion is expressed as:
$$J(\mathbf{h}) = \frac{|\mathbf{m}'\mathbf{h}|^2}{\mathbf{h}'\mathbf{B}\mathbf{h}} \qquad (4)$$
where h is the filter vector in the frequency domain, B is a diagonal matrix related to a spectral quantity, and m is the mean image vector, also in the frequency domain. The following spectral quantities can be used in the OT performance criterion: average correlation energy (ACE), average similarity measure (ASM), and output noise variance (ONV); combinations of these can also be used, all of which are of the same quadratic form as the denominator of Eq. (4). However, it is to be understood that the present invention is not limited to only these spectral quantities, but includes those which will function for the application at hand. An alternate embodiment of the present invention includes optimizing the same class of performance criteria.
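A short sketch may help make the filtering structure concrete. The following Python fragment (an illustration, not the patent's implementation) evaluates Eq. (3) by correlating each power of the input with its own filter in the frequency domain. For brevity, each term is designed independently with a simple OT-style ratio of the mean spectrum to an ACE-like average power spectrum, whereas the patent's joint design solves the coupled system of Eq. (16) below; the function names and the regularization constant are assumptions.

```python
# Illustrative sketch only: applying an N-th order polynomial correlation
# filter per Eq. (3), g(m,n) = sum_i h_i (*) x^i, with each term computed
# by frequency-domain correlation.
import numpy as np

def design_ot_term(train_images, power):
    """Sketch of one OT-style term in the spirit of Eq. (4): H = M / B,
    where M is the mean spectrum of the training images raised to `power`
    and B is an ACE-like average power spectrum (a diagonal spectral
    quantity).  The regularization eps is an assumption."""
    eps = 1e-6
    spectra = [np.fft.fft2(img ** power) for img in train_images]
    M = np.mean(spectra, axis=0)                             # mean image spectrum m
    B = np.mean([np.abs(S) ** 2 for S in spectra], axis=0)   # spectral quantity B
    return M / (B + eps)

def apply_pcf(x, filters):
    """Correlate input image x with a list of per-order filters, Eq. (3)."""
    g = np.zeros_like(x, dtype=float)
    for i, H in enumerate(filters, start=1):
        X = np.fft.fft2(x ** i)                     # spectrum of x^i
        g += np.real(np.fft.ifft2(X * np.conj(H)))  # correlation of x^i with h_i
    return g

# Example: a 2nd order PCF trained on random stand-ins for exemplar images.
train = [np.random.rand(64, 64) for _ in range(10)]
filters = [design_ot_term(train, p) for p in (1, 2)]
g = apply_pcf(np.random.rand(64, 64), filters)
peak_location = np.unravel_index(np.argmax(g), g.shape)
```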
Sample Second Order Correlation Filter
By way of example, the operations involved in a second order correlation filter of the present invention are discussed herein. However, it is to be understood that the present invention is not limited to only second order correlation filters but includes any higher order correlation filter.
Accordingly, in this example the polynomial has only two terms and the output is expressed as:
$$g(m,n) = x(m,n) \otimes h_1(m,n) + x^2(m,n) \otimes h_2(m,n) \qquad (5)$$
The expression for J(h) is obtained by deriving the numerator and the denominator of Eq. (4). In vector notation, the average intensity of the correlation peak for a second order filter is
$$|\text{Average Peak}|^2 = |\mathbf{h}_1'\mathbf{m}^1|^2 + |\mathbf{h}_2'\mathbf{m}^2|^2 \qquad (6)$$
where $\mathbf{h}_1$ and $\mathbf{h}_2$ are vector representations of the filters associated with the first and second terms of the polynomial, and
$$\mathbf{m}^k = \frac{1}{L}\sum_{i=1}^{L}\mathbf{x}_i^k \qquad (7)$$
is the mean of the training images $\mathbf{x}_i$, $1 \le i \le L$, raised to the kth power. For illustration purposes only, the denominator of the performance criterion in Eq. (4) is chosen to be the ASM metric, while noting that the present invention includes any other quadratic form such as ONV or ACE or any combination thereof. The ASM for the second order non-linear filter is expressed as:
$$\mathrm{ASM} = \frac{1}{L}\sum_{i=1}^{L}\left\| \mathbf{X}_i^1\mathbf{h}_1 + \mathbf{X}_i^2\mathbf{h}_2 - \mathbf{M}^1\mathbf{h}_1 - \mathbf{M}^2\mathbf{h}_2 \right\|^2 \qquad (8)$$
where $\mathbf{X}_i^k$, $1 \le i \le L$, is the ith training image raised to the kth power expressed as a diagonal matrix, and $\mathbf{M}^k$ is their average (also a diagonal matrix). After algebraic manipulations, the expression for ASM is:
$$\mathrm{ASM} = \mathbf{h}_1'\mathbf{S}_{11}\mathbf{h}_1 + \mathbf{h}_2'\mathbf{S}_{22}\mathbf{h}_2 + \mathbf{h}_1'\mathbf{S}_{12}\mathbf{h}_2 + \mathbf{h}_2'\mathbf{S}_{21}\mathbf{h}_1 \qquad (9)$$
where
$$\mathbf{S}_{kl} = \frac{1}{L}\sum_{i=1}^{L}\mathbf{X}_i^k(\mathbf{X}_i^l)^{*} - \mathbf{M}^k(\mathbf{M}^l)^{*}, \quad 1 \le k,\, l \le 2 \qquad (10)$$
are all diagonal matrices. The block vectors and matrices are expressed as:
$$\mathbf{h} = \begin{bmatrix}\mathbf{h}_1\\ \mathbf{h}_2\end{bmatrix}, \quad \mathbf{m} = \begin{bmatrix}\mathbf{m}^1\\ \mathbf{m}^2\end{bmatrix}, \quad \mathbf{S} = \begin{bmatrix}\mathbf{S}_{11} & \mathbf{S}_{12}\\ \mathbf{S}_{21} & \mathbf{S}_{22}\end{bmatrix} \qquad (11)$$
The expression for J(h) for the second order filter is expressed as:

$$J(\mathbf{h}) = \frac{|\text{average peak}|^2}{\mathrm{ASM}} = \frac{|\mathbf{h}_1'\mathbf{m}^1|^2 + |\mathbf{h}_2'\mathbf{m}^2|^2}{\mathbf{h}_1'\mathbf{S}_{11}\mathbf{h}_1 + \mathbf{h}_2'\mathbf{S}_{22}\mathbf{h}_2 + \mathbf{h}_1'\mathbf{S}_{12}\mathbf{h}_2 + \mathbf{h}_2'\mathbf{S}_{21}\mathbf{h}_1} = \frac{|\mathbf{m}'\mathbf{h}|^2}{\mathbf{h}'\mathbf{S}\mathbf{h}} \qquad (12)$$
The following equation maximizes J(h):
$$\mathbf{h} = \mathbf{S}^{-1}\mathbf{m} \qquad (13)$$
Using the definitions in Eq. (11), the solution for the two filters of the second order polynomial is:
$$\begin{bmatrix}\mathbf{h}_1\\ \mathbf{h}_2\end{bmatrix} = \begin{bmatrix}\mathbf{S}_{11} & \mathbf{S}_{12}\\ \mathbf{S}_{21} & \mathbf{S}_{22}\end{bmatrix}^{-1}\begin{bmatrix}\mathbf{m}^1\\ \mathbf{m}^2\end{bmatrix} \qquad (14)$$
The inverse of the block matrix is expressed as:
$$\begin{bmatrix}\mathbf{h}_1\\ \mathbf{h}_2\end{bmatrix} = \frac{1}{\mathbf{S}_{11}\mathbf{S}_{22} - |\mathbf{S}_{12}|^2}\begin{bmatrix}\mathbf{S}_{22}\mathbf{m}^1 - \mathbf{S}_{12}\mathbf{m}^2\\ \mathbf{S}_{11}\mathbf{m}^2 - \mathbf{S}_{21}\mathbf{m}^1\end{bmatrix} \qquad (15)$$
The solution in Eq. (14) is extended to the general Nth order case. Following the same analysis as for the second order case, the N-th order solution is expressed as:

$$\begin{bmatrix}\mathbf{h}_1\\ \mathbf{h}_2\\ \vdots\\ \mathbf{h}_N\end{bmatrix} = \begin{bmatrix}\mathbf{S}_{11} & \mathbf{S}_{12} & \cdots & \mathbf{S}_{1N}\\ \mathbf{S}_{21} & \mathbf{S}_{22} & \cdots & \mathbf{S}_{2N}\\ \vdots & \vdots & \ddots & \vdots\\ \mathbf{S}_{N1} & \mathbf{S}_{N2} & \cdots & \mathbf{S}_{NN}\end{bmatrix}^{-1}\begin{bmatrix}\mathbf{m}^1\\ \mathbf{m}^2\\ \vdots\\ \mathbf{m}^N\end{bmatrix} \qquad (16)$$
The block matrix to be inverted in Eq. (16) can be quite large depending on the size of the images. However, because all $\mathbf{S}_{kl}$ are diagonal and $\mathbf{S}_{kl} = (\mathbf{S}_{lk})^{*}$, the inverse can be efficiently computed using a recursive formula for inverting block matrices.
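For the second order case the closed form can be sketched directly from Eqs. (7), (10) and (15), exploiting the fact that every S_kl is diagonal so the 2x2 block inverse reduces to elementwise arithmetic per frequency bin. The sketch below is an illustration under that reading, not the patent's implementation; the function name, the use of NumPy FFTs, and the regularization term eps are assumptions.

```python
# Illustrative sketch only: closed-form design of the two filters of a
# second order PCF, following Eqs. (7), (10) and (15) elementwise.
import numpy as np

def design_second_order_pcf(train_images, eps=1e-9):
    # Frequency-domain training data for the two polynomial terms
    X1 = np.array([np.fft.fft2(img)      for img in train_images])
    X2 = np.array([np.fft.fft2(img ** 2) for img in train_images])

    m1, m2 = X1.mean(axis=0), X2.mean(axis=0)   # Eq. (7), in the frequency domain

    # Eq. (10): S_kl = (1/L) sum_i X_i^k conj(X_i^l) - M^k conj(M^l)
    S11 = (X1 * np.conj(X1)).mean(axis=0) - m1 * np.conj(m1)
    S22 = (X2 * np.conj(X2)).mean(axis=0) - m2 * np.conj(m2)
    S12 = (X1 * np.conj(X2)).mean(axis=0) - m1 * np.conj(m2)
    S21 = np.conj(S12)

    # Eq. (15): elementwise 2x2 block inverse applied to [m1; m2]
    det = S11 * S22 - np.abs(S12) ** 2 + eps
    h1 = (S22 * m1 - S12 * m2) / det
    h2 = (S11 * m2 - S21 * m1) / det
    return h1, h2
```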
The present invention is not limited to only a power series representation of the polynomial correlation filter as used for deriving the solution in Eq. (16). The analysis and the form of the solution remain substantially the same irrespective of the non-linearities used to obtain the terms of the polynomial. Thus, the correlation output is generally expressed as:
$$\mathbf{g}_x = \sum_{i=1}^{N} \mathbf{A}_i f_i(\mathbf{x}) \qquad (17)$$
where $f_i(\cdot)$ is any non-linear function of x. For example, possible choices for the non-linearities include absolute magnitude and sigmoid functions. The selection of the proper non-linear terms depends on the specific application of the correlation filter of the present invention. For example, it may be detrimental to use logarithms when bipolar noise is present since the logarithm of a negative number is not defined.
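The general form of Eq. (17) simply replaces the power terms with arbitrary non-linear functions f_i(x). A minimal sketch of such term generators, using the absolute magnitude and sigmoid examples named above (the gain constant and helper names are assumptions):

```python
# Illustrative sketch only: alternative non-linear term generators f_i(x)
# for Eq. (17); each f_i(x) is paired with its own filter h_i as in Eq. (3).
import numpy as np

def power_term(i):
    return lambda x: x ** i                             # power-series terms of Eq. (1)

def abs_term():
    return lambda x: np.abs(x)                          # absolute magnitude non-linearity

def sigmoid_term(gain=1.0):
    return lambda x: 1.0 / (1.0 + np.exp(-gain * x))    # sigmoid non-linearity

# A fourth-order style term list mixing the non-linearities.
term_functions = [power_term(1), power_term(2), abs_term(), sigmoid_term(gain=2.0)]
```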

FIG. 2a depicts the sequence of operations for the correlation filter of the present invention to determine the filter values. The preferred embodiment performs these operations "off-line."
Start indication block 60 indicates that block 62 is to be executed first. Block 62 receives the exemplar image data from single or multiple sensor sources. Block 64 processes the exemplar image data nonlinearly and/or at different resolution levels. Processing the data nonlinearly refers to the calculation of the "f(.)" terms of equation 17 above.
Block 64 may use Wavelets and morphological image transforms in order to process information at different resolution levels. For a description of Wavelets and morphological image transforms, see the following references: "Morphological Methods in Image and Signal Processing," Giardina and Dougherty, Prentice Hall, Englewood Cliffs, 1988; and C.K. Chui, "An Introduction to Wavelets," Academic Press, New York, 1992.
Block 66 determines the filter values through execution of the subfunction optimizer block 68. The subfunction optimizer block 68 determines the filter values which substantially optimize a predetermined criterion (such as the Optimal trade-off performance criterion). The function of the predetermined criterion interrelates filter values 70, exemplar image data 71 and a spectral quantity 72 (such as average correlation energy (ACE), average similarity measure (ASM), output noise variance (ONV), and combinations thereof). Processing for determining the filter values terminates at termination block 73.
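The multi-resolution preprocessing of block 64 can be illustrated with a simple stand-in: the sketch below builds coarse versions of an image by 2x2 block averaging and re-expands them to the original size so each level can serve as an additional polynomial term. This is only an assumption-laden placeholder for the Wavelet and morphological transforms the text actually names, and it assumes image dimensions divisible by 2**levels.

```python
# Illustrative sketch only: multi-resolution terms via a block-average
# pyramid (a crude stand-in for a Wavelet approximation).
import numpy as np

def pyramid_terms(x, levels=3):
    terms, cur = [x], x
    for _ in range(levels - 1):
        h, w = cur.shape
        # Downsample by 2 with block averaging
        cur = cur[:h - h % 2, :w - w % 2].reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))
        # Upsample back to the original size so every term has the same shape
        terms.append(np.kron(cur, np.ones((x.shape[0] // cur.shape[0],
                                           x.shape[1] // cur.shape[1]))))
    return terms
```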
FIG. 2b depicts the operational steps for determining correlation outputs based upon the filter values. The preferred embodiment performs these operations "on-line."
Start indication block 80 indicates that block 82 is to be executed first. Block 82 receives image data from single or multiple sensor sources. Block 84 processes the image data non-linearly and/or at different resolution levels. Processing the data nonlinearly refers to the calculation of the "f(.)" terms of equation 17 above.
Block 86 determines the correlation output 40. The correlation output 40 is indicative of the presence of the pattern within the image data 22. A non-linear polynomial relationship 78 interrelates the correlation output 40, the determined filter values 70, and the image data 22.
Processing terminates at termination block 88.
As discussed in connection to FIG. 2b, the present invention can be used to simultaneously correlate data from different image sensors. In this case, the sensor imaging process and its transfer function itself are viewed as the non-linear mapping function. The different terms of the polynomial do not have to be from the same sensor or versions of the same data.
FIG. 3 depicts input image data from different sensors which is directly injected into the correlation filter 20 of the present invention resulting in a fused correlation output 40. For example, image sensor 100 is an Infrared (IR) sensor; image sensor 102 is a Laser Radar (LADAR) sensor; image sensor 104 is a Synthetic Aperture Radar (SAR) sensor; and image sensor 106 is a millimeter wave (MMW) sensor.
The analysis and the form of the solution remain the same as that in Eq. (16). Accordingly, each image sensor (100, 102, 104, and 106) has its individual input image data fed into its respective non-linear polynomial relationship (108, 110, 112, and 114). Each non-linear polynomial relationship (108, 110, 112, and 114) depicts a pixel by pixel nonlinear operation on the data.
Each image sensor (100, 102, 104, and 106) has its respective filter terms (116, 118, 120, and 122) determined in accordance with the optimization principles described above. The determined filter values are then used along with the input image data to produce a fused correlation output 40.
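A compact sketch of the fusion path of FIG. 3 (illustrative only): each sensor image passes through its own non-linear mapping and is correlated with its own filter term, and the per-sensor correlation planes are summed into one fused output. The sensor stand-ins, non-linearities, and filters below are placeholders, not the patent's.

```python
# Illustrative sketch only: multi-sensor fusion with a polynomial
# correlation filter, one non-linear term per sensor (e.g. IR, LADAR,
# SAR, MMW), summed into a single fused correlation plane.
import numpy as np

def fused_correlation(sensor_images, nonlinearities, filters):
    """sensor_images, nonlinearities and filters are parallel lists,
    one entry per sensor; all images are assumed registered and equal size."""
    g = None
    for x, f, H in zip(sensor_images, nonlinearities, filters):
        term = np.real(np.fft.ifft2(np.fft.fft2(f(x)) * np.conj(H)))
        g = term if g is None else g + term
    return g

# Example with random stand-ins for four registered sensor images.
imgs = [np.random.rand(64, 64) for _ in range(4)]
nls = [lambda x: x, lambda x: x ** 2, np.abs, lambda x: np.log1p(np.abs(x))]
flts = [np.fft.fft2(np.random.rand(64, 64)) for _ in range(4)]
fused = fused_correlation(imgs, nls, flts)
```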
Moreover, FIG. 3 depicts the present invention's extension to multi-sensor and multi-resolution inputs. In other words, the terms of the polynomial are the multi-spectral data represented at different resolution levels, as for example to achieve correlation in Wavelet type transform domains. Wavelet type transform domains are described in the following reference: C.K. Chui, "An Introduction to Wavelets," Academic Press, New York, 1992.
Example
Sample images of a tank from a database are shown in FIG. 4. The images were available at intervals of three degrees in azimuth. The end views of the tank are generally depicted at 140. The broadside views of the tank are generally depicted at 142.
The sample images were used for training and testing a conventional linear MACH filter versus a fourth order (N = 4) PCF. The peak-to-sidelobe ratio (PSR) of the correlation peaks, defined as
$$\mathrm{PSR} = \frac{\text{peak} - \text{mean}}{\text{standard deviation}} \qquad (18)$$
was computed and used for evaluating the performance of the filters. In each case, Gaussian white noise was added to the test images to simulate a per pixel signal to noise ratio (SNR) of 10 dB.
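Eq. (18) can be computed from a correlation plane as in the sketch below. This is an illustration only; the patent does not spell out the sidelobe window, so the exclusion and window sizes here are assumptions.

```python
# Illustrative sketch only: peak-to-sidelobe ratio of Eq. (18) measured in
# a window around the correlation peak, excluding the central peak lobe.
import numpy as np

def peak_to_sidelobe_ratio(corr, exclude=5, window=20):
    peak_idx = np.unravel_index(np.argmax(corr), corr.shape)
    peak = corr[peak_idx]
    r, c = peak_idx
    region = corr[max(0, r - window):r + window + 1,
                  max(0, c - window):c + window + 1]
    rr, cc = np.indices(region.shape)
    pr, pc = min(r, window), min(c, window)          # peak position inside the window
    mask = (np.abs(rr - pr) > exclude) | (np.abs(cc - pc) > exclude)
    sidelobes = region[mask]
    return (peak - sidelobes.mean()) / (sidelobes.std() + 1e-12)
```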
The PSR outputs of the conventional linear MACH filter 150 and the 4th order MACH PCF 152 are shown in FIG. 5 for comparison. FIG. 5 shows the behavior of PSR over the range of aspect angles. While the PSR is fundamentally low at end views (where there are fewer pixels on the target), the PSR output of the MACH PCF is always higher than its linear counterpart.
A detection threshold 154 is used to determine if the tank pattern has been detected within each image frame. As seen from FIG. 5, the 4th order MACH PCF missed fewer detections of the pattern than the conventional linear MACH filter.
The embodiments which have been set forth above for the purpose of illustration were not intended to limit the invention. It will be appreciated by those skilled in the art that various changes and modifications may be made to - the embodiments discussed in the specification without departing from the spirit and scope of the invention as defined by the appended claims.

Claims (12)

What is Claimed is:
1. A method for detecting a pattern within an image comprising the steps of:
(a) receiving image data which is representative of said image;
(b) determining filter values which substantially optimize a first predetermined criterion, said first predetermined criterion being based upon said image data; and (c) using a non-linear polynomial relationship to generate a correlation output based upon said determined filter values and said image data, said correlation output being indicative of the presence of said pattern within said image data.
2. The method of Claim 1 wherein said non-linear polynomial relationship is:
$g_x = A_1x^1 + A_2x^2 + \cdots + A_Nx^N$
wherein g is representative of said correlation output, wherein x is representative of said image data, wherein A
is based upon said filter values.
3. The method of Claim 1 further comprising the steps of:
receiving a first set of image data from a first image sensing source; and receiving a second set of image data from a second image sensing source;

wherein said first and second sets of image data are representative of different physical characteristics of said image.
4. The method of Claim 1 wherein said correlation output is a single correlation vector indicative of the presence of the pattern within said image data.
5. The method of Claim 1 wherein said correlation output is a single correlation vector which contains the degree of correlation between said pattern being found within said image data.
6. The method of Claim 1 further comprising the step of:
detecting said pattern within said image data where said degree of correlation satisfies a predetermined detection threshold.
7. The method of Claim 1 wherein said correlation output is shift invariant.
8. The method of Claim 1 further comprising the step of:
performing steps (b) and (c) at a first level of image resolution with respect to said image data; and performing steps (b) and (c) at a second level of image resolution with respect to said image data.
9. The method of Claim 1 wherein said image is associated with at least one spectral quantity, said first predetermined criterion being based upon said image data and said at least one spectral quantity.
10. An apparatus for detecting said pattern of Claim 1 within said image, said image being associated with at least one spectral quantity, comprising:
an input device for receiving image data representative of said image;
an optimizer connected to said input device for determining filter values which substantially optimize a first predetermined criterion, said first predetermined criterion being based upon said image data and said spectral quantity; and a correlation filter connected to said optimizer for determining a correlation output which is indicative of the presence of said pattern within said image data, said correlation filter using a non-linear polynomial relationship to generate said correlation output based upon said determined filter values and said image data.
11. The apparatus of Claim 10 wherein A is based upon said filter values, said non-linear polynomial rela-tionship further including:
$A_ix^i = h_i(m,n) \otimes x^i(m,n)$
wherein h is representative of said filter values.
12. The apparatus of Claim 10 wherein said image data is processed at a first and second level of resolution.
CA002253935A 1997-04-04 1998-04-03 Polynomial filters for higher order correlation and multi-input information fusion Expired - Fee Related CA2253935C (en)

Applications Claiming Priority (5)

Application Number Priority Date Filing Date Title
US4340897P 1997-04-04 1997-04-04
PCT/US1998/006280 WO1998045802A2 (en) 1997-04-04 1998-04-03 Polynomial filters for higher order correlation and multi-input information fusion
US60/043,408 1998-04-03
US09/054,497 1998-04-03
US09/054,497 US6295373B1 (en) 1997-04-04 1998-04-03 Polynomial filters for higher order correlation and multi-input information fusion

Publications (2)

Publication Number Publication Date
CA2253935A1 CA2253935A1 (en) 1998-10-15
CA2253935C true CA2253935C (en) 2002-06-18

Family

ID=26720390

Family Applications (1)

Application Number Title Priority Date Filing Date
CA002253935A Expired - Fee Related CA2253935C (en) 1997-04-04 1998-04-03 Polynomial filters for higher order correlation and multi-input information fusion

Country Status (11)

Country Link
US (1) US6295373B1 (en)
EP (1) EP0916122A4 (en)
JP (1) JP3507083B2 (en)
KR (1) KR100310949B1 (en)
AU (1) AU715936B2 (en)
CA (1) CA2253935C (en)
IL (1) IL127239A (en)
NO (1) NO325762B1 (en)
PL (1) PL330358A1 (en)
TR (1) TR199802519T1 (en)
WO (1) WO1998045802A2 (en)

Families Citing this family (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
AU2002336504A1 (en) * 2001-09-12 2003-03-24 The State Of Oregon, Acting By And Through The State Board Of Higher Education On Behalf Of Oregon S Method and system for classifying a scenario
US7127121B1 (en) * 2002-03-20 2006-10-24 Ess Technology, Inc. Efficient implementation of a noise removal filter
US7715591B2 (en) * 2002-04-24 2010-05-11 Hrl Laboratories, Llc High-performance sensor fusion architecture
US7232498B2 (en) * 2004-08-13 2007-06-19 The Goodyear Tire & Rubber Company Tire with raised indicia
US7561732B1 (en) 2005-02-04 2009-07-14 Hrl Laboratories, Llc Method and apparatus for three-dimensional shape estimation using constrained disparity propagation
US9349153B2 (en) * 2007-04-25 2016-05-24 Digimarc Corporation Correcting image capture distortion
US8885883B2 (en) 2011-07-22 2014-11-11 Raytheon Company Enhancing GMAPD LADAR images using 3-D wallis statistical differencing
JP2016538630A (en) * 2013-11-28 2016-12-08 インテル コーポレイション A method for determining local discriminant colors for image feature detectors
US10387533B2 (en) 2017-06-01 2019-08-20 Samsung Electronics Co., Ltd Apparatus and method for generating efficient convolution
KR102314703B1 (en) * 2017-12-26 2021-10-18 에스케이하이닉스 주식회사 Joint dictionary generation method for image processing, interlace based high dynamic range imaging apparatus using the joint dictionary and image processing method of the same

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4578676A (en) * 1984-04-26 1986-03-25 Westinghouse Electric Corp. Delay lattice filter for radar doppler processing
WO1987006371A1 (en) * 1986-04-10 1987-10-22 Hewlett Packard Limited Expert system using pattern recognition techniques
US5341142A (en) * 1987-07-24 1994-08-23 Northrop Grumman Corporation Target acquisition and tracking system
US5109431A (en) * 1988-09-22 1992-04-28 Hitachi, Ltd. Pattern discrimination method and apparatus using the same
US5774591A (en) * 1995-12-15 1998-06-30 Xerox Corporation Apparatus and method for recognizing facial expressions and facial gestures in a sequence of images
US5774491A (en) 1996-07-09 1998-06-30 Sti Optronics, Inc. Energy exchange between a laser beam and charged particles using inverse diffraction radiation and method for its use
US5960097A (en) * 1997-01-21 1999-09-28 Raytheon Company Background adaptive target detection and tracking with multiple observation and processing stages

Also Published As

Publication number Publication date
PL330358A1 (en) 1999-05-10
NO985635L (en) 1999-02-04
CA2253935A1 (en) 1998-10-15
AU715936B2 (en) 2000-02-10
JP3507083B2 (en) 2004-03-15
TR199802519T1 (en) 2000-11-21
NO985635D0 (en) 1998-12-03
NO325762B1 (en) 2008-07-14
KR100310949B1 (en) 2001-12-12
EP0916122A4 (en) 2001-08-16
WO1998045802A2 (en) 1998-10-15
KR20000016287A (en) 2000-03-25
JP2001527671A (en) 2001-12-25
IL127239A0 (en) 1999-09-22
AU7465698A (en) 1998-10-30
IL127239A (en) 2003-01-12
WO1998045802A3 (en) 1998-11-26
US6295373B1 (en) 2001-09-25
EP0916122A2 (en) 1999-05-19

Similar Documents

Publication Publication Date Title
Goerick et al. Artificial neural networks in real-time car detection and tracking applications
US5561431A (en) Wavelet transform implemented classification of sensor data
Soon et al. PCANet-based convolutional neural network architecture for a vehicle model recognition system
CN111563414B (en) SAR image ship target detection method based on non-local feature enhancement
CA2253935C (en) Polynomial filters for higher order correlation and multi-input information fusion
Mahalanobis et al. Polynomial filters for higher order correlation and multi-input information fusion
US5245675A (en) Method for the recognition of objects in images and application thereof to the tracking of objects in sequences of images
WO1998045802A9 (en) Polynomial filters for higher order correlation and multi-input information fusion
CN115496928A (en) Multi-modal image feature matching method based on multi-feature matching
Zhu et al. An improved KSVD algorithm for ground target recognition using carrier-free UWB radar
Yardimci et al. Robust direction-of-arrival estimation in non-Gaussian noise
US20020136457A1 (en) Method of and apparatus for searching corresponding points between images, and computer program
CN113466782B (en) Mutual coupling correction DOA estimation method based on Deep Learning (DL)
Rajani et al. Direction of arrival estimation by using artificial neural networks
CN108830290B (en) SAR image classification method based on sparse representation and Gaussian distribution
MXPA98010258A (en) Polynomial filters for higher order correlation and multi-input information fusion
Spence et al. Hierarchical Image Probability (H1P) Models
Wagner et al. Fool the COOL-on the robustness of deep learning SAR ATR systems
Mahalanobis Processing of multisensor data using correlation filters
Keydel et al. Reasoning support and uncertainty prediction in model-based vision SAR ATR
Weilei et al. An automatic target recognition algorithm for SAR image based on improved convolution neural network
CN113688742B (en) SAR image recognition method, SAR image recognition device and storage medium
Almutiry et al. A Novel Method for Pattern Recognition based on Radar Tomographic Images
Kittler et al. Model validation for model selection
Weber Quadratic detection filters and classifiers

Legal Events

Date Code Title Description
EEER Examination request
MKLA Lapsed

Effective date: 20150407