CA2322892A1 - Compression of hyperdata with orasis multisegment pattern sets (chomps) - Google Patents

Compression of hyperdata with orasis multisegment pattern sets (chomps)

Info

Publication number
CA2322892A1
Authority
CA
Canada
Prior art keywords
data
exemplar
vectors
vector
generating
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
CA002322892A
Other languages
French (fr)
Inventor
John A. Antoniades
Mark M. Baumback
Jeffrey A. Bowles
John M. Grossmann
Daniel G. Haas
Peter J. Palmadesso
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
US Department of Navy
Original Assignee
Individual
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Individual filed Critical Individual
Publication of CA2322892A1 publication Critical patent/CA2322892A1/en


Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 - Arrangements for image or video recognition or understanding
    • G06V10/20 - Image preprocessing
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 - Scenes; Scene-specific elements
    • G06V20/10 - Terrestrial scenes
    • G06V20/13 - Satellite images
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 - Pattern recognition
    • G06F18/20 - Analysing
    • G06F18/21 - Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/213 - Feature extraction, e.g. by transforming the feature space; Summarisation; Mappings, e.g. subspace methods

Abstract

The intelligent hypersensor processing system (IHPS) is a system for rapid detection of small, weak, or hidden objects, substances, or patterns embedded in complex backgrounds, providing fast adaptive processing for demixing (20) and recognizing patterns or signatures in data provided by certain types of hypersensors (10). This system represents an alternative to prior systems for hidden object detection. The CHOMPS version of IHPS provides an efficient means of processing, compressing and manipulating the vast quantities of data collected by the sensors. CHOMPS uses the adaptive learning module (30) to construct a compressed data set along with scene mapping data for later reconstruction of the complete data set (40).

Description

2 PC"T/US99100627 COMPRESSION OF HYPERDATA WITH ORASIS MULTISEGMENT PATTERN
SETS (CHOMPS) BACKGROUND OF THE INVENTION
The present invention relates generally to processing signals, and more particularly to a system for the rapid compression of hypersensor data sets that contain objects, substances, or patterns embedded in complex backgrounds. A hypersensor is a sensor which produces as its output a high dimensional vector or matrix consisting of many separate elements, each of which is a measurement of a different attribute of the system or scene under observation. A hyperspectral imager is an example of a hypersensor. Hypersensors based on acoustic or other types of signals, or combinations of different types of input signals, are also possible.

Historically there have been three types of approaches to the problems relating to the detection of small, weak or hidden objects, substances or patterns embedded in complex backgrounds.
The first approach has been to use low dimensional sensor systems which attempt to detect a clean signature of a well known target in some small, carefully chosen subset of all possible attributes, e.g., one or a few spectral bands. These systems generally have difficulty when the target signature is heavily mixed in with other signals, so they typically can detect subpixel targets or minority chemical constituents of a mixture only under ideal conditions, if at all. The target generally must fill at least one pixel, or be dominant in some other sense, as in some hyperspectral bands. Also, the optimal choice of bands may vary with the observing conditions or background (e.g. weather and lighting), so such systems work best in stable, predictable environments. These systems are simpler than the high dimensional sensors (hypersensors), but they also tend to be less sensitive to subdominant targets and less adaptable.
The second approach has been to employ high dimensional sensor systems which seek to detect well known (prespecified) targets in complex backgrounds by using Principal Components Analysis (PCA) or similar linear methods to construct a representation of the background. Orthogonal projection methods are then used to separate the target from the background. This approach has several disadvantages. The methods used to characterize the background are typically not 'real time algorithms'; they are relatively slow, and must operate on the entire data set at once, and hence are better suited to post-processing than real time operation. The background characterization can get confused if the target is present in a statistically significant measure when the background is being studied, causing the process to fail. Also, the appearance of the target signature may vary with the environmental conditions: this must be accounted for in advance, and it is generally very difficult to do. Finally, these PCA methods are not well suited for detecting and describing unanticipated targets (objects or substances which have not been prespecified in detail, but which may be important) because the representations of the background constructed by these methods mix the properties of the actual scene constituents in an unphysical and unpredictable way. PCA methods are also used for compression schemes; however, they have many of the same shortcomings. Linear Vector Quantization (LVQ) is also used for compression. Current LVQ schemes use minimum noise fraction (MNF) or average patterns of PCs to compress the data, which are slow and require a priori knowledge of sensor characteristics.
The more recent approach is based on conventional convex set methods, which attempt to address the 'endmember' problem. The endmembers are a set of basis signatures from which every observed spectrum in the dataset can be composed in the form of a convex combination, i.e., a weighted sum with non-negative coefficients. The non-negativity condition insures that the sum can sensibly be interpreted as a mixture of spectra, which cannot contain negative fractions of any ingredient. Thus every data vector is, to within some error tolerance, a mixture of endmembers. If the endmembers are properly constructed, they represent approximations to the signature patterns of the actual constituents of the scene being observed. Orthogonal projection techniques are used to demix each data vector into its constituent endmembers. These techniques are conceptually the most powerful of the previous approaches, but current methods for implementing the convex set ideas are slow (not real time methods) and cannot handle high dimensional pattern spaces. This last problem is a serious limitation, and renders these methods unsuitable for detecting weak targets, since every constituent of a scene which is more dominant than the target must be accounted for in the endmember set, making weak target problems high dimensional. In addition, current convex set methods give priority to the constituents of the scene which are dominant in terms of frequency of occurrence, with a tendency to ignore signature patterns which are clearly above the noise but infrequent in the data set. This makes them unsuitable for detecting strong but small targets unless the target patterns are fully prespecified in advance.
When operating in high dimensional pattern spaces, massive quantities of data must be managed, which requires hundreds of millions of computations for each pixel. Thus the need to compress massive quantities of data for storage, download, and/or real time analysis becomes increasingly important and equally elusive.

Accordingly, it is an object of this invention to compress multispectral data while preserving the spectral information for the detection of objects or substances embedded in complex backgrounds.
Another object of this invention is to accurately and quickly compress multidimensional data sets from chemical, acoustic or other types of hypersensors capable of handling multidimensional analysis.
Another object of this invention is to process signals and compress data with a fast set of algorithms which provide a greatly reduced computational burden in comparison to existing methods.
Another object of this invention is to compress hyperspectral or multispectral data in a system employing parallel processing which offers true real time operation in a dynamic scenario.
These and additional objects of this invention are accomplished by the structures and processes hereinafter described.
The Compression of Hyperdata with ORASIS Multisegment Pattern Sets (CHOMPS) system is an improved version of the Intelligent Hypersensor Processing System (IHPS), employing the CHOMPS algorithm to compress the size of the data output and increase the computational efficiency of the IHPS prescreener operation or comparable operations in other multidimensional signal processing systems. The CHOMPS configured prescreener employs a structured search method used to construct the exemplar set with the minimum number of operations. IHPS forms a series of pattern vectors through the concatenation of the outputs of multiple sensors. The data stream from the sensors is entered into a processing system which employs a parallel-pipeline architecture. The data stream is simultaneously sent to two separate processor pipelines. The first processor pipeline, referred to as the demixer pipeline, comprises the demixer module, which decomposes each pattern vector into a convex combination of a set of fundamental patterns which are the constituents of the mixture. The decomposition is accomplished using projection operations called 'Filter Vectors' generated in the second pipeline, referred to as the adaptive learning pipeline, which contains the adaptive learning module. The signature pattern of a weak constituent or an unresolved small target is separated from background patterns which may hide the target pattern in the unmixed data.
Information detailing the composition of the demixed data patterns is sent to the Display/Output Module along with information about the fundamental patterns and Filter Vectors.
The CHOMPS embodiment of IHPS provides an efficient means of compressing and manipulating the vast quantities of data collected by the sensors. CHOMPS uses the Prescreener, the Demixer Pipeline and the Adaptive Learning Module Pipeline to construct a compressed data set, along with the necessary scene mapping data, facilitating the efficient storage, download and later reconstruction of the complete data set with minimal deterioration of signal information.
BRIEF DESCRIPTION OF THE DRAWINGS

Figure 1 is a representation of the data cube and the orientation of the spatial and wavelength information in X, λ, and T coordinates.

Figure 2 is a block diagram of an embodiment of IHPS showing the system's parallel structure.

Figure 3 is a logic flowchart of the operation of the prescreener.

Figure 4 is a representation of the plane created by V1 and V2 during Gram-Schmidt operations.

Figure 5 is a representation of the orthogonal projections V1o and V2o during Gram-Schmidt operations.

Figure 6 is a representation of the salient vector and the plane defined by V1o and V2o.

Figure 7 is a representation of the 3-dimensional spanning space defined during Gram-Schmidt/Salient operations.

Figure 8 is a representation of the 3-dimensional spanning space showing the salient vectors.

Figure 9 is a representation of a hypertriangle convex manifold.

Figure 10a is a representation of a minimized hypertriangle defined using shrink wrap method 1.

Figure 10b is a representation of a minimized hypertriangle defined using shrink wrap method 2.

Figure 11 is a representation of a minimized hypertriangle defined using shrink wrap method 3.

Figure 12 is a logic flowchart of the operation of the adaptive learning module.

Figure 13 is a flowchart of the operation of the demixer module.

Figure 14 is a block diagram of the ALM processor pipeline employing multithreaded operation.

Figure 15 is a representation of a reference vector and its exemplar projections.

Figure 16 is a logic flowchart of the Pop-up Stack search method.

Figure 17 is a representation of the Single Bullseye search method in vector space.

Figure 18 is a representation of the Double Bullseye search method in vector space.

Figure 19 is a logic flowchart of the CHOMPS wavespace compression mode.

Figure 20 is a logic flowchart of the CHOMPS endmember compression mode.

DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENT

Referring now to the Figures, wherein like reference characters indicate like elements throughout the several views, Figure 1 is a representation of the data cube and the orientation of the spatial and wavelength information. IHPS forms a series of pattern vectors through the concatenation of the outputs of multiple sensors. Each sensor measures a different attribute of the system being observed, and has a consistent relationship to all the other sensors.
In a preferred embodiment the optical system may be employed on an aircraft or spacecraft. As the craft flies over or in close proximity to an area of interest, hypersensor 10 scans the scene or area of interest by taking successive snapshots of the scene below. Each snapshot constitutes a frame of spectral data. The spectral data is scanned frame by frame 210 and displayed as variations in intensity. In the optical example, a frame 210 is the diffracted image on a two dimensional focal plane of a narrow slit which accepts light from a narrow linear strip of the scene. Variations of the optical sensor layout are possible. Each frame 210 comprises multiple lines 205, each line 205 being the spectral characteristic for a specific point in the scene which correlates to a specific coordinate of the area scanned. Each frame 210 is configured such that the spatial information is expressed along the X axis and wavelength information is contained in the λ direction. Data cube 200, as illustrated in Figure 1, is created by the concatenation of successive frames 210 (different spatial strips) and represents the observed spectral data of the scene provided by the hypersensor. The observed spectral data, which is used to create data cube 200, is expressed in vector form and processed one spatial pixel, i.e. one spectrum, at a time. Each pixel is fed into a preprocessor (not shown) which performs normalization and purges bad spectral data, bad spectral data being data corrupted or otherwise useless due to incomplete spectral information.
Referring now to figure 2, which shows a block diagram of the basic system architecture, the data stream from the sensors is entered into a processing system 100 which employs a parallel-pipeline architecture. The data stream is simultaneously sent to two separate processor pipes, one for demixing 140 and one for adaptive learning operations 150.
Demixer module 20 decomposes each pattern vector into a convex combination of a set of fundamental patterns which are the constituents of the mixture. The decomposition is accomplished using projection operations called 'Filter Vectors' generated in the second pipeline by the adaptive learning module 30. Hypersensor 10 collects data and transmits the collected data to prescreener 50. Prescreener 50 constructs a survivor set by extracting exemplars, or data collected by hypersensor 10 which may contain new or useful information.
The signature pattern of a weak constituent or an unresolved small target is separated from background patterns which may hide the target pattern in the unmixed data.
A priori knowledge about the signatures of known targets can be used, and approximate signatures of unknown constituents are determined automatically. Information detailing the composition of the demixed data patterns is sent to Output Module 40, along with information about the fundamental patterns and Filter Vectors. Learning module 30 performs minimization operations and projects the exemplar set information into a reduced dimensional space, generating endmembers and filter vectors.
For other types of hypersensors, the spectral vectors produced by the sensor array would be replaced by a vector of other types of data elements, such as the amplitudes of different frequencies of sound. The organization of input data vectors may also vary somewhat depending on the type of sensor. Aside from these sensor-dependent variations in the type and organization of the input data, the operation, capabilities, and output of the processing system would remain the same.
Again referring to figure 2, the vector data d is then simultaneously fed into the adaptive learning and demixer processor pipes. The parallel processing architecture illustrated in the figure is a preferred structure; however, the system, algorithms and hardware contained herein may be employed in a system with a traditional architecture. The demixer processor pipeline 140 comprises demixer module 20, which decomposes each data vector d_k into a convex combination of a set of fundamental patterns, which are endmembers or constituents of the mixture. The decomposition is accomplished using projection operators called 'Filter Vectors' generated by the adaptive learning module 30.
The filter vectors F_i are a set of optimal matched filters: they are the smallest vectors which will satisfy the condition:

F_i · E_j = δ_ij

where δ_ij is the Kronecker delta function (δ_ij equals 1 if i = j and 0 otherwise) and the vectors E_j are the endmembers of the scene. Thus, each data vector d_k (spectrum) is a convex combination of endmembers plus a noise vector N_k:

d_k = Σ_j c_kj E_j + N_k,   c_kj ≥ 0
The dot product of d_k with filter vector F_J, where J is the Jth endmember, yields the coefficient of endmember E_J in the sum, plus a small error due to noise:

F_J · d_k = Σ_j c_kj (F_J · E_j) + F_J · N_k ≈ c_kJ

This determines the contribution which endmember J makes to the spectrum d_k.
The filter vectors are optimal matched filters which are found by solving the minimization problem (smallest magnitude) described above, subject to the constraint imposed by the orthogonality condition. The solution has the form

F = M⁻¹ E

where the filter vectors are the rows of the matrix F, the endmembers (assumed to have unit magnitude) are the rows of the matrix E, and the matrix M has the elements M_ij = E_i · E_j.
Filter vectors allow the signature pattern of a weak constituent or unresolved small target to be separated from background patterns which may hide the target pattern in the spectral data. Filter vectors demix the spectrum by projecting out one endmember at a time.
CHOMPS is not limited to filter vector manipulations. There are several methods known in the art to demix the spectrum, such as the use of spectral matched filters (SMF) or pseudoinverses, any of which may be suitable for compression use.
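The filter-vector construction and demixing described above amounts to a small linear-algebra computation. The sketch below is an illustrative reading of those formulas, assuming the endmembers have already been found and normalized to unit magnitude; the function and variable names are hypothetical, not from the patent.

```python
import numpy as np

def filter_vectors(E):
    """Compute optimal matched filters F from unit-norm endmembers E.

    E has shape (n_endmembers, n_bands); rows are endmembers.
    Solves F = M^-1 E with M_ij = E_i . E_j, so that F_i . E_j = delta_ij.
    """
    M = E @ E.T                   # Gram matrix of the endmembers
    return np.linalg.solve(M, E)  # rows of the result are the filter vectors F_i

def demix(F, d):
    """Project spectrum d onto each endmember: coefficient c_j = F_j . d."""
    return F @ d

# Hypothetical example: 3 endmembers in a 5-band space.
rng = np.random.default_rng(0)
E = rng.random((3, 5))
E /= np.linalg.norm(E, axis=1, keepdims=True)  # unit magnitude, as assumed above
F = filter_vectors(E)
d = 0.6 * E[0] + 0.3 * E[1] + 0.1 * E[2]       # a synthetic convex mixture, no noise
print(demix(F, d))                             # approximately [0.6, 0.3, 0.1]
```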
Information c_kj detailing the composition of the demixed data spectrum is sent to Display/Output module 40 from the demixer module through the demixer pipeline 140, along with information E_j about the fundamental spectral patterns, and filter vectors from adaptive learning module 30 via the adaptive learning pipeline 150. Display/Output module 40 displays the distribution of the constituents of the scene, and transmits or stores the demixed data for later analysis.
The adaptive learning or second processor pipeline 150 comprises prescreener 50 and adaptive learning module 30. The object of prescreener 50 is to generate the exemplar ensemble {S} to a user specified precision by comparing each incoming spectral vector d to the up-to-date exemplar set {S}.
Referring now to figure 3, prescreener 50 receives data vectors d from the preprocessor (not shown) and generates a reduced set of vectors, or exemplar ensemble, called the survivor or exemplar set 55. Exemplar set 55 is then transmitted to adaptive learning module 30.

SUBSTITUTE SHEET (RULE 26) 1 Prescreener 50 reduces the amount of data processed by discarding spectral signatures which 1 contain redundant information. This reduces the computational burden on the other elements s of the learning pipeline. The exemplar set 55 is generally less than 1°~ of the total input 4 stream, but its size varies depending upon the conditions and applications.
In CHOMPS, prescreener 50 is implemented in several different ways, depending on the application's timeliness requirements and the processing system hardware. In a preferred embodiment the prescreener module is implemented using multithreaded or messaging multiprocessor operation. Referring now to figure 14, which shows a block diagram of the ALM
processor pipeline 150 employing multithreaded operation, data generated by sensor 10 passes through multiple processor pipes. The data set is segmented so that each prescreener 50 processes a different section of the input stream. The prescreeners are coupled to allow each prescreener module 50 to share exemplar set information, updating and receiving updates from the other prescreeners. This allows each prescreener to build a complete exemplar set {S}
which reflects the exemplars extracted from vector data set {d} by each prescreener 50. In multithreaded operation the prescreeners 50 can also be decoupled for independent operation.
In this mode, each prescreener processes the complete data set, and a separate software module reconciles the output of the parallel prescreeners into a single exemplar set.
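A sketch of the multi-prescreener idea in its simplest decoupled form: partial exemplar sets are produced independently and a separate step reconciles them into a single set. The segmenting of the stream and the merge rule shown here are illustrative assumptions; in practice the pieces would run as parallel threads or on networked machines.

```python
import numpy as np

def prescreen_segment(segment, eps=0.01):
    """Prescreen one portion of the input stream into a partial exemplar set."""
    exemplars = []
    for d in segment:
        if not any(np.dot(d, s) > 1.0 - eps for s in exemplars):
            exemplars.append(d)
    return exemplars

def reconcile(partial_sets, eps=0.01):
    """Merge partial exemplar sets from parallel prescreeners into a single exemplar set."""
    merged = []
    for part in partial_sets:
        for s in part:
            if not any(np.dot(s, t) > 1.0 - eps for t in merged):
                merged.append(s)
    return merged

# Illustrative serial stand-in for the parallel pipes:
# segments = np.array_split(data_stream, n_prescreeners)
# exemplar_set = reconcile([prescreen_segment(seg) for seg in segments])
```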
The complete and updated exemplar set {S} is then passed to the ALM 30 for endmember generation. The endmember data and filter vectors are passed to the output module.
CHOMPS is thus able to quickly and efficiently sample a very large data set and generate an exemplar set by prescreening large "chunks" of the data set simultaneously.
Coupled with the compression and computational management techniques discussed throughout this text, the ability to employ parallel processing at the prescreener level results in a substantial improvement in the efficiency of IHPS and CHOMPS, significantly extending the real time operation envelope.
The multiprocessor configuration illustrated in figure 14 may also be employed using multiple prescreeners distributed in a network of machines. CHOMPS is also capable of being employed using single threaded operation, where each vector is processed sequentially by a single processor.
Figure 3 illustrates the flowchart of the logical operation of prescreener 50.
Prescreener 50 generates the exemplar set {S} by comparing 54 the data spectrum of the most recently sampled pixel with existing members of the exemplar set 55. The exemplar set is generated by performing dot operations in accordance with the relation:

d_i · S_j > 1 - ε

where d_i is the ith newly sampled data vector, S_j is an existing exemplar set vector, and ε is a variable controlling threshold sensitivity. Here, the vectors d_i and S_j are assumed to be normalized to unit magnitude. Thus, the condition d_i · S_j = 1 means that the two vectors are identical, and the condition d_i · S_j > 1 - ε means that they are almost identical if ε is small.
Vectors d_i which meet the above condition for any exemplar S_j are discarded 52 and the next vector is examined. Discarded vectors are not included in the exemplar set. The rejection count for each exemplar is stored in an array for future recall.
The value of ε, which is set by the operator or a control system, is a function of the exemplar set size desired, the memory length for exemplar set values, the desired throughput of data and the noise in the signal 56.
Generally, as the value of ε is increased, the sensitivity of the system is decreased. The pruner 51 is a memory management device which determines when an exemplar set vector S_j should be purged from memory. Pruner 51 monitors the exemplar set 55 and adjusts the control parameters to control exemplar set size. This is accomplished by setting the value for ε 57, and the maximum allowed age of an exemplar, which determine the threshold for additions to the exemplar set and the time an exemplar is allowed to remain in the exemplar set without being regenerated.
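A minimal sketch of this exemplar extraction loop, assuming unit-normalized spectra and an illustrative threshold; the function and variable names are not from the patent.

```python
import numpy as np

def prescreen(data_vectors, eps=0.01):
    """Build an exemplar (survivor) set using the test d_i . S_j > 1 - eps.

    data_vectors: iterable of unit-norm 1-D numpy arrays.
    Returns the exemplar list and a parallel list of rejection counts.
    """
    exemplars, reject_counts = [], []
    for d in data_vectors:
        for j, s in enumerate(exemplars):
            if np.dot(d, s) > 1.0 - eps:   # nearly identical to an existing exemplar
                reject_counts[j] += 1      # store the rejection for future recall
                break
        else:                              # no exemplar rejected d: it carries new information
            exemplars.append(d)
            reject_counts.append(0)
    return exemplars, reject_counts
```

In practice the inner loop is the expensive step; the focus searching techniques described below exist to avoid scanning the whole exemplar set for every incoming vector.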
The exemplar set vector S_j used in the d_i · S_j > 1 - ε comparison is preferably chosen via the use of a structured search technique to minimize the number of dot operations while offering thorough analysis of the exemplar set 55. This may be accomplished by comparing 54 the newly sampled data vector with the most recent vectors entered into the exemplar set.
Other search methods which minimize the number of operations necessary for thorough matching are also suitable. CHOMPS employs several such structured techniques designed to minimize the number of operations necessary for effective prescreener operation, discussed in detail below.
The computational management techniques employed by CHOMPS are referred to as Focus searching. Focus searching includes priority searching and zone searching, which generally relate to focusing the comparisons of the incoming data vector d_i to the exemplars S_j so that relevant exemplars can be quickly located, and to reducing the number of exemplar comparisons necessary before a determination can be made as to whether a new data vector d_i is similar to an existing exemplar S_j.
In a priority search CHOMPS will assign a higher priority to some exemplars than others on the basis of an exemplar's age, past comparisons or some other statistical basis. The exemplars designated with the higher priority are initially used in the d · S > 1 - ε prescreener comparison. Exemplars with a lower priority are used later if necessary.
In a preferred embodiment, one such computational management technique CHOMPS employs is in the form of a priority search technique called a Pop-up test. The pop-up test uses a data structure called a pop-up stack. The pop-up stack is a subset of the exemplar set {S} containing only the exemplar vectors S_j that most recently rejected an incoming spectral data vector d_i. These vectors include the most recent exemplar entry, and the exemplar that most recently rejected a candidate exemplar. When a data vector is determined to be dissimilar from the other exemplar set members, it is entered into the pop-up stack.
Referring now to figure 16, which shows a flowchart of the pop-up stack test, the prescreener first receives a data vector d_N from the sensor 556, where N is just the order in which the vector is received. The prescreener then performs the standard dot product comparison to determine if d_N is similar to S_j. If the condition in the inequality is met, d_N is rejected as similar to S_j, or repetitive data, 565 and the next data vector d_N+1 is retrieved 566. S_j is then placed at the top of the stack 567, resulting in the next data vector sampled being compared to S_j first. If the condition is not met, the prescreener retrieves the next exemplar in the stack S_j+1 555, and repeats the d_N · S_j > 1 - ε comparison 564, where S_j = S_j+1. If the condition is met, the prescreener again rejects d_N as similar 565, places S_j+1 at the top of the stack 567 and retrieves the next data vector d_N+1 for comparison 566, 556. If the condition is not met, the prescreener will go on to the next test 570.
The pop-up stack thus gives priority to the last exemplar to reject a data vector as similar. Called a rejector, this exemplar is statistically more likely to be similar to the next incoming data vector than an exemplar with a more distant rejection history.
Thus the pop-up test improves the chances of quickly classifying a vector as similar with a single comparison operation. Other variations of the pop-up stack are possible; for instance, the newest rejector may not be assigned the highest priority (used in the first comparison), it may just be given a higher priority than the majority of the other exemplars.
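A sketch of the pop-up priority ordering, under the assumption that the stack is kept as a plain list whose front entries are the most recent rejectors; the names and the cycle limit are illustrative.

```python
import numpy as np

def popup_test(d, stack, eps=0.01, max_checks=3):
    """Compare d against the top few exemplars of the pop-up stack.

    Returns True if one of the first max_checks exemplars rejects d using the
    d . S > 1 - eps test; the rejector is promoted to the top of the stack so it
    is tried first on the next incoming vector.  Returns False otherwise, in
    which case the caller falls through to the zone (bullseye) search.
    """
    for j in range(min(max_checks, len(stack))):
        if np.dot(d, stack[j]) > 1.0 - eps:
            stack.insert(0, stack.pop(j))   # promote the rejector to the top
            return True                     # d is repetitive data
    return False
```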
Preferably, the pop-up test is performed for no more than 2 or 3 cycles; however, multiple iterations of the pop-up test may be performed. In the preferred embodiment, after 2 or 3 cycles, CHOMPS next performs a Zone search.
In a Zone search CHOMPS chooses a point of reference, and defines the exemplar set {S} and the incoming data vectors d according to their relation to the reference. The exemplar data set {S} is labeled and stored according to some defined relationship with the reference point, structured to allow the prescreener to quickly recall exemplars S_j which meet a given parameter in relation to data vector d_i. Thus CHOMPS can use the incoming data vector's relation to the reference point to define a zone of exemplars to compare with d_i and determine whether the incoming data vector d_i contains new information.
In a preferred embodiment CHOMPS designates a vector as a reference point. These vectors are referred to as reference vectors. The reference vector is used as a reference point from which the vectors of the exemplar set {S} and data set {d} are defined.
Reference vectors may be arbitrary; however, a further reduction in the number of computations necessary is achieved if the reference vectors are selected such that the exemplar projections yield the maximum possible spread in each reference direction. Reference vectors may be chosen as approximations to the largest principal components of the current exemplar set. In a preferred embodiment the reference vector is initially chosen arbitrarily, and once the prescreener constructs the exemplar set, CHOMPS uses a Principal Components Analysis to designate one or more reference vectors.
In the Principal Components Analysis, CHOMPS uses the exemplar vectors as well as the rejection count for each exemplar to estimate the covariance matrix. As more image pixels arrive, the reference vectors are updated, based on mission requirements and considerations as to whether an update is beneficial. In CHOMPS, the data vectors are addressed on the basis of their projection onto the reference vector.
Referring now to figure 15, which illustrates a reference vector R and exemplar set vectors S_N projected onto it, the angle θ_N between the reference vector and each incoming spectral vector is computed. The reference angle for each exemplar is stored in an array or table along with a notation designating which vector the angle θ_N represents, for future recall.
In a preferred embodiment the exemplar set data, the exemplar number and its orientation to the reference vector are contained in a linked list or hash table, known in the art, such that each exemplar can be quickly located and recalled by the prescreener as a function of the reference angle θ_N. Thus, the actual reference angle need not be stored; the reference angle is determined as a function of which address the entry occupies. As new entries to the linked list are made, the address pointers are changed, rather than shuffling and reloading the entire array each time an exemplar is changed.
A preferred embodiment of CHOMPS also uses a form of the Bullseye test to locate the exemplars in the current exemplar set which are likely to reject the new data vector d as similar.
In the Bullseye test, CHOMPS uses the data vector's projection onto the reference vector in concert with the exemplars' projections onto the reference vector to eliminate all existing exemplars that could not possibly match the incoming vector. CHOMPS accomplishes this by constructing a cone which defines the zone in which possible matches may occur and excluding exemplars which do not fall within that range.
The bullseye test uses the angle defined by the new data vector and the reference vector to locate all exemplars in the current set that match the new spectral vector to the specified angular precision θ_c, where θ_c = cos⁻¹(1 - ε). Those exemplars are then tested with the standard prescreener comparison inequality (d_i · S_j > 1 - ε) to make a determination as to whether d_i is similar to S_j.
Figure 17 illustrates reference vector R 600 and cone 610 defining a range in which a possible match (rejecting exemplar) may occur. Only exemplars within the match zone 650 will be compared to the new data vector. These exemplars, contained in match zone 650, which form an annulus having a center at 660 with maximum thickness 2θ_c, are representative of those vectors which have similar projections onto the reference vector within the selected precision. If an exemplar's projection does not fall into the match zone, there is no need to compare that exemplar to the data vector d_i, because the data vector d_i cannot be similar within the desired precision, defined by the thickness of match zone 650.
In the bullseye test CHOMPS determines which vectors lie within the annulus by keeping an indexed list of entries in the exemplar table, ordered by angle to the reference vectors. Once the zone is computed, a zig-zag search is performed where the candidate vector is compared to the exemplars with angles starting at θ_0 and deviating in both directions. If the entire zone does not produce a match, the vector is determined to contain new information, designated as a new exemplar and added to exemplar set {S}.
Thus, through the use of reference vector R and the bullseye test, the number of comparisons necessary to test the entire exemplar set is reduced from several thousand to something on the order of 10.
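A sketch of the single-bullseye zone search, assuming the exemplar reference angles are maintained in a sorted list; the use of the bisect module and the helper names are illustrative choices, not prescribed by the patent.

```python
import bisect
import numpy as np

def bullseye_candidates(d, R, angles, exemplars, eps=0.01):
    """Return only the exemplars whose reference angles fall inside the match zone.

    R is a unit reference vector.  angles is kept sorted, with exemplars[i] the
    exemplar whose reference angle is angles[i].  A match d . S > 1 - eps implies
    |theta_S - theta_d| <= theta_c = arccos(1 - eps), so everything outside that
    window can be excluded without any dot products.
    """
    theta_d = np.arccos(np.clip(np.dot(d, R), -1.0, 1.0))
    theta_c = np.arccos(1.0 - eps)
    lo = bisect.bisect_left(angles, theta_d - theta_c)
    hi = bisect.bisect_right(angles, theta_d + theta_c)
    return exemplars[lo:hi]       # the annulus of possible rejectors, often only a handful
```

A double-bullseye version would repeat the window test against a second reference vector and keep only the exemplars that fall inside both match zones.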
In an alternative embodiment of the bullseye test, a substantial increase in the precision and speed of the search is realized by designating multiple reference vectors, each one having its own associated bullseye. One such embodiment is depicted in figure 18, which shows what may be described as a double bullseye cone. A first cone 610 is defined in relation to reference vector R, and a second cone 710 is defined in reference to a second reference vector R' 700.
Exemplars that are candidates to match the new data vector are located within the intersection of the match zones for each reference vector, 777 and 888. Through the use of multiple bullseyes defined by multiple reference vectors, the "hot zone", or area in which a likely matching exemplar must reside, is reduced. Only the vectors which simultaneously occupy both match zones are possible matches and need be compared to the new data vector.
Referring to figure 18, the only vector which meets this test requirement is exemplar 620.
Thus, CHOMPS is able to conclude that all exemplars in the data set are not rejectors (dissimilar to data vector d), with the possible exception of exemplar 620. By employing CHOMPS and a variation of the bullseye test, the prescreener is able to exclude the majority of the exemplar set from the prescreener's comparison test (every exemplar except exemplar 620), and designate the incoming data vector as similar or dissimilar to the exemplar set exemplars after a comparison of the data vector to exemplar 620, a single comparison.
Excluding the 2 or 3 comparisons for the pop-up stack, only one dot product operation need be processed before the prescreener can determine whether the new data vector contains new data and should be added to the exemplar set.
Alternative embodiments of the zone type test may be employed to minimize the computational load on the prescreener, depending on mission requirements and the precision desired. For example, an avatar bullseye test may be employed. In the avatar bullseye test, only the exemplars with the highest vector rejection totals are used rather than the complete exemplar set.
In yet another version of the bullseye test, the match zone width can be diminished by a significant factor with minimal generation of redundant exemplars but with an increase of algorithm speed. For a candidate exemplar which makes an angle θ_0 with the reference vector, the width of the match zone is given by δ where:

δ = 2 sin θ_0 √(1 - (1 - ε)²)

The data set describing a scene may be processed using any of three modes: the Automatic scene segmentation mode, the Full scene mode or the Adaptive/continuous mode.
In the Automatic scene segmentation mode, the scene is divided into a series of segments and is processed by the prescreener one segment at a time. Segment size is determined as a function of the exemplar set size. This variation offers several advantages, particularly when attempting to resolve dynamic scenes. With a dynamic or complex scene, in general, a larger exemplar set is required to maintain a specified precision. As the number of exemplars increases, the processing time increases very steeply and the compression ratio decreases.
Automatic segmentation may also increase compression efficiency because the dimensionality and the exemplars added into a scene segment in a complex setting only affect the segment that produced them, and not the possibly simpler segments that follow.
The full scene mode is identical to the Automatic segmentation mode, except that a scene is treated as one continuous dataset rather than a number of smaller segments.
While operating in the single or multiple segmentation modes, the prescreener feeds the exemplar set to the Adaptive Learning Module (ALM) for endmember computation at the end of each segment or block of data.
In the Adaptive mode the same efficiency is accomplished by giving the exemplars a finite but renewable lifetime; that is, they are pushed to long term system memory if they have not produced a spectral match with new vectors for predefined time/frame intervals.
The lifetime of the individual exemplars depends on their relationship to the endmembers and the priority assigned to the importance of their spectral patterns. For example, exemplars that contain important subpixel objects are assigned longer lifetimes than ones containing variations of background substances.
In the adaptive/continuous mode an ALM cycle is triggered when new exemplars change the dimensionality of the enclosing subspace, when a new exemplar is defined that lies outside the subspace defined by the current endmember set, or by other criteria which are mission dependent.
When a new exemplar is found, it can be projected into the orthogonal basis set produced by the ALM to confirm whether the new spectrum produced a subspace dimensionality change. A projection in the endmember space, utilizing the current matched filter vectors, indicates whether the subspace has expanded past the bounds of the current simplex, determined by the shrinkwrap component of the ALM. Either test will signal a trigger to the ALM. Additional criteria, such as the number of image frames since the last ALM cycle, are used to avoid triggering excessive numbers of learning cycles.
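A minimal sketch of the second trigger test, using the current filter vectors to check whether a new exemplar has moved outside the current simplex; the tolerance and names are assumptions for illustration.

```python
import numpy as np

def needs_alm_cycle(new_exemplar, F, tol=1e-3):
    """Signal an ALM learning cycle if the new exemplar lies outside the current simplex.

    F holds the current matched filter vectors (rows).  A convex mixture of the
    current endmembers must have non-negative coefficients, so a clearly negative
    coefficient means the exemplar falls outside the current hypertriangle.
    """
    coeffs = F @ new_exemplar
    return bool(np.any(coeffs < -tol))
```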
Referring again to figure 2, the exemplar set data, as computed by prescreener 50, is input 58 into adaptive learning module 30. Learning module 30 computes from the exemplar set a set of endmembers {E} which together span the current scene. Endmembers are a set of fundamental patterns (vectors) from which any pattern vector in the data set can be reconstructed as a convex combination in reduced dimensional space, to within an error determined by the noise or other error criteria. The requirement that all of the observed spectral vectors d_k be representable as convex combinations of conventional basis vectors insures that the decomposition makes sense as a physical mixture of constituents, since any such mixture must have this property. The resulting patterns conform as closely as possible to signatures of actual constituents of the scene.
Referring now to figure 12, learning module 30 may employ an ordered Gram-Schmidt analysis using salients to construct a reduced dimensional spanning space 125, while retaining the spectral information contained in the exemplar set. The spanning space is constructed based on a spectral uniqueness hierarchy. The observed spectra of the exemplar set, expressed as vector data, are then projected into the spanning space 126. A pixel purity determination algorithm may also be employed to define a subset of the salients, followed by a Gram-Schmidt analysis to complete the set. Computation of the endmembers is performed by learning module 30 by projecting the exemplar set data into a reduced dimensional spanning space using a Gram-Schmidt/Salient analysis of the exemplar set data, and employing shrink wrap minimization 127 to minimize the spanning space volume defined using the Gram-Schmidt/Salient analysis. The endmembers are defined by the vertices of the hypertriangle defined by the minimized spanning space 128, as illustrated in figure 8.
Gram-Schmidt/Salient Analysis

The spanning space is defined by using a Gram-Schmidt/Salient analysis of the exemplar set vectors. In the parameter vector space which contains the exemplar set data, one first determines the two vectors which are furthest apart in the space; then, in the plane formed by these two vectors, one selects two mutually orthogonal vectors which lie in the plane. These mutually orthogonal vectors are for convenience called basis vectors, for reasons made apparent below. Next, select the vector in the data cube which is furthest from the plane, identify the hyperplane in which the basis vectors and the newly selected vector lie, and select a third basis vector such that it lies in the hyperplane and is mutually orthogonal to the other two basis vectors. This process is repeated, and one accumulates more and more mutually orthogonal basis vectors, until the most distant remaining vector is found to be within a preselected distance of the hyperplane containing all the basis vectors. At this point, the exemplar set vectors are projected onto the reduced dimensional space defined by these basis vectors.
Through the reduction of the dimension of the vector space in which one must work, CHOMPS correspondingly reduces the number of operations one must do to perform any calculation. Since none of the data vectors lie very far outside the hypervolume spanned by the basis vectors, projecting the vectors into this subspace will change their magnitude or direction very little, i.e. projection merely sheds components of each vector which were small already. Furthermore, because such components are necessarily too small to correspond to significant image features, these components are disproportionately likely to be noise, and discarding them will increase the system's signal to noise ratio.

Gram-Schmidt/Salient analysis of the exemplar set data is performed in accordance with the following algorithm:
a) Designate the two exemplar vectors farthest apart, V1 and V2. Figure 4 illustrates the orientation of V1 and V2 and the plane that V1 and V2 define.
b) Generate a 2 dimensional orthogonal set of basis vectors from V1 and V2, labeled V1o and V2o, in the plane defined by V1 and V2, labeled as PV012, as illustrated in Figure 5.

c) Determine the salient vector (the vector displaced farthest from the plane) in relation to plane PV012, defined in Figure 6 as S1.
d) The salient S1 can be represented as a sum of vectors S1⊥ and S1∥, where S1⊥ is orthogonal to the plane PV012 and S1∥ lies in the plane. Use the Gram-Schmidt procedure to find S1⊥, and call this V3o. V1o, V2o and V3o now define a subspace in 3 dimensions. See the figure 7 representation of the subspace created by this step.

e) Select the salient S2, which is the exemplar vector farthest from the subspace defined by step (d).

f) Generate a new orthogonal direction from S2, defined as V4o. V4o coupled with V1o, V2o, and V3o now defines a subspace of 4 dimensions.
lQ
g) Steps (e) and (f) are repeated to define a spanning space of N dimensions. The distance out of the current subspace of the salient selected at each step is the maximum residual error which would be incurred by projecting all of the exemplars into the subspace. This decreases at each stage, until the remaining error is within a specified error tolerance. At this point the subspace construction process is complete. The value of N is the number of dimensions necessary to allow the projection of the exemplar set data vectors into the subspace while at the same time preserving important but infrequent signatures.

h) Project all of the exemplar set data into the spanning space defined in steps (a)-(g).
The Gram-Schmidt/Salient analysis is the preferred subspace determination for CHOMPS; however, CHOMPS is not limited to the Gram-Schmidt/Salient method for subspace determination. Other methods known in the art may be employed, including but not limited to PCA methods and Pixel Purity methods.
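A compact sketch of the salient-driven subspace construction in steps (a) through (h), using standard Gram-Schmidt orthogonalization; the error tolerance and the function name are illustrative assumptions.

```python
import numpy as np

def salient_spanning_space(X, tol=1e-3):
    """Build an orthonormal basis for the exemplar set X (rows are exemplars).

    At each step the exemplar with the largest residual outside the current
    subspace (the salient) contributes a new orthogonal basis direction, until
    the largest residual falls below tol.  Returns the basis and the projected exemplars.
    """
    # Steps (a)-(b): seed the basis from the two exemplars farthest apart.
    d2 = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    i, j = np.unravel_index(np.argmax(d2), d2.shape)
    v1 = X[i] / np.linalg.norm(X[i])
    v2 = X[j] - (X[j] @ v1) * v1
    basis = [v1, v2 / np.linalg.norm(v2)]

    while True:
        B = np.vstack(basis)
        residuals = X - (X @ B.T) @ B          # components of each exemplar outside the subspace
        norms = np.linalg.norm(residuals, axis=1)
        k = int(np.argmax(norms))              # steps (c)/(e): the current salient
        if norms[k] < tol:                     # step (g): remaining error within tolerance
            break
        basis.append(residuals[k] / norms[k])  # steps (d)/(f): Gram-Schmidt on the salient

    B = np.vstack(basis)
    return B, X @ B.T                          # step (h): exemplars projected into the spanning space
```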
Shrink Wrap

Once the N-dimensional spanning space is defined, a convex manifold in the form of a hypertriangle within the spanning space is generated using shrink wrap minimization. Shrink wrap minimization of the spanning space is a simple minimization operation, in which the volume of the manifold is reduced, while maintaining the condition that all of the data vectors projected into the reduced dimensional space are contained within the hypertriangle. The vertices of the hypertriangle are the endmembers, and the volume defined by the hypertriangle itself is the locus of all possible mixtures (convex combinations) of endmembers. The shrink wrap process determines good approximations of the physical constituents of the scene (endmembers) by insuring that the shape and orientation of the hypertriangle conforms as closely as possible to the actual distribution of the data vectors (exemplars). The exemplars are assumed to be mixtures of the actual constituents. The number of endmembers is equal to the dimension of the spanning space.
0 For purposes of example, the method described above and the following methods have 21 been found effective, however, a combination of one or more of the disclosed methods, or any ~1 other minimization method which maintains the condition that the majority of the data vectors ~3 be contained within the minimized space is suitable.
l4 Adaptive learning module 30 generates a set of filter vectors {F;} and endmembers {E"

1 EQ, E'...EN}in accordance with one of the following procedures, or variants thereof 2 Method ~
With reference to figure 10a, find a set of endmembers {E_i} such that each endmember E_i is matched to a salient vector S_i, and is as close as possible to its salient, subject to the condition that all the data vectors are inside the hypertriangle with vertices {E_i}. I.e., minimize

C = Σ_i (E_i - S_i)²

subject to the constraints F_i · d_k ≥ 0 for all i and k. The filter vectors are computed from the candidate endmembers as described above. This constraint condition means that all the coefficients of the decomposition of d_k into endmembers are non-negative, which is equivalent to saying that all d_k are inside T_E1E2...EN. This is a nonlinear constrained optimization problem which can be solved approximately and quickly using various iterative constrained gradient methods.
Method 2

Compute a set of filter vectors {FS_i} from the salients {S_i}, using the formulas previously provided. These vectors will not, in general, satisfy the shrink wrapping constraints; see figure 10b. Find a new set of Filter vectors {F_i} such that each Filter vector F_i is matched to a salient Filter vector FS_i, and is as close as possible to its salient filter, subject to the condition that all the data vectors are inside the hypertriangle. I.e., minimize

C_i = (F_i - FS_i)²

subject to the constraints F_i · d_k ≥ 0 for all k. This is a set of independent quadratic programming problems with linear constraints, which can be solved in parallel using standard methods. The decoupling of the individual filter vector calculations increases computational efficiency, and this method amounts to manipulating the plane faces of the triangle instead of the vertices.
Given solutions for the Filter vectors, find the endmembers using the same procedure used to compute Filter vectors from endmembers (the defining relationships are symmetric except for a normalization constant).
Method 3

With reference to figure 11, find an approximate centroid C_d of the set of exemplar vectors, and then find the hyperplane of dimension one less than the dimension of the enclosing space which is closest to the centroid. Hyperplane 120 divides the complete subspace into two halves, and the minimization is subject to the constraint that all the exemplar vectors d_k must be in the same half space as the centroid C_d. The normal to the optimal hyperplane 110 is F1, the first filter vector, and the condition that all the exemplars are in the same half-space is equivalent to the constraint that F1 · d_k ≥ 0 for all k. This process is equivalent to finding a vector F1 with a fixed magnitude which minimizes the dot product F1 · C_d subject to the constraint F1 · d_k ≥ 0 for all k. As such it is amenable to solution using conventional constrained optimization methods. The hypertriangle T_E1E2E3 can be constructed out of a set of suitably chosen optimal (locally minimal distance to the centroid) bounding hyperplanes which form the faces of the convex manifold. The normal to each face defines the associated filter vector. Again, the endmembers can be determined from the Filter vectors at the end of the shrink wrapping process.
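Method 3 can be approached with any conventional constrained optimizer. The sketch below uses SciPy's SLSQP routine purely as an illustration; the unit-magnitude constraint, the starting point and the names are assumptions, not the patent's prescribed solver.

```python
import numpy as np
from scipy.optimize import minimize

def first_filter_vector(X):
    """Find F1 minimizing F1 . Cd subject to F1 . d_k >= 0 for every exemplar d_k.

    X holds the exemplars projected into the spanning space (rows).  The unit-norm
    equality constraint fixes the magnitude of F1, as the minimization requires.
    """
    Cd = X.mean(axis=0)                                    # approximate centroid of the exemplars
    constraints = [
        {"type": "ineq", "fun": lambda f: X @ f},          # all exemplars in the same half-space
        {"type": "eq",   "fun": lambda f: f @ f - 1.0},    # fixed (unit) magnitude
    ]
    result = minimize(lambda f: f @ Cd,
                      x0=Cd / np.linalg.norm(Cd),
                      constraints=constraints,
                      method="SLSQP")
    return result.x
```

Repeating the same minimization for each bounding hyperplane yields the full set of filter vectors, from which the endmembers follow as described above.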
Referring to figure 12 and figure 2, once the endmembers and filter vectors are computed, adaptive learning module 30 stores this endmember and filter vector data, along with data reflecting the exemplar set, and source vectors 35 for future recall. The adaptive learning module 30 then searches the exemplar set for any changes 148. If the system detects a change in the exemplar set 99, the basis and shrink wrap processes are repeated 141. This process allows the system to continually learn and adapt to changes in the environment.
Endmember data and the accompanying exemplar set and source data can be labeled as being consistent with a particular threat or target, allowing the system to learn and remember the signatures of specific targets in real time.
Again referring to figure 2, the filter vectors and endmember data stream are transmitted from learning module 30 to demixer module 20 for computation of the endmember coefficients. The original data set from the sensor is also transmitted to demixer module 20 through the first processor pipe.
Demixer module 20 may contain several processors, each of which convolves the unprocessed data vector with a different filter vector. These operations could be performed sequentially on a single fast processor, but in the best mode they are performed in parallel. The output of demixer module 20 is a vector called the endmember coefficient vector, the jth element of which indicates the fraction of the jth fundamental pattern which is present in the unprocessed data vector. The endmember coefficients indicate the amplitude of the signal from the associated endmember in a mixed spectrum.
iS The output ofdemixer module 20, is a vector called the endmember coefficient vector, the jth 14 element of which indicates the fraction of the jth fundamental pattern which is present in the unprocessed data vector. The endmember coefficients indicate the amplitude of the signal from 16 the associated endmember, in a mixed spectrum.
17 Demixer module 20 convolves the unprocessed data vector and computes the 18 endmember coefficient in accordance with the equation;
n F~ d =~c F~ E+F~ N =c J k j_~ jk J j J k Jk 19 where FJ= said filter vector, dk = said data set, c~k = said endmember coefficient, Nk = noise vector and E~= said endmember.
Demixer module 20 next computes the fraction coefficient 131, which tells what percentage of the photons from the given pixel are associated with the endmember, in accordance with the equation:

c_jk(fraction) = c_jk A(E_j) / A(d_k)

where A(d_k) is the area under vector d_k, i.e. the sum of the elements of d_k.
Figure 13 illustrates the flowchart for demixer module operation, including the demixer module's 20 function in the system's learning process. If the pixel information passed from the preprocessor indicates bad data or an unlearned spectrum 133, 134, demixer module 20 routes that information to the display/output module 40 with a notation of the status of the pixel data 135, 136.
The spectral information, filter vectors, and endmember coefficients are passed to the display/output module 40 for display and further processing 138. The spectral characteristics of the data are stored, transmitted or displayed in terms of endmembers and endmember coefficients, maintaining the relation:

d_k = Σ_j c_kj E_j + N_k

where d_k = said data vector, c_kj = said endmember coefficient and c_kj ≥ 0, N_k = noise and E_j = said endmember.

The first compression mode employs the basic IHPS algorithm, using the computational management techniques described in this text, and produces a compressed data set which expresses the scene as a set of fraction planes belonging to each endmember E. Compression results from the low dimensional space in comparison to the number of bands employed by the sensor.
For purposes of example, assume a sensor operates in 100 bands, the data set produced 6 by that sensor as input for IHPS will thus tend to have what approaches ~00,000,00o scalar values. This assumes only 1,000,000 pixels in a scene. IHPS expresses the scene as fraction 8 planes. The IHPS endmember form compresses the data stream by the ratio:
\frac{N_B}{N_E}

where N_B is the number of bands and N_E is the number of endmembers necessary to describe the scene.
For many natural scenes at a reasonably small loss rate, IHPS requires on the order of 10 endmembers to describe the scene, which results in a compression ratio of 20:1 for a 200 band sensor.
Following each ALM cycle, all exemplars for the segment are either left uncompressed, projected into the orthogonal basis space, or projected to the current shrinkwrap.
Figure 19 shows a simple logical flowchart of the second or wavespace compression packaging mode, in which the data is expressed in terms of the exemplars and two maps used for reconstruction of the data set d. The complete data set is received from the sensor, containing for purposes of example approximately 200 million scalar values describing a scene. CHOMPS next uses the prescreener to perform the prescreener comparison 701, and reduces the complete data set to an exemplar set. The exemplar set size can be adjusted by the user on the basis of the precision desired or other mission requirements, but for purposes of example may be 10,000 vectors in size (2,000,000 scalars). CHOMPS uses information from the prescreener comparisons to compile a rejector index or rejector plane 704. The rejector plane simply records which exemplar rejected each new data vector received by the prescreener. CHOMPS also builds an intensity map 705, which records the intensity of each new data vector input into the prescreener. The intensity map and rejector index are downloaded along with the complete exemplar set, expressed in wavespace 706.
Thus the data stream of 200 million scalars is expressed as approximately 10,000 exemplars, each having 200 elements, plus the map information, each map containing 1 scalar for each of the 1 million data set vectors, rather than the 200 scalar values for each data vector.
This represents a data set compression approaching 50:1 using the wavespace mode.
The third compression packaging mode is similar to the wavespace compression mode except that CHOMPS projects the exemplars into endmember space for additional compression.
Referring to figure 10, which is a flowchart of the endmember compression mode, the data set is received in the prescreener from the sensor 555 and compared to the existing exemplars 556, just as in the wavespace compression mode 700. The prescreener builds the exemplar set 557, rejector index 558, and an intensity map 559 identical to that in the wavespace compression mode 700. However, in the endmember compression mode CHOMPS sends the exemplar set to the ALM 560, for computation of endmembers 561. The exemplar set is thus projected into endmember space 562, realizing additional compression. The compressed exemplar set is expressed in endmember space, each exemplar now expressed as 10 or so scalars rather than the 200 or so scalar values necessary in wavespace, assuming a sensor having 200 bands. The endmembers expressed in wavespace are added to facilitate reconstruction of the scene. The rejector index and intensity map are also output for reconstruction of the data set.
Endmember compression realizes a 100:1 reduction in the data transfer necessary to describe a scene. The output packet of the compressed segment data consists of the exemplar list in any of the projections above, the endmembers or basis vectors, and the two decoding maps.
Additional information can be injected, such as the number of vectors replaced by each exemplar.
CHOMPS thus uses the Adaptive Learning Module Pipeline to construct a compressed data set, along with the necessary scene mapping data, facilitating efficient storage, download, and later reconstruction of the complete data set with minimal deterioration of signal information.
In a preferred embodiment CHOMPS employs the following algorithm (a code sketch follows step (g) below):
a) Receive data vector d_i into the prescreener.
b) Designate priority exemplars from the existing exemplar set and perform priority testing of the data vector d_i according to the relation (d_i · S_j ≥ 1 − ε), using the exemplars S_j designated with higher priority.
c) If priority testing generates a match condition, go to (a); if no match condition is produced after N iterations, continue.
d) Select at least one reference point or reference vector.

e) Designate at least one match zone and perform zone testing on data vector d_i according to the relation (d_i · S_j ≥ 1 − ε), using only those exemplars S_j which are contained (to the desired precision) within the match zone.
f) If zone testing generates a match condition, go to step (a); if no match condition is produced once the exemplars are exhausted, add d_i to the exemplar set and get the next data vector d_{i+1}.
g) Perform compression packaging via full IHPS mode, wavespace mode, exemplar mode, or a combination thereof.
Obviously, many modifications and variations of the present invention are possible in light of the above teachings. For example, this invention may be practiced without the use of a parallel processing architecture; different combinations of zone and priority testing may be employed, i.e., zone testing may be employed without priority testing; or standard compression algorithms may be employed to compress the intensity map and rejector index.
It is therefore understood that, within the scope of the appended claims, the invention may be practiced otherwise than as specifically described.


Claims (6)

We Claim:
1. A system comprising:
sensor means for generating a plurality of data vectors;
means for producing endmember vectors responsive to said generating;
wherein said means for producing comprises a module adapted to reduce the spanning space of said data vectors by application of the Gram-Schmidt algorithm using salients.
2. A sensor system comprising:
means for collecting data, an exemplar set, said exemplar set comprising exemplar set members;
means for generating a compressed data set, said data set comprising data set members wherein;
said means for generating compares said data set members with respective ones of said exemplar set members effective to characterize said data set members as similar or dissimilar to said exemplar set members according to a preselected criterion, and means for including said data set members characterized as similar in said exemplar set;
means for generating the endmember vectors of said exemplar set;
means for demixing said data set.
3. A system comprising:
sensor means for generating a plurality of data vectors;

means for producing endmember vectors responsive to said means for generating;
wherein said means for producing comprises a module adapted to reduce the spanning space of said data vectors by application of the Gram-Schmidt algorithm using salients;
means for compressing said plurality of data vectors by employing the CHOMPS algorithm.
4. A method for processing hyperspectral data comprising:
generating a plurality of data vectors;
producing endmember vectors responsive to said generating;
wherein said producing comprises reducing the spanning space of said data vectors by application of the Gram-Schmidt algorithm using salients;
compressing said plurality of data vectors by application of the CHOMPS algorithm.
5. A method for processing multidimensional data comprising:
collecting data, generating an exemplar set, said exemplar set comprising exemplar set members;
generating a compressed data set, said data set comprising data set members wherein;
comparing said data set members with respective ones of said exemplar set members effective to characterize said data set members as similar or dissimilar to said exemplar set members according to a preselected criterion, and including said data set members characterized as similar in said exemplar set;
generating the endmember vectors of said exemplar set;
demixing said data set.
6. A method for compressing a data stream comprising:

collecting data, generating an exemplar set, said exemplar set comprising exemplar set members;
generating a compressed data set, said data set comprising data set members wherein;
said generating compares said data set members with respective ones of said exemplar set members effective to characterize said data set members as similar or dissimilar to said exemplar set members according to a preselected criterion, and including said data set members characterized as similar in said exemplar set;
generating the endmember vectors of said exemplar set.
CA002322892A 1998-03-06 1999-01-11 Compression of hyperdata with orasis multisegment pattern sets (chomps) Abandoned CA2322892A1 (en)

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
US09/035,909 1998-03-06
US09/035,909 US6167156A (en) 1996-07-12 1998-03-06 Compression of hyperdata with ORASIS multisegment pattern sets (CHOMPS)
PCT/US1999/000627 WO1999045492A2 (en) 1998-03-06 1999-01-11 Compression of hyperdata with orasis multisegment pattern sets (chomps)

Publications (1)

Publication Number Publication Date
CA2322892A1 true CA2322892A1 (en) 1999-09-10

Family

ID=21885497

Family Applications (1)

Application Number Title Priority Date Filing Date
CA002322892A Abandoned CA2322892A1 (en) 1998-03-06 1999-01-11 Compression of hyperdata with orasis multisegment pattern sets (chomps)

Country Status (7)

Country Link
US (1) US6167156A (en)
EP (1) EP1068586A4 (en)
JP (1) JP2002506299A (en)
KR (1) KR20010041681A (en)
AU (2) AU764104B2 (en)
CA (1) CA2322892A1 (en)
WO (2) WO1999045492A2 (en)

Families Citing this family (29)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE60003557T2 (en) * 1999-05-12 2004-01-08 Microsoft Corp., Redmond SPLIT AND MIX FLOWING DATA FRAMES
US7007096B1 (en) 1999-05-12 2006-02-28 Microsoft Corporation Efficient splitting and mixing of streaming-data frames for processing through multiple processing modules
US6970598B1 (en) * 2000-01-21 2005-11-29 Xerox Corporation Data processing methods and devices
US6804400B1 (en) * 2000-11-01 2004-10-12 Bae Systems Mission Solutions Inc. Adaptive hyperspectral data compression
US6701021B1 (en) * 2000-11-22 2004-03-02 Canadian Space Agency System and method for encoding/decoding multidimensional data using successive approximation multi-stage vector quantization
US6587575B1 (en) * 2001-02-09 2003-07-01 The United States Of America As Represented By The Secretary Of Agriculture Method and system for contaminant detection during food processing
US6529769B2 (en) * 2001-03-08 2003-03-04 Apti, Inc. Apparatus for performing hyperspectral endoscopy
US6728396B2 (en) * 2002-02-25 2004-04-27 Catholic University Of America Independent component imaging
US6947869B2 (en) * 2002-03-29 2005-09-20 The United States Of America As Represented By The Secretary Of The Navy Efficient near neighbor search (ENN-search) method for high dimensional data sets with noise
US7200243B2 (en) * 2002-06-28 2007-04-03 The United States Of America As Represented By The Secretary Of The Army Spectral mixture process conditioned by spatially-smooth partitioning
US7136809B2 (en) * 2002-10-31 2006-11-14 United Technologies Corporation Method for performing an empirical test for the presence of bi-modal data
WO2004061702A1 (en) * 2002-12-26 2004-07-22 The Trustees Of Columbia University In The City Of New York Ordered data compression system and methods
AU2003280610A1 (en) * 2003-01-14 2004-08-10 The Circle For The Promotion Of Science And Engineering Multi-parameter highly-accurate simultaneous estimation method in image sub-pixel matching and multi-parameter highly-accurate simultaneous estimation program
FR2853748A1 (en) * 2003-04-11 2004-10-15 France Telecom METHOD FOR TATOTING A VECTOR-APPROACHING COLOR IMAGE, METHOD FOR DETECTING A TATTOO MARK, DEVICES, IMAGE AND CORRESPONDING COMPUTER PROGRAMS
US7251376B2 (en) * 2003-08-29 2007-07-31 Canadian Space Agency Data compression engines and real-time wideband compressor for multi-dimensional data
US20070279629A1 (en) 2004-01-07 2007-12-06 Jacob Grun Method and apparatus for identifying a substance using a spectral library database
US7792321B2 (en) * 2004-07-28 2010-09-07 The Aerospace Corporation Hypersensor-based anomaly resistant detection and identification (HARDI) system and method
US7583819B2 (en) 2004-11-05 2009-09-01 Kyprianos Papademetriou Digital signal processing methods, systems and computer program products that identify threshold positions and values
US7680337B2 (en) * 2005-02-22 2010-03-16 Spectral Sciences, Inc. Process for finding endmembers in a data set
US8437563B2 (en) * 2007-04-04 2013-05-07 Telefonaktiebolaget L M Ericsson (Publ) Vector-based image processing
US8819288B2 (en) 2007-09-14 2014-08-26 Microsoft Corporation Optimized data stream compression using data-dependent chunking
US8108325B2 (en) * 2008-09-15 2012-01-31 Mitsubishi Electric Research Laboratories, Inc. Method and system for classifying data in system with limited memory
US8897571B1 (en) * 2011-03-31 2014-11-25 Raytheon Company Detection of targets from hyperspectral imagery
RU2476926C1 (en) * 2011-06-16 2013-02-27 Борис Антонович Михайлов Apparatus for processing panchromatic images (versions)
CN102521591B (en) * 2011-11-29 2013-05-01 北京航空航天大学 Method for fast recognition of small target in complicated background
US8655091B2 (en) 2012-02-24 2014-02-18 Raytheon Company Basis vector spectral image compression
US10337318B2 (en) * 2014-10-17 2019-07-02 Schlumberger Technology Corporation Sensor array noise reduction
CA2985378C (en) * 2015-05-29 2023-10-10 Schlumberger Canada Limited Em-telemetry remote sensing wireless network and methods of using the same
CN113220651B (en) * 2021-04-25 2024-02-09 暨南大学 Method, device, terminal equipment and storage medium for compressing operation data

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5491487A (en) * 1991-05-30 1996-02-13 The United States Of America As Represented By The Secretary Of The Navy Slaved Gram Schmidt adaptive noise cancellation method and apparatus
US5832131A (en) * 1995-05-03 1998-11-03 National Semiconductor Corporation Hashing-based vector quantization
US6038344A (en) * 1996-07-12 2000-03-14 The United States Of America As Represented By The Secretary Of The Navy Intelligent hypersensor processing system (IHPS)

Also Published As

Publication number Publication date
WO1999045492A2 (en) 1999-09-10
AU2454999A (en) 1999-09-20
AU764104B2 (en) 2003-08-07
US6167156A (en) 2000-12-26
JP2002506299A (en) 2002-02-26
EP1068586A4 (en) 2001-12-19
AU2455099A (en) 1999-09-20
WO1999045492A3 (en) 1999-11-04
WO1999045497A1 (en) 1999-09-10
EP1068586A2 (en) 2001-01-17
KR20010041681A (en) 2001-05-25

Similar Documents

Publication Publication Date Title
AU764104B2 (en) Compression of hyperdata with orasis multisegment pattern sets (chomps)
He et al. Computing nearest-neighbor fields via propagation-assisted kd-trees
Yang et al. Deephoyer: Learning sparser neural network with differentiable scale-invariant sparsity measures
US6208752B1 (en) System for eliminating or reducing exemplar effects in multispectral or hyperspectral sensors
CA2277308C (en) Intelligent hypersensor processing system (ihps)
US20120269432A1 (en) Image retrieval using spatial bag-of-features
EP2880596A1 (en) System and method for reduced incremental spectral clustering
Winter Comparison of approaches for determining end-members in hyperspectral data
US6947869B2 (en) Efficient near neighbor search (ENN-search) method for high dimensional data sets with noise
Inselberg et al. The automated multidimensional detective
Kurz et al. Improving underwater object classification: BC-ViT
Nhaila et al. New wrapper method based on normalized mutual information for dimension reduction and classification of hyperspectral images
Devi et al. A Novel Fuzzy Inference System-Based Endmember Extraction in Hyperspectral Images.
Garcia et al. Searching high-dimensional neighbours: Cpu-based tailored data-structures versus gpu-based brute-force method
Singh et al. Spectral-spatial hyperspectral image classification using deep learning
Arab et al. Band and quality selection for efficient transmission of hyperspectral images
Vadiraja et al. Leveraging reinforcement learning for evaluating robustness of knn search algorithms
Kuester et al. Impact of different compression rates for hyperspectral data compression based on a convolutional autoencoder
Darling Neural network-based band selection on hyperspectral imagery
Skubalska-Rafajłowicz Clustering of data and nearest neighbors search for pattern recognition with dimensionality reduction using random projections
Carvalho Exploration of unsupervised machine learning methods to study galaxy clustering
Li et al. Class-Specific Auto-augment Architecture Based on Schmidt Mathematical Theory for Imbalanced Hyperspectral Classification
Li et al. Lightweight CNNs Under A Unifying Tensor View
Gillis et al. Parallel implementation of the ORASIS algorithm for remote sensing data analysis
Bui et al. Density-Softmax: Efficient Test-time Model for Uncertainty Estimation and Robustness under Distribution Shifts

Legal Events

Date Code Title Description
EEER Examination request
FZDE Discontinued