WO1994023386A2 - Probabilistic information retrieval networks - Google Patents

Probabilistic information retrieval networks Download PDF

Info

Publication number
WO1994023386A2
Authority
WO
WIPO (PCT)
Prior art keywords
documents
document
probability
collection
query
Prior art date
Application number
PCT/US1994/002579
Other languages
French (fr)
Other versions
WO1994023386A3 (en)
Inventor
Howard R. Turtle
Gerald J. Morton
F. Kinley Larntz
Original Assignee
West Publishing Company
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by West Publishing Company filed Critical West Publishing Company
Priority to AU64450/94A priority Critical patent/AU6445094A/en
Publication of WO1994023386A2 publication Critical patent/WO1994023386A2/en
Publication of WO1994023386A3 publication Critical patent/WO1994023386A3/en

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/30Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
    • G06F16/33Querying
    • G06F16/3331Query processing
    • G06F16/334Query execution
    • G06F16/3346Query execution using probabilistic model
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/30Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
    • G06F16/33Querying
    • G06F16/3331Query processing
    • G06F16/3332Query translation
    • G06F16/3335Syntactic pre-processing, e.g. stopword elimination, stemming
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/90Details of database functions independent of the retrieved data types
    • G06F16/93Document management systems
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y10TECHNICAL SUBJECTS COVERED BY FORMER USPC
    • Y10STECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y10S707/00Data processing: database and file management or data structures
    • Y10S707/99931Database or file accessing
    • Y10S707/99933Query processing, i.e. searching
    • Y10S707/99935Query augmenting and refining, e.g. inexact access

Definitions

  • This invention relates to information retrieval, and particularly to document retrieval from a computer database using probability techniques. More particularly, the invention concerns a method and apparatus for establishing probability thresholds in probabilistic information retrieval systems and for estimating representation frequencies in document databases for representations having no pre-computed frequency.
  • algebraic systems logically match terms and their positions in a stored information (such as a document) to terms in a query;
  • Boolean systems are examples of algebraic systems.
  • Probabilistic systems match representations (concepts) in a stored information to concepts in a query to retrieve information based on probabilities rather than algebraic or Boolean logic.
  • Another difficulty with Boolean systems is that all documents meeting the query are retrieved, regardless of number. If an unmanageable number of documents are retrieved, the searcher must reformulate the search query to define the information need more narrowly, thereby reducing the retrieved documents to a more manageable number. However, in narrowing the search, the searcher risks missing relevant documents that only partially meet the information need. Moreover, Boolean systems will not retrieve documents only partially meeting the query, which are often important secondary documents to the query. More recently, probabilistic systems employing hypertext databases have been developed which emphasize flexible organizations of multimedia "nodes" through connections made with user-specified links and interfaces which facilitate browsing in the network. Early networks employed query-based retrieval strategies to form a ranked list of candidate "starting points" for hypertext browsing.
  • Network structures employing hypertext databases have used automatically and manually generated links between documents and the concepts or terms that are used to represent their content. For example, “document clustering” employs links between documents that are automatically generated by comparing similarities of content. Another technique is “citations” wherein documents are linked by comparing similar citations in them. “Term clustering” and “manually-generated thesauri” provide links between terms, but these have not been altogether suitable for document searching on a reliable basis.
  • Deductive databases have been developed employing facts about the nodes, and current links between the nodes.
  • deductive databases have not been successful in information retrieval.
  • uncertainty associated with natural language affects the deductive database, including the facts, the rules, and the query. For example, a specific concept may not be an accurate description of a particular node; some rules may be more certain than others; and some parts of a query may be more important than others.
  • a Bayesian network is a probabilistic network which employs nodes to represent the document and the query. If a proposition represented by a parent node directly implies the proposition represented by a child node, an implication line is drawn between the two nodes. If-then rules of Bayesian networks are interpreted as conditional probabilities. Thus, a rule A→B is interpreted as a conditional probability P(B|A).
  • the set of matrices pointing to a node characterizes the dependence relationship between that node and the nodes representing propositions naming it as a consequence.
  • the compiled network is used to compute the probability or degree of belief associated with the remaining nodes.
  • An inference network is one which is based on a plausible or non- deductive inference.
  • One such network employs a Bayesian network, described by Turtle et al. in "Inference Networks for Document Retrieval", SIGIR 90, pp. 1-24 September 1990 (Association for Computing Machinery), incorporated herein by reference.
  • the Bayesian inference network described in the Turtle et al. article comprises a document network and a query network.
  • the document network represents the document collection and employs document nodes, text representation nodes and content representation nodes.
  • a document node corresponds to abstract documents rather than their specific representations, whereas a text representation node corresponds to a specific text representation of the document.
  • a set of content representation nodes corresponds to a single representation technique which has been applied to the documents of the database.
  • the query network of the Bayesian inference network described in the Turtle et al. article employs an information node identifying the information need, and a plurality of concept nodes corresponding to the concepts that express that information need.
  • a plurality of intermediate query nodes may also be employed where multiple queries are used to express the information requirement.
  • the Bayesian inference network described in the Turtle et al. article has been quite successful for small, general purpose databases. However, it has been difficult to formulate the query network to develop nodes which conform to the document network nodes. More particularly, the inference network described in the Turtle et al. article did not use domain-specific knowledge bases to recognize phrases, such as specialized, professional terms, like jargon traditionally associated with specific professions, such as law or medicine.
  • One important aspect to probabilistic retrieval networks is the identification of the frequency of occurrence of a representation in each document and in the entire document collection.
  • a representation that occurs frequently in a document is more likely to be a good descriptor of that document's content.
  • a representation that occurs infrequently in the collection is more likely to be a good discriminator than one that occurs in many documents. Consequently, when creating a database for a probabilistic network, care is taken to identify the representations (content concepts) in the documents, as well as their frequencies.
  • However, certain representations, such as phrases, proximities and thesaurus or synonym classes, cannot always be identified in advance.
  • phrases are usually comprised of multiple words which themselves are individual concepts or representations.
  • the concept or representation of a phrase might be different from the concepts or representations of the individual words forming the phrase.
  • the phrase "independent contractor” is a different concept than either of the constituent words "independent” and "contractor”. Since it is not always possible to identify all possible phrases, or their frequency of occurrence, during creation of the database, the use of phrases as a matching term in probabilistic networks has not been altogether successful.
  • Proximities (such as citations) and thesaurus and synonym classes have likewise not been successful identifiers because of the inability to identify all synonyms, proximities and thesaurus classes during creation of the database or to pre-assign their frequencies.
  • Another difficulty with probabilistic networks is that for large databases, for example databases containing about one-half million documents or more, the processing resources required to evaluate a query have been too great to be commercially feasible. More particularly, probabilistic networks have required that all representations for all documents in the collection containing at least one query term be examined against all of the concepts in the query. Hence, probabilistic networks required extensive computing resources. While such computing resources might be reasonable for small collections of documents, they were not for large databases. There is, accordingly, a need to improve the processing of probabilistic networks to more efficiently employ the processing resources.
  • d1, d2, ..., di Document nodes in the document network.
  • D Number of documents to be selected or identified to the result list.
  • f_ij Frequency of concept i in document j.
  • I Information need node in the query network.
  • c_i Concept node; an item of the information need.
  • idf_i Inverse document frequency for concept i.
  • idf_imax Probable maximum inverse document frequency for concept i.
  • s_i A calculated number equal to the greater of x_i/n_i and sd.
  • sd Standard deviation.
  • V Number of duplicate terms removed from the query.
  • w1, w2, ..., wn Term weights for parent nodes, where wg is the maximum.
  • wg Maximum term weight for child node Q, 0 ≤ wg ≤ 1.
  • the frequency of occurrence of a selected representation in a collection of documents is estimated by identifying the frequency of occurrence of the representation in a sample of documents selected from the collection. Probable maximum and probable minimum frequencies of occurrence of the representation in the entire collection are calculated, and the midpoint of the probable maximum and minimum frequencies is selected. The estimated frequency of occurrence of the selected representation is set equal to the selected midpoint when the calculated difference between the probable maximum and minimum frequencies does not exceed a preselected limit. If the preselected limit is exceeded, the sample of documents is adjusted to include additional documents from the collection, the sampling and calculating being repeated until the calculated difference between the probable maximum and minimum frequencies is within the preselected limit.
  • a sample is selected and the one document with the highest probability of meeting the information need defined by the query is identified from the sample of documents from the collection.
  • a probability threshold is set equal to the probability that the selected document meets the information need.
  • the threshold is reset to the probability of the selected document with the lowest calculated probability.
  • the documents with the lowest probabilities are correspondingly removed.
  • the predetermined number of documents identified as having the highest probabilities are retrieved, preferably in probability order.
  • successive samples are iteratively selected, each successive sample containing documents different from each previous sample.
  • the documents Up to a predetermined number of documents having the highest probabilities of meeting the information need are identified during each iteration, the documents being selected from a group consisting of the sample of documents selected for the respective iteration and the documents identified during the previous iteration.
  • the predetermined number is equal to the number of the respective iteration, so there are as many iterations as there are documents to be selected.
  • Figure 1 is a block diagram representation of a Bayesian inference network with which the present invention is used.
  • Figure 2 is a block diagram representation of a simplified Bayesian inference network as in Figure 1.
  • Figure 3 is a block diagram of a computer system for carrying out the invention.
  • Figures 4A and 4B, taken together, are a flowchart and example illustrating the steps of creating a search query for a probabilistic network.
  • Figure 5 is a flowchart and example of the steps for determining a key number for inclusion in the search query described in connection with Figure 4.
  • Figures 6A-6D are block diagram representations illustrating different techniques for handling phrases.
  • Figures 7A and 7B, taken together, are a detailed flowchart identifying the steps for calculating the estimated inverse document frequency for a specific concept according to the present invention.
  • Figure 8 is a flowchart illustrating the manner by which partial phrases are handled in a document retrieval system.
  • Figure 9 is a graph illustrating the principles of certain aspects of threshold estimating according to the present invention.
  • Figure 10 is a detailed flowchart identifying the steps for setting probability thresholds and optimizing document retrieval according to the present invention.
  • Figure 11 is a detailed flowchart illustrating the maximum score optimization techniques according to the present invention.
  • Figure 12 is a detailed flowchart of the process for creating the query network for a probabilistic information retrieval network.
  • Figure 13 is a detailed flowchart of the process for evaluating a document network used with the query network shown in Figure 12.
  • Inference probability networks employ a predictive probability scheme in which parent nodes provide support for their children.
  • the degree to which belief exists in a proposition depends on the degree to which belief exists in the propositions which potentially caused it. This is distinct from a diagnostic probability scheme, in which the children provide support for their parents; that is, belief in the potential causes of a proposition increases with belief in the proposition. In either case, the propagation of probabilities through the network is done using information passed between adjacent nodes.
  • Figure 1 illustrates a Bayesian inference network as described in the aforementioned Turtle et al. article.
  • the Bayesian network shown in Figure 1 is a directed, acyclic dependency graph in which nodes represent propositional variables or constraints and the arcs represent dependence relations between propositions.
  • An arc between nodes represents that the parent node "causes" or implies the proposition represented by the child node.
  • the child node contains a link matrix or tensor which specifies the probability that the child node is caused by any combination of the parent nodes. Where a node has multiple parents, the link matrix specifies the dependence of that child node on the set of parents and characterizes the dependence relationship between the node and all nodes representing its potential causes.
  • the inference network is graphically illustrated in Figure 1 and consists of two component networks: a document network 10 and a query network 12.
  • the document network consists of document nodes d1, d2, ..., di-1, di, interior text representation nodes t1, t2, ..., tj, and leaf nodes r1, r2, r3, ..., rk.
  • the document nodes d correspond to abstract documents rather than their physical representations.
  • the interior nodes t are text representation nodes which correspond to specific text representations within a document.
  • the present invention will be described in connection with the text content of documents, but it is understood that the network can support document nodes with multiple children representing additional component types, such as audio, video, etc. Similarly, while a single text may be shared by more than one document, such as journal articles that appear in both serial issue and reprint collections, and parent/divisional patent specifications, the present invention shall be described in connection with a single text for each document. Therefore, for simplicity, the present invention shall assume a one-to-one correspondence between documents and texts.
  • the leaf nodes r are content representation nodes. There are several subsets of content representation nodes r1, r2, r3, ..., rk, each corresponding to a single representation technique which has been applied to the document texts. If a document collection has been indexed employing automatic phrase extraction and manually assigned index terms, then the set of representation nodes will consist of distinct subsets or content representation types with disjoint domains. For example, if the phrase "independent contractor" has been extracted and "independent contractor" has been manually assigned as an index term, then two content representation nodes with distinct meanings will be created, one corresponding to the event that "independent contractor" has been automatically extracted from the subset of the collection, and the other corresponding to the event that "independent contractor" has been manually assigned to a subset of the collection. As will become clear hereinafter, some concept representation nodes may be created based on the content of the query network.
  • Each document node has a prior probability associated with it that describes the probability of observing that document.
  • the document node probability will be equal to 1/(collection size) and will be small for most document collections.
  • Each text node contains a specification of its dependence upon its parent. By assumption, this dependence is complete (ti is true) when its parent document is observed (di is true).
  • Each representation node contains a specification of the conditional probability associated with the node given its set of parent text nodes.
  • the representation node incorporates the effect of any indexing weights (for example, term frequency in each parent text) or term weights (inverse document frequency) associated with the concept.
  • the query network 12 is an "inverted" directed acyclic graph with a single node I which corresponds to an information need.
  • the root nodes c1, c2, c3, ..., cm are the primitive concept nodes used to express the information requirement.
  • a query concept node c contains the specification of the probabilistic dependence of the query concept on its set of parent representation content nodes, r.
  • the query concept nodes c1, ..., cm define the mapping between the concepts used to represent the document collection and the concepts that make up the queries.
  • a single concept node may have more than one parent representation node.
  • a concept node c may represent the query concept "independent contractor" and have as its parents representation nodes r2 and r3, which correspond to "independent contractor" as a phrase and as a manually assigned term.
  • Nodes q1 and q2 are query nodes representing distinct query representations corresponding to the event that the individual query representation is satisfied.
  • Each query node contains a specification of the query on the query concept it contains.
  • the intermediate query nodes are used in those cases where multiple query representations express the information need I. As shown in Figure 1, there is a one-to-one correspondence between document nodes, d, and text nodes, t.
  • the network representation of Figure 1 may be diagrammatically reduced so that the document nodes d1, d2, ..., di-1, di are parents to the representation nodes r1, r2, r3, ..., rk.
  • each child node carries a probability that the child node is caused by the parent node.
  • EQ 1: bel_or(Q) = 1 - (1 - p1)(1 - p2) ... (1 - pn)
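  • As a minimal sketch of how these closed-form beliefs can be evaluated, the following Python fragment computes the OR combination of EQ 1 together with the AND combination (the product of the parents' beliefs, as used for two-term logical ANDs elsewhere in this description); the function names are illustrative and not taken from the patent.

```python
def bel_or(parent_beliefs):
    """EQ 1: belief that at least one parent proposition is true:
    bel_or(Q) = 1 - (1 - p1)(1 - p2)...(1 - pn)."""
    result = 1.0
    for p in parent_beliefs:
        result *= (1.0 - p)
    return 1.0 - result

def bel_and(parent_beliefs):
    """Logical AND modeled as the product of the parents' beliefs."""
    result = 1.0
    for p in parent_beliefs:
        result *= p
    return result

if __name__ == "__main__":
    print(bel_or([0.3, 0.5]))   # 0.65
    print(bel_and([0.3, 0.5]))  # 0.15
```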
  • Query network 12 changes for each input query defining a document request. Therefore, the concept nodes c of the search network are created with each search query and provide support to the query nodes q and the information need, node I ( Figure 1).
  • Document searching can be accomplished by a document-based scan or a concept-based scan.
  • a document-based scan is one wherein the text of each document is scanned to determine the likelihood that the document meets the information need, I. More particularly, the representation nodes r1, r2, r3, ..., rk of a single document are evaluated with respect to the several query nodes q1, q2 to determine a probability that the document meets the information need. The top D-ranked documents are then selected as potential information need documents.
  • the scan process reaches a point, for example after assigning a probability to more than D documents of a large document collection, at which documents can be eliminated from the evaluation process after evaluating only subsets of their representation nodes. More particularly, if a given document scores so low a probability after evaluating only one or two representation nodes, a determination can be made that even if the evaluation continued the document still would not score in the top D-ranked documents. Hence, most documents of a large collection are discarded from consideration without having all their representation nodes evaluated.
  • a concept-based scan is one wherein all documents containing a given representation node are evaluated. As the process continues through several representation nodes, a scorecard is maintained of the probabilities that each document meets the information need, I.
  • a single representation node r is evaluated for each document in the collection to assign an initial probability that the document meets the concept.
  • the process continues through the several representation nodes with the probabilities being updated with each iteration.
  • the top D-ranked documents are then selected as potential information need documents. If at some point in the process it can be determined that evaluation of additional representation concepts will not alter the ranking of the top D-ranked documents, the scan process can be terminated.
  • representation nodes r1, r2, r3, ..., rk are nodes dependent on the content of the texts of the documents in the collection. Most representation nodes are created in the document database. Other representation nodes, namely those associated with phrases, synonyms and citations, are not manifest in any static physical embodiment and are created based on each search query. Because the user can define phrases and thesaurus relationships when creating the query, it is not possible to define all combinations in a static physical embodiment.
  • a query manifesting the concept "employee” may be represented by one or more of “actor”, “agent”, “attendant”, “craftsman”, “doer”, “laborer”, “maid”, “servant”, “smith”, “technician” and “worker”, to name a few.
  • These various representation nodes may be created from the query node at the time of the search, such as through the use of thesauri and other tools to be described, as well as through databases.
  • a query node q,, q 2 , etc. can be manifest in one or more representations.
  • The Search Query. The present invention will be described in connection with a database for searching legal documents, but it is to be understood that the concepts of the invention may be applied to databases for searching other types or classes of documents.
  • ROM 24 may be any form of read only memory, such as a CD ROM, write protected magnetic disc or tape, or a ROM, PROM or EPROM chip encoded for the purposes described.
  • Computer 20 may be a personal computer (PC) and may be optionally connected through modem 26, telephone communication network 28 and modem 30 to a central computer 32 having a memory 34.
  • the document network 10 and the document database containing the texts of documents represented by the document network are contained in the central computer 32 and its associated memory 34.
  • the entire network and database may be resident in the memory of personal computer 20 and ROM 24.
  • the documents may comprise, for example, decisions and orders of courts and government agencies, rules, statutes and other documents reflecting legal precedent.
  • legal researchers may input documents into the document database in a uniform manner.
  • there may be a plurality of computers 20, each having individual ROMs 24 and input/output devices 22, the computers 20 being linked to central computer 32 in a time-sharing mode.
  • the search query is developed by each individual user or researcher and input via the respective input/output terminal 22.
  • input/output terminal 22 may comprise the input keyboard and display unit of PC computer 20 and may include a printer for printing the display and/or document texts.
  • ROM 24 contains a database containing phrases unique to the specific profession to which the documents being searched are related. In a legal search and retrieval system as described herein, the database on ROM 24 contains stemmed phrases from common legal sources such as Black's or Statsky's Law Dictionary, as well as common names for statutes, regulations and government agencies. ROM 24 may also contain a database of basic and extended stopwords comprising words of indefinite direction which may be ignored for purposes of developing the concept nodes of the search query. For example, basic stopwords included in the database on ROM 24 include indefinite articles such as "a", "an", "the", etc.
  • Extended stopwords include prepositions, such as "of", "under", "above", "for", "with", etc., indefinite verbs such as "is", "are", "be", etc., and indefinite adverbs such as "what", "why", "who", etc.
  • the database on ROM 24 may also include a topic and key database such as the numerical keys associated with the well-known West Key Digest system.
  • Figures 4A and 4B are a flow diagram illustrating the process steps and the operation on the example given above in the development of the concept nodes c.
  • the natural language query is provided by input through input terminal 22 to computer 20.
  • the natural language input query is:
  • a corresponding WESTLAW Boolean query might be:
  • the natural language query shown in block 40 is inputted at step 50 to computer 20 via input/output terminal 22.
  • the individual words of the natural language query are parsed into a list of words at step 50, and at step 54 each word is compared to the basic stopwords of the database in ROM 24.
  • the basic stopwords such as "the” are removed from the list.
  • the extended stopwords are retained for phrase recognition and remaining extended stopwords will be removed after phrase recognition, described below.
  • the remaining words are stemmed to reduce each word to its correct morphological root.
  • One software routine for stemming the words is based on that described by Porter, "An Algorithm for Suffix Stripping", Program, Vol. 14, No. 3, pp. 130-137 (1980).
  • At step 56, a list of words is developed as shown in block 42, the list comprising the stems of all words in the query, except the basic stopwords.
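  • The parse/stopword/stem sequence of steps 50 through 56 can be sketched as follows; the stopword lists are abbreviated examples, and the toy suffix stripper merely stands in for the Porter algorithm cited above.

```python
BASIC_STOPWORDS = {"a", "an", "the"}                      # indefinite articles
EXTENDED_STOPWORDS = {"of", "under", "above", "for", "with",
                      "is", "are", "be", "what", "why", "who"}

def toy_stem(word):
    """Toy suffix stripper standing in for the Porter stemming algorithm."""
    for suffix in ("ing", "ies", "es", "ed", "s"):
        if word.endswith(suffix) and len(word) > len(suffix) + 2:
            return word[: -len(suffix)]
    return word

def parse_query(natural_language_query):
    """Parse the query, drop basic stopwords, and stem the remaining words.
    Extended stopwords are retained at this stage so that phrase recognition
    (e.g. "doctrine of equivalents") can still match them."""
    words = [w.strip('.,?!";:') for w in natural_language_query.lower().split()]
    words = [w for w in words if w and w not in BASIC_STOPWORDS]
    return [toy_stem(w) for w in words]

print(parse_query("What is the liability of an independent contractor?"))
```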
  • Previous systems recognized linguistic structure (for example, phrases) by statistical or syntactic techniques. In statistical approaches, phrases are recognized based on the occurrence of phrases in the document collection itself; thus, proximity, co-occurrence, etc. are used. In syntactic approaches, phrases are recognized based on word/term structure and grammatical rules, rather than statistically. Thus, the phrase "independent contractor" could be recognized statistically by the proximity of the two words and the prior knowledge that the two words often appear together in documents. The same term could be recognized syntactically by noting the adjective form "independent" and the noun form "contractor" and matching the words using noun phrase grammatical rules. (Manual selection systems have also been used wherein the researcher manually recognizes a phrase during input.)
  • Previous inference networks employed a two-term logical AND modeled as the product of the beliefs for the individual terms. Beliefs (probabilities) lie in the range between 0 and 1, with 0 representing certainty that the proposition is false and 1 representing certainty that the proposition is true.
  • the belief assigned to a phrase is ordinarily lower than that assigned to either component term.
  • experiments reveal that the presence of phrases represents a belief higher than the belief associated with either component term. Consequently, separately identifying phrases as independent representation nodes significantly increases the performance of the information retrieval system.
  • single terms of an original query are retained because many of the concepts contained in the original query are not described by phrases. Experimentation has suggested that eliminating single terms significantly degrades retrieval performance even though not all single terms from an original query are required for effective retrieval.
  • phrase relationships in the search query are recognized by domain-knowledge based techniques (e.g., the phrase database), and by syntactic relationships.
  • The primary reason to solely select syntactical and domain-based phrases for purposes of the query network is to reduce user involvement in identifying phrases for purposes of creating a query.
  • An example of a domain-knowledge database is a database containing phrases from a professional dictionary. This type of phrase handling is particularly suitable for professional information retrieval where specialized phrases are often employed.
  • computer 20 returns to the database in ROM 24 to determine the presence of phrases within the parsed and stemmed list 42.
  • the phrase database in ROM 24 comprises professional, domain-specific phrases (such as from Black's Law Dictionary) which have been stemmed in accordance with the same procedure for stemming the words of a search query.
  • Computer 20 compares the first and second words of list 42 to the database of phrases in ROM 24 to find any phrase having at least those two words as the first words of a phrase. Thus, comparing the first two terms "WHAT" and "IS” to the database of phrases (such as Black's Law Dictionary), no match is found. Thus, as shown in block 44, "WHAT" is retained for the search query.
  • the phrase lookup is accomplished one word at a time.
  • the current word and next word are concatenated and used as a key for the phrase database query. If a record with the key is found, the possible phrases stored under this key are compared to the next word(s) of the query. As each phrase is found, a record of the displacement and length of each found phrase is recorded.
  • the extended stopwords are included in the phrase matching technique because the phrases themselves contain such stopwords.
  • phrases like "doctrine of equivalents” and “tenancy at will” contain prepositions which are stopwords.
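  • A minimal sketch of this phrase lookup, using a two-word key and a toy phrase database in place of the stemmed phrases stored on ROM 24:

```python
# Toy phrase database keyed by the first two words of each phrase; the real
# system draws its (stemmed) phrases from sources such as Black's Law Dictionary.
PHRASE_DB = {
    ("independent", "contractor"): [["independent", "contractor"]],
    ("doctrine", "of"): [["doctrine", "of", "equivalents"]],
}

def find_phrases(terms):
    """Scan the query one word at a time: the current and next word form the
    lookup key, and any candidate phrases stored under that key are compared
    to the following query words.  A (displacement, length) record is kept
    for each phrase that is found."""
    matches = []
    for i in range(len(terms) - 1):
        key = (terms[i], terms[i + 1])
        for phrase in PHRASE_DB.get(key, []):
            if terms[i:i + len(phrase)] == phrase:
                matches.append((i, len(phrase)))
    return matches

print(find_phrases(["liability", "of", "independent", "contractor"]))  # [(2, 2)]
```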
  • Hyphenated terms in search queries are handled in much the same manner as citations.
  • the hyphen is removed and the component words are searched using an adjacency operation which finds all adjacent occurrences of the component words.
  • Synonyms comprise equivalent words and misspellings and are created from a predefined database stored in ROM 24 (Figure 3). Examples of equivalencies include 2d/2nd/second, whereas examples of misspellings include habeas/habeus.
  • Where a search query includes a word having a synonym, a new representation node r (Figure 2) is created for each synonym.
  • the weight associated with the node is based on the frequency of the entire class of nodes comprising all synonyms, rather than any one term of the class.
  • the word, term or phrase is evaluated only once.
  • the duplicate word, term or phrase is simply dropped from the search query.
  • the component probability score for each document containing a term duplicated in the query is multiplied by the query frequency, and the query normalization factor is increased by that frequency.
  • Thesauri are employed to identify words of similar or related meaning, as opposed to synonyms having identical meaning.
  • the thesauri are used to suggest broader, narrower and related terms to the researcher for inclusion in the search query. These relationships can be drawn from machine readable dictionaries.
  • Document Retrieval. One feature of probabilistic information retrieval systems is that the documents in the document collection are ranked in accordance with the probability that the document meets the information need identified in the query.
  • a probabilistic information retrieval network can identify for retrieval the 20 documents having the highest probability of meeting the information need.
  • Phrases, synonyms, proximities and thesaurus classes are not separately permanently identified in the document network. Instead, the representation nodes in the document network are created for the phrase, synonym, proximity or thesaurus class by those concept nodes ( Figure 1) which themselves are a function of the phrase or term in the query.
  • Figures 6A-6D illustrate different treatments of phrases in the document network of an inference network.
  • Representation concepts r1 and r2 shown in Figures 6A-6D correspond to two words in the text of document d.
  • Representation concept r 3 corresponds to the phrase in the text consisting of the two words.
  • Q represents the query.
  • r1 and r2 may correspond to the occurrence of the terms "independent" and "contractor", respectively, while r3 corresponds to the occurrence of the phrase "independent contractor".
  • the phrase is treated as a separate representation concept, independent of the concepts corresponding to the component words.
  • the belief in the phrase concept can be estimated using evidence about component words and the relationship between them, including linguistic relationships.
  • the presence of the query phrase concept in the document increases the probability that the document satisfies the query (or information need).
  • the model of Figure 6B illustrates the case where the belief in the phrase concept depends on the belief in the concepts corresponding to the two component words.
  • Figure 6C illustrates a term dependence model where the phrase is not represented as a separate concept, but as a dependence between the concepts corresponding to the component words. A document that contains both words will more likely satisfy the query associated with the phrase due to the increased belief coming from the component words themselves. However, experimentation has revealed that the model of Figure 6C is less appropriate for phrases and more appropriate for thesauri and synonyms.
  • the probabilities for individual concepts are based on the frequency with which a concept occurs in document j (tf_ij) and the frequency (f_i) with which documents containing the concept (i) occur in the entire collection.
  • the collection frequency may also be expressed as an inverse document frequency (idf_i).
  • the inference network operates on two basic premises:
  • A concept that occurs frequently in a document (a large tf_ij) is more likely to be a good descriptor of that document's content, and a concept that occurs infrequently in the collection (a large idf_i) is more likely to be a good discriminator than a concept that occurs in many documents.
  • where n_c is the number of documents in the collection, f_ij is the frequency of concept i in document j, f_i is the frequency of documents in the collection containing term i (i.e., the number of documents in which term i occurs), and max f_j is the maximum frequency for any term occurring in document j. If f_ij is not less than max f_j, then tf_ij is set to 1.
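  • For illustration only, the following sketch combines these quantities into a concept/document belief using a commonly cited inference-network weighting (a default belief of 0.4 plus a tf·idf contribution); this is an assumed form and does not reproduce the patent's Equations 5 through 7.

```python
import math

def tf_component(f_ij, max_f_j):
    """Within-document frequency normalized by the document's maximum term
    frequency; set to 1 when f_ij is not less than max_f_j, as stated above."""
    if f_ij >= max_f_j:
        return 1.0
    return f_ij / max_f_j

def idf_component(f_i, n_c):
    """Inverse document frequency scaled to the range [0, 1] (assumed form)."""
    return math.log(n_c / f_i) / math.log(n_c)

def concept_belief(f_ij, max_f_j, f_i, n_c, default_belief=0.4):
    """Assumed P(c_i | d_j): a default belief plus a tf*idf contribution."""
    return default_belief + (1.0 - default_belief) * \
        tf_component(f_ij, max_f_j) * idf_component(f_i, n_c)

print(round(concept_belief(f_ij=3, max_f_j=10, f_i=5000, n_c=500000), 3))
```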
  • Most document networks for search and retrieval are represented by a word index containing words from the documents to be matched to query terms.
  • relationships were determined from the word index and offset data therein to locate documents meeting the logical criteria of the query.
  • the present invention employs a probabilistic network in which the same database and word index may be employed to calculate the probabilities set forth in Equation 5 for many of the query concepts.
  • the number of documents in the collection, n_c, is known from the document addresses associated with words in the word index.
  • To calculate f_i, the number of documents in the collection containing concept i is determined by locating and counting the addresses of all documents in the database containing the concept.
  • the document addresses associated with each word in the word index corresponding to the concept are compared to remove duplicate addresses and the remaining number of document addresses is summed.
  • the resulting sum is f_i.
  • the frequency or number of times, f_ij, that concept i appears in document j can be calculated from the number of offset codes for the word (and its synonyms) associated with the document.
  • the terms idf_i and tf_ij can be calculated, thereby leading to the probability factor, P(c_i|d_j), for the concept for the document in accordance with Equation 5.
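  • A toy sketch of how f_i and f_ij can be read off such a word index of document addresses and offsets, removing duplicate addresses when a concept spans several words (for example, a term and its synonyms); the index contents are purely illustrative.

```python
from typing import Dict, List

# Toy word index: word -> {document address -> offsets of the word in that text}.
WORD_INDEX: Dict[str, Dict[int, List[int]]] = {
    "independent": {101: [4], 102: [7, 19], 205: [2]},
    "contractor":  {101: [5], 205: [3, 40]},
}

def collection_frequency(words):
    """f_i: number of distinct documents containing any of the given words
    (e.g. a term and its synonyms); duplicate addresses are removed."""
    addresses = set()
    for w in words:
        addresses.update(WORD_INDEX.get(w, {}))
    return len(addresses)

def document_frequency(words, doc):
    """f_ij: occurrences of the concept in document j, counted from the
    offset codes recorded for the word (and its synonyms)."""
    return sum(len(WORD_INDEX.get(w, {}).get(doc, [])) for w in words)

print(collection_frequency(["independent"]))      # 3
print(document_frequency(["independent"], 102))   # 2
```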
  • this technique is useful only for those concepts whose concept frequency is represented in the word index. Certain concepts, such as phrases, are not ordinarily so represented, so it is an aspect of the present invention to provide a technique to estimate the representation concept frequency for such concepts.
  • The inverse document frequency (idf_i) is predetermined for each representation concept in the document collection, except certain representations such as phrases, synonyms, proximities and thesaurus classes.
  • For these, the inverse document frequency is computed for each search. Identifying the inverse document frequency for a given phrase, synonym, proximity or thesaurus class requires processing through each document in the collection. In small collections, the computation of the inverse document frequency of a phrase, synonym, proximity, or thesaurus class may be performed without significant difficulty by examination of the word index to determine f_i, n_c and f_ij as described above. Hence, the inverse document frequency for the phrase may be calculated using Equation 7.
  • the range of the inverse document frequency, idf_i, lies between about 0.02000 and 0.02002, which is too small to significantly affect the result ranking.
  • If, however, the frequency is in the range of 10,000 to 14,000, leaving a 28.6% frequency difference, the range of inverse document frequencies lies between 0.02000 and 0.02800, which is significant.
  • One aspect of the present invention concerns the estimation of the inverse document frequency for a selected representation, such as a phrase, proximity, synonym or thesaurus class. More particularly, the representation frequency is estimated from a sample of the collection with sufficient accuracy, while avoiding extended computational resources in the evaluation of the entire collection.
  • a sample of a plurality of documents is selected from the collection, and the representations in the sample documents are processed to identify the frequency that the selected representation occurs in the sample.
  • the "gaps," or the numbers of documents (g) occurring between documents containing the selected representation, are identified, and the sum of the squares of the gaps (sq) is employed to estimate the correct representation frequency.
  • the gaps are identified from the successive addresses of documents containing the concept as determined from the word index of the document database.
  • the sequence of observed gaps is employed to estimate the maximum and minimum bounds (f_max and f_min) of the true frequency within a preselected error rate.
  • the frequency bounds are employed to compute the range of the probable inverse document frequency. When that range becomes sufficiently narrow as to insignificantly affect the result ranking, the midpoint of the frequency range is selected as the estimated frequency of occurrence of the selected representation.
  • the sample is enlarged to include additional documents, and the frequency bounds are again computed.
  • mean and variance estimations are computed on the basis that each sample is independent, but in the present case the samples may not be independent because samples are taken sequentially, rather than randomly.
  • the variation for the frequency bounds is estimated in two ways: first based on random sampling, and second based on gaps (numbers of documents found between documents containing the representation).
  • the probable maximum frequency, f_max, and the probable minimum frequency, f_min, are computed in accordance with the following algorithms:
  • n_i is the number of documents (or gaps between documents) in the sample containing the selected representation,
  • n_c is the number of documents in the collection,
  • x_i is the number of documents in the sample,
  • s_i is the greater of x_i/n_i or the sd of the n_i gaps,
  • z is the standard critical value for a normal distribution for the preselected reliability, and sd is the standard deviation, represented by Equation 10, in which
  • sq is the sum of the squares of the gaps, or the sum of the squares of the numbers of documents found between documents containing the representation.
  • It is preferred that the reliability of the estimation be within 0.95 (i.e., the maximum error rate should not exceed 5%). It can be shown that the standard critical value (z) for a normal distribution of the documents of the collection, within a 0.95 reliability, is 2.8070.
  • There are several constraints on the calculation of f_max and f_min. First, if f_min is smaller than the a priori minimum, then f_min is set equal to the a priori minimum, and if f_max is greater than the a priori maximum, then f_max is set equal to the a priori maximum.
  • To illustrate the a priori minimums and maximums, assume a synonym class containing terms A and B, where term A appears in 10,000 documents and term B appears in 4,000 documents. Terms A and B could appear in the same or overlapping documents, meaning that term B could appear in as many as 4,000 documents with term A. Conversely, term B might appear in documents exclusive of term A. Hence, the synonym class could appear in as few as 10,000 documents (the a priori minimum) and as many as 14,000 documents (the a priori maximum).
  • the a priori maximums and minimums are derived from the pre-identified frequencies f_i of the individual terms (which form or are part of the concept) in the collection, and the type of concept (synonym, phrase, thesaurus or proximity).
  • Another constraint concerns the calculation of f_min: if the calculated f_min is smaller than n_i (the number of documents in the sample containing the representation), f_min is set equal to n_i.
  • Similarly, if the calculated f_max is greater than n_i + (n_c - x_i) (the number of documents in the sample containing the representation plus the number of documents of the collection yet to be considered), f_max is set equal to n_i + (n_c - x_i).
  • the number of documents x_i in the sample necessary to estimate the frequency of the selected representation is increased until the difference between the inverse document frequencies of the maximum and minimum bounds is smaller than some prescribed amount. While the specific limit on the difference between the maximum and minimum inverse document frequencies is heuristic, it has been found that when the range of frequency values between f_max and f_min is so small that further refinement would not significantly alter the ranking of the ultimately selected documents, further computation of an estimated probable frequency for the selected representation may be halted.
  • an inverse document frequency (idf_i) difference of 0.05 or less, as an empirically selected stopping point, provides good results.
  • the estimated inverse document frequency for the selected representation is thereupon selected at the mean between the maximum and minimum bounds. If the maximum and minimum bounds are accurate, they would each be located at a maximum error of 0.025, which is deemed acceptable for the present purposes. In practice, the correct frequency error is usually smaller than 0.025 because the correct frequency tends to lie in the center of the estimated range more often than near either the maximum or minimum bound. Tests have indicated that the average error for the estimated frequency for the selected representation is about 0.01.
  • Figures 7A and 7B, taken together, comprise a detailed flowchart illustrating the steps of estimating the frequency of a selected concept, such as a phrase, synonym, proximity or thesaurus class.
  • the process illustrated in Figures 7A and 7B is carried out by a computer, which calculates the probable maximum and minimum frequencies f_max and f_min shown in Equations 8 and 9 and calculates the estimated inverse document frequency, idf_i, for the selected concept.
  • the number of documents in the sample (x_i), the number of documents in the sample containing the selected representation (n_i), the gap size (g), and the sum of the squares of the gaps (sq) are each initialized to zero.
  • 1 is added to x_i, and at step 74 the increased x_i is compared to n_c, the number of documents in the entire collection. If x_i is smaller than n_c, the first document j is examined at step 76 to determine whether or not concept i appears in the document. If the concept does not appear in the first document, 1 is added to g at step 78 and the sequence loops back through point 80 to increment x_i by 1.
  • It is not necessary that f_max and f_min be calculated each time a document is located containing concept i. Instead, it is preferred that a decision be made at step 90 which inhibits calculation of f_max and f_min until after a predetermined number of documents containing the concept has been identified. This has two effects: first, it conserves computing resources, and second, it permits use of the actual inverse document frequency (idf_i) for those concepts not appearing often in the collection. More particularly, it is preferred that a fixed number of documents, such as 25, be found containing concept i between each calculation of f_max and f_min.
  • n_i is divided by 25 and if the result is a whole number (indicating that n_i is 25, 50, 75, etc.), then the process continues through steps 92, 94 and 96 to calculate f_max and f_min. On the other hand, if n_i is not equal to 25, 50, 75, etc., the process loops back through point 80 to continue to identify concept i in additional documents.
  • x_i/n_i and sd are calculated, sd being calculated in accordance with Equation 10.
  • s_i is set to the greater of x_i/n_i or sd.
  • f_max and f_min are calculated.
  • g is the size of the gap or the number of successive documents not containing the concept between documents that do contain the concept. Thus, g is incremented at step 78 for each document not containing the concept and is reset at step 88 upon finding a document which does contain the concept.
  • Term sq calculated at step 86 is the sum of the squares of the gaps g.
  • After f_max and f_min are computed, the maximum and minimum inverse document frequencies for the concept, idf_imax and idf_imin, are calculated at step 98.
  • If idf_imin is within 0.05 of idf_imax, the mean frequency f_mean is computed from f_max and f_min at step 102, and the estimated inverse document frequency, idf_i, is computed at step 104 for the concept.
  • The process may also terminate when the computer determines that the number of documents in the sample (x_i) is equal to the number of documents in the collection (n_c), in which case the actual inverse document frequency for the concept is computed at step 106.
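  • The sampling loop of Figures 7A and 7B can be sketched as follows. The bound computation shown here is a simple normal-approximation stand-in for Equations 8 and 9 (which are not reproduced in this text) and the idf form is likewise assumed, but the control flow follows the description above: recompute the bounds every 25 matching documents, stop when the idf range is 0.05 or less, and otherwise fall back to the exact value once the whole collection has been examined.

```python
import math

Z = 2.8070          # critical value quoted in the text for 0.95 reliability
BATCH = 25          # recompute bounds every 25 matching documents
IDF_STOP = 0.05     # stop once the idf range is at most 0.05

def idf(f, n_c):
    """Assumed inverse-document-frequency form (stand-in for Equation 7)."""
    return math.log(n_c / f) / math.log(n_c)

def frequency_bounds(n, x, n_c):
    """Illustrative normal-approximation bounds on the collection frequency
    (a stand-in for Equations 8 and 9), constrained so that f_min is never
    below the n hits already seen and f_max never exceeds n plus the
    n_c - x documents not yet examined."""
    p = n / x
    se = math.sqrt(p * (1.0 - p) / x)
    f_min = max(n, n_c * (p - Z * se))
    f_max = min(n + (n_c - x), n_c * (p + Z * se))
    return f_min, f_max

def estimate_idf(doc_has_concept, n_c):
    """Scan documents in order, recomputing bounds every BATCH hits, and stop
    once the idf range is narrow enough; otherwise return the exact idf."""
    n = 0
    for x in range(1, n_c + 1):
        if doc_has_concept(x):
            n += 1
            if n % BATCH == 0:
                f_min, f_max = frequency_bounds(n, x, n_c)
                if idf(f_min, n_c) - idf(f_max, n_c) <= IDF_STOP:
                    return idf((f_min + f_max) / 2.0, n_c)   # midpoint estimate
    return idf(max(n, 1), n_c)    # whole collection examined: exact value

# Example: a concept appearing in roughly 1 of every 40 documents.
print(round(estimate_idf(lambda j: j % 40 == 0, n_c=500_000), 3))
```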
  • In accordance with Equation 4, the probability is computed for each concept/document pair, and the probabilities are summed. The result is normalized by the number of concepts in the query to determine the overall probability estimate that the document satisfies the information requirement set forth in the query.
  • phrases are treated in a manner similar to proximity terms, except that a document which does not contain the full phrase receives a partial score for a partial phrase. For example, if a query contains the phrase "FEDERAL TORT CLAIMS ACT" and a document contains the phrase "tort claims" but not the full phrase, the document receives a partial score for the partial phrase.
  • FIG. 8 is a flow diagram illustrating the process of handling partial matches.
  • the full phrase is evaluated against the collection as heretofore described.
  • the inverse document frequency (idf_i) is determined for the full phrase (step 122), and if idf_i is greater than a predetermined threshold (e.g., 0.3) the maximum belief achieved for any subphrase or single term is selected as the belief for the partial phrase (step 124). If idf_i is smaller than or equal to the threshold value (0.3), the preselected default belief (0.4) is assigned to the documents containing the partial phrase (step 126).
  • the probability estimate for the partial phrase would generally be lower than that assigned to documents containing the complete phrase.
  • For phrases which occur extremely often (for example, where idf_i is less than 0.3), it is preferred to dispense with the partial matching strategy and treat the phrase as a pure proximity term by assigning the default belief (0.4) to all documents containing the partial phrase but not the full phrase (step 126).
  • For phrases which appear less often (where idf_i is greater than 0.3), the maximum belief achieved by any single word of the partial phrase is assigned as the belief for the partial phrase.
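  • A small sketch of this partial-phrase scoring rule, with the 0.3 idf threshold and the 0.4 default belief taken from the text:

```python
FULL_PHRASE_IDF_THRESHOLD = 0.3
DEFAULT_BELIEF = 0.4

def partial_phrase_belief(full_phrase_idf, component_beliefs):
    """Belief assigned to a document containing only part of a query phrase:
    frequent phrases (idf <= 0.3) are treated as pure proximity terms and get
    the default belief, while rarer phrases take the maximum belief achieved
    by any subphrase or single component term."""
    if full_phrase_idf <= FULL_PHRASE_IDF_THRESHOLD:
        return DEFAULT_BELIEF
    return max(component_beliefs)

print(partial_phrase_belief(0.25, [0.42, 0.51]))  # 0.4  (treated as proximity)
print(partial_phrase_belief(0.55, [0.42, 0.51]))  # 0.51 (max component belief)
```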
  • duplicate terms are purged from the search query.
  • the component probability score for each document containing the term is multiplied by the query frequency. For example, if a document containing a term which appears twice in a natural language query receives a component probability of 0.425 for that term, the probability score is multiplied by 2 (to 0.850).
  • the normalization factor is increased to reflect the frequency of the duplicated term (increased by 1 in this example).
  • the duplicated term is treated as if it had been evaluated multiple times as dictated by the query, but in a computationally simpler manner.
  • the probability estimates for each document/concept pair are summed and the result is normalized by the number of concepts in the query.
  • the search query shown in block 46 employs eleven concepts, so the total probability for each document will be divided by 11 to determine the overall probability that the given document meets the overall query. For example, assume for a given document that the eleven probabilities are:
  • the overall probability is the sum of the individual probabilities (5.033) divided by the number of concepts (11) for a total probability of 0.458. This indicates a probability of 0.458 that the document meets the full query shown in block 40 in Figure 4.
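  • The overall scoring and normalization, including the duplicate-term weighting described above, can be sketched as follows (the numbers are illustrative):

```python
def document_score(concept_beliefs, query_frequencies):
    """Overall probability that a document meets the query: each concept's
    belief is weighted by the number of times the concept appears in the
    query, and the weighted sum is normalized by the total concept count
    (duplicates included)."""
    weighted_sum = sum(b * f for b, f in zip(concept_beliefs, query_frequencies))
    normalization = sum(query_frequencies)
    return weighted_sum / normalization

# A term appearing twice in the query contributes 2 x 0.425 = 0.850 and
# raises the normalization factor accordingly.
print(round(document_score([0.425, 0.500, 0.610], [2, 1, 1]), 3))
```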
  • the probability is determined for each document represented in the database, whereupon they are ranked in accordance with the value of the probability estimate to identify the top D documents.
  • the ranking or identification is provided by computer 32 ( Figure 3) to computer 20 for display and/or printout at output terminal 22. Additionally, the document texts may be downloaded from computer 32 to computer 20 for display and/or printout at output terminal 22.
  • the probabilistic document retrieval system retrieves a predetermined number (D) of documents having the highest probability of meeting the information need set forth in the query. These probabilities are identified by the normalized sum of the probabilities of each representation in the document matching the concept in the query. Significant processor resources are required to compute these probabilities for each document in a large document database, for example about 500,000 documents or more. To reduce processing resources, it is desirable to limit probability computations to a reasonable number.
  • One technique to reduce processing resources is to employ a probability threshold against which the probabilities of documents are compared to determine whether or not the probability of a given document meets or exceeds the threshold. For example, in a document retrieval network designed to retrieve 10 documents,
  • the probability threshold may be set equal to the probability of the lowest ranked document of the 10 selected documents. To identify 10 documents from a database of 500,000 documents, the first 10 documents of the database are listed to a result list (making the initial ranking of the top 10). A probability threshold is set equal to the probability of the lowest-ranked document of the first 10.
  • the probability of the 11th document is then computed and compared against the probability threshold. If the probability of the 11th document exceeds that of the lowest ranked document of the original 10, the 11th document is entered into the result list of 10 selected documents and the prior lowest ranked document is removed. A new probability threshold is set to the probability of the new lowest ranked document of the 10 selected documents. Hence, the probability threshold is a "running" threshold, constantly updated and increased in value as additional documents are identified which exceed the previous threshold.
  • Eventually, the threshold becomes so high that many documents may be discarded from consideration after evaluation of only a few of the representation probabilities.
  • Consider, for example, a query containing eleven concepts and a probability threshold of 0.8965 reached well into the document identification process.
  • For a document to meet the threshold it must have a minimum sum of individual probabilities of 9.8615 (11 x 0.8965).
  • a low representation probability amongst the first few representations may result in a mathematical impossibility of meeting the threshold. For example, if the first two representations of a document have probabilities of 0.311 and 0.400, giving a sum of 0.711, then even if the remaining nine representations each scored the maximum of 1.0 the sum could reach only 9.711, which is below the required 9.8615; it will not be possible for that document to make the result list of 10.
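  • This early-discard test amounts to a simple feasibility check, sketched below with the figures from the example above:

```python
def can_still_meet_threshold(partial_sum, concepts_scored,
                             total_concepts, threshold):
    """Even if every remaining concept scored the maximum belief of 1.0,
    could this document still reach the running probability threshold?"""
    best_possible = partial_sum + (total_concepts - concepts_scored) * 1.0
    return best_possible >= threshold * total_concepts

# With 11 concepts and a running threshold of 0.8965, a document whose first
# two concepts score 0.311 and 0.400 can reach at most 0.711 + 9 = 9.711,
# short of the required 9.8615, so it is discarded after two evaluations.
print(can_still_meet_threshold(0.711, 2, 11, 0.8965))  # False
```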
  • Figure 9 is a graph illustrating a threshold setting technique as described above.
  • the process commences with a probability threshold of zero, following curve 130.
  • the initial threshold is established as the lowest probability of the initial 10 documents, and subsequent documents are compared against the threshold.
  • the threshold value follows curve 130, approaching maximum threshold level 132. It can be shown that the number of documents requiring examination against the probability threshold is high at the early stages of the process and decreases as the process advances.
  • the area of the graph of Figure 9 above the curve of line 130 is representative of the number of documents requiring processing and is representative of the required processing resources.
  • One feature of the present invention resides in the early estimations of the probability threshold for documents meeting the information need of the query. More particularly, by selecting a sample of documents and setting the initial probability threshold as equal to the probability of the document in the sample having the highest probability, an initial threshold may be established against which further documents may be compared as previously described. This "running start” is shown in Figure 9 as the initial threshold for the process.
  • cs is the collection size (equal to n_c, the number of documents in the collection)
  • gs is the goal size (equal to D, the number of documents to be selected or identified)
  • me is the maximum error sought.
  • the first sample comprises documents 1 through 309
  • the second sample comprises documents 310 through 11095
  • the third sample comprises documents 11096 through 25070, etc.
  • In the first iteration, the one document having the highest probability of meeting the information need defined by the query is selected from documents 1 through 309.
  • In the second iteration, two documents having the two highest probabilities are selected from the group consisting of the sample of documents (documents 310 through 11095) plus the one document selected from the previous iteration.
  • In the third iteration, three documents having the three highest probabilities are selected from the group consisting of documents 11096 through 25070 plus the two documents selected during the second iteration.
  • the process continues through all iterations (10 in the example) to identify the predetermined number D of documents.
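A minimal sketch of this iterative sample selection, assuming the sample boundaries (such as those of Table I) are supplied as document-number ranges and that a probability is available for every document; all names are illustrative, not the patent's implementation.

```python
def iterative_sample_selection(probability, sample_bounds):
    """Select D documents in D iterations.  Iteration k keeps the k documents with the
    highest probabilities drawn from sample k plus the k-1 documents carried over from
    the previous iteration.  probability maps a document number to its score, and
    sample_bounds lists (start, end) document-number ranges such as those of Table I."""
    selected = []                                          # (probability, doc_number) pairs
    for k, (start, end) in enumerate(sample_bounds, start=1):
        candidates = selected + [(probability[d], d) for d in range(start, end + 1)]
        selected = sorted(candidates, reverse=True)[:k]    # keep the k highest probabilities
    return selected

# For the example in the text: sample_bounds = [(1, 309), (310, 11095), (11096, 25070), ...]
```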
  • the algorithm may be used to provide the parameters for databases of other sizes, selection of other numbers of documents, and tolerance within other maximum error rates.
  • the algorithm may be modified to fit other examples in other situations, and, in fact, other algorithms are possible to define the sampling technique. It may be desirable to employ the probability threshold technique described above with the statistical optimization selection described above.
  • the probability threshold may be set from the first sample, requiring that documents selected during successive iterations also equal or exceed the probability threshold. As the processing continues, if the document of the first sample is ultimately replaced (that is, for a given iteration the probability of the first sample document is exceeded by the probabilities of at least the number of documents required by the iteration), a new threshold is established as the probability of the new lowest-ranked document. Consequently, the probability threshold level continues to advance as additional documents are identified.
  • Figure 10 is a flowchart of the steps of the statistical optimization selection technique of developing the probability threshold and document distribution optimization for the present invention. More particularly, at step 150 the document distribution table of Table I is initialized to meet the criteria for error, numbers of documents sought, and collection size in accordance with the above-described software algorithm. At step 152, the probability threshold value is initialized to 0 and the number of documents sought to be identified, D, is initialized to one. At step 154, a document from the collection is scored utilizing the maximum score optimization technique, explained below in connection with Figure 11. At the same time, the number of documents processed since the previous document was scored is identified. At step 156, a count is incremented identifying the total number of documents from the collection which had been processed.
  • Assuming the thirty-first document is the first document of the collection having representations which meet concepts of the query, that document is located and scored at step 154 using the maximum score optimizations described below. At the same time, a count of 31 is entered, representative of the number of documents processed (x_i). Since the thirty-first document is the only document in the result list, it is placed at the top of the result list.
  • At step 158, the value from the table corresponding to D_i is compared against the number of documents x_i counted at step 156. If the number of documents x_i is smaller than the table value associated with D_i, the process continues to step 160.
  • each scored document is entered into the result list stored in the memory of the computer in descending order of probabilities. Thus, the document with the highest probability appears at the top of the result list whereas the document meeting the maximum score optimizations having the lowest probability is at the bottom of the list.
  • the probability threshold is set at step 162 to the score for the Dth document in the result list, which in the example is the thirty-first document.
  • the number of documents processed, x_i, is compared to the total number of documents in the collection, n_c, and if the number of documents processed is smaller than the number of documents in the collection, the process loops back through point 166 to return to step 154. Any further documents which have probabilities less than the threshold probability (or which cannot mathematically achieve a probability greater than the probability threshold after calculation of less than all representation probabilities) are excluded (or not scored) at step 154.
  • Assume document one hundred eighty has a probability greater than the probability threshold established by document thirty-one. Hence, document one hundred eighty is identified at step 154 and inserted into the result list in probability order, ahead of document thirty-one.
  • At step 156, x_i is incremented to indicate the count, 180, of the number of documents thus far processed, which count is still smaller than 309, the number in Table I associated with D_i. Consequently, the sequence proceeds to step 160 to insert document one hundred eighty into the result list.
  • At step 162, the probability threshold is set to the score of the D_i-th document in the result list. Since D_i is 1, the probability threshold is set to the score of document one hundred eighty.
  • the process continues through the remainder of the database, incrementally advancing the value from Table I against which the document count is compared at step 158, the process continuing until 10 documents are identified and all documents in the database have been processed.
  • When x_i equals n_c at step 164, the final result list is retrieved at step 168.
  • the probabilities of documents added to the result list must exceed the initial probability threshold, at least until the preselected number of documents is added to the result list. Thereafter, the probability threshold is increased as additional documents having higher probabilities are added to the list and documents with the lowest probabilities are removed from the list.
  • a new probability threshold may be established slightly below the probability of the document on the result list with the lowest probability and the entire collection re-scored as described above.
  • Figure 11 illustrates the iterative loops for scoring documents employed at step 154 in Figure 10.
  • Each document in the document database has a document number associated with it.
  • the maximum score optimization commences with the concept i_1 in the query having the highest idf_i.
  • a lower bound document number is chosen (such as the lowest document number in the database).
  • the first document d_j whose document number is greater than the lower bound document number and which contains the concept i_1 is selected as a candidate document.
  • a remainder score is initialized to the maximum possible score less the value that document d_j scores for the concept i_1 being examined.
  • the remainder score value represents the maximum score which each document that does not contain concept i_1 could achieve without concept i_1.
  • the process continues by iterating through each of the concepts i_2, i_3, etc.
  • the concepts are processed in descending order of concept idf_i value.
  • the concept with the highest idf is the concept which appears least frequently in the collection and is more likely to be a good discriminator than a concept which appears more often.
  • the processing for each concept commences with the document having a document number greater than or equal to the lower bound document number. In the processing, three conditions can occur.
  • the candidate document contains the concept and no change is made to the maximum score. Instead, the process continues to the next concept.
  • the current document does not contain the concept and the value of the current concept is subtracted from the maximum score for the candidate document and the remainder score is adjusted. If the maximum score is still high enough that the candidate document might still be selected, the processing will continue to the next concept. If not, the candidate document is discarded and the processing starts over with the next higher document number as the candidate document.
  • the remainder score tabulated for each document represents the maximum score that document can achieve based on the concepts processed up to that point and the possibility that it contains all the subsequent concepts. As each concept is processed, the remainder score for the document is reduced by the value of the concept for each document in which the concept does not appear. In considering the remainder score, two possibilities exist.
  • 1. If the remainder score is less than the minimum document score necessary to remain in the result list, then that document, and all other documents up to the candidate document number, can be discarded, since it is not possible for any of them to achieve a document score high enough to remain in the result list. In this situation, the next document number which is greater than or equal to the candidate document number is selected for the concept and the processing continues as described above. 2. If the remainder score is not less than the minimum document score necessary to remain in the result list, then the document is considered as a candidate for the result list. In this case, the document score for the document is set to the current remainder score and the candidate document number is reset. The process continues until a candidate is found having a maximum possible score greater than the probability threshold required to remain in the result list.
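The candidate/remainder bookkeeping can be sketched roughly as below. This is a simplification and an assumption, not the patent's exact control flow: it assumes sorted posting lists, a fixed incremental value per concept, and that every candidate contains the highest-idf concept, so the full remainder-score handling for documents other than the candidate is not reproduced.

```python
import bisect

def next_doc(plist, doc_no):
    """First document number in sorted posting list plist that is >= doc_no, or None."""
    i = bisect.bisect_left(plist, doc_no)
    return plist[i] if i < len(plist) else None

def find_candidate(postings, concept_values, threshold, lower_bound, max_score):
    """Return (doc_number, score) for the next document at or past lower_bound whose
    maximum possible score exceeds threshold, or None if no such document exists.
    postings and concept_values are assumed ordered by descending idf."""
    while True:
        candidate = next_doc(postings[0], lower_bound)
        if candidate is None:
            return None
        score = max_score
        survived = True
        for plist, value in zip(postings[1:], concept_values[1:]):
            if next_doc(plist, candidate) == candidate:
                continue                       # candidate contains this concept
            score -= value                     # candidate is missing this concept
            if score <= threshold:
                lower_bound = candidate + 1    # discard candidate, try the next document
                survived = False
                break
        if survived:
            return candidate, score
```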
  • the process of the maximum score optimization may be explained with reference to the flowchart of Figure 11.
  • the lower bound document number, probability threshold (from step 152 or 162 in Figure 10) and the maximum possible score are inputted.
  • the probability threshold is initialized to 0 at step 152 in Figure 10 and the maximum possible score is initialized.
  • the lower bound document number is set to the first document in the database desired to be reviewed.
  • the first document having a document number greater than or equal to the lower bound document number and which contains the concept having the highest idf_i is identified as a candidate document.
  • the document number is identified for the first document containing the concept.
  • the remainder score for all other documents having a lower number is initialized to be equal to the maximum possible score less the incremental concept value from the missing concept i_1 having the highest idf_i.
  • a decision is made as to whether all the concepts have been processed, and if they have not, the current concept is set to the concept i_2 whose idf_i is next highest in value below the first concept i_1, at step 188.
  • the document number is set to the document number of the next document greater than or equal to the lower bound document number for the current (second) concept i_2.
  • At step 192, if the document number of the document containing the concept is less than the current candidate document number, then the decision is made at step 194 whether the remainder score is smaller than the probability threshold initialized at step 152 or set at step 162 in Figure 10. If the remainder score is smaller than the minimum probability threshold, then the lower bound document number is set to the current candidate document number, and the document number of the next document containing the concept i_2 currently being processed is set to the next document number greater than or equal to the current lower bound document number for the current concept. The concept incremental value is subtracted at step 200 from the remainder score.
  • If the remainder score is not smaller than the probability threshold, the candidate document number is set, at step 202, to the document number of the next document containing the concept, and the candidate document score is set, at step 204, to the remainder score.
  • the process then continues to step 200 to subtract the concept incremental value from the remainder score for the documents not containing the concept.
  • If at step 192 the document number containing the concept is greater than or equal to the candidate document number, then the process continues directly to step 200 where the concept incremental value is subtracted from the remainder score for the documents not containing the concept.
  • At step 206, if the document number containing the concept is equal to the candidate document number, then the candidate document is found to contain the concept, and the process returns to step 186 and processes through the loop again for the next concept. If the document number containing the concept is not equal to the candidate document number, then the concept incremental value is subtracted from the candidate document score at step 208. If the resulting candidate document score is greater than the probability threshold, the process loops back through step 186 again. On the other hand, if the candidate document score is not greater than the probability threshold, the lower bound document number is set to the candidate document number plus 1 and the process reloops to step 182.
  • When step 186 identifies that all concepts have been processed, the document is returned at step 214 for insertion into the full result list in sorted order at step 156 in Figure 10.
  • the process terminates for a given threshold value only when a candidate is found, after all concepts have been examined, which has a maximum possible score greater than the probability threshold required to remain in the result list.
  • the process iterates through the loops illustrated in Figure 10 until the required number of documents for the result list is identified.
  • the documents may then be retrieved from the database using the result list at step 170, the scoring of each document occurring through the iterations of the loops of Figure 11.
  • Figures 12 and 13 are flowcharts detailing the construction and evaluation of an inference network, Figure 12 being a detailed flowchart for constructing the query network 12 and Figure 13 being a detailed flowchart for evaluating the query network in the context of the document network 10.
  • an input query written in natural language is loaded into the computer, such as into a register therein, and is parsed (step 220), compared to the stopwords in database 222 (step 224), and stemmed at step 226.
  • the result is the list 42 illustrated in Figure 4.
  • the list is compared at step 230 to the synonym database and synonyms are added to the list.
  • the handling of synonyms may actually occur after handling of the phrases.
  • Citations are located at step 232 as heretofore described. More particularly, a proximity relationship is established showing the page number within five words of the volume number, without regard to the reporter system employed.
  • the handling of citations may be accomplished after phrase resolution, if desired.
  • where overlapping phrases have an equal number of terms, the shared term is accorded to the first phrase and denied to the second phrase.
  • the resulting phrase substitution occurs at step 246.
  • the process loops back to step 236 to determine if phrases are still present, and if they are the process repeats until no further phrases are present.
  • all duplicate terms are located, mapped, counted and removed, with a count V representing the number of duplicate terms removed.
  • the search query illustrated at block 46 in Figure 4 is developed. As heretofore described, the handling of synonyms and citations may occur after resolution of the phrases, rather than before.
  • the resulting search query is provided to the document network where, at step 250, the number of terms T is counted; at step 252, i is set to 0; and at step 254, 1 is added to i.
  • using document database 256, which also contains the text of the documents, the inverse document frequency (idf_i) and the probability estimate (tf_ij) are determined at step 258.
  • these values are calculated from addresses, document numbers and offset data in the word index of the document database.
  • the estimated inverse document frequency (idf_i) is also added to the database via a temporary memory or register.
  • the component probability is determined at step 260 as heretofore described and is accumulated with other component probabilities at step 262.
  • the probability for such terms is multiplied by the number of duplicates deleted, thereby weighting the probability in accordance with the frequency of the term in the original input query. Consequently, at step 266, it is necessary to divide the accumulated component probability for the document by V + T (where V is the number of duplicate terms deleted from the input query) to thereby normalize the probability.
  • the probability for each document is stored at step 268 and the process repeated at step 270 for the other documents.
  • the documents are ranked in accordance with the determined probabilities, and the top ranked documents are printed out or displayed at step 274.
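A rough sketch of this per-document accumulation and V + T normalization, assuming placeholder tf() and idf() functions and a simple product as the component probability; all names are illustrative and not the patent's implementation.

```python
def score_document(doc_id, query_terms, query_counts, num_duplicates_removed, tf, idf):
    """Accumulate component probabilities for one document and normalize by V + T.
    query_terms are the unique terms remaining after duplicate removal; query_counts
    gives how often each term appeared in the original input query; tf(term, doc) and
    idf(term) are placeholder functions standing in for steps 258-260."""
    T = len(query_terms)                  # number of terms in the query
    V = num_duplicates_removed            # duplicate terms removed from the query
    accumulated = 0.0
    for term in query_terms:
        component = tf(term, doc_id) * idf(term)              # illustrative component probability
        accumulated += component * query_counts.get(term, 1)  # weight duplicated terms
    return accumulated / (V + T)          # normalized probability for the document
```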
  • the scan technique may be a concept-based scan, rather than the document-based scan described. Further, as previously described, the scan may be aborted after less than complete scan of any given document if the probabilities result in a determination that the document will not reach the cutoff for the D top-ranked documents to be displayed or printed.
  • the present invention has been described in connection with a time-shared computer system shown in Figure 3 wherein search queries are generated by PC computers or dumb terminals for transmission to and time-shared processing by a central computer containing the document network 10.
  • Alternatively, where the network and databases are resident in the user's own computer, the document database would be supplied on the same ROM 24 as the databases used with the search query, or on a separately supplied ROM for use with computer 20.
  • updated ROMs containing the document database could be supplied periodically on a subscription basis to the user.
  • the stopwords, phrases and key numbers would not be changed often, so it would not be necessary to change the ROM containing the databases of stopwords, phrases and key numbers.

Abstract

The frequency of occurrence of a representation in a collection of documents is estimated for document retrieval purposes by identifying the actual frequency of occurrence (actual f_i) of the representation in a sample (n_i) of documents and calculating the difference between the maximum (f_max) and minimum (f_min) probable frequencies of occurrence of the representation in the collection. If the difference does not exceed a limit, a midpoint of the maximum and minimum probable frequencies (f_mean) is the estimated frequency of occurrence of the representation. Document distribution probabilities are optimized and probability thresholds are established for the identification of documents. An initial probability threshold is established and is adjusted as the probabilities are scored for documents in samples. The document result list (170) is iteratively adjusted through the samples.

Description

PROBABILISTIC INFORMATION RETRIEVAL NETWORKS
BACKGROUND OF THE INVENTION This invention relates to information retrieval, and particularly to document retrieval from a computer database using probability techniques. More particularly, the invention concerns a method and apparatus for establishing probability thresholds in probabilistic information retrieval systems and for estimating representation frequencies in document databases for representations having no pre-computed frequency.
There are, in theory, two categories of information retrieval systems: algebraic systems and probabilistic systems. Algebraic systems logically match terms and their positions in a stored information (such as a document) to terms in a query; Boolean systems are examples of algebraic systems. Probabilistic systems match representations (concepts) in a stored information to concepts in a query to retrieve information based on probabilities rather than algebraic or Boolean logic.
Presently, document retrieval is most commonly performed through use of Boolean search queries to search the texts of documents in the database. These retrieval systems specify strategies for evaluating documents with respect to a given query by logically comparing search queries to document texts. One of the problems associated with text searching is that for a single natural language description of an information need, different Boolean researchers will formulate different Boolean queries to represent that need. Because the queries are different, different documents will be retrieved for each search.
Another difficulty with Boolean systems is that all documents meeting the query are retrieved, regardless of number. If an unmanageable number of documents are retrieved, the searcher must reformulate the search query to more narrowly define the information need, thereby narrowing the retrieved documents to a more manageable number. However, in narrowing the search, the researcher risks missing relevant documents partially meeting the information need. Moreover, Boolean systems will not retrieve documents only partially meeting the query, which themselves are often important secondary documents to the query. More recently, probabilistic systems employing hypertext databases have been developed which emphasize flexible organizations of multimedia "nodes" through connections made with user-specified links and interfaces which facilitate browsing in the network. Early networks employed query-based retrieval strategies to form a ranked list of candidate "starting points" for hypertext browsing. Some systems employed feedback during browsing to modify the initial query and to locate additional starting points. Network structures employing hypertext databases have used automatically and manually generated links between documents and the concepts or terms that are used to represent their content. For example, "document clustering" employs links between documents that are automatically generated by comparing similarities of content. Another technique is "citations" wherein documents are linked by comparing similar citations in them. "Term clustering" and "manually-generated thesauri" provide links between terms, but these have not been altogether suitable for document searching on a reliable basis.
Deductive databases have been developed employing facts about the nodes, and current links between the nodes. A simple query in a deductive database, where N is the only free variable in formula W, is of the form {N | W(N)}, which is read as "Retrieve all nodes N such that W(N) can be shown to be true in the current database." However, deductive databases have not been successful in information retrieval. Particularly, uncertainty associated with natural language affects the deductive database, including the facts, the rules, and the query. For example, a specific concept may not be an accurate description of a particular node; some rules may be more certain than others; and some parts of a query may be more important than others. For a more complete description of deductive databases, see Croft et al. "A Retrieval Model for Incorporating Hypertext Links", Hypertext '89 Proceedings, pp 213-224, November 1989 (Association for Computing Machinery), incorporated herein by reference. A Bayesian network is a probabilistic network which employs nodes to represent the document and the query. If a proposition represented by a parent node directly implies the proposition represented by a child node, an implication line is drawn between the two nodes. If-then rules of Bayesian networks are interpreted as conditional probabilities. Thus, a rule A→B is interpreted as a probability P(B | A), and the line connecting A with B is logically labeled with a matrix that specifies P(B | A) for all possible combinations of values of the two nodes. The set of matrices pointing to a node characterizes the dependence relationship between that node and the nodes representing propositions naming it as a consequence. For a given set of prior probabilities for roots of the network, the compiled network is used to compute the probability or degree of belief associated with the remaining nodes.
An inference network is one which is based on a plausible or non-deductive inference. One such network employs a Bayesian network, described by Turtle et al. in "Inference Networks for Document Retrieval", SIGIR 90, pp. 1-24, September 1990 (Association for Computing Machinery), incorporated herein by reference. The Bayesian inference network described in the Turtle et al. article comprises a document network and a query network. The document network represents the document collection and employs document nodes, text representation nodes and content representation nodes. A document node corresponds to abstract documents rather than their specific representations, whereas a text representation node corresponds to a specific text representation of the document. A set of content representation nodes corresponds to a single representation technique which has been applied to the documents of the database.
The query network of the Bayesian inference network described in the Turtle et al. article employs an information node identifying the information need, and a plurality of concept nodes corresponding to the concepts that express that information need. A plurality of intermediate query nodes may also be employed where multiple queries are used to express the information requirement.
The Bayesian inference network described in the Turtle et al. article has been quite successful for small, general purpose databases. However, it has been difficult to formulate the query network to develop nodes which conform to the document network nodes. More particularly, the inference network described in the Turtle et al. article did not use domain-specific knowledge bases to recognize phrases, such as specialized, professional terms, like jargon traditionally associated with specific professions, such as law or medicine.
One important aspect to probabilistic retrieval networks, such as a Bayesian inference network, is the identification of the frequency of occurrence of a representation in each document and in the entire document collection. A representation that occurs frequently in a document is more likely to be a good descriptor of that document's content. A representation that occurs infrequently in the collection is more likely to be a good discriminator than one that occurs in many documents. Consequently, when creating a database for a probabilistic network, care is taken to identify the representations (content concepts) in the documents, as well as their frequencies. However, it is not always possible to identify certain representations (such as phrases, proximities and thesaurus or synonym classes) or their frequency when creating the database. More particularly, phrases are usually comprised of multiple words which themselves are individual concepts or representations. The concept or representation of a phrase might be different from the concepts or representations of the individual words forming the phrase. For example, the phrase "independent contractor" is a different concept than either of the constituent words "independent" and "contractor". Since it is not always possible to identify all possible phrases, or their frequency of occurrence, during creation of the database, the use of phrases as a matching term in probabilistic networks has not been altogether successful. Proximities (such as citations) and thesaurus and synonym classes have likewise not been successful identifiers because of the inability to identify all synonyms, proximities and thesaurus classes during creation of the database or to pre-assign their frequencies.
Techniques have been developed to identify phrases, synonyms, proximities and thesaurus classes as concepts in the query, and to find phrases, synonyms, proximities and thesaurus classes as representations in the documents. However, no satisfactory technique exists for identifying the frequencies of occurrence of representations in the documents and in the collection when the document collection is large and the frequencies of occurrence are not included in the database.
Another difficulty with probabilistic networks is that for large databases, for example databases containing about one-half million documents or more, the processing resources required to evaluate a query have been too great to be commercially feasible. More particularly, probabilistic networks required that all representations for all documents in the collection containing at least one query term must be examined against all of the concepts in the query. Hence, probabilistic networks required extensive computing resources. While such computing resources might be reasonable for small collections of documents, they were not for large databases. There is, accordingly, a need to improve the processing of probabilistic networks to more efficiently employ the processing resources.
For a more general discussion concerning inference networks, reference may be made to Probabilistic Reasoning in Intelligent Systems: Networks of Plausible Inference by J. Pearl, published by Morgan Kaufmann Publishers, Inc. , San Mateo, California, 1988, and to Probabilistic Reasoning in Expert Systems by R. E. Neapolitan, John Wiley & Sons, New York, NY, 1990.
GLOSSARY
As used herein, the following alpha-numeric characters refer to the following terms:
a, b, A, B: Term or word in a query or document.
c_1, c_2, ... c_m: Root or concept node in query network.
d_1, d_2, ... d_i: Document node in a document network.
D: Number of documents to be selected or identified to result list.
f_i: Concept frequency in collection (frequency, or number, of documents in collection containing concept i).
f_ij: Frequency of concept i in document j.
f_max: Probable maximum frequency of documents in collection containing specific concept (maximum bound).
f_min: Probable minimum frequency of documents in collection containing specific concept (minimum bound).
g: Number of documents in collection between documents containing a representation (gaps).
I: Information need in query network.
i: Concept (an item of an information need).
idf_i: Inverse document frequency for concept i.
idf_imax: Probable maximum inverse document frequency for concept i.
idf_imin: Probable minimum inverse document frequency for concept i.
j: Specific document (d_j).
max f_j: The maximum frequency for any term occurring in document j.
n_i: Number of documents in sample containing selected representation.
n_c: Number of documents in collection.
P_1, P_2, ... P_n: Parent nodes to child node Q.
q_1, q_2, ...: Query nodes in query network.
Q: Child node to parent nodes P.
r_1, r_2, ... r_k: Leaf or concept representation nodes in document network.
s_i: A calculated number equal to the greater of x_i/n_i and sd.
sd: Standard deviation.
sq: Sum of squares of gaps g.
t_1, t_2, ... t_j: Interior text nodes in document network.
tf_ij: Probability estimate based on the frequency that concept i appears in document j (based on f_ij).
T: Number of terms in query.
V: Number of duplicate terms removed from query.
w_1, w_2, ... w_n: Term weights for parent nodes, where w_g is maximum.
w_g: Maximum term weight for child node Q, 0 < w_g < 1.
x_i: Number of documents in sample.
z: Standard critical value.
π: Parent set (P_1, P_2, ... P_n).
SUMMARY OF THE INVENTION According to one aspect of the present invention the frequency of occurrence of a selected representation in a collection of documents is estimated by identifying the frequency of occurrence of the representation in a sample of documents selected from the collection. Probable maximum and probable minimum frequencies of occurrence of the representation in the entire collection are calculated, and the midpoint of the probable maximum and minimum frequencies is selected. The estimated frequency of occurrence of the selected representation is set equal to the selected midpoint when the calculated difference between the probable maximum and minimum frequencies does not exceed a preselected limit. If the preselected limit is exceeded, the sample of documents is adjusted to include additional documents from the collection, the sampling and calculating being repeated until the calculated difference between the probable maximum and minimum frequencies is within the preselected limit.
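The iterative sampling loop might be sketched as follows; the bound computation shown uses an ordinary normal-approximation confidence interval as a stand-in for the probable maximum and minimum frequency formulas, and all names are illustrative assumptions.

```python
import math

def estimate_collection_frequency(contains, collection_size, limit,
                                  initial_sample=1000, growth=2, z=1.96):
    """Estimate how many documents in the collection contain a representation.
    contains(doc_index) -> bool tests one document; the f_max/f_min bounds below use a
    plain normal-approximation interval as a stand-in for the patent's bound formulas."""
    sample_size = min(initial_sample, collection_size)
    hits = sum(1 for d in range(sample_size) if contains(d))
    while True:
        p = hits / sample_size
        half_width = z * math.sqrt(p * (1 - p) / sample_size)
        f_max = min(1.0, p + half_width) * collection_size   # probable maximum frequency
        f_min = max(0.0, p - half_width) * collection_size   # probable minimum frequency
        if f_max - f_min <= limit or sample_size == collection_size:
            return (f_max + f_min) / 2                       # midpoint is the estimate
        new_size = min(sample_size * growth, collection_size)  # enlarge the sample and repeat
        hits += sum(1 for d in range(sample_size, new_size) if contains(d))
        sample_size = new_size
```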
The advantage provided by estimation of the frequency of representations such as phrases, synonyms, proximities and thesaurus classes is that the representations can be identified from the query itself and the frequencies can be accurately estimated without significantly affecting processing resources or the search results. Consequently, representations such as phrases, synonyms, proximities and thesaurus classes can be employed as representation concepts, even in large databases.
According to another aspect of the invention a sample is selected and the one document with the highest probability of meeting the information need defined by the query is identified from the sample of documents from the collection. In one form of the invention, a probability threshold is set equal to the probability that the selected document meets the information need. When a predetermined number of additional documents of the collection are identified as having a probability of meeting the information need which is greater than the probability threshold, the threshold is reset to the probability of the selected document with the lowest calculated probability. Thereafter, as documents with higher probabilities are identified, the documents with the lowest probabilities are correspondingly removed. Upon completion of the search, the predetermined number of documents identified as having the highest probabilities are retrieved, preferably in probability order.
In another form of the invention, instead of employing the probability of the document selected from the first sample as a probability threshold, successive samples are iteratively selected, each successive sample containing documents different from each previous sample. Up to a predetermined number of documents having the highest probabilities of meeting the information need are identified during each iteration, the documents being selected from a group consisting of the sample of documents selected for the respective iteration and the documents identified during the previous iteration. Preferably, the predetermined number is equal to the number of the respective iteration, so there are as many iterations as there are documents to be selected.
BRIEF DESCRIPTION OF THE DRAWINGS Figure 1 is a block diagram representation of a Bayesian inference network with which the present invention is used.
Figure 2 is a block diagram representation of a simplified Bayesian inference network as in Figure 1.
Figure 3 is a block diagram of a computer system for carrying out the invention. Figures 4A and 4B, taken together, are a flowchart and example illustrating the steps of creating a search query for a probabilistic network.
Figure 5 is a flowchart and example of the steps for determining a key number for inclusion in the search query described in connection with Figure 4. Figures 6A-6D are block diagram representations illustrating different techniques for handling phrases.
Figures 7A and 7B, taken together, are a detailed flowchart identifying the steps for calculating the estimated inverse document frequency for a specific concept according to the present invention.
Figure 8 is a flowchart illustrating the manner by which partial phrases are handled in a document retrieval system. Figure 9 is a graph illustrating the principles of certain aspects of threshold estimating according to the present invention.
Figure 10 is a detailed flowchart identifying the steps for setting probability thresholds and optimizing document retrieval according to the present invention.
Figure 11 is a detailed flowchart illustrating the maximum score optimization techniques according to the present invention.
Figure 12 is a detailed flowchart of the process for creating the query network for a probabilistic information retrieval network. Figure 13 is a detailed flowchart of the process for evaluating a document network used with the query network shown in Figure 12.
DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENT
The Probability Network
Inference probability networks employ a predictive probability scheme in which parent nodes provide support for their children. Thus, the degree to which belief exists in a proposition depends on the degree to which belief exists in the propositions which potentially caused it. This is distinct from a diagnostic probability scheme in which the children provide support for their parents, that is belief in the potential causes of a proposition increases with belief in the proposition. In either case, the propagation of probabilities through the network is done using information passed between adjacent nodes.
Figure 1 illustrates a Bayesian inference network as described in the aforementioned Turtle et al. article. The Bayesian network shown in Figure 1 is a directed, acyclic dependency graph in which nodes represent propositional variables or constraints and the arcs represent dependence relations between propositions. An arc between nodes represents that the parent node "causes" or implies the proposition represented by the child node. The child node contains a link matrix or tensor which specifies the probability that the child node is caused by any combination of the parent nodes. Where a node has multiple parents, the link matrix specifies the dependence of that child node on the set of parents and characterizes the dependence relationship between the node and all nodes representing its potential causes. Thus, for all nodes there exists an estimate of the probability that the node takes on a value given any set of values for its parent nodes. If a node a has a set of parents π = {p_1, ... p_n}, the estimated probabilities P(a | p_1, ... p_n) are determined.
The inference network is graphically illustrated in Figure 1 and consists of two component networks: a document network 10 and a query network 12. The document network consists of document nodes d_1, d_2, ... d_i-1, d_i, interior text representation nodes t_1, t_2, ... t_j, and leaf nodes r_1, r_2, r_3, ... r_k. The document nodes d correspond to abstract documents rather than their physical representations. The interior nodes t are text representation nodes which correspond to specific text representations within a document. The present invention will be described in connection with the text content of documents, but it is understood that the network can support document nodes with multiple children representing additional component types, such as audio, video, etc. Similarly, while a single text may be shared by more than one document, such as journal articles that appear in both serial issue and reprint collections, and parent/divisional patent specifications, the present invention shall be described in connection with a single text for each document. Therefore, for simplicity, the present invention shall assume a one-to-one correspondence between documents and texts.
The leaf nodes r are content representation nodes. There are several subsets of content representation nodes r_1, r_2, r_3, ... r_k, each corresponding to a single representation technique which has been applied to the document texts. If a document collection has been indexed employing automatic phrase extraction and manually assigned index terms, then the set of representation nodes will consist of distinct subsets or content representation types with disjoint domains. For example, if the phrase "independent contractor" has been extracted and "independent contractor" has been manually assigned as an index term, then two content representation nodes with distinct meanings will be created, one corresponding to the event that "independent contractor" has been automatically extracted from the subset of the collection, and the other corresponding to the event that "independent contractor" has been manually assigned to a subset of the collection. As will become clear hereinafter, some concept representation nodes may be created based on the content of the query network.
Each document node has a prior probability associated with it that describes the probability of observing that document. The document node probability will be equal to 1/(collection size) and will be small for most document collections. Each text node contains a specification of its dependence upon its parent. By assumption, this dependence is complete (t_i is true) when its parent document is observed (d_i is true). Each representation node contains a specification of the conditional probability associated with the node given its set of parent text nodes. The representation node incorporates the effect of any indexing weights (for example, term frequency in each parent text) or term weights (inverse document frequency) associated with the concept.
The query network 12 is an "inverted" directed acyclic graph with a single node I which corresponds to an information need. The root nodes c_1, c_2, c_3, ... c_m are the primitive concept nodes used to express the information requirement. A query concept node c contains the specification of the probabilistic dependence of the query concept on its set of parent representation content nodes, r. The query concept nodes c_1, ... c_m define the mapping between the concepts used to represent the document collection and the concepts that make up the queries. A single concept node may have more than one parent representation node. For example, a concept node may represent the query concept "independent contractor" and have as its parents representation nodes r_2 and r_3 which correspond to "independent contractor" as a phrase and as a manually assigned term. Nodes q_1, q_2 are query nodes representing distinct query representations corresponding to the event that the individual query representation is satisfied. Each query node contains a specification of the query on the query concept it contains. The intermediate query nodes are used in those cases where multiple query representations express the information need I. As shown in Figure 1, there is a one-to-one correspondence between document nodes, d, and text nodes, t. Consequently, the network representation of Figure 1 may be diagrammatically reduced so that the document nodes d_1, d_2, ... d_i-1, d_i are parents to the representation nodes r_1, r_2, r_3, ... r_k. In practice, it is possible to further reduce the network of Figure 1 due to an assumed one-to-one correspondence between the representation nodes r_1, r_2, r_3, ... r_k, and the concept nodes c_1, c_2, c_3, ... c_m. The simplified inference network is illustrated in Figure 2 and is more particularly described in the article by Turtle et al., "Efficient Probabilistic Inference for Text Retrieval," RIAO 91 Conference Proceedings, pp. 644-661, April, 1991 (Recherche d'Information Assistée par Ordinateur, Universitat Autònoma de Barcelona, Spain), which article is herein incorporated by reference.
As described above, each child node carries a probability that the child node is caused by the parent node. The estimates of the dependence of a child node Q
on its set of parents, P, , P2,...Pn, are encoded using the following expressions:
EQ 1 beloriO) - 1 - (l- i. - .l- :) - . . . il-pn)
EQ 2
Figure imgf000015_0001
EQ 3 bel∞t {0) - 1-p. EQ 4
Figure imgf000016_0001
where P(P1 =true)=p1, P(P2 *=true)-=p2,...P(Pn-=true)=pn, wl 5 w2,...wn are the term weights for each term P,, P2,...Pn, and wg is the maximum probability that the child node can achieve, 0 < wg < 1.
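Expressed as code, the four belief operators might look like the sketch below. EQ 2 and EQ 4 are reconstructed from the surrounding description of term weights, so their exact form here is an assumption; the function names are illustrative only.

```python
def bel_or(p):
    """EQ 1: belief that at least one parent proposition is true."""
    result = 1.0
    for p_i in p:
        result *= (1.0 - p_i)
    return 1.0 - result

def bel_and(p):
    """EQ 2: belief that all parent propositions are true."""
    result = 1.0
    for p_i in p:
        result *= p_i
    return result

def bel_not(p_1):
    """EQ 3: belief in the negation of a single parent proposition."""
    return 1.0 - p_1

def bel_wsum(p, w, w_g):
    """EQ 4: weighted-sum belief, with w_g the maximum belief the child node can achieve."""
    return w_g * sum(w_i * p_i for w_i, p_i in zip(w, p)) / sum(w)
```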
As described above, all child nodes carry a probability that the child was caused by the identified parent nodes. The structure of document network 10 is not changed, except to add documents to the database. The document nodes d and text nodes t do not change for any given document once the document representation has been entered into document network 10. Most representation nodes are created with the database and are dependent on the document content. Some representation nodes (representing phrases and the like) are created for the particular search being conducted and are dependent on the search query.
Query network 12, on the other hand, changes for each input query defining a document request. Therefore, the concept nodes c of the search network are created with each search query and provide support to the query nodes q and the information need, node I (Figure 1). Document searching can be accomplished by a document-based scan or a concept-based scan. A document-based scan is one wherein the text of each document is scanned to determine the likelihood that the document meets the information need, I. More particularly, the representation nodes r_1, r_2, r_3, ... r_k of a single document are evaluated with respect to the several query nodes q_1, q_2 to determine a probability that the document meets the information need. The top
D-ranked documents are then selected as potential information need documents. The scan process reaches a point, for example after assigning a probability for more than D documents of a large document collection, at which documents can be eliminated from the evaluation process after evaluating subsets of the representation nodes. More particularly, if a given document scores so low a probability after evaluating only one or two representation nodes, a determination can be made that even if the evaluation continued the document still would not score in the top D-ranked documents. Hence, most documents of a large collection are discarded from consideration without having all their representation nodes evaluated. A concept-based scan is one wherein all documents containing a given representation node are evaluated. As the process continues through several representation nodes, a scorecard is maintained of the probabilities that each document meets the information need, I. More particularly, a single representation node r_1 is evaluated for each document in the collection to assign an initial probability that the document meets the concept. The process continues through the several representation nodes with the probabilities being updated with each iteration. The top D-ranked documents are then selected as potential information need documents. If at some point in the process it can be determined that evaluation of additional representation concepts will not alter the ranking of the top D-ranked documents, the scan process can be terminated.
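A concept-based (term-at-a-time) scan can be sketched as follows, assuming each representation node contributes a fixed belief increment; the names and data layout are illustrative assumptions, not the patent's implementation.

```python
from collections import defaultdict

def concept_based_scan(postings, concept_values, goal_size):
    """Evaluate one representation node at a time across the collection, keeping a
    scorecard of partial probabilities per document.  postings[i] lists the documents
    containing concept i; concept_values[i] is the belief increment it contributes."""
    scorecard = defaultdict(float)
    for plist, value in zip(postings, concept_values):
        for doc in plist:
            scorecard[doc] += value            # update each document's running probability
    ranked = sorted(scorecard.items(), key=lambda kv: kv[1], reverse=True)
    return ranked[:goal_size]                  # top D-ranked documents
```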
It can be appreciated that the representation nodes r_1, r_2, r_3, ... r_k are nodes dependent on the content of the texts of the documents in the collection. Most representation nodes are created in the document database. Other representation nodes, namely those associated with phrases, synonyms and citations, are not manifest in any static physical embodiment and are created based on each search query. Because the user can define phrases and thesaurus relationships when creating the query, it is not possible to define all combinations in a static physical embodiment. For example, a query manifesting the concept "employee" may be represented by one or more of "actor", "agent", "attendant", "craftsman", "doer", "laborer", "maid", "servant", "smith", "technician" and "worker", to name a few. These various representation nodes may be created from the query node at the time of the search, such as through the use of thesauri and other tools to be described, as well as through databases. A query node q_1, q_2, etc. can be manifest in one or more representations.
The Search Query
The present invention will be described in connection with a database for searching legal documents, but it is to be understood that the concepts of the invention may be applied to databases for searching other types or classes of documents.
The invention will be described in connection with a specific search query as follows:
"What is the liability of the United States under the Federal Tort Claims Act for injuries sustained by employees of an independent contractor working under contract with an agency of the United States government?"
The present invention is carried out through use of a computer system, such as illustrated in Figure 3 comprising a computer 20 connected to an input/output terminal 22 and a read only memory (ROM) 24. ROM 24 may be any form of read only memory, such as a CD ROM, write protected magnetic disc or tape, or a ROM, PROM or EPROM chip encoded for the purposes described. Computer 20 may be a personal computer (PC) and may be optionally connected through modem 26, telephone communication network 28 and modem 30 to a central computer 32 having a memory 34. In one form of the invention, the document network 10 and the document database containing the texts of documents represented by the document network are contained in the central computer 32 and its associated memory 34. Alternatively, the entire network and database may be resident in the memory of personal computer 20 and ROM 24. In a legal database and document information retrieval network the documents may comprise, for example, decisions and orders of courts and government agencies, rules, statutes and other documents reflecting legal precedent. By maintaining the document database and document network at a central location, legal researchers may input documents into the document database in a uniform manner. Thus, there may be a plurality of computers 20, each having individual ROMs 24 and input/output devices 22, the computers 20 being linked to central computer 32 in a time-sharing mode. The search query is developed by each individual user or researcher and input via the respective input/output terminal 22. For example, input/output terminal 22 may comprise the input keyboard and display unit of PC computer 20 and may include a printer for printing the display and/or document texts.
ROM 24 contains a database containing phrases unique to the specific profession to which the documents being searched are related. In a legal search and retrieval system as described herein, the database on ROM 24 contains stemmed phrases from common legal sources such as Black's or Statsky's Law Dictionary, as well as common names for statutes, regulations and government agencies. ROM 24 may also contain a database of basic and extended stopwords comprising words of indefinite direction which may be ignored for purposes of developing the concept nodes of the search query. For example, basic stopwords included in the database on ROM 24 include indefinite articles such as "a", "an", "the", etc. Extended stopwords include prepositions, such as "of", "under", "above", "for", "with", etc., indefinite verbs such as "is", "are", "be", etc. and indefinite adverbs such as "what", "why", "who", etc. The database on ROM 24 may also include a topic and key database such as the numerical keys associated with the well-known West Key Digest system.
Figures 4A and 4B are a flow diagram illustrating the process steps and the operation on the example given above in the development of the concept nodes c. The natural language query is provided by input through input terminal 22 to computer 20. In the example shown in Figure 4, the natural language input query is:
"What is the liability of the United States under the Federal Tort Claims Act for injuries sustained by employees of an independent contractor working under contract with an agency of the United States government?"
By way of example, a corresponding WESTLAW Boolean query might be:
"UNITED STATES" U.S. GOVERNMENT (FEDERAL /2 GOVERNMENT) /P TORT /2 CLAIM /P INJUR! /P EMPLOYEE WORKER CREWMAN CREWMEMBER /P INDEPENDENT /2 CONTRACTOR.
As shown in Figure 4A, the natural language query shown in block 40 is inputted at step 50 to computer 20 via input/output terminal 22. The individual words of the natural language query are parsed into a list of words at step 50, and at step 54 each word is compared to the basic stopwords of the database in ROM 24. At step 54, the basic stopwords such as "the" are removed from the list. The extended stopwords are retained for phrase recognition and remaining extended stopwords will be removed after phrase recognition, described below.
At step 56, the remaining words are stemmed to reduce each word to its correct morphological root. One software routine for stemming the words is based on that described by Porter "An Algorithm for Suffix Stripping", Program,
Vol. 14, pp 130-137 (1980). As a result of step 56 a list of words is developed as shown in block 42, the list comprising the stems of all words in the query, except the basic stopwords.
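These parsing steps can be sketched as below; the stopword list is abbreviated, and crude_stem() is only a stand-in for the Porter suffix-stripping algorithm cited above. All names are illustrative assumptions.

```python
import re

BASIC_STOPWORDS = {"a", "an", "the"}   # indefinite articles from the basic stopword list

def crude_stem(word):
    """Placeholder for the Porter suffix-stripping algorithm cited in the text."""
    for suffix in ("ing", "ies", "es", "s", "ed"):
        if word.endswith(suffix) and len(word) > len(suffix) + 2:
            return word[: -len(suffix)]
    return word

def parse_query(natural_language_query):
    words = re.findall(r"[a-z0-9]+", natural_language_query.lower())   # parse into a word list
    words = [w for w in words if w not in BASIC_STOPWORDS]             # drop basic stopwords
    return [crude_stem(w) for w in words]                              # stem the remaining words

# Extended stopwords such as "what" and "under" are intentionally retained at this stage
# for phrase recognition, as described above.
```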
Phrases
Previous systems recognized linguistic structure (for example, phrases) by statistical or syntactic techniques. Phrases were recognized using statistical techniques based on the occurrence of phrases in the document collection itself; thus, proximity, co-occurrence, etc. were used. Phrases were also recognized using syntactic techniques based on the word/term structure and grammatical rules, rather than statistically. Thus, the phrase "independent contractor" could be recognized statistically by the proximity of the two words and the prior knowledge that the two words often appeared together in documents. The same term could be recognized syntactically by noting the adjective form "independent" and the noun form "contractor" and matching the words using noun phrase grammatical rules. (Manual selection systems have also been used wherein the researcher manually recognizes a phrase during input.)
Previous inference networks employed a two-term logical AND modeled as the product of the beliefs for the individual terms. Beliefs (probabilities) lie in the range between 0 and 1, with 0 representing certainty that the proposition is false and 1 representing certainty that the proposition is true. The belief assigned to a phrase is ordinarily lower than that assigned to either component term. However, experiments reveal that the presence of phrases represents a belief higher than the belief associated with either component term. Consequently, separately identifying phrases as independent representation nodes significantly increases the performance of the information retrieval system. However, single terms of an original query are retained because many of the concepts contained in the original query are not described by phrases. Experimentation has suggested that eliminating single terms significantly degrades retrieval performance even though not all single terms from an original query are required for effective retrieval.
As previously described, the phrase relationships in the search query are recognized by domain-knowledge based techniques (e.g., the phrase database), and by syntactic relationships. The primary reason to solely select syntactical and domain-based phrases for purposes of the query network is to reduce user involvement in identifying phrases for purposes of creating a query.
An example of a domain-knowledge database is a database containing phrases from a professional dictionary. This type of phrase handling is particularly suitable for professional information retrieval where specialized phrases are often employed.
At step 58 in Figure 4B, computer 20 returns to the database in ROM 24 to determine the presence of phrases within the parsed and stemmed list 42. The phrase database in ROM 24 comprises professional, domain-specific phrases (such as from Black's Law Dictionary) which have been stemmed in accordance with the same procedure for stemming the words of a search query. Computer 20 compares the first and second words of list 42 to the database of phrases in ROM 24 to find any phrase having at least those two words as the first words of a phrase. Thus, comparing the first two terms "WHAT" and "IS" to the database of phrases (such as Black's Law Dictionary), no match is found. Thus, as shown in block 44, "WHAT" is retained for the search query. The next two words "IS" and "LIABL" are compared to the database of phrases and no phrase is found. When "UNITE" and "STATE" are compared to the database, a phrase match is found. The next word "FEDERAL" is then compared to the database to determine if it corresponds to the third word of any phrase commencing with "UNITE STATE". In this case no phrase is found, so both "UNITE" and "STATE" are removed from the list 44 and substituted with a phrase representing the term "UNITE STATE". When the terms "FEDERAL" and "TORT" are compared to the database a match is found to phrases in the database. The third and fourth words "CLAIM" and "ACT" also compare to at least one phrase commencing with "FEDERAL" and "TORT". Consequently, each of the terms "FEDERAL", "TORT", "CLAIM" and "ACT" is substituted with the phrase "FEDERAL TORT CLAIM ACT". (As explained below, if a word is found to be included in successive phrases, the common word is assigned to the longer phrase, if they have an unequal number of terms, or to the first phrase of the succession, if the number of terms in the phrases is equal.) The process continues to substitute phrases from the database for sequences of stemmed words from the parsed list 42, thereby deriving the list 44.
The phrase lookup is accomplished one word at a time. The current word and next word are concatenated and used as a key for the phrase database query. If a record with the key is found, the possible phrases stored under this key are compared to the next word(s) of the query. As each phrase is found, the displacement and length of the found phrase are recorded.
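The lookup just described can be sketched in outline. The following Python fragment is only illustrative: the phrase_table layout (two-word keys mapping to full stemmed phrases) and the function name are assumptions made for the sketch, not the structure of the actual phrase database.

def find_phrases(terms, phrase_table):
    """Scan the stemmed term list left to right and record phrase matches.

    phrase_table is assumed to map a two-word key (e.g. "unite state") to the
    full stemmed phrases beginning with those words. Returns (start, length,
    phrase) tuples; overlaps are resolved later in favor of the longer phrase.
    """
    matches = []
    for i in range(len(terms) - 1):
        key = terms[i] + " " + terms[i + 1]
        for phrase in phrase_table.get(key, []):
            words = phrase.split()
            # compare the remaining words of the candidate phrase to the query
            if terms[i:i + len(words)] == words:
                matches.append((i, len(words), phrase))
    return matches

# Example using the query fragment from Figure 4B
terms = ["what", "is", "liabl", "unite", "state", "federal", "tort", "claim", "act"]
phrase_table = {
    "unite state": ["unite state"],
    "federal tort": ["federal tort claim act"],
}
print(find_phrases(terms, phrase_table))
# [(3, 2, 'unite state'), (5, 4, 'federal tort claim act')]

Overlap resolution, discussed below, would then choose among the recorded matches.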
The extended stopwords are included in the phrase matching technique because the phrases themselves contain such stopwords. For example, phrases like "doctrine of equivalents" and "tenancy at will" contain prepositions which are stopwords.
As indicated above, once successive terms have been identified as a phrase, the individual terms do not appear in the query shown at block 44 in Figure 4B. In rare cases two phrases might seemingly overlap (i.e., share one or more of the same words). In such a case, the common word is not repeated for each phrase; instead, preference in the overlap is accorded to the longer phrase. For example, if a natural language search query contained "...tenancy at will, the power of which...", the parsed and stemmed list (with basic stopwords removed) would appear as: "tenan", "at", "will", "power", "of", "which". The database could identify two possible phrases, "tenan at will" and "will power", with "will" in both phrases. As will be explained below, preference is accorded to the longest possible phrase, so the identified phrase will be "tenan at will". With the phrases identified, as at 44, the remaining extended stopwords
("what", "is", "o , "under", "for", "by", "with") are removed at step 62, and any duplicate terms are removed at step 64, to be described in greater detail below. The result is the final query shown at block 46 in Figure 4B.
Citations
Case citations, U.S. Code citations and citations to the Code of Federal Regulations (CFR) are handled as exact terms. Other citations, including subsection citations, are handled syntactically using word-level proximity as single terms or query nodes comprising numeric tokens. For example, a citation to Volume 78 Columbia Law Review page 1587 is encoded as 78 +4 1587 (meaning 78 within four words of 1587), and the citation to 17 U.S.C. 106A(e)(1) is encoded as 17 +2 106A(e)(1). To encompass most citations, it is preferred to encode all citations as within five words. Hence, the above two citations will be encoded as 78 +5 1587 and 17 +5 106A(e)(1).
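As a rough illustration of the word-level proximity encoding, the fragment below joins the numeric tokens of a citation with a five-word proximity operator. The regular expression and helper name are assumptions made for the sketch and do not reflect the actual citation parser.

import re

def encode_citation(citation, window=5):
    """Encode a citation as a proximity expression over its numeric tokens.

    Keeps volume-like and page/section-like tokens and joins them with a
    "+window" word-proximity operator, as in "78 +5 1587".
    """
    tokens = re.findall(r"\d+[A-Za-z]*(?:\([^)]*\))*", citation)
    return f" +{window} ".join(tokens)

print(encode_citation("78 Columbia Law Review 1587"))   # 78 +5 1587
print(encode_citation("17 U.S.C. 106A(e)(1)"))          # 17 +5 106A(e)(1)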
Hyphenations
Hyphenated terms in search queries are handled in much the same manner as citations. The hyphen is removed and the component words are searched using an adjacency operation which finds all adjacent occurrences of the component words.
Synonyms
Synonyms comprise equivalent words and misspellings and are created from a predefined database stored in ROM 24 (Figure 3). Examples of equivalencies include 2d/2nd/second whereas examples of misspellings include habeas/habeus. Where a search query includes a word having a synonym, a new representation node r (Figure 2) is created for each synonym. However, the weight associated with the node is based on the frequency of the entire class of nodes comprising all synonyms, rather than any one term of the class.
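A minimal sketch of the class-frequency idea follows, assuming a word index that maps each stemmed word to the set of document numbers containing it; the interface and function name are hypothetical.

def synonym_class_postings(word_index, synonyms):
    """Merge the posting lists of a synonym class (e.g. {"habeas", "habeus"}).

    The class is weighted by the frequency of the merged set of documents,
    not by the frequency of any single member term.
    """
    docs = set()
    for word in synonyms:
        docs |= word_index.get(word, set())
    return docs        # len(docs) is the class frequency used for idf_i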
Duplicate terms
Where a single word, term or phrase occurs more than once in a query, the word, term or phrase is evaluated only once. After the word, term or phrase has been processed for phrase identification as heretofore described, the duplicate word, term or phrase is simply dropped from the search query. As will be explained hereinafter, the component probability score for each document containing a term duplicated in the query is multiplied by the query frequency, and the query normalization factor is increased by that frequency. Thus, the effect is that the duplicated term is evaluated multiple times as dictated by the query, but in a computationally simpler manner.
Thesaurus Classes
Thesauri are employed to identify words of similar or related meaning, as opposed to synonyms having identical meaning. The thesauri are used to suggest broader, narrower and related terms to the researcher for inclusion in the search query. These relationships can be drawn from the machine readable dictionaries
(such as Black's Law Dictionary) encoded in databases, or from manually recorded domain knowledge.
Document Retrieval
One feature of probabilistic information retrieval systems is that the documents in the document collection are ranked in accordance with the probability that the document meets the information need identified in the query.
This permits selection of a predetermined number of documents having the highest probabilities for identification and retrieval. For a given information need, for example, it may be desirable to retrieve 20 documents from a document collection of 500,000 documents. A probabilistic information retrieval network can identify for retrieval the 20 documents having the highest probability of meeting the information need. Phrases, synonyms, proximities and thesaurus classes are not separately permanently identified in the document network. Instead, the representation nodes in the document network are created for the phrase, synonym, proximity or thesaurus class by those concept nodes (Figure 1) which themselves are a function of the phrase or term in the query.
Figures 6A-6D illustrate different treatments of phrases in the document network of an inference network. Representation concepts r1 and r2 shown in Figures 6A-6D correspond to two words in the text of document d_j. Representation concept r3 corresponds to the phrase in the text consisting of the two words. Q represents the query. For example, r1 and r2 may correspond to the occurrence of the terms "independent" and "contractor", respectively, while r3 corresponds to the occurrence of the phrase "independent contractor". In the model illustrated in Figure 6A (which is the preferred model), the phrase is treated as a separate representation concept, independent of the concepts corresponding to the component words. The belief in the phrase concept can be estimated using evidence about component words and the relationship between them, including linguistic relationships. The presence of the query phrase concept in the document increases the probability that the document satisfies the query (or information need). The model of Figure 6B illustrates the case where the belief in the phrase concept depends on the belief in the concepts corresponding to the two component words. Figure 6C illustrates a term dependence model where the phrase is not represented as a separate concept, but as a dependence between the concepts corresponding to the component words. A document that contains both words will more likely satisfy the query associated with the phrase due to the increased belief coming from the component words themselves. However, experimentation has revealed that the model of Figure 6C is less appropriate for phrases and more appropriate for thesauri and synonyms. In Figure 6D belief in the phrase concept is established from evidence from the document text itself, whereas belief in the concepts representing the component words is derived from belief in the phrase itself. The model of Figure 6D makes explicit the conditional dependence between the component concepts and addresses the practice of some authors whereby all component words of a phrase might not always be used in the text representation of a document. For the present purposes, it is preferred that document network 10 employ the phrase model of Figure 6A so that the representation concepts for the phrases are independent of the corresponding words. Hence, a match between the concept node of a search query and the concept node of a document representation is more likely to occur where the search query contains only the phrase, and not the component words. It is understood that the other models (Figures 6B-6D) could be employed with varying results.
Thus far, techniques have been described for obtaining lists containing single words, phrases, proximity terms (hyphenations and citations) and key numbers. These elements represent the basic concept nodes contained in the query. The phrases, hyphenations and citations create representation nodes of the document network. Computer 20 (Figure 3) forwards the search query to computer 32, which determines the probability that a document containing some subset of these concepts matches the original query. For each single document, the individual concepts represented by each single word, phrase, proximity term, and key number of the query are treated as independent evidence of the probability that the document meets the information need, I. The probability for each concept is determined separately and combined with the other probabilities to form an overall probability estimate.
The probabilities for individual concepts are based on the frequency with which a concept occurs in document j (tf_ij) and the frequency (f_i) with which documents containing the concept i occur in the entire collection. The collection frequency may also be expressed as an inverse document frequency (idf_i). The inference network operates on two basic premises:
■ A concept that occurs frequently in a document (a large tf_ij) is more likely to be a good descriptor of that document's content, and
■ A concept that occurs infrequently in the collection (a large idf_i) is more likely to be a good discriminator than a concept that occurs in many documents.
It can be shown that the probability P(c_i | d_j) that concept c_i is a "correct" descriptor for document d_j may be represented as
EQ 5
P(c_i | d_j) = 0.4 + 0.6 · tf_ij · idf_i
where
EQ 6
tf_ij = 0.5 + 0.5 · ( log(f_ij) / log(max f_j) )
and
EQ 7
idf_i = log(n_c / f_i) / log(n_c)
if f_ij is less than max f_j, where n_c is the number of documents in the collection, f_ij is the frequency of concept i in document j, f_i is the frequency of documents in the collection containing term i (i.e., the number of documents in which term i occurs), and max f_j is the maximum frequency for any term occurring in document j. If f_ij is not less than max f_j, then tf_ij is set to 1.
Most document networks for search and retrieval are represented by a word index containing words from the documents to be matched to query terms. In Boolean networks, relationships were determined from the word index and offset data therein to locate documents meeting the logical criteria of the query. The present invention employs a probabilistic network in which the same database and word index may be employed to calculate the probabilities set forth in Equation 5 for many of the query concepts. The number of documents in the collection, n_c, is known from the document addresses associated with words in the word index. To calculate f_i, the number of documents in the collection containing concept i is determined by locating and counting the addresses of all documents in the database containing the concept. More particularly, the document addresses associated with each word in the word index corresponding to the concept are compared to remove duplicate addresses and the remaining number of document addresses is summed. The resulting sum is f_i. The frequency, or number of times f_ij, that concept i appears in document j can be calculated from the number of offset codes for the word (and its synonyms) associated with the document. Hence, the terms idf_i and tf_ij can be calculated, thereby leading to the probability factor P(c_i | d_j) for the concept for the document in accordance with Equation 5. However, this technique is useful only for those concepts whose concept frequency is represented in the word index. Certain concepts, such as phrases, are not ordinarily so represented, so it is an aspect of the present invention to provide a technique to estimate the representation concept frequency for such concepts.
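Under the reconstruction of Equations 5 through 7 given above, the computation from word-index data can be sketched as follows. The posting-list interface and function name are assumptions made for illustration; only the arithmetic follows the equations.

import math

def component_probability(concept_postings, doc_id, max_f_j, n_c):
    """Component belief P(c_i | d_j) from word-index data (Equations 5-7
    as reconstructed above).

    concept_postings is assumed to map document numbers to lists of word
    offsets for the concept; max_f_j is the highest term frequency in the
    document; n_c is the collection size.
    """
    f_i = len(concept_postings)                  # documents containing the concept
    offsets = concept_postings.get(doc_id)
    if not offsets:
        return 0.4                               # default belief when the concept is absent
    f_ij = len(offsets)                          # occurrences of the concept in d_j
    if f_ij >= max_f_j:
        tf_ij = 1.0
    else:
        tf_ij = 0.5 + 0.5 * math.log(f_ij) / math.log(max_f_j)
    idf_i = math.log(n_c / f_i) / math.log(n_c)
    return 0.4 + 0.6 * tf_ij * idf_i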
Representation Concept Frequency Estimation
The inverse document frequency (idf_i) is predetermined for each representation concept in the document collection, except certain representations such as phrases, synonyms, proximities and thesaurus classes. For phrases, synonyms, proximities and thesaurus classes, the inverse document frequency is computed for each search. Identifying the inverse document frequency for a given phrase, synonym, proximity or thesaurus class requires processing through each document in the collection. In small collections, the computation of the inverse document frequency of a phrase, synonym, proximity, or thesaurus class may be performed without significant difficulty by examination of the word index to determine f_i, n_c and f_ij as described above. Hence, the inverse document frequency for the phrase may be calculated using Equation 7. However, in the case of large collections (of the order of 500,000 documents), computation of the inverse document frequency for a phrase, synonym, proximity or thesaurus class representation requires significant processing, if all documents containing a query concept are to be examined. Moreover, in many circumstances the computation may lead to a result which is too insignificant to affect the ranking. Consider, for example, a synonym class containing terms A and B where term A occurs in 10,000 documents in the collection of 500,000 documents and term B occurs in 10 documents. The frequency of the synonym class lies in the range of 10,000 to 10,010, resulting in a frequency difference of 10 documents in 10,010 or about 0.1%. Consequently, the range of the inverse document frequency, idf_i, lies between about 0.02000 and 0.02002, which is too small to significantly affect the result ranking. However, if term A appears in 10,000 documents and term B appears in 4,000 documents, the frequency is in the range of 10,000 to 14,000, leaving a 28.6% frequency difference and a range of inverse document frequencies between 0.02000 and 0.02800, which is significant.
One aspect of the present invention concerns the estimation of the inverse document frequency for a selected representation, such as a phrase, proximity, synonym or thesaurus class. More particularly, the representation frequency is estimated from a sample of the collection with sufficient accuracy, while avoiding extended computational resources in the evaluation of the entire collection. A sample of a plurality of documents is selected from the collection, and the representations in the sample documents are processed to identify the frequency with which the selected representation occurs in the sample. Specifically, the "gaps," or the numbers of documents (g) occurring between occurrences of documents containing the selected representation, are identified, and the sum of the squares of the gaps (sq) is employed to estimate the correct representation frequency. The gaps are identified from the successive addresses of documents containing the concept as determined from the word index of the document database. The sequence of observed gaps is employed to estimate the maximum and minimum bounds (f_max and f_min) of the true frequency within a preselected error rate. The frequency bounds are employed to compute the range of the probable inverse document frequency. When that range becomes sufficiently narrow as to insignificantly affect the result ranking, the midpoint of the frequency range is selected as the estimated frequency of occurrence of the selected representation.
After computing the frequency bounds for the given sample, if the difference between the bounds is so large that the selection of the midpoint as the estimated frequency of occurrence is likely to affect the result ranking, the sample is enlarged to include additional documents, and the frequency bounds are again computed. Ordinarily, mean and variance estimations are computed on the basis that each sample is independent, but in the present case the samples may not be independent because samples are taken sequentially, rather than randomly. To adjust for possible non-random sampling, the variation for the frequency bounds is estimated in two ways: first based on random sampling, and second based on gaps (numbers of documents found between documents containing the representation). The probable maximum frequency, f_max, and the probable minimum frequency, f_min, are computed in accordance with the following algorithms:
EQ 8
[probable maximum frequency f_max; the equation is reproduced as an image in the published document]
and
EQ 9
[probable minimum frequency f_min; the equation is reproduced as an image in the published document]
where n_i is the number of documents (or gaps between documents) in the sample containing the selected representation, n_c is the number of documents in the collection, x_i is the number of documents in the sample, S_i is the greater of x_i/n_i or the sd of the n_i gaps, and z is the standard critical value for a normal distribution for the preselected reliability, and where sd is the standard deviation and is represented by
EQ 10
[standard deviation sd of the observed gaps, expressed in terms of sq; the equation is reproduced as an image in the published document]
where sq is the sum of the squares of the gaps, or the sum of the squares of the numbers of documents found between documents containing the representation.
It is preferred that the reliability of the estimation be within 0.95 (i.e., the maximum error rate should not exceed 5%). It can be shown that the standard critical value (z) for a normal distribution of the documents of the collection, within a 0.95 reliability, is 2.8070.
There are several constraints on the calculation of f_max and f_min. First, if f_min is smaller than the a priori minimum, then f_min is set equal to the a priori minimum, and if f_max is greater than the a priori maximum, then f_max is set equal to the a priori maximum. To illustrate the a priori minimums and maximums, assume a synonym class containing terms A and B where term A appears in 10,000 documents and term B appears in 4,000 documents. Terms A and B could appear in the same or overlapping documents, meaning that term B could appear in as many as 4,000 documents with term A. Conversely, term B might appear in documents exclusive of term A. Consequently, although the actual number of occurrences of the synonym class is unknown, the synonym class appears in the range of 10,000 to 14,000 documents. Hence, an a priori minimum number of occurrences can be established at 10,000 (the number of occurrences of the more common term A), and an a priori maximum number of occurrences can be established at 14,000 (the sum of occurrences of both terms A and B). Similarly, in the case of a phrase containing two terms A and B (such as "independent contractor"), if A appears in 10,000 documents and B appears in 4,000 documents, an a priori maximum exists of 4,000 (the number of occurrences of the least common term B), because that is the maximum number of documents in which the two terms could appear together.
Hence, the a priori maximums and minimums are derived from the pre-identified frequencies f_i of the individual terms (which form or are part of the concept) in the collection, and from the type of concept (synonym, phrase, thesaurus or proximity). Another constraint concerns the calculation of the bounds from the sample: if the calculated f_min is smaller than n_i (the number of documents in the sample containing the representation), f_min is set equal to n_i. Likewise, if the calculated f_max is smaller than zero or is less than n_i, f_max is set equal to n_i + (n_c - x_i) (the number of documents in the sample containing the representation plus the number of documents of the collection yet to be considered).
The number of documents x_i in the sample necessary to estimate the frequency of the selected representation is increased until the difference between the inverse document frequencies of the maximum and minimum bounds is smaller than some prescribed amount. While the specific limit of the difference between the maximum and minimum inverse document frequencies is heuristic, it has been found that when the range of frequency values between f_max and f_min is so small that further refinement would not significantly alter the ranking of the ultimately selected documents, further computation of an estimated probable frequency for the selected representation may be halted. For purposes of the present invention, an inverse document frequency (idf_i) difference of 0.05 or less, as an empirically selected stopping point, provides good results. The estimated inverse document frequency for the selected representation is thereupon selected at the mean between the maximum and minimum bounds. If the maximum and minimum bounds are accurate, they would each be located at a maximum error of 0.025, which is deemed acceptable for the present purposes. In practice, the correct frequency error is usually smaller than 0.025 because the correct frequency tends to lie in the center of the estimated range more often than near either the maximum or minimum bound. Tests have indicated that the average error for the estimated frequency for the selected representation is about 0.01.
Figures 7A and 7B, taken together, comprise a detailed flowchart illustrating the steps of estimating the frequency of a selected concept, such as a phrase, synonym, proximity or thesaurus class. The process illustrated in Figures 7A and 7B is carried out by a computer, which calculates the probable maximum and minimum frequencies f_max and f_min shown in Equations 8 and 9 and calculates the estimated inverse document frequency, idf_i, for the selected concept.
At step 70, the number of documents in the sample (x_i), the number of documents in the sample containing the selected representation (n_i), the gap size (g), and the sum of the squares of the gaps (sq) are each initialized to 0. At step 72, 1 is added to x_i, and at step 74 the increased x_i is compared to n_c, the number of documents in the entire collection. If x_i is smaller than n_c, the first document j is examined at step 76 to determine whether or not concept i appears in the document. If the concept does not appear in the first document, 1 is added to g at step 78 and the sequence loops back through point 80 to increment x_i by 1. The process continues to loop until a document is identified containing concept i at step 76. By that point, the value of g has been incremented and is equal to the number of documents not containing concept i since identifying the previous document containing concept i. At step 82, n_i is incremented by 1, and at step 84 g² is calculated and is added to sq at step 86. At step 88 g is reset to 0.
To conserve computing resources, it is preferred that f_max and f_min not be calculated each time a document is located containing concept i. Instead, it is preferred that a decision be made at step 90 which inhibits calculation of f_max and f_min until after a predetermined number of documents containing the concept have been identified. This has two effects: first, it conserves computing resources, and second, it permits use of the actual inverse document frequency (idf_i) for those concepts not appearing often in the collection. More particularly, it is preferred that a fixed number of documents, such as 25, be found containing concept i between each calculation of f_max and f_min. Thus, at step 90 n_i is divided by 25 and if the result is a whole number (indicating that n_i is 25, 50, 75, etc.), then the process continues through steps 92, 94 and 96 to calculate f_max and f_min. On the other hand, if n_i is not equal to 25, 50, 75, etc., the process loops back through point 80 to continue to identify concept i in additional documents.
At step 92, x_i/n_i and sd are calculated, sd being calculated in accordance with Equation 10. At step 94, S_i is set to the greater of x_i/n_i or sd. At step 96, f_max and f_min are calculated.
It should be noted that g is the size of the gap or the number of successive documents not containing the concept between documents that do contain the concept. Thus, g is incremented at step 78 for each document not containing the concept and is reset at step 88 upon finding a document which does contain the concept. Term sq calculated at step 86 is the sum of the squares of the gaps g.
After the maximum and minimum estimated bounds, f_max and f_min, are computed, maximum and minimum inverse document frequencies for the concept, idf_imax and idf_imin, are calculated at step 98. At step 100, if idf_imin is within 0.05 of idf_imax, the mean frequency f_mean is computed from f_max and f_min at step 102, and the estimated inverse document frequency, idf_i, is computed at step 104 for the concept. As shown at step 100, if the range between the maximum and minimum inverse document frequencies is greater than 0.05, the process loops back to point 80 to expand the sample and the number of documents until the bounds of the estimates are within 0.05 at step 100 or until the entire collection has been examined (x_i = n_c) at step 74.
As indicated above, it is possible that the entire collection could be examined before determining an estimated inverse document frequency for the selected concept. This might occur, for example, where a concept very rarely appears in the documents. In such a case, at step 74, the computer determines that the number of documents in the sample (x_i) is equal to the number of documents in the collection (n_c), in which case the actual inverse document frequency for the concept is computed at step 106.
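The sampling loop of Figures 7A and 7B can be sketched as below. Because Equations 8, 9 and 10 are reproduced only as images in the published text, the bound computation here substitutes an ordinary normal-approximation confidence interval on the mean gap, clamped by the in-sample constraints described above; that formula, the doc_has_concept interface and the function name are assumptions for the sketch, not the patented expressions.

import math

Z = 2.8070             # standard critical value cited for 0.95 reliability
IDF_GAP_LIMIT = 0.05   # stop when the idf range is 0.05 or less
CHECK_EVERY = 25       # recompute bounds after every 25 matching documents

def estimate_concept_frequency(doc_has_concept, n_c):
    """Estimate how many of the n_c collection documents contain the concept.

    doc_has_concept(j) -> bool is assumed to consult the word index for
    document number j. Returns (estimated frequency, estimated idf).
    """
    x = 0      # documents examined so far
    n = 0      # documents in the sample containing the concept
    g = 0      # current gap (documents since the last match)
    sq = 0     # sum of squared gaps

    def idf(f):                      # normalized inverse document frequency (Equation 7)
        return math.log(n_c / f) / math.log(n_c)

    for j in range(1, n_c + 1):
        x += 1
        if not doc_has_concept(j):
            g += 1
            continue
        n += 1
        sq += g * g
        g = 0
        if n % CHECK_EVERY:          # only recompute bounds every 25 matches
            continue

        mean_gap = x / n
        sd = math.sqrt(max(sq / n - mean_gap ** 2, 0.0))   # assumed form of Equation 10
        s = max(mean_gap, sd)
        # Assumed normal-approximation bounds standing in for Equations 8 and 9,
        # clamped by the in-sample minimum n and maximum n + (n_c - x).
        lo_gap = mean_gap + Z * s / math.sqrt(n)
        hi_gap = max(mean_gap - Z * s / math.sqrt(n), 1e-9)
        f_min = max(n, n_c / lo_gap)
        f_max = min(n + (n_c - x), n_c / hi_gap)

        if idf(f_min) - idf(f_max) <= IDF_GAP_LIMIT:
            f_mean = (f_min + f_max) / 2
            return f_mean, idf(f_mean)

    return n, idf(n) if n else 1.0   # whole collection examined: exact frequency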
Partial Concepts (Phrases and Proximities)
As shown by Equation 4, the probability is computed for each concept/document pair, and the probabilities are summed. The result is normalized by the number of concepts in the query to determine the overall probability estimate that the document satisfies the information requirement set forth in the query.
Phrases are treated in a manner similar to proximity terms, except that a document which does not contain the full phrase receives a partial score for a partial phrase. For example, if a query contains the phrase "FEDERAL TORT CLAIMS ACT" and a document contains the phrase "tort claims" but not
"Federal Tort Claims Act", the document will receive a score based on the frequency distribution associated with "TORT CLAIMS". Figure 8 is a flow diagram illustrating the process of handling partial matches. As shown at step 120, the full phrase is evaluated against the collection as heretofore described. The inverse document frequency (idQ is determined for the full phrase (step 122), and if idfs is greater than a predetermined threshold (e.g., 0.3) the maximum belief achieved for any subphrase or single term is selected as the belief for the partial phrase (step 124). If idfj is smaller or equal to the threshold value (0.3), the preselected default belief (0.4) is assigned to the documents containing the partial phrase (step 126).
Since the frequency of "TORT CLAIMS" must equal or exceed that of the longer phrase, the probability estimate for the partial phrase would generally be lower than that assigned to documents containing the complete phrase. For phrases which occur extremely often (for example, where idf; is less than 0.3) it is preferred to dispense with the partial matching strategy, and treat the phrase as a pure proximity term by assigning the default belief (0.4) to all documents containing the partial phrase but not the full phrase (step 126). For phrases which appear less often (where idfj is greater than 0.3), the maximum belief achieved by any single word of the partial phrase is assigned to the belief for the partial phrase.
As previously explained, duplicate terms are purged from the search query. However, where duplicate terms appear in the search query, the component probability score for each document containing the term is multiplied by the query frequency. For example, if a document contains a term which appears twice in a natural language query and receives a component probability of 0.425, the probability score is multiplied by 2 (to 0.850) for that term. When the probabilities are summed and normalized as described above, the normalization factor is increased to reflect the frequency of the duplicated term (increased by 1 in this example). Thus, the duplicated term is treated as if it had been evaluated multiple times as dictated by the query, but in a computationally simpler manner.
As described above, the probability estimates for each document/concept pair are summed and the result is normalized by the number of concepts in the query. For the example given in Figure 4 the search query shown in block 46 employs eleven concepts, so the total probability for each document will be divided by 11 to determine the overall probability that the given document meets the overall query. For example, assume for a given document that the eleven probabilities are:
0.400 0.430 0.466
0.543 0.436 0.433
0.512 0.400 0.481
0.460 0.472
The overall probability is the sum of the individual probabilities (5.033) divided by the number of concepts (11) for a total probability of 0.458. This indicates a probability of 0.458 that the document meets the full query shown in block 40 in Figure 4. The probability is determined for each document represented in the database, whereupon they are ranked in accordance with the value of the probability estimate to identify the top D documents. The ranking or identification is provided by computer 32 (Figure 3) to computer 20 for display and/or printout at output terminal 22. Additionally, the document texts may be downloaded from computer 32 to computer 20 for display and/or printout at output terminal 22.
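The summation and normalization, including the duplicate-term weighting described earlier, can be sketched as follows; the dictionary interface and names are illustrative, and the example reproduces the eleven component beliefs listed above.

def document_score(component_beliefs, duplicate_counts=None):
    """Overall probability that a document satisfies the query.

    component_beliefs maps each distinct query concept to its component
    belief for this document; duplicate_counts gives how many extra times
    a concept appeared in the original query.
    """
    duplicate_counts = duplicate_counts or {}
    total, norm = 0.0, 0
    for concept, belief in component_beliefs.items():
        freq = 1 + duplicate_counts.get(concept, 0)
        total += belief * freq        # duplicated term counted query-frequency times
        norm += freq                  # normalization grows by the same amount
    return total / norm if norm else 0.0

# The eleven component beliefs from the example sum to 5.033; 5.033 / 11 = 0.458
beliefs = {f"c{k}": p for k, p in enumerate(
    [0.400, 0.430, 0.466, 0.543, 0.436, 0.433, 0.512, 0.400, 0.481, 0.460, 0.472])}
print(round(document_score(beliefs), 3))   # 0.458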
Probability Thresholds
As previously described, the probabilistic document retrieval system retrieves a predetermined number (D) of documents having the highest probability of meeting the information need set forth in the query. These probabilities are identified by the normalized sum of the probabilities of each representation in the document matching the concepts in the query. Significant processor resources are required to compute these probabilities for each document in a large document database, for example about 500,000 documents or more. To reduce processing resources, it is desirable to limit probability computations to a reasonable number.
One technique to reduce processing resources is to employ a probability threshold against which the probabilities of documents are compared to determine whether or not the probability of a given document meets or exceeds the threshold. For example, in a document retrieval network designed to retrieve
10 documents, the probability threshold may be set equal to the probability of the lowest ranked document of 10 selected documents. To identify 10 documents from a database of 500,000 documents, the first 10 documents of the database are listed to a result list (making the initial ranking of the top 10). A probability threshold is set equal to the probability of the lowest-ranked document of the first
10 selected documents. The probability of the 11th document is computed and compared against the probability threshold. If the probability of the 11th document exceeds that of the lowest ranked document of the original 10, the 11th document is entered into the result list of 10 selected documents and the prior lowest ranked document is removed. A new probability threshold is set to the probability of the new lowest ranked document of the 10 selected documents. Hence, the probability threshold is a "running" threshold, constantly updated and increased in value as additional documents are identified which exceed the previous threshold.
It will be appreciated that at some point in the document identification process, the threshold becomes so high that many documents may be discarded from consideration after consideration of only a few of the representation probabilities. Assume, for example, a query containing eleven concepts and a probability threshold of 0.8965 (well into the document identification process). For a document to meet the threshold, it must have a minimum sum of individual probabilities of 9.8615 (11 x 0.8965). Under such circumstances, a low representation probability amongst the first few representations may result in a mathematical impossibility of meeting the threshold. For example, if the first two representations of a document have probabilities of 0.311 and 0.400, giving a sum of 0.711, it will not be possible for that document to make the result list of 10. Even if the representation probabilities matching the other nine concepts each had a probability of 1.0, the maximum sum of probabilities would be 9.711 which is normalized to a maximum probability of 0.8828, below the probability threshold. Consequently, it is unnecessary to calculate the additional representation probabilities for the document or to further process the document's probabilities.
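The early-abandonment test implied by this example is simply a bound check, sketched below with illustrative names.

def can_still_qualify(partial_sum, concepts_remaining, n_concepts, threshold):
    """Abort scoring a document once the threshold is mathematically out of reach.

    Even if every remaining concept scored a belief of 1.0, the normalized
    score cannot exceed (partial_sum + concepts_remaining) / n_concepts.
    """
    best_possible = (partial_sum + concepts_remaining) / n_concepts
    return best_possible >= threshold

# The example from the text: two concepts scored 0.311 and 0.400 out of eleven,
# against a running threshold of 0.8965.
print(can_still_qualify(0.711, 9, 11, 0.8965))   # False: (0.711 + 9) / 11 = 0.8828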
It can be appreciated from the foregoing that comparing the document's probabilities against the threshold can provide a significant savings in processing resources.
While the foregoing probability thresholds provide significant savings in processing resources, particularly well into the search, very little savings is realized at the early stages of the search. Figure 9 is a graph illustrating the threshold setting technique described above. The process commences with a probability threshold of zero, following curve 130. When the predetermined number of documents D is initially identified, the initial threshold is established as the lowest probability of the initial 10 documents, and subsequent documents are compared against the threshold. As additional documents are processed and the threshold value increases, it can be appreciated from Figure 9 that the threshold value follows curve 130, approaching the maximum threshold level 132. The number of documents requiring examination against the probability threshold is high at the early stages of the process and decreases as the process advances. Hence, the area of the graph of Figure 9 above the curve of line 130 is representative of the number of documents requiring processing, and thus of the required processing resources.
One feature of the present invention resides in the early estimations of the probability threshold for documents meeting the information need of the query. More particularly, by selecting a sample of documents and setting the initial probability threshold as equal to the probability of the document in the sample having the highest probability, an initial threshold may be established against which further documents may be compared as previously described. This "running start" is shown in Figure 9 as the initial threshold for the process.
As the search continues through the collection, fewer documents have their probabilities scored and the probability threshold increases. Hence, document selection follows curve 134 in Figure 9. The establishment of an initial threshold as described results in a smaller area above line 134; the shaded area 136 represents a reduction in processing resources required for conducting the search. It can be statistically shown that a document retrieval system, seeking to retrieve 10 documents meeting an information need defined by a query from a document collection of 500,000 documents, will, with a 5% maximum probable error rate, find one document in the first 309 documents, two documents in the first 11,095 documents, three documents in the first 25,070 documents, and so on in accordance with the following Table I:
TABLE I
Sequence Limit (D)
309 1
11,095 2
25,070 3
48,843 4
80,269 5
118,159 6
161,889 7
211,278 8
266,579 9
500,000 10
The software algorithm for selecting the sequence of numbers for Table I is set forth below, where cs is the collection size (equal to n_c, the number of documents in the collection), gs is the goal size (equal to D, the number of documents to be selected or identified) and me is the maximum error sought. For
Table I, cs is 500,000, gs is 10 and me is 0.05.
SOFTWARE ALGORITHM
me = me ÷ ((gs - 1) * 100)
conf = 1.0 - me
p = gs ÷ cs
lowi = (-log(conf)) ÷ p     (natural log)
IF lowi = 0 THEN table(1) = lowi + 1 ELSE table(1) = lowi
DO j = 1 TO (gs - 2)
    lowi = lowi + 1
    oldhi = cs - 1
    WHILE ((oldhi - lowi) <> 1)
        highi = ((lowi + oldhi - 1) ÷ 2) + 1
        lambda = highi * p
        term = exp(-lambda)
        sum = term
        DO i = 1 TO j
            term = term * (lambda ÷ i)
            sum = sum + term
        ENDDO
        IF sum > conf THEN lowi = highi ELSE oldhi = highi
    ENDWHILE
    table(j+1) = lowi
ENDDO
table(gs) = cs
The foregoing software algorithm and Table I are employed to statistically optimize the probable document distribution in the collection, and identify one document to the result list during the first iteration, two documents to the result list during the second iteration, etc., until the final selection of ten documents is entered to the result list during the tenth iteration.
During each iteration, a new sample of documents is selected from the collection, each sample being distinct from every other sample. Thus, referring to Table I, the first sample comprises documents 1 through 309, the second sample comprises documents 310 through 11,095, the third sample comprises documents 11,096 through 25,070, etc. During the first iteration, the one document having the highest probability of meeting the information need defined by the query is selected from documents 1 through 309. During the second iteration, two documents having the two highest probabilities are selected from the group consisting of the sample of documents (documents 310 through 11,095) plus the one document selected from the previous iteration. During the third iteration, three documents having the three highest probabilities are selected from the group consisting of documents 11,096 through 25,070 plus the two documents selected during the second iteration. The process continues through all iterations (10 in the example) to identify the predetermined number D of documents (10 in the example). It is evident from the foregoing that if a given sample, such as the third sample, has two documents having probabilities which exceed the lowest of the previously selected documents, one previously selected document will be removed from the selection list. The ultimately selected documents, being ten in number, are not necessarily selected one from each of the ten samples. Instead, the selected documents are those ten documents having the highest probability of meeting the information need defined by the query, within a given error, such as 5%.
While the above software algorithm sets forth the sample selection technique for any given number of documents to be identified, the above Table I sets forth a preferred example in connection with a document database of 500,000 documents selecting the 10 documents most likely to meet the information need. Clearly, the algorithm may be used to provide the parameters for databases of other sizes, selection of other numbers of documents, and tolerance within other maximum error rates. Moreover, the algorithm may be modified to fit other examples in other situations, and, in fact, other algorithms are possible to define the sampling technique. It may be desirable to employ the probability threshold technique described above with the statistical optimization selection described above. Hence, referring to Table I, the probability threshold may be set from the first sample, requiring that documents selected during successive iterations also equal or exceed the probability threshold. As the processing continues, if the document of the first sample is ultimately replaced (that is, for a given iteration the probability of the first sample document is exceeded by the probabilities of at least the number of documents required by the iteration), a new threshold is established as the probability of the new lowest document.
Consequently, the probability threshold level continues to advance as additional documents are identified.
Figure 10 is a flowchart of the steps of the statistical optimization selection technique of developing the probability threshold and document distribution optimization for the present invention. More particularly, at step 150 the document distribution table of Table I is initialized to meet the criteria for error, numbers of documents sought, and collection size in accordance with the above-described software algorithm. At step 152, the probability threshold value is initialized to 0 and the number of documents sought to be identified, D, is initialized to one. At step 154, a document from the collection is scored utilizing the maximum score optimization technique, explained below in connection with Figure 11. At the same time, the number of documents processed since the previous document was scored is identified. At step 156, a count is incremented identifying the total number of documents from the collection which had been processed.
Referring to Table I, if the first thirty documents of the collection contain no representations matching a concept of the query, those documents will not be scored because their probabilities would be 0.4. If the thirty-first document is the first document of the collection having representations which meet concepts of the query, that document is located and scored at step 154 using the maximum score optimizations described below. At the same time, a count of 31 is entered, representative of the number of documents processed (x_i). Since the thirty-first document is the only document in the result list, it is placed at the top of the result list.
At step 158, the value from the table corresponding to D_i is compared against the number of documents x_i counted at step 156. If the number of documents x_i is smaller than the table value, the process continues to step 160. At step 160, each scored document is entered into the result list stored in the memory of the computer in descending order of probabilities. Thus, the document with the highest probability appears at the top of the result list whereas the document meeting the maximum score optimizations having the lowest probability is at the bottom of the list. In the initial iteration, x_i is 31, since thirty-one documents had been processed, and the value from Table I is 309 (corresponding to D_i = 1).
Since the value from the table, 309, is greater than x_i, 31, the probability threshold is set at step 162 to the score of the D_i-th document in the result list, which in the example is the thirty-first document. At step 164, the number of documents processed, x_i, is compared to the total number of documents in the collection, n_c, and if the number of documents processed is smaller than the number of documents in the collection, the process loops back through point 166 to return to step 154. Any further documents which have probabilities less than the threshold probability (or which cannot mathematically achieve a probability greater than the probability threshold after calculation of less than all representation probabilities) are excluded (or not scored) at step 154.
Assume document one hundred eighty has a probability greater than the probability threshold established by document thirty-one. Document one hundred eighty is identified at step 154 and inserted into the result list in probability order, its probability being greater than that of document thirty-one. At step 156, x_i is incremented to indicate the count, 180, of the number of documents thus far processed, which count is still smaller than 309, the number in Table I associated with D_i. Consequently, the sequence proceeds to step 160 to insert document one hundred eighty into the result list. At step 162 the probability threshold is set to the score of the D_i-th document in the result list. Since D_i is 1, the probability threshold is set to the score of document one hundred eighty.
Assume the next document having a probability greater than the probability threshold set by document one hundred eighty is document six hundred ten. Document six hundred ten is found and scored at step 154. At step 156 the count x_i is incremented to 610, and since the value 309 from Table I is not greater than 610 at step 158, D_i is incremented by 1 at step 168 so that the new value from Table I to be considered is 11,095. The process loops back to step 158 where the value 11,095 from Table I is found to be greater than 610. Hence the process continues to step 160 where document six hundred ten is inserted in the result list in probability order. At step 162 a new probability threshold equal to the score of the D_i-th document in the result list is to be set. In this case, however, nothing changes because D_i is now set to 2, meaning that both documents one hundred eighty and six hundred ten appear in the result list, and the probability threshold will continue to be set to the score of the document of the result list having the lowest probability, namely document one hundred eighty.
The process continues through the remainder of the database, incrementally increasing the value from Table I against which the document count is compared at step 158, the process continuing until 10 documents are identified and all documents in the database have been processed. When this occurs, x_i equals n_c at step 164 and the final result list is retrieved at step 168. It might be advantageous, particularly where small document collections are to be searched and processing power is large, to perform the process of Figure 10 for only a single iteration to find the document of the first sample having the highest probability and to set the probability threshold to the probability of that document for scoring the remainder of the document collection in the manner described above. Thus, the probabilities of documents added to the result list must exceed the initial probability threshold, at least until the preselected number of documents is added to the result list. Thereafter, the probability threshold is increased as additional documents having higher probabilities are added to the list and documents with the lowest probabilities are removed from the list.
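A condensed sketch of the Figure 10 iteration follows. The "score" callable stands in for the maximum score optimization of Figure 11 and is assumed to return None for documents it rules out; the heap-based result list, the boundary-table interface and the other names are assumptions made for the sketch.

import heapq

def select_top_documents(score, n_c, boundaries, goal):
    """Maintain a result list of up to `goal` documents while walking the collection.

    boundaries is the Table I sequence; the running threshold is taken from
    the D_i-th best document once the count of processed documents passes
    the corresponding boundary.
    """
    result = []              # min-heap of (probability, document number) pairs
    d_i = 1                  # number of documents currently sought
    threshold = 0.0
    for x_i, doc in enumerate(range(1, n_c + 1), start=1):
        while d_i < goal and x_i > boundaries[d_i - 1]:
            d_i += 1         # move to the next, larger sample
        p = score(doc, threshold)
        if p is None or p <= threshold:
            continue
        heapq.heappush(result, (p, doc))
        if len(result) > goal:
            heapq.heappop(result)
        if len(result) >= d_i:
            # threshold = probability of the D_i-th ranked document so far
            threshold = sorted(result, reverse=True)[d_i - 1][0]
    return sorted(result, reverse=True)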
In any event, if less than the preselected number of documents are ultimately identified to the result list, a new probability threshold may be established slightly below the probability of the document on the result list with the lowest probability and the entire collection re-scored as described above.
Maximum Score Optimization
This technique is illustrated in the flow chart of Figure 11. More particularly, Figure 11 illustrates the iterative loops for scoring documents employed at step 154 in Figure 10. Each document in the document database has a document number associated with it. The maximum score optimization commences with the concept i_1 in the query having the highest idf_i. A lower bound document number is chosen (such as the lowest document number in the database). The first document d_j whose document number is greater than the lower bound document number and which contains the concept i_1 is selected as a candidate document.
A remainder score is initialized to the maximum possible score less the value that document d_j scores for the concept i_1 being examined. Thus, the remainder score value represents the maximum score which each document that does not contain concept i_1 could achieve without concept i_1. The process continues by iterating through each of the concepts i_2, i_3, etc. The concepts are processed in descending order of concept idf_i value. As noted above, the concept with the highest idf_i is the concept which appears least frequently in the collection and is more likely to be a good discriminator than a concept which appears more often. The processing for each concept commences with the document having a document number greater than or equal to the lower bound document number. In the processing, three conditions can occur.
1. If the document number for the current concept is equal to that of the candidate document, the candidate document contains the concept and no change is made to the maximum score. Instead, the process continues to the next concept.
2. If the document number for the current concept is greater than that of the candidate document, the current document does not contain the concept and the value of the current concept is subtracted from the maximum score for the candidate document and the remainder score is adjusted. If the maximum score is still high enough that the candidate document might still be selected, the processing will continue to the next concept. If not, the candidate document is discarded and the processing starts over with the next higher document number as the candidate document.
3. If the document number for the current concept is less than that of the candidate document, a document exists with a lower number which must be evaluated before continuing with the candidate document.
The remainder score tabulated for each document represents the maximum score that document can achieve based on the concepts processed up to that point and the possibility that it contains all the subsequent concepts. As each concept is processed, the remainder score for the document is reduced by the value of the concept for each document in which the concept does not appear. In considering the remainder score, two possibilities exist.
1. If the remainder score is less than the minimum document score necessary to remain in the result list, then that document, and all other documents up to the candidate document number, can be discarded, since it is not possible for any of them to achieve a document score high enough to remain in the result list. In this situation, the next document number which is greater than or equal to the candidate document number is selected for the concept and the processing continues as described above. 2. If the remainder score is not less than the minimum document score necessary to remain in the result list, then the document is considered as a candidate for the result list. In this case, the document score for the document is set to the current remaining score and the candidate document number is reset. The process continues until a candidate is found having a maximum possible score greater than the probability threshold required to remain in the result list.
The process of the maximum score optimization may be explained with reference to the flowchart of Figure 11. At step 180 the lower bound document number, the probability threshold (from step 152 or 162 in Figure 10) and the maximum possible score are inputted. For the initial iteration for a given document, the probability threshold is initialized to 0 at step 152 in Figure 10 and the maximum possible score is initialized. The lower bound document number is set to the first document in the database desired to be reviewed. At step 182, the first document having a document number greater than or equal to the lower bound document number and which contains the concept having the highest idf_i is identified as a candidate document. Thus, the document number is identified for the first document containing the concept. At step 184, the remainder score for all other documents having a lower number is initialized to be equal to the maximum possible score less the incremental concept value for the missing concept i_1 having the highest idf_i. At step 186, a decision is made as to whether all the concepts have been processed, and if they have not, the current concept is set, at step 188, to the concept i_2 whose idf_i is next highest in value below the first concept i_1. At step 190, the document number is set to the document number of the next document greater than or equal to the lower bound document number for the current (second) concept i_2. At step 192, if the document number of the document containing the concept is less than the current candidate document number, then a decision is made at step 194 whether the remainder score is smaller than the probability threshold initialized at step 152 or set at step 162 in Figure 10. If the remainder score is smaller than the probability threshold, then the lower bound document number is set to the current candidate document number and the document number of the next document containing the concept i_2 currently being processed is set to the next document number greater than or equal to the current lower bound document number for the current concept. The concept incremental value is subtracted at step 200 from the remainder score.
If, at step 194, the remainder score is greater than or equal to the probability threshold, then the candidate document number is set, at step 202, to the document number of the next document containing the concept, and the candidate document score is set, at step 204, to the remainder score. The process then continues to step 200 to subtract the concept incremental value from the remainder score for the documents not containing the concept.
If at step 192 the document number containing the concept is greater than or equal to the candidate document number, then the process continues directly to step 200 where the concept incremental value is subtracted from the remainder score for the documents not containing the concept.
At step 206, if the document number containing the concept is equal to the candidate document number, then the candidate document is found to contain the concept, and the process returns to step 186 and processes through the loop again for the next concept. If the document number containing the concept is not equal to the candidate document number, then the concept incremental value is subtracted from the candidate document score at step 208. If the resulting candidate document score is greater than the probability threshold, the process loops back through step 186 again. On the other hand, if the candidate document score is not greater than the probability threshold, the lower bound document number is set to the candidate document number plus 1 and the process reloops to step 182.
If a candidate document loops through the process of Figure 11 through all of the concepts of the query, and the document score is greater than the probability threshold at step 210, step 186 identifies that all concepts have been processed and returns the document at step 214 for insertion into the full result list in sorted order at step 156 in Figure 10. The process terminates for a given threshold value only when a candidate is found, after all concepts have been examined, which has a maximum possible score greater than the probability threshold required to remain in the result list. The process iterates through the loops illustrated in Figure 10 until the required number of documents for the result list is identified. The documents may then be retrieved from the database using the result list at step 170, the scoring of each document occurring through the iterations of the loops of Figure 11. It may be desirable to incorporate certain relational constraints on the placement of documents into the result list. As one example, it might be desirable to limit the search output to documents dated after a given date. Suffice it to say that such a constraint can be imposed on the document retrieval system in a manner well known in the art.
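The heart of the Figure 11 optimization — abandoning a candidate as soon as its best achievable normalized score falls below the threshold — can be sketched document-at-a-time as below. The sketch omits the posting-list iteration and candidate-renumbering details of the flowchart; the (belief_fn, max_value) interface is an assumption made for illustration.

def max_score_prune(doc, concepts, threshold):
    """Score a document concept by concept, abandoning it early when the
    best achievable normalized score can no longer reach the threshold.

    concepts is assumed to be a list of (belief_fn, max_value) pairs sorted
    by descending idf; belief_fn(doc) returns the component belief, and
    max_value is the largest belief the concept can contribute.
    """
    n = len(concepts)
    upper = sum(max_value for _, max_value in concepts)   # best possible sum
    total = 0.0
    for belief_fn, max_value in concepts:
        b = belief_fn(doc)
        total += b
        upper -= (max_value - b)      # tighten the bound with the actual belief
        if upper / n < threshold:
            return None               # cannot make the result list; stop scoring
    return total / n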
Document Retrieval
Figures 12 and 13 are flowcharts detailing the construction and evaluation of an inference network, Figure 12 being a detailed flowchart for constructing the query network 12 and Figure 13 being a detailed flowchart for evaluating the query network in the context of the document network 10. As heretofore described, an input query written in natural language is loaded into the computer, such as into a register therein, and is parsed (step 220), compared to the stopwords in database 222 (step 224) and stemmed at step 226. The result is the list 42 illustrated in Figure 4. Using synonym database 228, the list is compared at step 230 to the synonym database and synonyms are added to the list. As will be explained hereinafter, the handling of synonyms may actually occur after the handling of phrases. Citations are located at step 232 as heretofore described. More particularly, a proximity relationship is established showing the page number within five words of the volume number, without regard to the reporter system employed. The handling of citations, like the handling of synonyms, may be accomplished after phrase resolution, if desired.
Employing phrase database 234, a decision is made at step 236 as to whether or not phrases are present in the query. If phrases are present, a comparison is made at step 240 to identify phrases. At step 242 a determination is made as to whether successive phrases share any common term(s) (an overlap condition). More particularly, and as heretofore described, terms which are apparently shared between successive phrases are detected at step 242. At step 244 a determination is made as to which phrase is the longer of the two phrases, and the shared term is included in the longer phrase and excluded from the shorter phrase. As a result of deleting the shared term from the shorter phrase, the resulting shorter phrase may not be a phrase at all, in which case the remaining term(s) are simply handled as stemmed words. On the other hand, if the two phrases are of equal length, then the shared term is accorded to the first phrase and denied to the second phrase. After the overlap conflict is resolved at step 244, the resulting phrase substitution occurs at step 246. The process loops back to step 236 to determine if phrases are still present, and if they are the process repeats until no further phrases are present. At step 238, all duplicate terms are located, mapped, counted and removed, with a count V representing the number of duplicate terms removed. Thus, the search query illustrated at block 46 in Figure 4 is developed. As heretofore described, the handling of synonyms and citations may occur after resolution of the phrases, rather than before.
As illustrated in Figure 13, the resulting search query is provided to the document network where, at step 250, the number of terms T is counted, at step 252 i is set to 0, and at step 254 1 is added to i. Using document database 256, which also contains the text of the documents, the inverse document frequency (idf_i) is determined and the probability estimate (t_ij) is determined at step 258. As noted above, both f_ij and idf_i are calculated from addresses, document numbers and offset data in the word index of the document database. The estimated inverse document frequency (idf_i) is also added to the database via a temporary memory or register. The component probability is determined at step 260 as heretofore described and is accumulated with the other component probabilities at step 262. At step 264 a determination is made as to whether or not i equals T (where T is the number of terms in the search query). If all of the terms have not been compared to the database, the process loops back, adding 1 to i, and is repeated for each term until i equals T at step 264. As heretofore described, when terms whose duplicates were deleted from the input query are processed at step 258, the probability for such terms is multiplied by the number of duplicates deleted, thereby weighting the probability in accordance with the frequency of the term in the original input query. Consequently, at step 266, it is necessary to divide the accumulated component probability for the document by V + T (where V is the number of duplicate terms deleted from the input query) to thereby normalize the probability. The probability for each document is stored at step 268 and the process is repeated at step 270 for the other documents. At step 272 the documents are ranked in accordance with the determined probabilities, and the top-ranked documents are printed out or displayed at step 274.
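A minimal sketch of the accumulation and normalization just described is given below. The tf-idf style component estimate shown is a generic placeholder, not the exact probability formula of the inference network, and the function and parameter names are hypothetical.

import math

def accumulate_document_probability(query_terms, dup_counts, doc_term_freqs,
                                    doc_freqs, n_docs):
    """query_terms: de-duplicated search-query terms (T = len(query_terms));
    dup_counts[t]: occurrences of t in the original input query;
    doc_term_freqs[t]: frequency of t in the candidate document;
    doc_freqs[t]: number of documents containing t; n_docs: collection size."""
    T = len(query_terms)
    V = sum(dup_counts.get(t, 1) - 1 for t in query_terms)   # duplicates removed
    accumulated = 0.0
    for t in query_terms:
        idf = math.log(n_docs / max(doc_freqs.get(t, 1), 1))  # steps 254-258
        tf = doc_term_freqs.get(t, 0)
        component = tf * idf                                   # placeholder estimate
        # A term whose duplicates were deleted is weighted by its original count.
        accumulated += component * dup_counts.get(t, 1)        # steps 260-262
    return accumulated / (V + T)                               # normalize (step 266)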
As previously described, the scan technique may be a concept-based scan rather than the document-based scan described. Further, as previously described, the scan of any given document may be aborted before completion if the probabilities indicate that the document will not reach the cutoff for the D top-ranked documents to be displayed or printed.
While the present invention has been described in connection with a time-shared computer system shown in Figure 3, wherein search queries are generated by PC computers or dumb terminals for transmission to and time-shared processing by a central computer containing the document network, it may be desirable in some cases to provide the document network (with or without the document text database) to the user for direct use at the PC. In such a case, the document database would be supplied on the same ROM 24 as the databases used with the search query, or on a separately supplied ROM for use with computer 20. For example, in the case of a legal database, updated ROMs containing the document database could be supplied periodically on a subscription basis to the user. In any case, the stopwords, phrases and key numbers would not be changed often, so it would not be necessary to change the ROM containing the databases of stopwords, phrases and key numbers.
Although the present invention has been described with reference to preferred embodiments, workers skilled in the art will recognize that changes may be made in form and detail without departing from the spirit and scope of the invention.

Claims

WHAT IS CLAIMED IS:
1. Apparatus for estimating the frequency of occurrence of documents containing a selected representation in a collection of documents represented by a database, comprising: sample selection means for selecting a sample comprising a plurality of documents from the collection, the sample containing fewer documents than the entire collection; frequency identifying means responsive to the sample selection means for identifying the frequency of occurrence of documents containing the selected representation in the selected sample of documents; processor means responsive to the frequency identifying means for calculating a maximum and a minimum probable frequency of occurrence of documents containing the selected representation in the collection and for identifying if the difference between the maximum and minimum probable frequencies is within a preselected limit; and selection means responsive to the processor means for selecting a midpoint of the maximum and minimum probable frequencies as the estimated frequency of occurrence of documents containing the selected representation if the calculated difference between the maximum and minimum probable frequencies is within the preselected limit.
2. The apparatus according to claim 1 further including adjusting means responsive to the processor means for adding additional documents from the collection to the sample of documents if the calculated difference between the maximum and minimum probable frequencies exceeds the preselected limit.
3. The apparatus according to claim 1 where the processor means calculates the maximum probable frequency, f_max, and the minimum probable frequency, f_min, in accordance with relationships based on the number of gaps between documents in the sample containing the selected representation (n_g), the number of documents in the collection (n_c), and the number of documents in the sample (x_j).
4. The apparatus according to claim 3 where the selected representation contains a plurality of terms, said processor means setting f_min equal to n_g if the calculated f_min is smaller than n_g, said processor means setting f_max equal to n_g + (n_c - x_j) if the calculated f_max is smaller than zero or smaller than n_g, and said processor means setting f_max equal to an a priori maximum if the calculated f_max is greater than the a priori maximum.
5. The apparatus according to claim 4 wherein the selected representation is a synonym represented by a plurality of terms, and wherein the a priori maximum is equal to the sum of all frequencies of occurrence of documents in the collection containing a term of the synonym, said processor means setting f_min equal to an a priori minimum if the calculated f_min is smaller than the a priori minimum, where the a priori minimum is equal to the frequency of occurrence of documents containing the term of the synonym appearing in the greatest number of documents in the collection.
6. A method of estimating the frequency of occurrence of documents containing a selected representation in a collection of documents, comprising: selecting a sample comprising a plurality of documents from the collection, the sample containing fewer documents than the entire collection; identifying the frequency of occurrence of documents containing the selected representation in the selected sample of documents; calculating a maximum and a minimum probable frequency of occurrence of documents containing the selected representation in the collection; identifying whether the difference between the maximum and minimum probable frequencies is within a preselected limit; and selecting a midpoint of the maximum and minimum probable frequencies as the estimated frequency of occurrence of documents containing the selected representation if the calculated difference between the maximum and minimum probable frequencies is within the preselected limit.
7. The method according to claim 6 further including adding additional documents to the sample from the collection if the calculated difference between the maximum and minimum probable frequencies exceeds the preselected limit.
8. Apparatus for identifying documents of a document collection containing representations that match a query containing a plurality of concepts, the apparatus comprising: sample selection means for selecting a sample comprising a plurality of documents from the collection, the sample containing fewer documents than the entire collection, processing means for calculating probabilities that documents contained in the sample contain representations that match the query and for identifying a first document contained in the sample having the highest calculated probability, the processing means being responsive to the probability of the first document for identifying a predetermined number of documents contained in the document collection having the highest probabilities that they respectively contain representations that match the query.
9. The apparatus according to claim 8 wherein the sample selection means iteratively selects successive samples of a plurality of documents from the collection for examination, each sample containing fewer documents than the entire collection and each successive sample containing documents different from each previous sample; the processing means is responsive to the sample selection means to identify, during each iteration, a preselected number of documents having the highest probabilities that they respectively contain representations that match the query, the documents being identified during an iteration from a group consisting of a respective sample of documents and the documents identified during the next previous iteration, the preselected number being no greater than the predetermined number.
10. The apparatus according to claim 9 further including threshold setting means responsive to the processing means for setting a probability threshold equal to the probability of the first document, the threshold setting means being responsive to the processing means to reset the probability threshold to the probability of the identified document having the lowest probability.
11. The apparatus according to claim 10 including determining means operable during each respective iteration and responsive to the identification of the preselected number of documents by the processing means to determine if an additional document has a probability greater than the probability threshold, the processing means being responsive to the determining means to replace the previously-identified document having the lowest probability by the additional document, and the threshold setting means being responsive to the processing means to reset the probability threshold to the probability of the identified document having the new lowest probability.
12. The apparatus according to claim 8 further including threshold setting means responsive to the processing means for setting a probability threshold equal to the probability of the first document, calculating means for calculating the probability that the representations in a document match a concept in the query, estimating means responsive to the calculating means for estimating a maximum probability for the document based on the calculated probability and an assumption that the representations in the document match the concepts of the query for which probabilities have not been calculated, the calculating means being responsive to the estimating means to cease probability calculation for the document if the estimating means estimates a maximum probability for the document that does not exceed the probability threshold, the calculating means being further responsive to the estimating means to calculate the probability that the representations in a document match additional concepts until either the probability calculation is ceased in response to an estimation of maximum probability by the estimating means or the probability is calculated for all concepts in the query.
13. The apparatus according to claim 12 wherein the processing means includes a result list responsive to the calculating means to identify in probability order, up to said predetermined number of documents whose probability calculation is not ceased by the calculating means, the threshold setting means being responsive to the result list to reset the probability threshold equal to the probability of the document lowest on the result list.
14. The method of identifying documents of a document collection containing representations that match a query containing a plurality of concepts, comprising selecting a sample comprising a plurality of documents from the collection, the sample containing fewer documents than the entire collection, calculating the probabilities that documents contained in the sample contain representations that match the query, identifying the document contained in the sample having the highest probability; and identifying a predetermined number of documents of the collection having the highest probabilities that they respectively contain representations that match the query.
15. The method according to claim 14 including iteratively selecting successive samples of a plurality of documents from the collection for examination, each sample containing fewer documents than the entire collection, and each successive sample containing documents different from each previous sample; identifying, during each iteration, a preselected number of documents having the highest probabilities that they respectively contain representations that match the query, the documents being selected from a group consisting of a respective sample of documents and the documents identified during the next previous iteration, the preselected number being no greater than the predetermined number.
16. The method according to claim 15 including setting a probability threshold to the probability of the identified document having the lowest probability of all identified documents, and during each respective iteration and after the preselected number of documents has been identified, determining if an additional document has been identified having a probability greater than the probability threshold, and if so, replacing the previously-identified document having the lowest probability with the additional document and resetting the probability threshold to the probability of the identified document having the new lowest probability.
17. The method according to claim 14 further including setting a probability threshold equal to the probability of the identified document of the sample, and document probabilities are calculated by: a) calculating the probability that the representations in a document match a first concept in the query, b) estimating a maximum probability for the document based on the calculated probability and an assumption that the representations in the document match the concepts of the query for which probabilities have not been calculated, c) ceasing probability calculation for the document if the estimated maximum probability for the document does not exceed the probability threshold, and d) repeating steps a) to c) for additional query concepts until either the probability calculation is ceased or the probability is calculated for all concepts in the query.
18. The method according to claim 17 wherein those documents whose probability calculation is not ceased in step c) are identified to a result list in probability order, up to said predetermined number, said process further including resetting the probability threshold equal to the probability of the document lowest on the result list.
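By way of illustration only, the following sketch gives the general idea of the frequency-estimation method recited in claims 6 and 7: documents are sampled, maximum and minimum probable collection-wide frequencies are bounded, documents are added to the sample while the bounds remain too far apart, and the midpoint of the bounds is taken as the estimate once they are close enough. The confidence-bound formulas shown are generic placeholders and are not the relationships actually used by the invention; all names are hypothetical.

import math
import random

def estimate_frequency(collection, contains, limit, step=200, z=1.96):
    """collection: list of document ids; contains(doc): True if the document
    contains the selected representation; limit: acceptable gap between the
    maximum and minimum probable frequencies; step: documents added per pass."""
    pool = list(collection)
    random.shuffle(pool)
    n_c = len(collection)
    sample, hits = [], 0
    while pool:
        chunk, pool = pool[:step], pool[step:]        # enlarge the sample
        sample.extend(chunk)
        hits += sum(1 for d in chunk if contains(d))
        n_s = len(sample)
        p = hits / n_s
        se = math.sqrt(max(p * (1 - p), 1e-9) / n_s)  # placeholder confidence bound
        f_min = max(hits, math.floor(n_c * (p - z * se)))
        f_max = min(hits + (n_c - n_s), math.ceil(n_c * (p + z * se)))
        if f_max - f_min <= limit:
            return (f_max + f_min) / 2                # midpoint is the estimate
    return hits                                       # whole collection sampled: exact count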
PCT/US1994/002579 1993-03-30 1994-03-10 Probabilistic information retrieval networks WO1994023386A2 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
AU64450/94A AU6445094A (en) 1993-03-30 1994-03-10 Probabilistic information retrieval networks

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US08/039,757 US5488725A (en) 1991-10-08 1993-03-30 System of document representation retrieval by successive iterated probability sampling
US08/039,757 1993-03-30

Publications (2)

Publication Number Publication Date
WO1994023386A2 true WO1994023386A2 (en) 1994-10-13
WO1994023386A3 WO1994023386A3 (en) 1994-11-10

Family

ID=21907211

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US1994/002579 WO1994023386A2 (en) 1993-03-30 1994-03-10 Probabilistic information retrieval networks

Country Status (3)

Country Link
US (1) US5488725A (en)
AU (1) AU6445094A (en)
WO (1) WO1994023386A2 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8429098B1 (en) 2010-04-30 2013-04-23 Global Eprocure Classification confidence estimating tool

Families Citing this family (279)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5649183A (en) * 1992-12-08 1997-07-15 Microsoft Corporation Method for compressing full text indexes with document identifiers and location offsets
US5619709A (en) * 1993-09-20 1997-04-08 Hnc, Inc. System and method of context vector generation and retrieval
SE502658C2 (en) * 1994-02-28 1995-12-04 Non Stop Info Ab Procedure and control device for reading identity and value documents.
US6473860B1 (en) 1994-04-07 2002-10-29 Hark C. Chan Information distribution and processing system
US7991347B1 (en) * 1994-04-07 2011-08-02 Data Innovation Llc System and method for accessing set of digital data at a remote site
US5704018A (en) * 1994-05-09 1997-12-30 Microsoft Corporation Generating improved belief networks
JPH07319917A (en) * 1994-05-24 1995-12-08 Fuji Xerox Co Ltd Document data base managing device and document data base system
JPH07319918A (en) * 1994-05-24 1995-12-08 Fuji Xerox Co Ltd Device for specifying retrieving object in document
US5745745A (en) * 1994-06-29 1998-04-28 Hitachi, Ltd. Text search method and apparatus for structured documents
US7181758B1 (en) 1994-07-25 2007-02-20 Data Innovation, L.L.C. Information distribution and processing system
JP3030533B2 (en) * 1994-07-26 2000-04-10 篤 今野 Information classifier
US5717913A (en) * 1995-01-03 1998-02-10 University Of Central Florida Method for detecting and extracting text data using database schemas
US5794050A (en) * 1995-01-04 1998-08-11 Intelligent Text Processing, Inc. Natural language understanding system
US5946678A (en) * 1995-01-11 1999-08-31 Philips Electronics North America Corporation User interface for document retrieval
US5694559A (en) * 1995-03-07 1997-12-02 Microsoft Corporation On-line help method and system utilizing free text query
US5855015A (en) * 1995-03-20 1998-12-29 Interval Research Corporation System and method for retrieval of hyperlinked information resources
US5748954A (en) * 1995-06-05 1998-05-05 Carnegie Mellon University Method for searching a queued and ranked constructed catalog of files stored on a network
US5675710A (en) * 1995-06-07 1997-10-07 Lucent Technologies, Inc. Method and apparatus for training a text classifier
US6067552A (en) * 1995-08-21 2000-05-23 Cnet, Inc. User interface system and method for browsing a hypertext database
JPH0981574A (en) * 1995-09-14 1997-03-28 Fujitsu Ltd Method and system for data base retrieval using retrieval set display picture
JP3040945B2 (en) * 1995-11-29 2000-05-15 松下電器産業株式会社 Document search device
US5787424A (en) * 1995-11-30 1998-07-28 Electronic Data Systems Corporation Process and system for recursive document retrieval
US5689696A (en) * 1995-12-28 1997-11-18 Lucent Technologies Inc. Method for maintaining information in a database used to generate high biased histograms using a probability function, counter and threshold values
US5819260A (en) * 1996-01-22 1998-10-06 Lexis-Nexis Phrase recognition method and apparatus
US5754840A (en) * 1996-01-23 1998-05-19 Smartpatents, Inc. System, method, and computer program product for developing and maintaining documents which includes analyzing a patent application with regards to the specification and claims
CA2245913C (en) * 1996-04-10 2002-06-11 At&T Corp. A system and method for finding information in a distributed information system using query learning and meta search
JP3113814B2 (en) * 1996-04-17 2000-12-04 インターナショナル・ビジネス・マシーンズ・コーポレ−ション Information search method and information search device
US5995921A (en) * 1996-04-23 1999-11-30 International Business Machines Corporation Natural language help interface
US5721896A (en) * 1996-05-13 1998-02-24 Lucent Technologies Inc. Method for skew resistant join size estimation
US20030195847A1 (en) * 1996-06-05 2003-10-16 David Felger Method of billing a purchase made over a computer network
US8229844B2 (en) 1996-06-05 2012-07-24 Fraud Control Systems.Com Corporation Method of billing a purchase made over a computer network
US7555458B1 (en) * 1996-06-05 2009-06-30 Fraud Control System.Com Corporation Method of billing a purchase made over a computer network
US5920859A (en) * 1997-02-05 1999-07-06 Idd Enterprises, L.P. Hypertext document retrieval system and method
US5778362A (en) * 1996-06-21 1998-07-07 Kdl Technologies Limted Method and system for revealing information structures in collections of data items
US6581056B1 (en) * 1996-06-27 2003-06-17 Xerox Corporation Information retrieval system providing secondary content analysis on collections of information objects
US5813002A (en) * 1996-07-31 1998-09-22 International Business Machines Corporation Method and system for linearly detecting data deviations in a large database
US5787435A (en) * 1996-08-09 1998-07-28 Digital Equipment Corporation Method for mapping an index of a database into an array of files
US5765158A (en) * 1996-08-09 1998-06-09 Digital Equipment Corporation Method for sampling a compressed index to create a summarized index
JP3099756B2 (en) * 1996-10-31 2000-10-16 富士ゼロックス株式会社 Document processing device, word extraction device, and word extraction method
US5950189A (en) * 1997-01-02 1999-09-07 At&T Corp Retrieval system and method
US6128712A (en) 1997-01-31 2000-10-03 Macromedia, Inc. Method and apparatus for improving playback of interactive multimedia works
DE29704393U1 (en) * 1997-03-11 1997-07-17 Aesculap Ag Device for preoperative determination of the position data of endoprosthesis parts
US7308485B2 (en) * 1997-04-15 2007-12-11 Gracenote, Inc. Method and system for accessing web pages based on playback of recordings
US7167857B2 (en) 1997-04-15 2007-01-23 Gracenote, Inc. Method and system for finding approximate matches in database
US5895464A (en) * 1997-04-30 1999-04-20 Eastman Kodak Company Computer program product and a method for using natural language for the description, search and retrieval of multi-media objects
US6460034B1 (en) * 1997-05-21 2002-10-01 Oracle Corporation Document knowledge base research and retrieval system
US6128613A (en) * 1997-06-26 2000-10-03 The Chinese University Of Hong Kong Method and apparatus for establishing topic word classes based on an entropy cost function to retrieve documents represented by the topic words
US5873081A (en) * 1997-06-27 1999-02-16 Microsoft Corporation Document filtering via directed acyclic graphs
US5926808A (en) * 1997-07-25 1999-07-20 Claritech Corporation Displaying portions of text from multiple documents over multiple databases related to a search query in a computer network
US5950196A (en) * 1997-07-25 1999-09-07 Sovereign Hill Software, Inc. Systems and methods for retrieving tabular data from textual sources
US6105023A (en) * 1997-08-18 2000-08-15 Dataware Technologies, Inc. System and method for filtering a document stream
US6081805A (en) * 1997-09-10 2000-06-27 Netscape Communications Corporation Pass-through architecture via hash techniques to remove duplicate query results
US5845278A (en) * 1997-09-12 1998-12-01 Inioseek Corporation Method for automatically selecting collections to search in full text searches
US6018733A (en) * 1997-09-12 2000-01-25 Infoseek Corporation Methods for iteratively and interactively performing collection selection in full text searches
DE69730057T2 (en) * 1997-09-29 2005-08-04 Webplus Ltd., Road Town A MULTI-ELEMENT TRUST INTERPRETATION SYSTEM AND METHOD THEREFOR
US5966702A (en) * 1997-10-31 1999-10-12 Sun Microsystems, Inc. Method and apparatus for pre-processing and packaging class files
US5987457A (en) * 1997-11-25 1999-11-16 Acceleration Software International Corporation Query refinement method for searching documents
US6389436B1 (en) * 1997-12-15 2002-05-14 International Business Machines Corporation Enhanced hypertext categorization using hyperlinks
US5983221A (en) * 1998-01-13 1999-11-09 Wordstream, Inc. Method and apparatus for improved document searching
US6119124A (en) * 1998-03-26 2000-09-12 Digital Equipment Corporation Method for clustering closely resembling data objects
US7778954B2 (en) * 1998-07-21 2010-08-17 West Publishing Corporation Systems, methods, and software for presenting legal case histories
US7529756B1 (en) 1998-07-21 2009-05-05 West Services, Inc. System and method for processing formatted text documents in a database
US6363377B1 (en) * 1998-07-30 2002-03-26 Sarnoff Corporation Search data processor
US6405188B1 (en) * 1998-07-31 2002-06-11 Genuity Inc. Information retrieval system
US6377949B1 (en) 1998-09-18 2002-04-23 Tacit Knowledge Systems, Inc. Method and apparatus for assigning a confidence level to a term within a user knowledge profile
US6115709A (en) 1998-09-18 2000-09-05 Tacit Knowledge Systems, Inc. Method and system for constructing a knowledge profile of a user having unrestricted and restricted access portions according to respective levels of confidence of content of the portions
US6154783A (en) 1998-09-18 2000-11-28 Tacit Knowledge Systems Method and apparatus for addressing an electronic document for transmission over a network
US8380875B1 (en) 1998-09-18 2013-02-19 Oracle International Corporation Method and system for addressing a communication document for transmission over a network based on the content thereof
WO2000017727A2 (en) 1998-09-18 2000-03-30 Tacit Knowledge Systems Method and apparatus for querying a user knowledge profile
WO2000017784A1 (en) 1998-09-18 2000-03-30 Tacit Knowledge Systems Method of constructing and displaying an entity profile constructed utilizing input from entities other than the owner
US6253202B1 (en) 1998-09-18 2001-06-26 Tacit Knowledge Systems, Inc. Method, system and apparatus for authorizing access by a first user to a knowledge profile of a second user responsive to an access request from the first user
US6549897B1 (en) * 1998-10-09 2003-04-15 Microsoft Corporation Method and system for calculating phrase-document importance
US6366910B1 (en) 1998-12-07 2002-04-02 Amazon.Com, Inc. Method and system for generation of hierarchical search results
US6430557B1 (en) * 1998-12-16 2002-08-06 Xerox Corporation Identifying a group of words using modified query words obtained from successive suffix relationships
US6327593B1 (en) * 1998-12-23 2001-12-04 Unisys Corporation Automated system and method for capturing and managing user knowledge within a search system
US7003719B1 (en) 1999-01-25 2006-02-21 West Publishing Company, Dba West Group System, method, and software for inserting hyperlinks into documents
US6360227B1 (en) * 1999-01-29 2002-03-19 International Business Machines Corporation System and method for generating taxonomies with applications to content-based recommendations
US6330564B1 (en) * 1999-02-10 2001-12-11 International Business Machines Corporation System and method for automated problem isolation in systems with measurements structured as a multidimensional database
NZ515293A (en) * 1999-05-05 2004-04-30 West Publishing Company D Document-classification system, method and software
AU4954200A (en) * 1999-06-04 2000-12-28 Seiko Epson Corporation Document sorting method, document sorter, and recorded medium on which document sorting program is recorded
AU5615000A (en) * 1999-06-14 2001-01-02 Thomson Corporation, The System for converting data to a markup language
US6381594B1 (en) * 1999-07-12 2002-04-30 Yahoo! Inc. System and method for personalized information filtering and alert generation
US6535865B1 (en) * 1999-07-14 2003-03-18 Hewlett Packard Company Automated diagnosis of printer systems using Bayesian networks
US6853950B1 (en) * 1999-07-20 2005-02-08 Newsedge Corporation System for determining changes in the relative interest of subjects
US6816857B1 (en) 1999-11-01 2004-11-09 Applied Semantics, Inc. Meaning-based advertising and document relevance determination
US6772149B1 (en) 1999-09-23 2004-08-03 Lexis-Nexis Group System and method for identifying facts and legal discussion in court case law documents
AU4025301A (en) * 1999-09-28 2001-04-30 Xmlexpress, Inc. System and method for automatic context creation for electronic documents
US6876991B1 (en) 1999-11-08 2005-04-05 Collaborative Decision Platforms, Llc. System, method and computer program product for a collaborative decision platform
US6651059B1 (en) * 1999-11-15 2003-11-18 International Business Machines Corporation System and method for the automatic recognition of relevant terms by mining link annotations
US6980990B2 (en) * 1999-12-01 2005-12-27 Barry Fellman Internet domain name registration system
AU2212801A (en) * 1999-12-07 2001-06-18 Qjunction Technology, Inc. Natural english language search and retrieval system and method
GB0003411D0 (en) * 2000-02-15 2000-04-05 Dialog Corp The Plc Accessing data
US7428500B1 (en) * 2000-03-30 2008-09-23 Amazon. Com, Inc. Automatically identifying similar purchasing opportunities
US7120574B2 (en) * 2000-04-03 2006-10-10 Invention Machine Corporation Synonym extension of search queries with validation
US7139743B2 (en) * 2000-04-07 2006-11-21 Washington University Associative database scanning and information retrieval using FPGA devices
US6711558B1 (en) 2000-04-07 2004-03-23 Washington University Associative database scanning and information retrieval
US8095508B2 (en) * 2000-04-07 2012-01-10 Washington University Intelligent data storage and processing using FPGA devices
US7962326B2 (en) * 2000-04-20 2011-06-14 Invention Machine Corporation Semantic answering system and method
US6701309B1 (en) * 2000-04-21 2004-03-02 Lycos, Inc. Method and system for collecting related queries
JP2001337980A (en) * 2000-05-29 2001-12-07 Sony Corp Electronic program guide retrieving method and electronic program guide retrieving device
JP2002117074A (en) * 2000-10-04 2002-04-19 Hitachi Ltd Information retrieving method
US6668251B1 (en) 2000-11-01 2003-12-23 Tacit Knowledge Systems, Inc. Rendering discriminator members from an initial set of result data
US6640228B1 (en) * 2000-11-10 2003-10-28 Verizon Laboratories Inc. Method for detecting incorrectly categorized data
US20040111386A1 (en) * 2001-01-08 2004-06-10 Goldberg Jonathan M. Knowledge neighborhoods
US7043489B1 (en) 2001-02-23 2006-05-09 Kelley Hubert C Litigation-related document repository
US6823333B2 (en) * 2001-03-02 2004-11-23 The United States Of America As Represented By The Administrator Of The National Aeronautics And Space Administration System, method and apparatus for conducting a keyterm search
US6721728B2 (en) * 2001-03-02 2004-04-13 The United States Of America As Represented By The Administrator Of The National Aeronautics And Space Administration System, method and apparatus for discovering phrases in a database
US6741981B2 (en) 2001-03-02 2004-05-25 The United States Of America As Represented By The Administrator Of The National Aeronautics And Space Administration (Nasa) System, method and apparatus for conducting a phrase search
US6697793B2 (en) 2001-03-02 2004-02-24 The United States Of America As Represented By The Administrator Of The National Aeronautics And Space Administration System, method and apparatus for generating phrases from a database
US8117313B2 (en) * 2001-03-19 2012-02-14 International Business Machines Corporation System and method for adaptive formatting of image information for efficient delivery and presentation
US6820081B1 (en) 2001-03-19 2004-11-16 Attenex Corporation System and method for evaluating a structured message store for message redundancy
US8484177B2 (en) * 2001-03-21 2013-07-09 Eugene M. Lee Apparatus for and method of searching and organizing intellectual property information utilizing a field-of-search
US20030016250A1 (en) * 2001-04-02 2003-01-23 Chang Edward Y. Computer user interface for perception-based information retrieval
US7593920B2 (en) * 2001-04-04 2009-09-22 West Services, Inc. System, method, and software for identifying historically related legal opinions
US7500017B2 (en) * 2001-04-19 2009-03-03 Microsoft Corporation Method and system for providing an XML binary format
US20020156778A1 (en) * 2001-04-24 2002-10-24 Beeferman Douglas H. Phrase-based text searching
US7552385B2 (en) * 2001-05-04 2009-06-23 International Business Machines Coporation Efficient storage mechanism for representing term occurrence in unstructured text documents
US6970881B1 (en) * 2001-05-07 2005-11-29 Intelligenxia, Inc. Concept-based method and system for dynamically analyzing unstructured information
USRE46973E1 (en) 2001-05-07 2018-07-31 Ureveal, Inc. Method, system, and computer program product for concept-based multi-dimensional analysis of unstructured information
US7194483B1 (en) 2001-05-07 2007-03-20 Intelligenxia, Inc. Method, system, and computer program product for concept-based multi-dimensional analysis of unstructured information
US7536413B1 (en) 2001-05-07 2009-05-19 Ixreveal, Inc. Concept-based categorization of unstructured objects
US7627588B1 (en) 2001-05-07 2009-12-01 Ixreveal, Inc. System and method for concept based analysis of unstructured data
US7269546B2 (en) * 2001-05-09 2007-09-11 International Business Machines Corporation System and method of finding documents related to other documents and of finding related words in response to a query to refine a search
US6925433B2 (en) * 2001-05-09 2005-08-02 International Business Machines Corporation System and method for context-dependent probabilistic modeling of words and documents
US20020174111A1 (en) * 2001-05-21 2002-11-21 Panagiotis Kougiouris System and method for managing resources stored in a relational database system
US6725217B2 (en) 2001-06-20 2004-04-20 International Business Machines Corporation Method and system for knowledge repository exploration and visualization
US20030014405A1 (en) * 2001-07-09 2003-01-16 Jacob Shapiro Search engine designed for handling long queries
US7133862B2 (en) * 2001-08-13 2006-11-07 Xerox Corporation System with user directed enrichment and import/export control
US7284191B2 (en) * 2001-08-13 2007-10-16 Xerox Corporation Meta-document management system with document identifiers
US6888548B1 (en) * 2001-08-31 2005-05-03 Attenex Corporation System and method for generating a visualized data representation preserving independent variable geometric relationships
US6978274B1 (en) 2001-08-31 2005-12-20 Attenex Corporation System and method for dynamically evaluating latent concepts in unstructured documents
US6778995B1 (en) 2001-08-31 2004-08-17 Attenex Corporation System and method for efficiently generating cluster groupings in a multi-dimensional concept space
US7716330B2 (en) 2001-10-19 2010-05-11 Global Velocity, Inc. System and method for controlling transmission of data packets over an information network
US20090161568A1 (en) * 2007-12-21 2009-06-25 Charles Kastner TCP data reassembly
US7062498B2 (en) * 2001-11-02 2006-06-13 Thomson Legal Regulatory Global Ag Systems, methods, and software for classifying text from judicial opinions and other documents
JP2003157376A (en) * 2001-11-21 2003-05-30 Ricoh Co Ltd Network system, identification information management method, server device, program and recording medium
US20050010604A1 (en) * 2001-12-05 2005-01-13 Digital Networks North America, Inc. Automatic identification of DVD title using internet technologies and fuzzy matching techniques
US7333966B2 (en) * 2001-12-21 2008-02-19 Thomson Global Resources Systems, methods, and software for hyperlinking names
US6941293B1 (en) * 2002-02-01 2005-09-06 Google, Inc. Methods and apparatus for determining equivalent descriptions for an information need
US20030157470A1 (en) * 2002-02-11 2003-08-21 Michael Altenhofen E-learning station and interface
US7343372B2 (en) * 2002-02-22 2008-03-11 International Business Machines Corporation Direct navigation for information retrieval
US7271804B2 (en) * 2002-02-25 2007-09-18 Attenex Corporation System and method for arranging concept clusters in thematic relationships in a two-dimensional visual display area
US8589413B1 (en) 2002-03-01 2013-11-19 Ixreveal, Inc. Concept-based method and system for dynamically analyzing results from search engines
US20040205660A1 (en) * 2002-04-23 2004-10-14 Joe Acton System and method for generating and displaying attribute-enhanced documents
US7093023B2 (en) * 2002-05-21 2006-08-15 Washington University Methods, systems, and devices using reprogrammable hardware for high-speed processing of streaming data to find a redefinable pattern and respond thereto
US7130866B2 (en) 2002-07-30 2006-10-31 Koninklijke Philips Electronics N.V. Controlling the growth of a feature frequency profile by deleting selected frequency counts of features of events
US7711844B2 (en) * 2002-08-15 2010-05-04 Washington University Of St. Louis TCP-splitter: reliable packet monitoring methods and apparatus for high speed networks
US20040093331A1 (en) * 2002-09-20 2004-05-13 Board Of Regents, University Of Texas System Computer program products, systems and methods for information discovery and relational analyses
AU2003284118A1 (en) * 2002-10-14 2004-05-04 Battelle Memorial Institute Information reservoir
US7085755B2 (en) * 2002-11-07 2006-08-01 Thomson Global Resources Ag Electronic document repository management and access system
US9805373B1 (en) 2002-11-19 2017-10-31 Oracle International Corporation Expertise services platform
US20050171948A1 (en) * 2002-12-11 2005-08-04 Knight William C. System and method for identifying critical features in an ordered scale space within a multi-dimensional feature space
JP2006512693A (en) * 2002-12-30 2006-04-13 トムソン コーポレイション A knowledge management system for law firms.
EP1457889A1 (en) * 2003-03-13 2004-09-15 Koninklijke Philips Electronics N.V. Improved fingerprint matching method and system
US7451129B2 (en) * 2003-03-31 2008-11-11 Google Inc. System and method for providing preferred language ordering of search results
US7451130B2 (en) * 2003-06-16 2008-11-11 Google Inc. System and method for providing preferred country biasing of search results
US8306972B2 (en) 2003-03-31 2012-11-06 Google Inc. Ordering of search results based on language and/or country of the search results
US7917483B2 (en) * 2003-04-24 2011-03-29 Affini, Inc. Search engine and method with improved relevancy, scope, and timeliness
JP2006526227A (en) 2003-05-23 2006-11-16 ワシントン ユニヴァーシティー Intelligent data storage and processing using FPGA devices
US10572824B2 (en) 2003-05-23 2020-02-25 Ip Reservoir, Llc System and method for low latency multi-functional pipeline with correlation logic and selectively activated/deactivated pipelined data processing engines
US20040260681A1 (en) * 2003-06-19 2004-12-23 Dvorak Joseph L. Method and system for selectively retrieving text strings
US20050005239A1 (en) * 2003-07-03 2005-01-06 Richards James L. System and method for automatic insertion of cross references in a document
US7599938B1 (en) 2003-07-11 2009-10-06 Harrison Jr Shelton E Social news gathering, prioritizing, tagging, searching, and syndication method
US7610313B2 (en) * 2003-07-25 2009-10-27 Attenex Corporation System and method for performing efficient document scoring and clustering
US8856163B2 (en) * 2003-07-28 2014-10-07 Google Inc. System and method for providing a user interface with search query broadening
US8086619B2 (en) * 2003-09-05 2011-12-27 Google Inc. System and method for providing search query refinements
US7505964B2 (en) 2003-09-12 2009-03-17 Google Inc. Methods and systems for improving a search ranking using related queries
US7231399B1 (en) 2003-11-14 2007-06-12 Google Inc. Ranking documents based on large data sets
EP2290559A1 (en) * 2003-12-31 2011-03-02 Thomson Reuters Global Resources Systems, methods, software and interfaces for integration of case law with legal briefs, litigation documents, and/or other litigation-support documents
CA2553196C (en) * 2003-12-31 2013-03-19 Thomson Global Resources Systems, methods, interfaces and software for automated collection and integration of entity data into online databases and professional directories
US7602785B2 (en) 2004-02-09 2009-10-13 Washington University Method and system for performing longest prefix matching for network address lookup using bloom filters
US7191175B2 (en) * 2004-02-13 2007-03-13 Attenex Corporation System and method for arranging concept clusters in thematic neighborhood relationships in a two-dimensional visual display space
US20050246308A1 (en) * 2004-03-12 2005-11-03 Barker Joel A Method of exploring (arc)
EP1738305A2 (en) * 2004-03-12 2007-01-03 Joel A. Barker Method of exploring (wheel)
US7366705B2 (en) * 2004-04-15 2008-04-29 Microsoft Corporation Clustering based text classification
US7260568B2 (en) * 2004-04-15 2007-08-21 Microsoft Corporation Verifying relevance between keywords and web site contents
US7689585B2 (en) * 2004-04-15 2010-03-30 Microsoft Corporation Reinforced clustering of multi-type data objects for search term suggestion
US20050234973A1 (en) * 2004-04-15 2005-10-20 Microsoft Corporation Mining service requests for product support
US7289985B2 (en) 2004-04-15 2007-10-30 Microsoft Corporation Enhanced document retrieval
US7305389B2 (en) * 2004-04-15 2007-12-04 Microsoft Corporation Content propagation for enhanced document retrieval
US7428529B2 (en) * 2004-04-15 2008-09-23 Microsoft Corporation Term suggestion for multi-sense query
BE1016079A6 (en) * 2004-06-17 2006-02-07 Vartec Nv METHOD FOR INDEXING AND RECOVERING DOCUMENTS, COMPUTER PROGRAM THAT IS APPLIED AND INFORMATION CARRIER PROVIDED WITH THE ABOVE COMPUTER PROGRAM.
US20080294375A1 (en) * 2004-06-22 2008-11-27 Koninklijke Philips Electronics, N.V. Method and Device for Selecting Multimedia Items, Portable Preference Storage Device
JP4587163B2 (en) * 2004-07-13 2010-11-24 インターナショナル・ビジネス・マシーンズ・コーポレーション SEARCH SYSTEM, SEARCH METHOD, REPORT SYSTEM, REPORT METHOD, AND PROGRAM
US7809695B2 (en) * 2004-08-23 2010-10-05 Thomson Reuters Global Resources Information retrieval systems with duplicate document detection and presentation functions
EP1784719A4 (en) * 2004-08-24 2011-04-13 Univ Washington Methods and systems for content detection in a reconfigurable hardware
US11468128B1 (en) * 2006-10-20 2022-10-11 Richard Paiz Search engine optimizer
GB2420426A (en) * 2004-11-17 2006-05-24 Transversal Corp Ltd An information handling system
US7533094B2 (en) * 2004-11-23 2009-05-12 Microsoft Corporation Method and system for determining similarity of items based on similarity objects and their features
US7404151B2 (en) * 2005-01-26 2008-07-22 Attenex Corporation System and method for providing a dynamic user interface for a dense three-dimensional scene
US7356777B2 (en) * 2005-01-26 2008-04-08 Attenex Corporation System and method for providing a dynamic user interface for a dense three-dimensional scene
US8782087B2 (en) 2005-03-18 2014-07-15 Beyondcore, Inc. Analyzing large data sets to find deviation patterns
US7849062B1 (en) * 2005-03-18 2010-12-07 Beyondcore, Inc. Identifying and using critical fields in quality management
US10127130B2 (en) 2005-03-18 2018-11-13 Salesforce.Com Identifying contributors that explain differences between a data set and a subset of the data set
US7533088B2 (en) * 2005-05-04 2009-05-12 Microsoft Corporation Database reverse query matching
US7765214B2 (en) * 2005-05-10 2010-07-27 International Business Machines Corporation Enhancing query performance of search engines using lexical affinities
US7487147B2 (en) * 2005-07-13 2009-02-03 Sony Computer Entertainment Inc. Predictive user interface
US7788263B2 (en) * 2005-08-10 2010-08-31 Microsoft Corporation Probabilistic retrospective event detection
US8209335B2 (en) * 2005-09-20 2012-06-26 International Business Machines Corporation Extracting informative phrases from unstructured text
CN101351795B (en) * 2005-10-11 2012-07-18 Ix锐示公司 System, method and device for concept based searching and analysis
US8572088B2 (en) * 2005-10-21 2013-10-29 Microsoft Corporation Automated rich presentation of a semantic topic
US10319252B2 (en) 2005-11-09 2019-06-11 Sdl Inc. Language capability assessment and training apparatus and techniques
US7702629B2 (en) * 2005-12-02 2010-04-20 Exegy Incorporated Method and device for high performance regular expression pattern matching
US7676485B2 (en) * 2006-01-20 2010-03-09 Ixreveal, Inc. Method and computer program product for converting ontologies into concept semantic networks
US7954114B2 (en) 2006-01-26 2011-05-31 Exegy Incorporated Firmware socket module for FPGA-based pipeline processing
US20070179940A1 (en) * 2006-01-27 2007-08-02 Robinson Eric M System and method for formulating data search queries
US7908273B2 (en) * 2006-03-09 2011-03-15 Gracenote, Inc. Method and system for media navigation
US8943080B2 (en) * 2006-04-07 2015-01-27 University Of Southern California Systems and methods for identifying parallel documents and sentence fragments in multilingual document collections
US7636703B2 (en) * 2006-05-02 2009-12-22 Exegy Incorporated Method and apparatus for approximate pattern matching
US20080189273A1 (en) * 2006-06-07 2008-08-07 Digital Mandate, Llc System and method for utilizing advanced search and highlighting techniques for isolating subsets of relevant content data
US7921046B2 (en) * 2006-06-19 2011-04-05 Exegy Incorporated High speed processing of financial information using FPGA devices
US7840482B2 (en) * 2006-06-19 2010-11-23 Exegy Incorporated Method and system for high speed options pricing
US7996393B1 (en) * 2006-09-29 2011-08-09 Google Inc. Keywords associated with document categories
US8661029B1 (en) 2006-11-02 2014-02-25 Google Inc. Modifying search result ranking based on implicit user feedback
US7660793B2 (en) 2006-11-13 2010-02-09 Exegy Incorporated Method and system for high performance integration, processing and searching of structured and unstructured data using coprocessors
US8326819B2 (en) * 2006-11-13 2012-12-04 Exegy Incorporated Method and system for high performance data metatagging and data indexing using coprocessors
US9122674B1 (en) 2006-12-15 2015-09-01 Language Weaver, Inc. Use of annotations in statistical machine translation
US7822763B2 (en) * 2007-02-22 2010-10-26 Microsoft Corporation Synonym and similar word page search
US8938463B1 (en) 2007-03-12 2015-01-20 Google Inc. Modifying search result ranking based on implicit user feedback and a model of presentation bias
US8694374B1 (en) 2007-03-14 2014-04-08 Google Inc. Detecting click spam
US9092510B1 (en) 2007-04-30 2015-07-28 Google Inc. Modifying search result ranking based on a temporal element of user feedback
US8694511B1 (en) 2007-08-20 2014-04-08 Google Inc. Modifying search result ranking based on populations
US20090094209A1 (en) * 2007-10-05 2009-04-09 Fujitsu Limited Determining The Depths Of Words And Documents
US8909655B1 (en) 2007-10-11 2014-12-09 Google Inc. Time based ranking
US20090150906A1 (en) * 2007-12-07 2009-06-11 Sap Ag Automatic electronic discovery of heterogeneous objects for litigation
US7831588B2 (en) * 2008-02-05 2010-11-09 Yahoo! Inc. Context-sensitive query expansion
US8374986B2 (en) 2008-05-15 2013-02-12 Exegy Incorporated Method and system for accelerated stream processing
US11048765B1 (en) 2008-06-25 2021-06-29 Richard Paiz Search engine optimizer
AU2009269115B2 (en) * 2008-07-11 2016-01-28 Thomson Reuters Enterprise Centre Gmbh Systems, methods, and interfaces for researching contractual precedents
US8396865B1 (en) 2008-12-10 2013-03-12 Google Inc. Sharing search engine relevance data between corpora
US20120095893A1 (en) 2008-12-15 2012-04-19 Exegy Incorporated Method and apparatus for high-speed processing of financial market depth data
US8949265B2 (en) 2009-03-05 2015-02-03 Ebay Inc. System and method to provide query linguistic service
US9009146B1 (en) 2009-04-08 2015-04-14 Google Inc. Ranking search results based on similar queries
US9245243B2 (en) * 2009-04-14 2016-01-26 Ureveal, Inc. Concept-based analysis of structured and unstructured data using concept inheritance
US8447760B1 (en) 2009-07-20 2013-05-21 Google Inc. Generating a related set of documents for an initial set of documents
US8515957B2 (en) 2009-07-28 2013-08-20 Fti Consulting, Inc. System and method for displaying relationships between electronically stored information to provide classification suggestions via injection
US8990064B2 (en) 2009-07-28 2015-03-24 Language Weaver, Inc. Translating documents based on content
CA3026879A1 (en) 2009-08-24 2011-03-10 Nuix North America, Inc. Generating a reference set for use during document review
US8498974B1 (en) 2009-08-31 2013-07-30 Google Inc. Refining search results
US8972391B1 (en) 2009-10-02 2015-03-03 Google Inc. Recent interest based relevance scoring
US8874555B1 (en) 2009-11-20 2014-10-28 Google Inc. Modifying scoring data based on historical changes
US8615514B1 (en) 2010-02-03 2013-12-24 Google Inc. Evaluating website properties by partitioning user feedback
US8924379B1 (en) 2010-03-05 2014-12-30 Google Inc. Temporal-based score adjustments
US10417646B2 (en) 2010-03-09 2019-09-17 Sdl Inc. Predicting the cost associated with translating textual content
US8959093B1 (en) 2010-03-15 2015-02-17 Google Inc. Ranking search results based on anchors
WO2011149608A1 (en) * 2010-05-25 2011-12-01 Beyondcore, Inc. Identifying and using critical fields in quality management
US9623119B1 (en) 2010-06-29 2017-04-18 Google Inc. Accentuating search results
US20130159889A1 (en) * 2010-07-07 2013-06-20 Li-Wei Zheng Obtaining Rendering Co-ordinates Of Visible Text Elements
US8832083B1 (en) 2010-07-23 2014-09-09 Google Inc. Combining user feedback
US10037568B2 (en) 2010-12-09 2018-07-31 Ip Reservoir, Llc Method and apparatus for managing orders in financial markets
US9002867B1 (en) 2010-12-30 2015-04-07 Google Inc. Modifying ranking data based on document changes
CN102646103B (en) * 2011-02-18 2016-03-16 腾讯科技(深圳)有限公司 The clustering method of term and device
US8543577B1 (en) 2011-03-02 2013-09-24 Google Inc. Cross-channel clusters of information
US11003838B2 (en) 2011-04-18 2021-05-11 Sdl Inc. Systems and methods for monitoring post translation editing
US10796232B2 (en) 2011-12-04 2020-10-06 Salesforce.Com, Inc. Explaining differences between predicted outcomes and actual outcomes of a process
US10802687B2 (en) 2011-12-04 2020-10-13 Salesforce.Com, Inc. Displaying differences between different data sets of a process
US11436672B2 (en) 2012-03-27 2022-09-06 Exegy Incorporated Intelligent switch for processing financial market data
US9990393B2 (en) 2012-03-27 2018-06-05 Ip Reservoir, Llc Intelligent feed switch
US10650452B2 (en) 2012-03-27 2020-05-12 Ip Reservoir, Llc Offload processing of data packets
US10121196B2 (en) 2012-03-27 2018-11-06 Ip Reservoir, Llc Offload processing of data packets containing financial market data
US10261994B2 (en) 2012-05-25 2019-04-16 Sdl Inc. Method and system for automatic management of reputation of translators
US9075898B1 (en) * 2012-08-10 2015-07-07 Evernote Corporation Generating and ranking incremental search suggestions for personal content
US9633093B2 (en) 2012-10-23 2017-04-25 Ip Reservoir, Llc Method and apparatus for accelerated format translation of data in a delimited data format
US10146845B2 (en) 2012-10-23 2018-12-04 Ip Reservoir, Llc Method and apparatus for accelerated format translation of data in a delimited data format
US10102260B2 (en) 2012-10-23 2018-10-16 Ip Reservoir, Llc Method and apparatus for accelerated data translation using record layout detection
US9152622B2 (en) 2012-11-26 2015-10-06 Language Weaver, Inc. Personalized machine translation via online adaptation
US11741090B1 (en) 2013-02-26 2023-08-29 Richard Paiz Site rank codex search patterns
US11809506B1 (en) 2013-02-26 2023-11-07 Richard Paiz Multivariant analyzing replicating intelligent ambience evolving system
US9323721B1 (en) * 2013-02-27 2016-04-26 Google Inc. Quotation identification
US9183499B1 (en) 2013-04-19 2015-11-10 Google Inc. Evaluating quality based on neighbor features
US9213694B2 (en) 2013-10-10 2015-12-15 Language Weaver, Inc. Efficient online domain adaptation
US10102274B2 (en) * 2014-03-17 2018-10-16 NLPCore LLC Corpus search systems and methods
WO2015164639A1 (en) 2014-04-23 2015-10-29 Ip Reservoir, Llc Method and apparatus for accelerated data translation
US9747273B2 (en) 2014-08-19 2017-08-29 International Business Machines Corporation String comparison results for character strings using frequency data
US10331782B2 (en) 2014-11-19 2019-06-25 Lexisnexis, A Division Of Reed Elsevier Inc. Systems and methods for automatic identification of potential material facts in documents
US10942943B2 (en) 2015-10-29 2021-03-09 Ip Reservoir, Llc Dynamic field data translation to support high performance stream data processing
WO2017210618A1 (en) 2016-06-02 2017-12-07 Fti Consulting, Inc. Analyzing clusters of coded documents
EP3560135A4 (en) 2016-12-22 2020-08-05 IP Reservoir, LLC Pipelines for hardware-accelerated machine learning
US11500930B2 (en) * 2019-05-28 2022-11-15 Slack Technologies, Llc Method, apparatus and computer program product for generating tiered search index fields in a group-based communication platform

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4422158A (en) * 1980-11-28 1983-12-20 System Development Corporation Method and means for interrogating a layered data base
US4554631A (en) * 1983-07-13 1985-11-19 At&T Bell Laboratories Keyword search automatic limiting method
US4843389A (en) * 1986-12-04 1989-06-27 International Business Machines Corp. Text compression and expansion method and apparatus
US4870568A (en) * 1986-06-25 1989-09-26 Thinking Machines Corporation Method for searching a database system including parallel processors
US5265065A (en) * 1991-10-08 1993-11-23 West Publishing Company Method and apparatus for information retrieval from a database by replacing domain specific stemmed phases in a natural language to create a search query
US5321833A (en) * 1990-08-29 1994-06-14 Gte Laboratories Incorporated Adaptive ranking system for information retrieval
US5335345A (en) * 1990-04-11 1994-08-02 Bell Communications Research, Inc. Dynamic query optimization using partial information

Family Cites Families (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4384329A (en) * 1980-12-19 1983-05-17 International Business Machines Corporation Retrieval of related linked linguistic expressions including synonyms and antonyms
JPS61105671A (en) * 1984-10-29 1986-05-23 Hitachi Ltd Natural language processing device
US5159667A (en) * 1989-05-31 1992-10-27 Borrey Roland G Document identification by characteristics matching
US5220625A (en) * 1989-06-14 1993-06-15 Hitachi, Ltd. Information search terminal and system
JPH0675265B2 (en) * 1989-09-20 1994-09-21 インターナシヨナル・ビジネス・マシーンズ・コーポレーシヨン Information retrieval method and system
JPH03122770A (en) * 1989-10-05 1991-05-24 Ricoh Co Ltd Method for retrieving keyword associative document
US5301109A (en) * 1990-06-11 1994-04-05 Bell Communications Research, Inc. Computerized cross-language document retrieval using latent semantic indexing
US5325298A (en) * 1990-11-07 1994-06-28 Hnc, Inc. Methods for generating or revising context vectors for a plurality of word stems
US5278980A (en) * 1991-08-16 1994-01-11 Xerox Corporation Iterative technique for phrase query formation and an information retrieval system employing same

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
PROC. ACM (SIGIR 85), 1985, BUCKLEY et al., "Optimization of Inverted Vector Searches", pages 97-110. *

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8429098B1 (en) 2010-04-30 2013-04-23 Global Eprocure Classification confidence estimating tool

Also Published As

Publication number Publication date
WO1994023386A3 (en) 1994-11-10
AU6445094A (en) 1994-10-24
US5488725A (en) 1996-01-30

Similar Documents

Publication Publication Date Title
US5488725A (en) System of document representation retrieval by successive iterated probability sampling
US5418948A (en) Concept matching of natural language queries with a database of document concepts
US7330811B2 (en) Method and system for adapting synonym resources to specific domains
US5606690A (en) Non-literal textual search using fuzzy finite non-deterministic automata
Turtle Text retrieval in the legal world
Wang et al. Relational thesauri in information retrieval
US7251781B2 (en) Computer based summarization of natural language documents
US6868411B2 (en) Fuzzy text categorizer
Moens Innovative techniques for legal text retrieval
Turtle et al. Uncertainty in information retrieval systems
Croft et al. Retrieving documents by plausible inference: a preliminary study
Mock Hybrid hill-climbing and knowledge-based techniques for intelligent news filtering
Kamruzzaman et al. Text categorization using association rule and naive Bayes classifier
Evans et al. CLARIT TREC design, experiments, and results
Bassil A survey on information retrieval, text categorization, and web crawling
Croft Effective text retrieval based on combining evidence from the corpus and users
Han et al. Automatic query expansion of Japanese text retrieval
Nallapati et al. Capturing term dependencies using a sentence tree based language model
Murad et al. Word Similarity for Document Grouping using Soft Computing
Syu et al. A neural network model for information retrieval using latent semantic indexing
Thambi et al. Graph based document model and its application in keyphrase extraction
Delisle et al. Pattern matching for case analysis: A computational definition of closeness
Boughanem et al. A neural network model for documentary base self-organising and querying
JPH06259482A (en) Data base retrieving device and method
Salama et al. New topological approach to vocabulary mining and document classification

Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A2

Designated state(s): AT AU BB BG BR BY CA CH CN CZ DE DK ES FI GB HU JP KP KR KZ LK LU LV MG MN MW NL NO NZ PL PT RO RU SD SE SI SK TT UA UZ VN

AL Designated countries for regional patents

Kind code of ref document: A2

Designated state(s): AT BE CH DE DK ES FR GB GR IE IT LU MC NL PT SE BF BJ CF CG CI CM GA GN ML MR NE SN TD TG

AK Designated states

Kind code of ref document: A3

Designated state(s): AT AU BB BG BR BY CA CH CN CZ DE DK ES FI GB HU JP KP KR KZ LK LU LV MG MN MW NL NO NZ PL PT RO RU SD SE SI SK TT UA UZ VN

AL Designated countries for regional patents

Kind code of ref document: A3

Designated state(s): AT BE CH DE DK ES FR GB GR IE IT LU MC NL PT SE BF BJ CF CG CI CM GA GN ML MR NE SN TD TG

121 Ep: the epo has been informed by wipo that ep was designated in this application
DFPE Request for preliminary examination filed prior to expiration of 19th month from priority date (pct application filed before 20040101)
REG Reference to national code

Ref country code: DE

Ref legal event code: 8642

122 Ep: pct application non-entry in european phase
NENP Non-entry into the national phase

Ref country code: CA