|Publication number||US6978275 B2|
|Application number||US 09/944,919|
|Publication date||Dec 20, 2005|
|Filing date||Aug 31, 2001|
|Priority date||Aug 31, 2001|
|Also published as||US20030046263|
|Inventors||Maria Castellanos, James R. Stinger|
|Original Assignee||Hewlett-Packard Development Company, L.P.|
The present invention relates to the field of data mining. More particularly, the present invention pertains to a method and system for mining a document containing dirty text.
Demand for business data has risen as data processing capabilities have grown. More than ever, knowledge can be a critical advantage to success, just as lack of it can be a critical disadvantage. Companies with superior business knowledge have dramatically reduced costs, increased revenues, and enhanced profitability. These demands, as well as the requirements to shorten time to market, react to competitive threats, expand market share, and improve customer service, are driving the decision to collect and analyze business data.
Text mining is a technology for analyzing business data that focuses on extracting content from a document or a collection of documents. Extracting content has been increasingly recognized as an important area of research and application during the last few years due to the overwhelming volume of on-line text available on the Internet. Additional sources of information include E-mails, memos, customer correspondence, and reports. Extracting relevant data from such diverse sources can potentially provide a company with a substantial business advantage.
A variety of text mining techniques are available which can be used to mine the document collection depending on the intended outcome. For example, categorization technologies focus on organizing documents into categories, thus facilitating user navigation through large sets of documents. Categorization techniques group sets of documents according to shared attributes. Clustering is a categorization technique used to discover categories from a collection of documents according to their similarities. Classification is another categorization technique which applies to document collections when the categories are predefined. Classification techniques learn a model for each category which explains the principles governing the assignment of documents in the collection. Subsequently added documents can then be automatically incorporated into the existing structure of the document collection.
Other text mining technologies like information extraction and summarization focus on extracting pieces of information from each document. Summarization describes the main ideas of a document while reducing the amount of text a user must read. A summarizer extracts the most relevant portions of a document and presents them in a summary to the user. The need for document abstraction mechanisms has positioned summarization as one of the most important areas in applied natural language processing and text mining.
Existing techniques, prototypes, and products for summarizing are designed to work with documents that contain clean, grammatically correct, and narrative text. In real-world applications, however, documents frequently contain anomalies. Misspellings, typographical errors, joined words, and ad hoc abbreviations are commonly found in text. Furthermore, domain-specific anomalies may be present as well, like cryptic tables, programming code, and core dumps. All these anomalies are collectively known as “dirty text,” and if they are not appropriately dealt with, they can skew the data set and alter the outcome of text mining operations in general, and of summarization in particular. In order to extract content accurately, most dirty text must be identified and normalized or even removed prior to performing any mining operations. Other anomalies, such as bad grammar, may not be correctable, but they still need to be taken into account since they limit the range of applicable techniques. For example, natural language processing is not applicable when summarizing dirty text. Furthermore, existing summarizers in general do not take advantage of existing domain knowledge, which can be very useful in improving the quality of the summaries. Those that do take advantage of existing domain knowledge do so in a very limited way and are difficult to customize.
Accordingly, the need exists for a method and system for mining text documents containing dirty text such as typographical errors, misspellings, joined words, and ad hoc abbreviations as well as bad grammar, cryptic tables, programming code, core dumps, missing or ambiguous punctuation, and haphazard capitalization. A need further exists for a method and system for mining a document containing dirty text that can be easily customized and that takes advantage of existing domain knowledge.
The present invention provides a method and system to mine text documents containing dirty text such as typographical errors, misspellings, joined words, and ad hoc abbreviations as well as bad grammar, cryptic tables, programming code, core dumps, missing or ambiguous punctuation, and haphazard capitalization. Other than bad grammar, dirty text is removed or replaced and the document is processed using a variety of text mining techniques. The present invention can be easily customized and takes advantage of existing domain knowledge.
In one embodiment, the removal and replacement of dirty text is divided into two stages. In the first stage, a general cleaning takes place on all documents, regardless of the domain they belong to. A thesaurus assistant assists in creating a domain-specific thesaurus. An editor then replaces misspelled words and phrases, ad hoc abbreviations, and joined words with their standard counterparts in the thesaurus. In the second stage, the cleaning is tailored to the anomalies specific to the domain of the document as well as to the mining task. For example, non-sentence-like text, such as computer code and core dumps, is removed, either temporarily or permanently, to facilitate later steps like feature selection and sentence identification. The document is then processed using a variety of data mining techniques to derive relevant information. In one embodiment, sentence summarization, in which the sentence boundaries are identified and the resulting sentences are scored and ranked according to their relevance, is performed and the highest ranked sentences are extracted from the documents. The present invention allows a user to leverage existing domain knowledge and can be easily customized according to the domain and task requirements.
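The thesaurus-based replacement step of the general cleaning stage can be sketched as follows. This is a minimal illustration, not the patent's actual implementation: the thesaurus entries and the regex-based matching strategy are assumptions introduced for the example.

```python
import re

def clean_with_thesaurus(text, thesaurus):
    """Replace misspellings, ad hoc abbreviations, and joined words
    with their standard counterparts from a domain-specific thesaurus.
    Longer dirty forms are replaced first so multi-word entries win."""
    for dirty in sorted(thesaurus, key=len, reverse=True):
        pattern = re.compile(r"\b" + re.escape(dirty) + r"\b", re.IGNORECASE)
        text = pattern.sub(thesaurus[dirty], text)
    return text

# Hypothetical domain thesaurus: dirty form -> standard counterpart
thesaurus = {
    "cust": "customer",        # ad hoc abbreviation
    "recieved": "received",    # misspelling
    "harddrive": "hard drive", # joined words
}
print(clean_with_thesaurus("Cust recieved a new harddrive", thesaurus))
```

A thesaurus assistant as described above would help populate the dictionary from the document collection; here it is supplied by hand.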
The accompanying drawings, which are incorporated in and form a part of this specification, illustrate embodiments of the present invention and, together with the description, serve to explain the principles of the invention.
A method of preparing and mining text documents containing dirty text is described. While numerous details are set forth in order to provide a thorough understanding of the present invention, it will be obvious to one of ordinary skill in the art that the present invention may be practiced without these specific details. It should also be understood that it is not intended to limit the invention to this particular embodiment alone; on the contrary, the invention is intended to cover alternatives, modifications, and equivalents which may be included within the spirit and scope of the invention as defined by the appended claims. In other instances, well-known methods, procedures, components, and circuits have not been described in detail so as not to unnecessarily obscure aspects of the present invention.
With reference to
In the present embodiment, computer system 100 includes an address/data bus 101 for conveying digital information between the various components, a central processor unit (CPU) 102 for processing the digital information and instructions, a volatile main memory 103 comprised of volatile random access memory (RAM) for storing the digital information and instructions, and a non-volatile read only memory (ROM) 104 for storing information and instructions of a more permanent nature. In addition, computer system 100 may also include a data storage device 105 (e.g., a magnetic, optical, floppy, or tape drive or the like) for storing vast amounts of data. It should be noted that the software program for performing the method of the present invention can be stored either in volatile memory 103, data storage device 105, or in an external storage device (not shown).
Devices which are optionally coupled to computer system 100 include a display device 106 for displaying information to a computer user, an alpha-numeric input device 107 (e.g., a keyboard), and a cursor control device 108 (e.g., mouse, trackball, light pen, etc.) for inputting data, selections, updates, etc. Computer system 100 can also include a mechanism for emitting an audible signal (not shown).
Returning still to
Furthermore, computer system 100 can include an input/output (I/O) signal unit (e.g., interface) 109 for interfacing with a peripheral device 110 (e.g., a computer network, modem, mass storage device, etc.). Accordingly, computer system 100 may be coupled in a network, such as a client/server environment, whereby a number of clients (e.g., personal computers, workstations, portable computers, minicomputers, terminals, etc.) are used to run processes for performing desired tasks (e.g., “creating,” “processing,” “comparing,” “removing,” “processing,” “outputting,” “evaluating,” “ranking,” and “presenting” etc.). In particular, computer system 100 can be coupled in a system for mining a document containing dirty text.
The present invention is a method and system for mining a document containing dirty text. Some kinds of dirty text are removed or replaced, and the document is mined using a variety of text mining techniques, the choice of which is limited by anomalies, such as bad grammar, that cannot be corrected. In one embodiment, the process is divided into three stages. In the first stage, a general cleaning takes place. This general cleaning applies to all documents, without regard to either the domain they belong to or the mining task that will be performed. In the second stage, domain- and task-specific cleaning takes place; the cleaning at this stage is tailored to the anomalies specific to the domain and to the task to be accomplished. In the third stage, the document is mined using a variety of data mining techniques according to the mining task. In other words, a given text mining technique used in the third stage is regarded as a component which is selected according to the data mining operation to be performed.
In one embodiment of the present invention, the mining of documents comprises summarization. In other embodiments, other text mining operations, which utilize data mining algorithms, can be performed. For example, in another embodiment the document mining can be clustering of documents to discover categories in the collection. Each of these stages relies upon individual techniques and can be configured with the combination of text processing techniques which is best suited for a particular application. The present invention can be customized, using prior domain knowledge to adjust parameter values of the text processing techniques applied to a particular domain of documents. A user can also adjust the parameters after a data mining operation has taken place to obtain, for example, a more accurate result.
The left side of
In first stage 201, with reference to
As other processing tools become available, they can be readily integrated into the present invention to enhance its functionality. In another embodiment of the present invention, other normalizing techniques may also be included in the general cleaning stage. A natural language normalizer, which could correct instances of poor grammar, is an example of a technique well suited for this stage of cleaning. Using a natural language normalizer would enable the use of summarizing techniques which are not currently utilized because of their reliance upon grammatically correct text. In such a case, natural language techniques could be integrated into other stages as well to take into account linguistic aspects of text.
In second stage 202, with reference to
Computer code has a signature composed of defining characteristics (e.g., short lines, keywords, and special symbols) which are listed in a file and compared to the documents. These defining characteristics have weights assigned according to their importance in identifying computer code. The special characters and keywords used to identify lines of computer code may be edited to reflect different programming languages or domains. The minimum total weight of a line as well as the minimum number of consecutive code lines required are parameters to code removal module 215. There must be a minimum number of consecutive lines identified as code lines before any line is considered to be a line of computer code. This is due to the assumption that lines of code appear in blocks of lines rather than singly.
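The weighted-signature heuristic described above can be sketched as follows. The keyword list, symbol set, weights, and thresholds below are illustrative placeholders for the editable signature file and module parameters the text describes:

```python
# Illustrative code-line signature: keywords and special symbols with
# weights reflecting their importance in identifying computer code.
CODE_SIGNATURE = {
    "keywords": ({"if", "else", "for", "while", "return", "int", "void"}, 2.0),
    "symbols": (set("{};=<>"), 1.0),
}
SHORT_LINE_WEIGHT = 1.0
MIN_LINE_WEIGHT = 2.0   # minimum total weight for a candidate code line
MIN_CONSECUTIVE = 3     # code is assumed to appear in blocks, not singly

def line_weight(line):
    """Sum the weights of code-like characteristics found in a line."""
    w = 0.0
    tokens = line.split()
    if tokens and len(line) < 40:          # short lines suggest code
        w += SHORT_LINE_WEIGHT
    kw, kw_weight = CODE_SIGNATURE["keywords"]
    w += kw_weight * sum(1 for t in tokens if t.strip("();{}") in kw)
    sym, sym_weight = CODE_SIGNATURE["symbols"]
    w += sym_weight * sum(1 for ch in line if ch in sym)
    return w

def find_code_blocks(lines):
    """Return indices of lines belonging to runs of at least
    MIN_CONSECUTIVE candidate code lines; shorter runs are ignored."""
    flags = [line_weight(l) >= MIN_LINE_WEIGHT for l in lines]
    code_idx, run = [], []
    for i, flagged in enumerate(flags):
        if flagged:
            run.append(i)
        else:
            if len(run) >= MIN_CONSECUTIVE:
                code_idx.extend(run)
            run = []
    if len(run) >= MIN_CONSECUTIVE:
        code_idx.extend(run)
    return code_idx
```

The consecutive-line requirement prevents an isolated code-like line of prose from being removed, mirroring the assumption that code appears in blocks.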
In some instances, the computer code contains information that is relevant and should be included in the summary. Therefore, the user has the option of discarding the computer code at this point or re-inserting it into the document after sentence boundaries have been identified. The re-inserted code can be used in the third stage for sentence scoring to help in identifying important themes independently of whether it is included in the summary or not.
Also in second stage 202, with reference to
Removal of table lines is a matter of identifying a minimum number of consecutive lines having these table characteristics. This minimum number of lines, as well as the minimum number of spaces required between groups of words on a line and the minimum number of columns in the table, are all parameters to table removal module 220. After identifying a table, it is removed from the document to facilitate sentence boundary identification in the next stage. As with the step of code removal, the table may be optionally re-inserted into the document after sentence boundaries have been identified if the table contains relevant information (e.g., keywords).
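The table-removal heuristic can be sketched as follows, assuming columns are separated by runs of blanks. The minimum counts below stand in for the module parameters named in the text:

```python
import re

MIN_COLUMNS = 3       # minimum column groups per table-like line
MIN_GAP = 2           # minimum spaces between groups of words
MIN_TABLE_LINES = 2   # minimum consecutive table-like lines

def is_table_line(line):
    """A line looks tabular if it splits into enough column groups."""
    groups = [g for g in re.split(r" {%d,}" % MIN_GAP, line.strip()) if g]
    return len(groups) >= MIN_COLUMNS

def remove_tables(lines):
    """Strip runs of at least MIN_TABLE_LINES table-like lines,
    returning (remaining_lines, removed_tables) so a table can be
    re-inserted later if it contains relevant keywords."""
    kept, removed, run = [], [], []
    for line in lines:
        if is_table_line(line):
            run.append(line)
        else:
            if len(run) >= MIN_TABLE_LINES:
                removed.append(run)
            else:
                kept.extend(run)   # too short to be a table; keep it
            run = []
            kept.append(line)
    if len(run) >= MIN_TABLE_LINES:
        removed.append(run)
    else:
        kept.extend(run)
    return kept, removed
```

Returning the removed tables separately supports the optional re-insertion after sentence boundaries have been identified.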
In third stage 203, with reference to
The present invention uses a heuristic-based method to identify sentences in a document. The user can examine the document collection and identify common patterns which assist the sentence identifier in delineating sentence boundaries, or use provided defaults. These common patterns are listed in a file which can be edited. Standard punctuation rules are used whenever possible such as sentence-ending punctuation (e.g., periods, question marks, and exclamation points), followed by one or more blanks, followed by a word starting with an upper case letter. When these standard punctuation rules do not apply, other methods such as blank lines indicating the end of a sentence and other formatting characteristics are used. A parameter is also established limiting the maximum number of lines that a sentence may occupy. If that limit is exceeded without finding an end to the sentence, the sentence is ended at the end of a line and a new one is started.
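The heuristic sentence identifier can be sketched as follows. The regular expression encodes the standard punctuation rule (sentence-ending punctuation, blanks, then a capital letter), with blank lines and a maximum-line cap as fallbacks; the cap value is an assumed default:

```python
import re

MAX_SENTENCE_LINES = 5   # assumed cap on lines per sentence

# Standard rule: [.!?] followed by blanks, followed by a capital letter.
SENT_BOUNDARY = re.compile(r"(?<=[.!?])\s+(?=[A-Z])")

def identify_sentences(text):
    """Split on standard punctuation where possible; fall back on
    blank lines and a maximum-line cap for text the rules miss."""
    sentences = []
    for chunk in re.split(r"\n\s*\n", text):   # blank line ends a sentence
        for piece in SENT_BOUNDARY.split(chunk):
            lines = piece.splitlines()
            # force a boundary if a "sentence" runs too many lines
            for start in range(0, len(lines), MAX_SENTENCE_LINES):
                s = " ".join(lines[start:start + MAX_SENTENCE_LINES]).strip()
                if s:
                    sentences.append(s)
    return sentences
```

A fuller implementation would also read the editable pattern file of domain-specific boundary cues mentioned above.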
Also in the third stage with reference to
There are different aspects of a sentence that are indicative of its relevance. However, unless the document contains grammatically correct and narrative text, the choice of sentence scoring techniques available is limited. The present invention incorporates scoring techniques that are applicable to documents lacking those characteristics based on a careful analysis of all available techniques. However, these scoring techniques are equally applicable to documents containing grammatically correct and narrative text. The individual scoring techniques adopt different criteria and establish different metrics to score the relevance of a sentence. The keywords in a sentence, the correlation between sentences, and the location of the sentence within the document are complementary aspects that are used for assessing the relevance of a sentence.
Referring again to
One of the techniques used for scoring sentences is based on the presence of keywords in the sentence. It is assumed that sentences that have more instances of keyword occurrence are more likely to convey the relevant themes of a document. The keyword technique generates a keyword glossary which assigns weights of importance to each word identified as a keyword. Keywords that are identified by more than one technique are given higher weight. Once a document has been analyzed and the keywords identified, each sentence is scored according to the keywords it contains. Parameters can be adjusted to give sentences with a greater keyword density or frequency a higher score. The sentences are scored according to the frequency, weight, and density of keywords in them.
Embodiments of the present invention use a combination of techniques to generate the keyword list. The thematic keyword technique is a document-specific keyword generating technique based on the frequency of occurrence of words in the document. Words in the document are compared to a stop word list comprised of words regarded as irrelevant to the current domain. Words from the document that are on the stop word list are disregarded when analyzing the word frequency of a document. Of the remaining words, those that occur more frequently in the text are assumed to be more important in conveying the relevant theme of the document and are placed on the keyword list. The user may adjust the frequency threshold to make the keyword glossary more inclusive or exclusive as necessary. For example, if the user found it necessary to find more keywords, lowering the frequency threshold would include more words in the keyword glossary.
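The thematic keyword technique can be sketched as follows; the stop word list and the default frequency threshold are illustrative assumptions:

```python
from collections import Counter

def thematic_keywords(text, stop_words, freq_threshold=2):
    """Frequency-based keyword glossary: ignore stop words, keep words
    whose count meets the (user-adjustable) frequency threshold."""
    words = [w.strip(".,;:!?()").lower() for w in text.split()]
    counts = Counter(w for w in words if w and w not in stop_words)
    return {w: c for w, c in counts.items() if c >= freq_threshold}

stop_words = {"the", "a", "is", "of", "and", "to"}  # illustrative stop list
text = ("The printer driver failed. Reinstalling the printer driver "
        "fixed the printer.")
print(thematic_keywords(text, stop_words))
```

Lowering `freq_threshold` to 1 would admit every non-stop word, matching the user's option to make the glossary more inclusive.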
The location keyword technique generates keywords utilizing prior knowledge of the document structure to identify important sections and give keywords from these sections greater weight. Certain sections of the document (e.g., the introduction and the conclusion) can be identified by the user as being more likely to contain relevant information. The location keyword technique gives keywords identified in these sections greater weight than those from other sections and can assign to each section its own weight of importance. For example, keywords from the introduction can be assigned greater weight than keywords from the conclusion. Lacking any prior knowledge of the structure of the document, the location keyword technique does not have to be used.
Keywords can also be provided by the user in the form of cue phrases. These are held in a glossary of bonus/stigma words 530. For example, a user who considers technical information to be relevant can identify technical words as bonus words which will give a sentence containing these words a greater score. The same user can also identify stigma words which are indicative of non-relevance and that, if found in a sentence, give that sentence a lower score. Again, prior domain knowledge can be leveraged by the user to obtain a more accurate sentence scoring.
One other technique used to generate keywords relies on the signature of documents that have been previously categorized. Each category of documents has signature keywords that describe the characteristics of that category. For example, a category called computer might have keywords of memory, processor, and motherboard. These keywords can be generated by any of a number of different feature selection techniques for determining the characteristics of categorized documents. In one embodiment, the chi-square statistical technique is used to evaluate the association between words and categories. If a sentence contains signature words, it is given a higher score.
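The chi-square evaluation of word-category association can be sketched with the standard 2x2 contingency formula; the toy document collection in the test is a made-up illustration:

```python
def chi_square(word, category, docs):
    """2x2 chi-square statistic for the association between a word's
    presence and a document's category label.
    docs: list of (set_of_words, category_label) pairs."""
    n = len(docs)
    a = sum(1 for w, c in docs if word in w and c == category)      # word, in category
    b = sum(1 for w, c in docs if word in w and c != category)      # word, elsewhere
    c_ = sum(1 for w, c in docs if word not in w and c == category) # no word, in category
    d = n - a - b - c_                                              # no word, elsewhere
    denom = (a + b) * (c_ + d) * (a + c_) * (b + d)
    if denom == 0:
        return 0.0
    return n * (a * d - b * c_) ** 2 / denom
```

Words scoring highest for a category would become its signature keywords; the feature selection technique itself is interchangeable, as the text notes.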
As previously stated, using these techniques, the keyword list is generated as well as a list of weights associated with the keywords. This allows the user to evaluate, for a given domain, which techniques are most effective in generating keywords, allowing the user to give keywords generated by the more effective techniques greater weight. Keywords identified by more than one technique can be given a greater weight than words identified by only one technique. The user can also adjust the frequency threshold and density coefficient parameters of the sentence scorer. Once a document has been analyzed and the keywords identified, each sentence is assigned a local score according to the frequency, weight, and density of keywords it contains.
Another technique for scoring sentences analyzes the location of the sentence itself to assign a local score. This is to be distinguished from the location keyword technique which gave keywords in certain sections greater weight. This location technique assigns a local score to a sentence as a whole depending on its location in the document. Relevant sections are identified and sentences that are contained in those sections are scored higher. The sentence can get a higher score depending on what section of the document the sentence is in, what paragraph in the section the sentence is in, and the location of the sentence within the paragraph. It is appreciated that prior domain knowledge is needed to identify the important sections in the document. However, if there is no prior knowledge of the document, this method will still work as certain paragraphs (e.g., first and last) and sentence locations within a paragraph (e.g., the first and last sentences of a paragraph) will be given a higher score as a default value. A parameter set by the user indicates which of these options will be used.
One other technique utilized by the sentence scorer to assign a local score is based on the semantic similarity of sentences. A sentence is considered more relevant if it is semantically related to a larger number of other sentences in the document. The semantic similarity method compares vectors of sentences to determine the semantic similarity of the sentences. Sentences are considered semantically related only if they exceed a threshold based upon the cosine of the angle between the vectors.
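The semantic similarity scoring can be sketched as follows, using term-frequency vectors and a cosine threshold; the threshold value is an assumed default:

```python
import math
from collections import Counter

def cosine(v1, v2):
    """Cosine of the angle between two sparse term-frequency vectors."""
    common = set(v1) & set(v2)
    dot = sum(v1[w] * v2[w] for w in common)
    n1 = math.sqrt(sum(c * c for c in v1.values()))
    n2 = math.sqrt(sum(c * c for c in v2.values()))
    return dot / (n1 * n2) if n1 and n2 else 0.0

def similarity_scores(sentences, threshold=0.3):
    """Score each sentence by how many other sentences it is
    semantically related to (cosine above the threshold)."""
    vecs = [Counter(s.lower().split()) for s in sentences]
    return [sum(1 for j, v2 in enumerate(vecs)
                if j != i and cosine(v1, v2) >= threshold)
            for i, v1 in enumerate(vecs)]
```

A production version would build the vectors from the cleaned, keyword-normalized text rather than raw word splits.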
The user also sets parameters determining how the summary will be presented. The summary can be in the form of an extract or excerpt containing the most relevant sentences in the order they appear in the document and an appended numerical ranking of each sentence according to its relevance. Referring still to
The summary can also be in the form of a highlighted version of the document. The entire document is presented with the most relevant sentences highlighted. The highlighted sentences can be appended with their numerical ranking as well. The user can also be presented with both an excerpt and highlighted version.
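The extract form of the summary can be sketched as follows: the top-ranked sentences are kept in document order, each appended with its numerical relevance ranking. The bracket notation for the ranking is an assumed presentation choice:

```python
def make_extract(sentences, scores, top_k=2):
    """Extract: top-k sentences in document order, each appended with
    its relevance rank (1 = most relevant)."""
    ranked = sorted(range(len(sentences)),
                    key=lambda i: scores[i], reverse=True)
    rank = {i: r + 1 for r, i in enumerate(ranked)}
    chosen = sorted(ranked[:top_k])   # restore document order
    return [f"{sentences[i]} [{rank[i]}]" for i in chosen]
```

The highlighted-version presentation would instead mark the same `chosen` sentences in place within the full document.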
Thus, the present invention provides a method and system to mine documents containing dirty text such as typographical errors, misspellings, joined words, ad hoc abbreviations, cryptic tables, programming code, and core dumps, in addition to bad grammar and haphazard punctuation. It also provides a method and system for summarizing documents containing dirty text that can be easily customized and takes advantage of existing domain knowledge.
The preferred embodiment of the present invention, a method and system for mining a document containing dirty text, is thus described. While the present invention has been described in particular embodiments, it should be appreciated that the present invention should not be construed as limited by such embodiments, but rather construed according to the following claims.
|Cited Patent||Filing date||Publication date||Applicant||Title|
|US4839853 *||Sep 15, 1988||Jun 13, 1989||Bell Communications Research, Inc.||Computer information retrieval using latent semantic structure|
|US4965763 *||Feb 6, 1989||Oct 23, 1990||International Business Machines Corporation||Computer method for automatic extraction of commonly specified information from business correspondence|
|US5754938 *||Oct 31, 1995||May 19, 1998||Herz; Frederick S. M.||Pseudonymous server for system for customized electronic identification of desirable objects|
|US5857179 *||Sep 9, 1996||Jan 5, 1999||Digital Equipment Corporation||Computer method and apparatus for clustering documents and automatic generation of cluster keywords|
|US6085206 *||Jun 20, 1996||Jul 4, 2000||Microsoft Corporation||Method and system for verifying accuracy of spelling and grammatical composition of a document|
|US6199034 *||Apr 14, 1998||Mar 6, 2001||Oracle Corporation||Methods and apparatus for determining theme for discourse|
|US6308172 *||Jul 6, 1999||Oct 23, 2001||International Business Machines Corporation||Method and apparatus for partitioning a database upon a timestamp, support values for phrases and generating a history of frequently occurring phrases|
|US6332138 *||Jul 24, 2000||Dec 18, 2001||Merck & Co., Inc.||Text influenced molecular indexing system and computer-implemented and/or computer-assisted method for same|
|US6374241 *||Mar 31, 1999||Apr 16, 2002||Verizon Laboratories Inc.||Data merging techniques|
|US6442545 *||Jun 1, 1999||Aug 27, 2002||Clearforest Ltd.||Term-level text with mining with taxonomies|
|US6446061 *||Jun 30, 1999||Sep 3, 2002||International Business Machines Corporation||Taxonomy generation for document collections|
|US6539376 *||Nov 15, 1999||Mar 25, 2003||International Business Machines Corporation||System and method for the automatic mining of new relationships|
|US6567789 *||Jan 11, 2000||May 20, 2003||Samuel R. Baker||Method and system for electronic exchange of tax information|
|US20020103834 *||Jun 27, 2001||Aug 1, 2002||Thompson James C.||Method and apparatus for analyzing documents in electronic form|
|US20020138528 *||Mar 26, 2001||Sep 26, 2002||Yihong Gong||Text summarization using relevance measures and latent semantic analysis|
|US20020169788 *||Feb 14, 2001||Nov 14, 2002||Wang-Chien Lee||System and method for automatic loading of an XML document defined by a document-type definition into a relational database including the generation of a relational schema therefor|
|US20020178002 *||May 24, 2001||Nov 28, 2002||International Business Machines Corporation||System and method for searching, analyzing and displaying text transcripts of speech after imperfect speech recognition|
|Citing Patent||Filing date||Publication date||Applicant||Title|
|US7343551 *||Nov 27, 2002||Mar 11, 2008||Adobe Systems Incorporated||Autocompleting form fields based on previously entered values|
|US7370057 *||Dec 3, 2002||May 6, 2008||Lockheed Martin Corporation||Framework for evaluating data cleansing applications|
|US7617176 *||Jul 13, 2004||Nov 10, 2009||Microsoft Corporation||Query-based snippet clustering for search result grouping|
|US7644373||Jan 23, 2006||Jan 5, 2010||Microsoft Corporation||User interface for viewing clusters of images|
|US7657504||Oct 10, 2006||Feb 2, 2010||Microsoft Corporation||User interface for displaying images of sights|
|US7707208||Oct 10, 2006||Apr 27, 2010||Microsoft Corporation||Identifying sight for a location|
|US7725465||Apr 18, 2007||May 25, 2010||Oracle International Corporation||Document date as a ranking factor for crawling|
|US7836050||Jan 25, 2006||Nov 16, 2010||Microsoft Corporation||Ranking content based on relevance and quality|
|US7941419||Feb 28, 2007||May 10, 2011||Oracle International Corporation||Suggested content with attribute parameterization|
|US7996392||Jun 27, 2007||Aug 9, 2011||Oracle International Corporation||Changing ranking algorithms based on customer settings|
|US8005816||Feb 28, 2007||Aug 23, 2011||Oracle International Corporation||Auto generation of suggested links in a search system|
|US8027982||Feb 28, 2007||Sep 27, 2011||Oracle International Corporation||Self-service sources for secure search|
|US8145662 *||Dec 31, 2008||Mar 27, 2012||Ebay Inc.||Methods and apparatus for generating a data dictionary|
|US8214394||Jul 3, 2012||Oracle International Corporation||Propagating user identities in a secure federated search system|
|US8234561 *||Mar 16, 2007||Jul 31, 2012||Adobe Systems Incorporated||Autocompleting form fields based on previously entered values|
|US8239349||Oct 7, 2010||Aug 7, 2012||Hewlett-Packard Development Company, L.P.||Extracting data|
|US8239414||May 18, 2011||Aug 7, 2012||Oracle International Corporation||Re-ranking search results from an enterprise system|
|US8290958||May 30, 2003||Oct 16, 2012||Dictaphone Corporation||Method, system, and apparatus for data reuse|
|US8316007 *||Jun 28, 2007||Nov 20, 2012||Oracle International Corporation||Automatically finding acronyms and synonyms in a corpus|
|US8332430||Feb 28, 2007||Dec 11, 2012||Oracle International Corporation||Secure search performance improvement|
|US8352475||Apr 4, 2011||Jan 8, 2013||Oracle International Corporation||Suggested content with attribute parameterization|
|US8370734 *||Oct 10, 2006||Feb 5, 2013||Dictaphone Corporation.||Method, system and apparatus for data reuse|
|US8412717||Jun 27, 2011||Apr 2, 2013||Oracle International Corporation||Changing ranking algorithms based on customer settings|
|US8433712||Feb 28, 2007||Apr 30, 2013||Oracle International Corporation||Link analysis for enterprise environment|
|US8522214 *||May 16, 2007||Aug 27, 2013||Open Text S.A.||Keyword based software testing system and method|
|US8595255||May 30, 2012||Nov 26, 2013||Oracle International Corporation||Propagating user identities in a secure federated search system|
|US8601028||Jun 28, 2012||Dec 3, 2013||Oracle International Corporation||Crawling secure data sources|
|US8626794||Jul 2, 2012||Jan 7, 2014||Oracle International Corporation||Indexing secure enterprise documents using generic references|
|US8676829||Mar 23, 2012||Mar 18, 2014||Ebay, Inc.||Methods and apparatus for generating a data dictionary|
|US8707451||Feb 28, 2007||Apr 22, 2014||Oracle International Corporation||Search hit URL modification for secure application integration|
|US8713053||Mar 9, 2010||Apr 29, 2014||Cisco Technology, Inc||Active tags|
|US8725770||Nov 14, 2012||May 13, 2014||Oracle International Corporation||Secure search performance improvement|
|US8799760 *||Dec 8, 2011||Aug 5, 2014||Xerox Corporation||Smart macros using zone selection information and pattern discovery|
|US8812420||Jan 17, 2012||Aug 19, 2014||Alibaba Group Holding Limited||Identifying categorized misplacement|
|US8868540||Feb 28, 2007||Oct 21, 2014||Oracle International Corporation||Method for suggesting web links and alternate terms for matching search queries|
|US8875249||Feb 28, 2007||Oct 28, 2014||Oracle International Corporation||Minimum lifespan credentials for crawling data repositories|
|US9081816||Oct 23, 2013||Jul 14, 2015||Oracle International Corporation||Propagating user identities in a secure federated search system|
|US9104968||Jun 11, 2014||Aug 11, 2015||Alibaba Group Holding Limited||Identifying categorized misplacement|
|US9165065 *||Mar 26, 2010||Oct 20, 2015||Paypal Inc.||Terminology management database|
|US9177124||Feb 28, 2007||Nov 3, 2015||Oracle International Corporation||Flexible authentication framework|
|US20040107202 *||Dec 3, 2002||Jun 3, 2004||Lockheed Martin Corporation||Framework for evaluating data cleansing applications|
|US20040243551 *||May 30, 2003||Dec 2, 2004||Dictaphone Corporation||Method, system, and apparatus for data reuse|
|US20050114888 *||Dec 6, 2002||May 26, 2005||Martin Iilsley||Method and apparatus for displaying definitions of selected words in a television program|
|US20060026152 *||Jul 13, 2004||Feb 2, 2006||Microsoft Corporation||Query-based snippet clustering for search result grouping|
|US20060095426 *||Sep 21, 2005||May 4, 2006||Katsuhiko Takachio||System and method for creating document abstract|
|US20060161537 *||Jan 19, 2005||Jul 20, 2006||International Business Machines Corporation||Detecting content-rich text|
|US20070038611 *||Oct 10, 2006||Feb 15, 2007||Dictaphone Corporation||Method, system and apparatus for data reuse|
|US20070174790 *||Jan 23, 2006||Jul 26, 2007||Microsoft Corporation||User interface for viewing clusters of images|
|US20070174872 *||Jan 25, 2006||Jul 26, 2007||Microsoft Corporation||Ranking content based on relevance and quality|
|US20070250486 *||Apr 18, 2007||Oct 25, 2007||Oracle International Corporation||Document date as a ranking factor for crawling|
|US20080010539 *||May 16, 2007||Jan 10, 2008||Roth Rick R||Software testing|
|US20080086686 *||Oct 10, 2006||Apr 10, 2008||Microsoft Corporation||User interface for displaying images of sights|
|US20080288488 *||Jul 12, 2007||Nov 20, 2008||Iprm Intellectual Property Rights Management Ag C/O Dr. Hans Durrer||Method and system for determining trend potentials|
|US20110238584 *||Sep 29, 2011||Ebay Inc.||Terminology management database|
|US20120095984 *||Oct 18, 2010||Apr 19, 2012||Peter Michael Wren-Hilton||Universal Search Engine Interface and Application|
|US20130151939 *||Dec 8, 2011||Jun 13, 2013||Xerox Corporation||Smart macros using zone selection information and pattern discovery|
|WO2012102898A1 *||Jan 17, 2012||Aug 2, 2012||Alibaba Group Holding Limited||Identifying categorized misplacement|
|U.S. Classification||1/1, 707/E17.094, 707/999.102, 707/999.002|
|Cooperative Classification||Y10S707/99943, Y10S707/99932, G06F17/30719|
|Sep 30, 2003||AS||Assignment|
Owner name: HEWLETT-PACKARD DEVELOPMENT COMPANY L.P.,TEXAS
Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:HEWLETT-PACKARD COMPANY;REEL/FRAME:014061/0492
Effective date: 20030926
|Jun 22, 2009||FPAY||Fee payment|
Year of fee payment: 4
|Aug 2, 2013||REMI||Maintenance fee reminder mailed|
|Dec 20, 2013||LAPS||Lapse for failure to pay maintenance fees|
|Feb 11, 2014||FP||Expired due to failure to pay maintenance fee|
Effective date: 20131220