|Publication number||US20040024598 A1|
|Application number||US 10/610,679|
|Publication date||Feb 5, 2004|
|Filing date||Jul 2, 2003|
|Priority date||Jul 3, 2002|
|Also published as||US7290207, US7337115, US7801838, US8001066, US20040006481, US20040006576, US20040006737, US20040006748, US20040024582, US20040024585, US20040030550, US20040117188, US20040199495, US20110004576|
|Inventors||Amit Srivastava, Francis Kubala|
|Original Assignee||Amit Srivastava, Francis Kubala|
|Patent Citations (5), Referenced by (17), Classifications (14), Legal Events (5)|
 This application claims priority under 35 U.S.C. § 119 based on U.S. Provisional Application Nos. 60/394,064 and 60/394,082 filed Jul. 3, 2002 and Provisional Application No. 60/419,214 filed Oct. 17, 2002, the disclosures of which are incorporated herein by reference.
 The U.S. Government may have a paid-up license in this invention and the right in limited circumstances to require the patent owner to license others on reasonable terms as provided for by the terms of Contract No. N66001-00-C-8008 (Defense Advanced Research Projects Agency (DARPA)).
 A. Field of the Invention
 The present invention relates generally to speech processing and, more particularly, to the segmentation of speech based on thematic classification.
 B. Description of Related Art
 Speech has not traditionally been valued as an archival information source. As effective as the spoken word is for communicating, archiving spoken segments in a useful and easily retrievable manner has long been a difficult proposition. Although the act of recording audio is not difficult, automatically transcribing and indexing speech in an intelligent and useful manner can be difficult.
 Speech is typically received into a speech recognition system as a continuous stream of words without breaks. In order to effectively use the speech in information management systems (e.g., information retrieval, natural language processing, real-time alerting), the speech recognition system initially processes the speech to generate a formatted version of the speech. For example, the speech may be transcribed and linguistic information, such as sentence structures, may be associated with the transcription.
 In addition to segmenting speech segments based on linguistic information, it may be desirable to also segment the speech based on thematic structure. For example, when archiving a continuous broadcast of a radio news program, it may be desirable to know the portions of the news program that discussed the weather and the portions that were about foreign affairs. The portion of the broadcast that was directed to foreign affairs may be further classified into European and Middle East news segments. Users can, thus, later browse or listen to an archive copy of the news broadcast based on topics of interest.
 One technique for segmenting a continuous stream of speech based on thematic elements involves making thematic boundary decisions based on a word count within a moving window of text. FIG. 1 is a block diagram illustrating this technique in additional detail. Initial input audio information is transcribed by transcription component 101. The transcription may be performed manually or automatically. Transcription component 101 outputs a continuous stream of text. Windowing component 102 segments the text into chunks of text of a predetermined length (e.g., 200 words) and generates a vector of the words that occur within the window. Words that occur more frequently within the window are weighted more heavily in the vector. Boundary decision component 103 detects changes in thematic segments based on the word count weighted vectors.
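The windowed word-count technique can be sketched roughly as follows. This is an illustrative reconstruction, not the exact method of FIG. 1: the window size, the tokenization pattern, and the use of cosine dissimilarity as the boundary score are all assumptions.

```python
import re
from collections import Counter
from math import sqrt

def window_vectors(text, window_size=200):
    """Split the transcribed word stream into fixed-size windows and
    count word occurrences per window (more frequent words weigh more)."""
    words = re.findall(r"[a-z']+", text.lower())
    return [Counter(words[i:i + window_size])
            for i in range(0, len(words), window_size)]

def cosine(a, b):
    """Cosine similarity between two word-count vectors."""
    dot = sum(a[w] * b[w] for w in a if w in b)
    norm = sqrt(sum(v * v for v in a.values())) * \
           sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def boundary_scores(vectors):
    """High dissimilarity between adjacent windows suggests a thematic boundary."""
    return [1.0 - cosine(vectors[i], vectors[i + 1])
            for i in range(len(vectors) - 1)]
```

A boundary decision component would then threshold or rank these scores to place segment breaks.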
 A problem with this technique is that it can produce erroneous or non-optimal thematic segments. Accordingly, there is a need in the art to improve thematic segmentation of speech.
 Systems and methods consistent with the principles of this invention provide a thematic segmentation tool that acts on text augmented with additional information extracted from the spoken version of the text. The thematic segmentation tool may generate overlapping thematic segments for a single portion of text.
 One aspect of the invention is directed to a thematic segmentation tool that includes a transcription component configured to receive spoken audio information and to convert the spoken audio information into a document of text corresponding to the audio information. A linguistic detection component generates linguistic information corresponding to the text produced by the transcription component. A topic classification component generates topics relevant to the document. A thematic decision component generates indications of thematic segments based on the linguistic information, the document, and the topics.
 Another aspect of the invention is directed to a method for determining thematically coherent segments within a document. The method comprises receiving a document having associated linguistic information that describes linguistic features of the document and generating indications of thematically coherent segments within the document that occur at the linguistic features in the document.
 Yet another aspect of the invention is directed to a computing device comprising a processor and a computer memory coupled to the processor. The computer memory contains program instructions that when executed by the processor associate linguistic information with a document. The linguistic information demarcates linguistic breaks within the document. The program instructions additionally generate, based on the linguistic breaks within the document, indications of thematically coherent segments, and output the thematically coherent segments associated with labels describing thematic content of the thematically coherent segments.
 The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate the invention and, together with the description, explain the invention. In the drawings,
FIG. 1 is a block diagram illustrating thematic segmentation using a conventional technique based on a word count within a moving window of text;
FIG. 2 is a diagram illustrating an exemplary system in which concepts consistent with the invention may be implemented;
FIG. 3 is a block diagram illustrating software elements in a thematic segmentation tool consistent with the invention; and
FIG. 4 is a diagram illustrating exemplary thematic segments for a document.
 The following detailed description of the invention refers to the accompanying drawings. The same reference numbers may be used in different drawings to identify the same or similar elements. Also, the following detailed description does not limit the invention. Instead, the scope of the invention is defined by the appended claims and equivalents of the claim limitations.
 Thematic segmentation of spoken audio is performed by a thematic segmentation tool on a transcribed version of the audio supplemented with additional information that further describes the audio. In one implementation, the transcription is supplemented with visible linguistic structural information, such as sentence demarcations, and non-visible linguistic structural information, such as phrasal boundaries, topic lists, and speaker boundaries. The result of the thematic segmentation includes hierarchical and potentially overlapping thematic segments.
 Thematic segmentation, as described herein, may be performed on one or more processing devices or networks of processing devices. FIG. 2 is a diagram illustrating an exemplary system 200 in which concepts consistent with the invention may be implemented. System 200 includes a computing device 201 that has a computer-readable medium 209, such as random access memory, coupled to a processor 208. Computing device 201 may also include a number of additional external or internal devices, such as, without limitation, a mouse, a CD-ROM, a keyboard, and a display.
 In general, computing device 201 may be any type of computing platform, and may be connected to a network 202. Computing device 201 is exemplary only. Concepts consistent with the present invention can be implemented on any computing device, whether or not connected to a network.
 Processor 208 can be any of a number of well-known computer processors, such as processors from Intel Corporation, of Santa Clara, Calif. Processor 208 executes program instructions stored in memory 209.
 Memory 209 contains an application program 215. In particular, application program 215 may implement the thematic segmentation tool described below. The thematic segmentation tool 215 may receive input data, such as linguistically segmented text, from other application programs executing in computing device 201 or other computing devices, such as those connected to computing device 201 through network 202. Thematic segmentation tool 215 processes the input data to generate indications of thematic segments.
FIG. 3 is a block diagram conceptually illustrating software elements of thematic segmentation tool 215. Decisions relating to thematic segmentation are made by thematic decision component 310. Thematic decision component 310 implements a statistical framework that generates thematic segments for a “document.” The term document, as used herein, refers to textual information and associated descriptive information relating to the document (e.g., speaker boundaries, phrasal boundaries, etc.). Although such a document may be generated from data from audio sources, it could be generated in other manners, such as from data from video or textual sources.
 Thematic decision component 310 receives a number of inputs that describe the document. Specifically, as shown in FIG. 3, thematic decision component 310 receives a text transcript of the document from transcription component 320, speaker boundary information from speaker boundary detection component 321, linguistic information from linguistic detection component 322, and topic classifications from topic classification component 323. Although transcription component 320, speaker boundary detection component 321, linguistic detection component 322, and topic classification component 323 are illustrated as part of thematic segmentation tool 215, in other implementations, these components may be considered as providing input information to a thematic segmentation tool implemented by thematic decision component 310.
 Transcription component 320 may be an automated or manual transcription tool that converts the audio input stream it receives into text. Transcription component 320 may use conventional techniques to perform the conversion.
 Speaker boundary detection component 321 locates boundaries between speakers in the audio input stream. Knowledge of speaker changes in an audio stream may be a useful indicator of potential changes in thematic content. Automated speaker boundary detection techniques are known in the art. For example, speaker boundary detection is described in Liu et al., “Fast Speaker Change Detection for Broadcast News Transcription and Indexing,” Eurospeech '99, Budapest, Hungary, September 1999, pp. 1031-1034; and Chen et al., “Speaker, Environment, and Channel Change Detection and Clustering via the Bayesian Information Criterion,” Proceedings of the DARPA Broadcast News Transcription and Understanding Workshop, Lansdowne, Va., 1998. Alternatively, instead of automatically detecting speaker boundaries, the speaker boundaries may be manually inserted into the document.
 Linguistic detection component 322 receives the text generated by transcription component 320 and the audio input stream. Automated transcription techniques generally produce a simple stream of words without linguistic information (e.g., periods, exclamation marks, quotation marks) that would ideally be associated with the text. Linguistic component 322 annotates the text from transcription component 320 to include this linguistic information. In addition to visible linguistic information, such as periods, linguistic component 322 may associate non-visible linguistic information, such as phrasal boundaries, with the received text.
 Techniques for generating both visible and non-visible linguistic information are described in detail in U.S. patent application Ser. No. ______ (Attorney Docket Number 02-4024), titled “Linguistic Segmentation of Speech,” filed ______, the contents of which are hereby incorporated by reference.
 Topic classification component 323 generates topics selected from a predefined topic vocabulary that are relevant to the document. For example, a document may include any combination of words from a 60,000 word vocabulary. Topic classification component 323 examines the document and outputs one or more predefined topics, where the number of possible topics is much smaller than the 60,000 word vocabulary (e.g., a 5,000 entry topic vocabulary).
 Topic classification component 323, in one implementation, uses a Bayesian framework to generate topics for a document. More particularly, topic classification component 323 may be implemented as a probabilistic Hidden Markov Model (HMM) whose parameters are estimated from training samples of documents with given topic labels. This model allows each word in a document to contribute different amounts to each of the topics assigned to the document. The output of topic classification component 323 may be a rank-ordered list of all possible topics and corresponding scores that indicate the estimated relevance of each topic. In general, automated topic classification systems are known in the art. See, for example, Makhoul et al., “Speech and Language Technologies for Audio Indexing and Retrieval,” Proceedings of the IEEE, vol. 88, no. 8, August 2000.
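The patent's classifier is an HMM whose parameters are estimated from labeled training documents. As a simpler stand-in that still produces the rank-ordered topic list described above, the sketch below uses smoothed per-topic unigram models scored by log-likelihood; the function names and the add-alpha smoothing are illustrative assumptions, not the patent's formulation.

```python
import math
from collections import Counter

def train_topic_models(labeled_docs, alpha=1.0):
    """Estimate smoothed per-topic unigram word probabilities from
    (topic, list-of-words) training pairs."""
    counts, vocab = {}, set()
    for topic, words in labeled_docs:
        counts.setdefault(topic, Counter()).update(words)
        vocab.update(words)
    models = {}
    for topic, c in counts.items():
        total = sum(c.values()) + alpha * len(vocab)
        models[topic] = {w: (c[w] + alpha) / total for w in vocab}
    return models

def rank_topics(models, words):
    """Return all topics rank-ordered by log-likelihood of the document,
    mirroring the rank-ordered list with relevance scores described above."""
    scored = [(sum(math.log(p[w]) for w in words if w in p), t)
              for t, p in models.items()]
    return [(t, s) for s, t in sorted(scored, reverse=True)]
```

Words outside the training vocabulary are simply skipped when scoring, a common simplification.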
 In another possible implementation, instead of estimating parameters based on training samples that have topics manually generated, topic classification component 323 can be constructed to generate topics based on unsupervised topic discovery.
 Thematic decision component 310 uses the outputs of transcription component 320, speaker boundary detection component 321, linguistic detection component 322, and topic classification component 323 to generate indications of thematic segments in the input document.
 Consistent with an aspect of the present invention, the thematic segments generated by thematic decision component 310 may include multiple overlapping thematic segments for a particular portion of a document. Additionally, thematic decision component 310 may label the thematic segments using a hierarchical labeling scheme such that a specific thematic segment (e.g., a thematic segment labeled “hurricane”) is organized as a subset of a more general thematic segment (e.g., the thematic segment labeled “weather”).
FIG. 4 is a diagram conceptually illustrating exemplary thematic segments for a document. Document 401 is conceptually illustrated as a series of lines that are assumed to correspond to text. Associated with the text in document 401 are linguistic cues such as periods 402 and commas 403. Although not shown, speaker boundaries, topics, and non-visible linguistic information may also be associated with document 401.
 Thematic segments in FIG. 4 are illustrated by the bracketed segments 410-412. As shown, thematic segments 410 and 412 overlap one another. In general, thematic segments do not necessarily have to sequentially follow one another. Thematic segment 412 may be hierarchically related to thematic segment 410 as a subset of thematic segment 410, or thematic segment 412 may be an independent and concurrent thematic segment.
 In general, when generating thematic segments, such as thematic segments 410-412, thematic decision component 310 honors the linguistic boundary information as basic constituents of the document. Thematic segments are formed as one or more of the sequential constituents (e.g., one or more sentences) determined by linguistic detection component 322.
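One illustrative way to represent overlapping, hierarchically labeled segments built from whole linguistic constituents (here, sentence indices) is sketched below. The class and helper names are assumptions introduced for illustration only.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class ThematicSegment:
    """A thematic segment spanning whole sentences [start, end), carrying a
    hierarchical label path, e.g. ["foreign affairs", "Middle East"]."""
    start: int                                  # index of first sentence
    end: int                                    # one past the last sentence
    labels: List[str] = field(default_factory=list)

def overlaps(a: ThematicSegment, b: ThematicSegment) -> bool:
    """Segments may overlap and need not follow one another sequentially."""
    return a.start < b.end and b.start < a.end

def is_subsegment(child: ThematicSegment, parent: ThematicSegment) -> bool:
    """A segment nested inside another, as in 'hurricane' within 'weather'."""
    return parent.start <= child.start and child.end <= parent.end
```

Because segment boundaries are sentence indices, every segment automatically honors the linguistic boundary information as its basic constituents.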
 A number of different techniques can be used to implement the statistical framework of thematic decision component 310. Some of these techniques, and the speech features on which they are based, will now be described in more detail.
 Acoustic Features
 Speech has a range of properties that make it very different from plain text. Thematic segmentation of speech-transcribed text benefits from having access to the original signal from the speaker in addition to the textual content of what was spoken. Nuances in the speaker's delivery are frequently very relevant indicators of changes in content, as well as in the speaker's intent, both of which can be used to effectively model shifts in themes within an episode. Prosodic features, such as pause, pitch, energy, and speaking rate, can be used in statistical models for detecting changes in the speech that correspond to a change in the theme of the content.
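Two of the prosodic features named above, pause and speaking rate, can be derived directly from word-level timestamps, assuming the recognizer emits (word, start, end) triples. The following sketch is an illustrative feature extractor, not the patent's specific feature set; the lookback window is an arbitrary choice.

```python
def prosodic_features(words, window=5):
    """For each word boundary, compute the pause before the word and the
    recent speaking rate, from (word, start_sec, end_sec) triples.
    Long pauses and rate changes often accompany theme shifts."""
    feats = []
    for i in range(1, len(words)):
        pause = words[i][1] - words[i - 1][2]        # silence before word i
        lo = max(0, i - window)
        span = words[i - 1][2] - words[lo][1]        # duration of recent context
        rate = (i - lo) / span if span > 0 else 0.0  # words per second
        feats.append({"pause": pause, "rate": rate})
    return feats
```

Pitch and energy would require signal-level analysis of the audio itself, which is outside the scope of this sketch.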
 Linguistic Features
 Word repetition can be used alone or in conjunction with other features, like word frequency and synonyms. In most cases, synonyms are identified using predefined word tables or thesauruses, both of which are resources that are hard to generate and generalize. Latent Semantic Analysis (LSA) is a known robust technique for matching words that are synonyms and for better handling the multiple meanings of a term. An example of the use of LSA is given in T. Brants, “Topic-Based Document Segmentation with Probabilistic Latent Semantic Analysis,” Proceedings of the Conference on Information and Knowledge Management, Nov. 4-9, 2002, McLean, Va. LSA uses singular value decomposition to map the high-dimensional word-document count matrix to a lower-dimensional latent ‘semantic’ space in which terms and documents that are closely associated are placed near one another. LSA has the additional property that it can reduce the dimensionality of the linguistic feature space (typically on the order of 60,000 terms for conventional large-vocabulary speech recognition systems) to a much more manageable size, and can do so intelligently, so that the inherent similarities between terms in the space are not only preserved but consolidated for better modeling. Additional linguistic features, like Minimum Description Length (MDL) phrases and named-entity phrases, can be added to the linguistic sub-space, relying on the LSA technique to connect the terms and phrases effectively.
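The SVD step at the heart of LSA can be sketched in a few lines. This is a minimal illustration on a toy term-document count matrix; the function names and the choice of scaling the singular vectors by the singular values are assumptions.

```python
import numpy as np

def lsa_embed(counts, k=2):
    """Map a term-document count matrix (terms x docs) into a k-dimensional
    latent semantic space via truncated SVD. Terms that co-occur with the
    same documents land near one another in the reduced space."""
    U, s, Vt = np.linalg.svd(counts, full_matrices=False)
    term_vecs = U[:, :k] * s[:k]       # term coordinates in latent space
    doc_vecs = Vt[:k, :].T * s[:k]     # document coordinates
    return term_vecs, doc_vecs

def cos(a, b):
    """Cosine similarity between two latent-space vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
```

In the toy matrix below, "car"/"auto" and "fruit"/"apple" each co-occur only with their own documents, so the latent space pulls each pair together while keeping the pairs apart.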
 Segmentation Approaches
 Segmentation techniques can compare the distance between two blocks of text and select segmentation points based on the similarity values between pairs of adjacent blocks. For example, M. A. Hearst, “Multi-paragraph Segmentation of Expository Text,” Proceedings of the Association for Computational Linguistics, 1994, uses a sliding window and computes similarities between adjacent blocks based on their term frequency vectors. For thematic segmentation in speech, sliding windows of text can be used with a similarity measure based on the persistence of statistical-model-based hypothesized topics between pairs of adjacent blocks of windowed text. The smallest unit for the segmentation process is an elementary block. Sentences can be used as the elementary blocks for defining the segmentation candidates. The text can be broken into blocks, i.e., sequences of consecutive elementary blocks, where each block includes some number of elementary blocks. In the training documents, these blocks are variable-sized, non-overlapping, and generally do not cross segment boundaries. However, in the documents to be segmented, these blocks may overlap, as in the use of a sliding window. The set of positions between pairs of adjacent blocks constitutes the set of segmentation candidates.
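The adjacent-block comparison can be sketched as follows, using sentences as the elementary blocks and term-frequency cosine similarity in the spirit of Hearst's method (rather than the model-based topic persistence measure, which would require a trained topic model). Block size and the single-minimum boundary pick are illustrative simplifications.

```python
from collections import Counter
from math import sqrt

def block_similarity(sentences, block=2):
    """Cosine similarity of term-frequency vectors for the blocks of up to
    `block` sentences on either side of each candidate gap; gap g lies
    between sentences g-1 and g."""
    def vec(sents):
        c = Counter()
        for s in sents:
            c.update(s.lower().split())
        return c
    def cosine(a, b):
        dot = sum(a[w] * b[w] for w in a)
        na = sqrt(sum(v * v for v in a.values()))
        nb = sqrt(sum(v * v for v in b.values()))
        return dot / (na * nb) if na and nb else 0.0
    return [cosine(vec(sentences[max(0, g - block):g]),
                   vec(sentences[g:g + block]))
            for g in range(1, len(sentences))]

def pick_boundary(sims):
    """Choose the gap with the lowest adjacent-block similarity."""
    return min(range(len(sims)), key=sims.__getitem__) + 1
```

A full segmenter would pick multiple local minima rather than a single global one, but the comparison logic is the same.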
 Mathematical Models
 There are a number of mathematical models that can be used to determine the relationship between varied features and shifts in thematic content. The models can operate on the pure acoustic features or the pure linguistic features, as well as on the combined acoustic/linguistic features. For example, statistical learn-by-example techniques, trained on roughly annotated training data for each domain and language, may be used.
 Neural Networks
 Neural networks can additionally be effective in approximating the complex, non-linear relationships between features of various types (continuous, discrete, and in some cases even Boolean) and changes in the structure of the underlying speech. Neural networks can be used to model the acoustic features and produce an estimate of the similarity or dissimilarity between the adjacent blocks on either side of a segmentation candidate. With the help of LSA, high-dimensional linguistic features can be mapped onto a low-dimensional, compact sub-space. The mapped features can be used with the prosodic information in a combined neural network to detect changes in themes.
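A minimal version of such a combined network can be sketched with one hidden layer, mapping a feature vector for a segmentation candidate (e.g., pause length plus a block-dissimilarity score) to a boundary probability. This is an illustrative toy, with an assumed architecture and training loop, not the patent's specific network.

```python
import numpy as np

def train_boundary_net(X, y, hidden=4, lr=0.5, epochs=2000, seed=0):
    """Train a tiny one-hidden-layer network (tanh hidden, logistic output)
    by full-batch gradient descent on the logistic loss."""
    rng = np.random.default_rng(seed)
    W1 = rng.normal(0, 0.5, (X.shape[1], hidden)); b1 = np.zeros(hidden)
    W2 = rng.normal(0, 0.5, hidden); b2 = 0.0
    for _ in range(epochs):
        h = np.tanh(X @ W1 + b1)
        p = 1 / (1 + np.exp(-(h @ W2 + b2)))
        g = (p - y) / len(y)                      # d(loss)/d(output pre-activation)
        W2 -= lr * (h.T @ g); b2 -= lr * g.sum()
        gh = np.outer(g, W2) * (1 - h ** 2)       # backprop through tanh
        W1 -= lr * (X.T @ gh); b1 -= lr * gh.sum(axis=0)
    return W1, b1, W2, b2

def predict(params, X):
    W1, b1, W2, b2 = params
    h = np.tanh(X @ W1 + b1)
    return 1 / (1 + np.exp(-(h @ W2 + b2)))
```

In practice the inputs would be the LSA-mapped linguistic features concatenated with the prosodic features; here the two-dimensional toy inputs stand in for that combined feature vector.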
 Probabilistic Latent Semantic Analysis (PLSA)
 Probabilistic Latent Semantic Analysis can be used to model high-dimensional linguistic features. The PLSA model can be used quite effectively with the combined feature space, since it is highly adept at finding the subtle cross-correlations between features that expose the inter-relations between the terms and the underlying themes. PLSA is a statistical latent class model that may provide better results than LSA for term matching in retrieval applications. In PLSA, the conditional probability between documents d and feature terms f is modeled through a latent variable z, which can be loosely thought of as a class or topic. A PLSA model is parameterized by P(f|z) and P(z|d); words may belong to more than one class, and a document may discuss more than one “topic.” The latent variable z can be treated as an unobserved variable in the context of the Expectation-Maximization (EM) algorithm, and thus the parameters of the PLSA model can be trained from a corpus of documents using EM. The use of PLSA allows for a better representation of sparse information in a text block, such as a sentence or a sequence of sentences. A wide variety of similarity measures, such as cosine distance, Bhattacharyya distance, and Kullback-Leibler divergence, can be used with the scores generated from the PLSA model to determine the segmentation boundaries.
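The EM training of the P(f|z) and P(z|d) parameters can be sketched as follows on a toy term-document count matrix. The dense-array formulation and fixed iteration count are simplifications for illustration; a real implementation would exploit sparsity and monitor the likelihood for convergence.

```python
import numpy as np

def train_plsa(counts, n_topics=2, iters=50, seed=0):
    """EM training of PLSA on a term-document count matrix (terms x docs).
    Returns P(f|z) as a (terms x topics) array and P(z|d) as (topics x docs)."""
    rng = np.random.default_rng(seed)
    n_terms, n_docs = counts.shape
    p_f_z = rng.random((n_terms, n_topics)); p_f_z /= p_f_z.sum(0)
    p_z_d = rng.random((n_topics, n_docs)); p_z_d /= p_z_d.sum(0)
    for _ in range(iters):
        # E-step: P(z|d,f) proportional to P(f|z) * P(z|d)
        joint = p_f_z[:, :, None] * p_z_d[None, :, :]        # terms x topics x docs
        post = joint / (joint.sum(1, keepdims=True) + 1e-12)
        # M-step: re-estimate parameters from expected counts
        exp_counts = counts[:, None, :] * post               # terms x topics x docs
        p_f_z = exp_counts.sum(2); p_f_z /= p_f_z.sum(0)
        p_z_d = exp_counts.sum(0); p_z_d /= p_z_d.sum(0)
    return p_f_z, p_z_d
```

On block-structured data such as the toy matrix below, the two latent classes separate the two term/document groups, so each document ends up dominated by a single topic.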
 As described above, a thematic segmentation tool demarcates segments for a document that have similar thematic content. The thematic segmentation tool bases the thematic segments on a transcription of audio data augmented with additional information relating to linguistic and speaker descriptive properties of the audio. The thematic segments generated by the tool may be hierarchical and may include multiple different thematic segments for a portion of text.
 The foregoing description of preferred embodiments of the invention provides illustration and description, but is not intended to be exhaustive or to limit the invention to the precise form disclosed. Modifications and variations are possible in light of the above teachings or may be acquired from practice of the invention.
 Certain portions of the invention have been described as software that performs one or more functions. The software may more generally be implemented as any type of logic. This logic may include hardware, such as an application-specific integrated circuit or a field-programmable gate array, software, or a combination of hardware and software.
 No element, act, or instruction used in the description of the present application should be construed as critical or essential to the invention unless explicitly described as such. Also, as used herein, the article “a” is intended to include one or more items. Where only one item is intended, the term “one” or similar language is used.
 The scope of the invention is defined by the claims and their equivalents.
|Cited Patent||Filing date||Publication date||Applicant||Title|
|US2151733||May 4, 1936||Mar 28, 1939||American Box Board Co||Container|
|CH283612A *||Title not available|
|FR1392029A *||Title not available|
|FR2166276A1 *||Title not available|
|GB533718A||Title not available|
|Citing Patent||Filing date||Publication date||Applicant||Title|
|US7363279 *||Apr 29, 2004||Apr 22, 2008||Microsoft Corporation||Method and system for calculating importance of a block within a display page|
|US7487094 *||Jun 18, 2004||Feb 3, 2009||Utopy, Inc.||System and method of call classification with context modeling based on composite words|
|US7504969 *||Jul 11, 2006||Mar 17, 2009||Data Domain, Inc.||Locality-based stream segmentation for data deduplication|
|US7529765||Nov 23, 2004||May 5, 2009||Palo Alto Research Center Incorporated||Methods, apparatus, and program products for performing incremental probabilistic latent semantic analysis|
|US8095478||Apr 10, 2008||Jan 10, 2012||Microsoft Corporation||Method and system for calculating importance of a block within a display page|
|US8401977||Jan 10, 2012||Mar 19, 2013||Microsoft Corporation||Method and system for calculating importance of a block within a display page|
|US8670978 *||Dec 14, 2009||Mar 11, 2014||Nec Corporation||Topic transition analysis system, method, and program|
|US8819023 *||May 17, 2012||Aug 26, 2014||Reputation.Com, Inc.||Thematic clustering|
|US8886651 *||Dec 22, 2011||Nov 11, 2014||Reputation.Com, Inc.||Thematic clustering|
|US8954434 *||Jan 8, 2010||Feb 10, 2015||Microsoft Corporation||Enhancing a document with supplemental information from another document|
|US9053750 *||Jun 17, 2011||Jun 9, 2015||At&T Intellectual Property I, L.P.||Speaker association with a visual representation of spoken content|
|US20050246296 *||Apr 29, 2004||Nov 3, 2005||Microsoft Corporation||Method and system for calculating importance of a block within a display page|
|US20090306797 *||Sep 8, 2006||Dec 10, 2009||Stephen Cox||Music analysis|
|US20100198598 *||Aug 5, 2010||Nuance Communications, Inc.||Speaker Recognition in a Speech Recognition System|
|US20110173210 *||Jan 8, 2010||Jul 14, 2011||Microsoft Corporation||Identifying a topic-relevant subject|
|US20110246183 *||Dec 14, 2009||Oct 6, 2011||Kentaro Nagatomo||Topic transition analysis system, method, and program|
|US20120323575 *||Jun 17, 2011||Dec 20, 2012||At&T Intellectual Property I, L.P.||Speaker association with a visual representation of spoken content|
|International Classification||G06F17/00, G06F17/28, G10L21/00, G10L11/00, G06F17/21, G10L15/00, G06F7/00, G10L15/26|
|Cooperative Classification||G10L25/78, G10L15/26, Y10S707/99943|
|European Classification||G10L15/26A, G10L25/78|
|Jul 2, 2003||AS||Assignment|
Owner name: BBNT SOLUTIONS LLC, TEXAS
Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:SRIVASTAVA, AMIT;KUBALA, FRANCIS;REEL/FRAME:014336/0622
Effective date: 20030701
|May 12, 2004||AS||Assignment|
Owner name: FLEET NATIONAL BANK, AS AGENT,MASSACHUSETTS
Free format text: PATENT & TRADEMARK SECURITY AGREEMENT;ASSIGNOR:BBNT SOLUTIONS LLC;REEL/FRAME:014624/0196
Effective date: 20040326
|Mar 2, 2006||AS||Assignment|
Owner name: BBN TECHNOLOGIES CORP.,MASSACHUSETTS
Free format text: MERGER;ASSIGNOR:BBNT SOLUTIONS LLC;REEL/FRAME:017274/0318
Effective date: 20060103
|Aug 9, 2007||AS||Assignment|
Owner name: BBNT SOLUTIONS LLC, MASSACHUSETTS
Free format text: CORRECTION OF ASSIGNEE ADDRESS RECORDED AT REEL/FRAME 014336/0622;ASSIGNORS:SRIVASTAVA, AMIT;KUBALA, FRANCIS;REEL/FRAME:019682/0623
Effective date: 20030701
|Oct 27, 2009||AS||Assignment|