CA2085895A1 - Continuous speech processing system - Google Patents
Continuous speech processing systemInfo
- Publication number
- CA2085895A1 CA2085895A1 CA002085895A CA2085895A CA2085895A1 CA 2085895 A1 CA2085895 A1 CA 2085895A1 CA 002085895 A CA002085895 A CA 002085895A CA 2085895 A CA2085895 A CA 2085895A CA 2085895 A1 CA2085895 A1 CA 2085895A1
- Authority
- CA
- Canada
- Prior art keywords
- data sets
- cluster
- word
- frame
- frame data
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Abandoned
Links
Classifications
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
- G10L15/00—Speech recognition
- G10L15/08—Speech classification or search
- G10L15/10—Speech classification or search using distance or distortion measures between unknown speech and reference templates
Abstract
2085895 9200585 PCTABS00010 The speech (40) to be recognized is converted from utterances to frame data sets (42), which are smoothed (46) to generate a smooth frame model over a predetermined number of frames. A resident vocabulary is stored within the computer (50) as clusters of word models which are acoustically similar over a succession of frame periods. A cluster socre is generated by the system, which score inlcudes the likelihood of the smooth frames evaluated using a probability model for the cluster against which the smooth frame model is being compared. Cluster sets having cluster scores below a predetermined acoustic threshold are removed from further consideration. The remaining cluster sets are unpacked for determination of a word score for each unpacked word. These word scores are used to identify those words which are above a second predetermined threshold to define a word list which is sent to a recognizer for a more lengthy word match.
Description
W~92/00S~5 PCT/US9l/~321
2 ~ 3~
: . .
~ONTI~UOUS SPE~C~ PRO~ESSING S~S~
'' . ~.
5 BACKGROUN~ OF~E ~ISCLOSURE
While machines which recognize discrete, or isolated, words are well-known in the art, there is on-going research and development in constructing 10 large vocabulary systems for recognizing continuous speech. Esamples of discrete speech recognition systems are described in U.S. Patent ~oi 4,783,803 (Baker et al., Nov. 8, 19~3) and U.S. Patent No~
4,837,831 (Gillick et al., Jun. 6, 1989), both of 15 which are assigned to the assignee of the present application and are herein incorporated by reference. Generally, most speech recognition systems match an acoustic description of words, or parts of words, in a predetermined vocabulary against 20 a representation of the acoustic signal gen4rated by the utterance of the word to be recognized. One method for establishing the vocabulary is through the incorporation of a training process, by which a user ~$rains~ the computer to identify a certain word 25 having a specific acoustic segment.
A large number of calculations are reguired to identify a spoken word from a given large vocabulary in a speech recognition system. The 30 number of calculations would effectively prevent real-time identification of spoken words in such a ; speech recognition system. Pre-filtering is one means of identifying a preliminary set of word models :
. .
.
`,' . '' ' ' ' . ~ ', ' , '; ' - ' ' ',' : ` " ' ' ' ' : ~ .' ' .. ',;, ~ ' . ''' ' '. `' ' ' . . ' .' ' : ' ' '' ' :~ ' ." ' ' , ' `'' ' . ' ' ~ , .
W092/00585 PCT/US91/~321 ~ 9~ -2-a~ainst which an acoustic model may ba compared.
Pre-fil~ering enablqs such a speech recognition system to id~ntify spoken words in real-time.
S Present pre-filtering systems used in certain prior art discrete word recognition systems rely upon identification of the beginning of a word.
One esample, as described in detail in U.S. Patent ~;
No. 4,837,831, involves establishing an anchor for 10 each utterance of each word, which anchor then forms the starting point of calculations. That patent -discloses a system in which each vocabulary wora is represented by a sequence of statistical node models. Each such node model is a multi-dimensional 15 probability distribution, each dimension of which represents the probability distribution for the values of a given ~rame parameter if its associated frame belongs to the class of sounds represented by the node model. Each dimension of the probab~lity 20 distribution is represented by two statistics, an estimated espected value, or mu, and an estimated absolute deviation, or sigma. A method for deriving statistical models of a basic type is disclosed in U.S. Patent No. 4,903,305 (Gillick et al., Feb. 20, 25 1990), which is assigned to the assignee of the - present application and which is herein incorporated by reference.
Patent No. 4,903,305 discloses dividing the 30 nodes from many words into groups of nodes with similar statistical acoustic models, forming ;~
cluster~, and calculating a statistical acoustic ~ ; -model for each such cluster. The model for a given cluster is then used in place of the individual node ~ ~
.~ "
;':: . :'.'. ' , , , ' ' , ;' ' W092/0058~ PCT/US91/04321 _3_ 2~ 9~
mcdels from different words which have been grouped into that cluster, greatly reducing the number of models which have to be stored. One use of such cluster models is found in U.S. Patent No. 4,837j831 5 (Gillick et al., Jun. 6, 1989), cited above. In that patent, the acoustic description of the utterance to be recognized includes a succession of acoustic descriptions, representing a sequence of sounds associated with that utterance. A successio~ of the 10 acoustic representations from the utterance to be recognized are compared against the succession of acoustic models associated with each cluster model to produce a cluster likelihood score for each such cluster. These cluster models are ~wordstart~
15 models, that is, models which normally represent the initial portion of vocabulary words. The likelihood score produced for a given wordstart cluster model is used as an initial prefiltering score for each of its corresponding words. Extra steps are included which 20 compare acoustic models from portions of each such word following that re~resented by its wordstart model against acoustic descriptions from the ut~Erance to be recoqnized. Vocabulary words having the worst scoring wordstart models are pruned from 25 further consideration before performing extra prefilter scoring steps. The comparison between the succession of acoustic descriptions associated with the utterance to be recognized and the succession of acoustic models in such cluster model are performed 30 using linear time alignment. The acoustic description of the utterance to be recognized comprises a sequence of individual frames, ea~h describing the utterance during a brief period of time, and a series of smoothed frames, each derived ... ; ~ ~ :: ;
.. ~ ... .. -, ~ ~ .
~, . . . .. . .. .... . ..
W092/00~85 PCT/US91/04321 ~ ~S 9 ~ 5 _4_ f om a weighted average of a plurality of individual frames, is used in the comparison against the cluster ;~
model. ~ -Other methods for reducing the size of a set against which utterances are to ~e identified by the system include pruning, and le~ical retriev~l. U.S.
Patent No. 4,837,83l, cited above, discloses a method ~
of prefiltering which compares a sequence of models ,!
l0 from the speech to be recognized against corresponding se~uences of models which are associated with the beginning of one or more vocabulary ~ords. This method compensates for its use of linea~ time alignment by combining its 15 prefilter score produced by linear time alignment with another prefilter score which is calculated in a manner that is forgiving of changes in speaking rate or improper insertion or deletion of speech sounds.
The statistical method of hidden Mar~ov modeling, as incorporated into a continuous speech recognition system, is described in detail in U.S.
Patent No. 4,803,729 (Baker et al., Feb. 7, 1989), which is assigned to the assignee of this 25 application, and which is herein incorporated by reference. In that patent, use of the hidden Markov -model as a technique for determining which phonetic label should be associated with each frame is disclosed. That stochastic model, utilizing the 30 Markov assumption, greatly reduces the amount of computation required to solve complex statistical probability equations such as are necessary for word recognition systems. Although the hidden Markov ~-model increases the speed of such speech recognition - . . :: . -. '. ' . : . : : : ,: ,: ' : : : ~ , : . , , :
;;.. , . . .. ~.. ~ ., .. , ., :;, , ,...... .. , : -,, . . ~ . . : -. , W092/0~85 PCTtUS91/~321 systems, the problem remains in applying such a statistical method to continuous word recognition where the beginning of each word is contained in a continuous sequence of utterances.
Many discrete speech recognition systems use some form of a ~dynamic programming~ algorithm.
Dynamic programming is an algorithm for implementing certain calculations to which a hidden Markov ~odel lO leads. In the conte~t of ~peech recognition systems, dynamic programming perorms calculations to `~
det~ermine the probabilities that a hidden Markov Model would assign to given data.
Typically, speech recognition systems using dynamic programming represent speech as a seguence of frames, each of which represents the speech during a brief perioa of time, e.g., fiftieth or hundredth of a second. Such systems normally model each 20 vocabulary word with a sequence of node models which represent the seguence of differe~t frames associated with that word. Rouqhly speaking, the effect of dynamic programming, at the time of reco~nition, is to slide, or espand and contract, an operating 25 region, or window, relative to the frames of speech so as to align those frames with the node models of each vocabulary word to find a relatively optimal time alignment between those frames and those nodes.
The dynamic programming in effect calculatss the 30 probability that a given sequence of frames matches a given word model as a function of how well each such frame matcheæ the node mode~ with which it has been time-aligned. The word model which has the highest probability score is selected as corresponding to the .... ~ ......................................................... .
.: . . : . .. ;:: .. .:
W092t~85 PCT/US~1/04321 ~ 5 -6-speech. Dynamic programming obtains relative}y optimal time alignment between the speech to be recognized and the nodes of each worcl model, which compensates for the unavoidable differences in 5 speaking rates which occur in different utteranees of the same word. In addition, since dynamic programming scores words as a functic)n o~ the fit between word models and the speech over many frames, it usually gives the correct word the best s~ore, l0 even if the word has been slightly misspoken or obscursd by background sound. This is important, because humans often mispronounce words either by deleting or mispronouncing proper sounds, or by inserting sounds which do not belong. Even absent 15 any background sound, there is an inherent variability to human speech which must be considered in a speech recognition system.
Dynamic programming requires a tremendous 20 amount of computation. In order for it to find the optimal time alignment between a sequence of frames ~;~
and a sequence of node models, it must compare most frames against a plurality of node models. One method of reducing the amount of computation required 25 for dynamic programming is to use pruning. Pruning terminates the dynamic programming of a given portion of speech against a given word model if the partial probability score for that comparison drops below a given threshold. This greatly reduces computation, 30 since the dynamic programming of a given portion of speech against most words pro~uces poor dynamic programming scores rather guickly, enabling most words to be pruned after only a small percent of their comparison has been performed. Unfortunately, .
..... . . : . . .
. ,~ -.: . . . :
~ ": - . . - . ,,. . -W092/00585 P~T~US9ltQ4321 -7- ~
however, even with such pruning, the amount of computat-on required in large vocabulary systems of the type necessary to transcribe normal dictation. .
Continuous speech computational requirements are even greater. In continuous speech, the type of which humans normally speak, words are run together, without pauses or other simple cues to indicate where one word ends and the nest beyins. When a mechanical - l0 speech recognition system attempts to recognize continuous speech, it initially has no way of identifying-those portions of speech which correspond to individual words. Speakers of English apply a -host of duration and coarticulation rules when 15 combining phonemes into words and sentencPs, employing the same rules in recognizing spo~en -~
language. A speaker of English, given a phonemic spelling of an unfamiliar word from a dictionary, can pronounce the word recognizably or recognize the word 20 when it is spoken. On the other hand, it is impossible to put together an Ualphabet~ of recorded phonemes which, when concatenated, will sound like natural English words. It comes as a surprise to most speakers, for example, to discover that the 25 vowels in ~will~ and ~kick~, which are identical according to dictionary pronunciations, are as different in ~heir spectral characteristics as the vowels in ~not~ and ~nut", or that the vowel in ~size~ has more than twice the duration of the same 30 vowel in ~seismograph~.
One approach to this problem of recognizing discrete words in continous speech is to treat each successive frame of the speech as the possible -~
.
W092/00585 PCT/US9t/04321 ~?~ ~ ~S -8-, beginning of a new word, and to begin dynamic programming at each such frame against the start of each vocabulary word. However, this approach requires a tremendous amount of computation. A more 5 efficient method used in the prior art begins dynamic programming against new words only at those frames for which the dynamic programming indicates that the speaking of a previous word has just ended. Although this latter method is a considerable improvement, 10 there remains a need to further reduce computation by reducing the number of words against which dynamic programming is started when there is indication that a prior word has ended. ~
~. :
; 15 One such method of reducing the number of vocabulary words against which dynamic prograrnming is started in continuous speech recognition associates a phonetic label with each frame of the speech to be recognized. The phonetic label identiEies which ones 20 of a plurality of phonetic frame models compares most closely to a given frame of speech. The system then divides the speech into segments of successive frames associated with a single phonetic label. For each given segment, the system takes the sequence of five 25 phonetic labels associated with that segment plus the ne~t four segments, and refers to a look-up table to find the set o~ vocabulary words which pre~iously have been determined to have a reasonable probability of starting with that sequence of phonetic labels.
30 As referred to above, this is known as a ~wordstart cluster~. The system then limits the words against which dynamic programming could start in the g:;ven segment to words in that cluster or set.
. ~: ,: : .. . . .
:... . .. . .
' .: . . : ': . : : .
,: . ~. ... . ~ :~ .
WO 92/0ûS85 PCT/~JS~1/04321 _ 9 ~ ~ r~
A method for handling continuous speech recognition is described in U.S. Patent No. 4,805,219 (Baker et al., Feb. 14, 1989), which is assigned to the assignee o this application, and which is herein 5 incorporated by reference. In that patent, both the speech to be recognized and a plurality of speech pattern models ar~ time-aligned against a common time-aligning model. The resulting time-aligned speech model is then compared against each of the 10 resulting time-aligned pattern models. The time-alignment against a common time~alignment model causes the comparisons between the speech model and each of the pattern models to compensate for variations in the rate at which the portion of speech 15 is spoken, without requiring each portion of speech to be separately time-aligned ayainst each pattern model.
One method of continuous speech recognition 20 is described in U.S. Patent ~o. 4,803,729, cited above. In that patent, once the speech to be recognized is converted into a sequence of acoustic frsmes, the nest step consists of ~smooth frame labelling~. This smooth frame labelling method 25 associates a phonetic frame label with each frame of the speech to be labelled as a function Of: tl) the closeness with which the given frame compares to each of a plurality of the acoustic phonetic frame models;
(2) an indication of which one or ~ore of the 30 phonetic frame models most probably correspond with the frames which precede and follow the given frame, and; t3) the transition probability which indicates for the phonetic models associated with those W092/0058~ PCT/US91/~321 --10-- :
neighboring frames which phonetic models are most likely associated with the given frame.
Up to this time, no pre-fi'Ltering system has 5 been implemented which provides the desired speed and accuracy in a large vocabulary continuous speech `-~
recognition system. Thus, there remains a need for an improved continuous speech reco~11ition system which rapidly and accurately recognizes words ~;~
; lO contained in a sequence of continuous utterances.
- It is thus an object of the present invention to provide a continuous speech pre-filtering system for use in a continuous speech 15 recognition computer system.
~ ....... ... . .... .
W092/00~85 ~$~ PCT/US91/04321 SuMMARy OF THE INVE~TION
The system of the present invention relates to continuous speech processing systems for use in 5 large vocabulary continuous speech recognition systems.
Briefly, the system includes a stored ~ocabulary of word models. Utterances are temporally l0 segmented, and at least two non-successive segments are processed with respect to the vocabulary.
subset of word models is generated from the stored vocabulary based on p-redetermined criteria. The subset of word models defines a list of candidate 15 words which are represented by a signal generated by the system of the invention.
In one form, the system of the invention generates a succession of frame aata sets which begin 20 at a frame start time. Each of the frame data sets represents successive acoustic segments of utterances for a specified frame period. The frame data sets are each smoothed to generate a smooth frame data set, or smooth frame model, over a predetermined 25 number of frames.
The system also includes a vocabulary, trained into the system by the user, which may be stored within the system as clusters. In the 30 preferred embodiment of the invention, these clusters are wordstart clusters. Each cluster includes a plurality of word models which are acoustically similar over a succession of frame periods. Each word model includes nodes represPnting the .--.. . .. :., .. .. . ,......... . . ........ .... : . .
.. ~ .: . . . . . .... . . . .
. , . ., , ~ . . . . ...
W092/005~5 PCT/US91/~321 ~ 5 -12-probability distribution for the occurrence of a selected acoustic segment from that word model in a segment of the speech to be recognize!d.
The system generates a eluster score which represen~s the average negative-log likelihood of the smooth frames, from the previously identified smooth frame data set, evaluated using the probability model for the cluster against which the smooth frame model -l0 is being compared. Cluster sets having cluster scores above a predetermined acoustic threshold are removed from further consideration.
The cluster sets not removed from further 15 consideration are then unpacked to identify the individual words from each identified cluster. At this point, the system genarates a word score for each unpacked word resulting ~rom the first filter.
The word w ore represents the sum of the clust~r 20 score ~or the cluster ~rom which the word was unpacked, and a language model score. This word score is used to identify those words which are below a second combined threshold to form a word list. The word list generated by the system of the invention is 2~ then sent to a recognizer for a more lengthy word match.
One important aspect of the invention is in a controler which enables the system to initialize 30 times corresponding to the frame start time ~or each of a given frame data set. As a result of this -controler, the system enabl~s the identification of a preliminary list of candidate words for a continuous utterance. In prior art discrete speech systems, W092/00585 P~T/US9l/~4321 ~3~3 ~
such pre-filtering systems rely upon the occurrence of silence to mark the beginning of a spoken word.
The system of the present invention generates a preliminary candidate word list, by pre-filtering at 5 arbitrarily selected times, not necessarily for successive acoustic segments, and without identification of an anchor, such as silence.
W092/00~85 PC~/US91/~321 BRIEF DE$~RIpTIOr~Q F THE ~ L~9~
The foregoing and other objects of thisinvention, the various features thereof, as well as 5 the invention itself, may be more ful:Ly understood from the following description, when read together with accompanying drawings in which:
FIGUXE l is a schematic flow diagram of a lO preferred embodiment of a continuous speech recognition system according to the present invention;
~ :
FIGURE 2 is a schematic block diagram of the hardware used in a preferred embodiment of a 15 continuous speech recognition system according to the present invention; and FIGURE 3 is a schematic representation of ;, .~.
the smooth frame system of the present invention.
: .. . - - .- . . . : . - ~ . . : . -. -::.. : ~ : : . . .. . . .. . . .. .. .
. ~ . : .. -,: .. , . . : .. :: - . :, :,: . - :
:,. . ... . .: ,- ~ .. .
: , - .
:.
WO92J00585 ~ ~ PCT~US91/04321 DESÇR~TION ~F THE pREFERRED EMaQnlMEE~
The preferred embodiment of the present invention is a computer system designed to recognize 5 continuous speech input, in the form of utterances, by a user. The system may include a wide variety of computers and computer languages, provided that the computer includes the capacity to convert speech into digital signals, which signals may then be processed l0 by the computer. A specific version of the invention which has already been tested by the inventor is run on a Compaq Deskpro 386 personal computer manufactured by the Compaq Computer Company of Houston, Texas~ and is written in the C programming l5 language.
, FIGURE l is a general flow diagram showing the flow of information or data of the pr~sent invention. As shown, Phase I involves the flow of 20 data from the user, in the ~orm of utterances UT, through a series of transformers into transform data TR. The ~ransform data is concurrently sent to a recognizer R and a processing, or pre-filter system PF. While the recognizer R processes the transform 25 data TR, it queries the pre-filter system PF for data. Phase II involves the flow of transform data TR to the pre-filter system PF. Phase III then involves data flow of pre-filter data to a recognizer R upon query by the recognizer R. User U reGeives 30 recognizer data in the form of a monitor word display on a monitor M. Each phase will separately be discussed below. The system of the present invention is used during Phase II for converting transform data into pre-filter data whi~h is then sent for more 35 lengthy filtering at a recognizer (Phase III).
., .: , ::
:~ : .. : -: :
.:
~ 3~ -l6-Phase I
As shown in FIGURE 2, the present system of 5 the invention includes hardware for detecting utterances of spoken words by a user, and for converting the utterances into digital signals. The ~ ;
hardware for detecting utterances may include a microphone 40, which in the preferred embodiment is a lO head-mount microphone for easy user access. The hardware further inc}udes an A~D converter 42, a peak amplitude detector 44, a fast-fourier-transform (or ~FFTn) 46, an~ an utterance detector 48. The signals produced by each of these devices are supplied to a ~-15 programmable computer 50, such as a Compag model '386 computer, or its equivalent. A monitor 52, keyboard 54, and the computer interfaces 80 and B2, respectively, are generally of the type commonly used with such personal computers.
The,output of microphone 40 is connected to the input of the ~D converter 42. The A/D converter converts the analog signal produced by the microphone -40 into a sequence of digital values representing the 25 amplitude of the signal produced by the microphone 40 at a sequence of evenly spaced times. For purposes of the present invention, it is sufficient if the A/D
converter 42 is a codec chip giving 8-bit ~-law samples at a sample rate o 12000 hertz. These 30 samples are converted to 14-bit signed linearized samples which are supplied to the inputs of the peak-amplitude detector 44 and the FFT 4S.
- . .
W092~0585 ~ PCT/~S91/04321 FFT is well known in the art of digital signal processing. Such a transform converts a time domain signal, which is amplitude over time, into a frequency domain spectrum, which e~presses the 5 frequency content of the time domain signal~ In the preferred embodiment, the FFT 46 converts the output of the A~D converter 42 into a sequence of frames, each of which indicates the spectrum of the ~ignal supplied by the A~D converter in each of eight - lO different frequency bands. In the preferred embodiment FFT 46 produces one such frame every fiftieth of a secona.
The ~F~ 46 thus produces a vector of values lS corresponding to the energy amplitude in each of sisteen frequency bands. The FFT 46 converts each of these sisteen energy amplitude values into a sixteen-bit logarithmic value. This reduc~s subsequent computation since the sixteen-bit 20 logarithmic values are more simple to perform calculations on than the longer linear energy amplitude values produced by the FFT, while representing the same dynamic range. Ways $or improving logarithmic conversions are well known in 25 ~he art, one of the simplest bein~ use of a look-up table.
In addition, the FFT 46 modifies its output to simplify computations based on the amplitude of a 30 given frame. This modification is made by deriving an average value of the logarithms of the amplitudes for all sixteen bands. This average value is then subtracted from each of a predetermined group of -logarithms, representative of a predetermined group ~........ . , , . - , - , . . . ~ .
- ,, , : , ~ . , : :
:::- ~ : . : : :
,:,. .. . ,. .. ~ ~
WO 92/00~85 PCl~/US9~/04321 J~r- -18-of frequencies. In the preferred embodiment, the predetermined group consists of the first seven ;
logarithmic values, representing each of the f irst ~ `:
seven frequency bands.
S .
Thus, utterances are convert;ed from acoustic data to a sequence of vectors o k dimensions, each seguence of vectors identified as an acoustic frame. ~ ;
In the preferred embodiment, each frame represents 20 10 milliseconds of utterance, or duration, and k = 8.
Other devices, systems, and methods of transforming utterances received from a user i~nto data upon which pre-filtering systems may act are contemplated as being within the scope of this invention.
Phas~_Il The primary function of a pre-filter system in a speech recognition system is reduction of the 20 size of the vocabulary of words against which an utterance is compared. In a large vocabulary system, over 30,000 words may be contained in a predetermined vocabulary. The time required to test each acoustic segment of an utterance against each of those 30,000 25 words essentially prohibits real-time speech recognition. Thus, Phase II of the present invention involves the reduction of the vocabulary against which utterances are checked, in conjunction with reducing the number of acoustic segments which are 30 checked against the vocabulary to correctly identify the spoken word. The resulting pre-filter data consists of a preliminary word list which, during Phase III, is involved in a more lengthy word match.
~s :
' :. , , ~ . , . : :
.......... .. . . . . .. . ..
W092/00585 ;~ 3 ~ pcr/us91/o43~l --19-- ~ ~
Th~ prefiltering system of the present invention consi~ts of a rapid match, similar to that described in U.S. Patent No. 4,783,803. The purpose of the rapid match computation is to select, from a 5 relatively large initial vocabulary, a smaller, originally active, vocabulary of words judged most likely to correspond to a given spoken utterance.
Rapid match conserves computation by providing a preliminary word list upon which a more lengthy word 10 match is performed. The preliminary word list only includes words which have a reasonable chance of corresponding to the given utterance, as determined by the rapid match system.
At any given time t, the system of the invention provides a short list of words that might begin at that time, based on analysis o~ the sequence of W frame vectors:
vt, vt~l~.. -vt~w-l where w is the number of frames over which the system performs an evaluation, or the window width, and each v is a vector having k values for its associated 25 frame. In the preferred embodiment, window width w = 12. Although in the present embodiment the periods for each frame are the same, those values may differ in other embodiments.
From the sequence of frame vectors, the system generates s smooth frames Yl,... Ys wh ch are based on the vectors in the window, each of which are determined in accordance with one of the following:
WV92/0058~ PCT/USgltO4321 ~ 395 -20~
.
b-l ~
Y~ aivt~i ~' ~''' `
l s O ~, b-l Y2 = ~ aivt+c~i i-O
b-l Ys ~ i~o aiVt~(s-l)c~
wherein the indes b (e.g., b=4) is the smooth frame window width, coefficients ai are smoothing weights, and s is the number of smooth frames in the ~mooth frame data set (e.g., s~3). In the preferred 20 embodiment, ai are positive, and all ai = l/b such that the sum of all ai = l. In one aspect of the invention, the smooth frame window width may be e~pressed such that w - b ~ (s-l)c. In other aspects, window lengths may be variable. Variable c 25 is the offset of each of said smooth frame windows which may be set so that current windows are either overlapping or non-overlapping. In the preferred embodiment of the invention, c = 4, so that the current windows are non-overlapping. In other 30 aspects, the offsets of successive windows may be variable.
Other systems of data reduction may be included, instead of smoothing. For e2ample, the 35 data may be reduced by fitting it to a predetermined model. The type of data reduction system used will depend upon the computer system, the form of data manipulated, and the desired speed of the system.
.,' ~
~ ~, ., .. , . ~ . : .
WO 92/~K85 ~ ~ PCT/US9~/~321 ..
In addition, and as schematically shown in FIGURE 3, smooth frame data sets Yl (llOA), Y2 (110~) :
and Y3 tllOc) are established from frame vectors Vo ~ :
5 (lOOA), ...V11 ~lOOL), where WQ12~ b 4, ai.l, S=3, and c=4, the frame vectors have dimension k and the .
smooth frame data sets have dimension j.
In the illustrated embodiment, j~k=8, but in 10 other embodiments j may be less than k. The k ~alues for each of the transform data TR are identified in FIGU~E 3 as parameters P(l) through P(AMP). The first seven parameters of each frame, parameters P(l) - P(7), represent the logarithm of the energy in 15 their associated frequency band. The eighth ~:~
parameter, P~AMP), corresponds to the logarithm of the total energy contained in all eiyht frequency bands used to calculate each frame.
In the illustrated embodiment, the eight parameters of each smooth frame 110 are calculated from the respective eight parameters of four n~ .
individual frames 100. According to this process, ~ :
four sequential ~rames 100 are averagPd, to form one 25 smooth frame 110. In the illustrated embodiment, individual frames lOOA through lOOD are averaged to ~ : :
form the smooth frame llOA, individual frames lOOE
through lOOH are averaged to form the smooth rame 1103, and individual frames lOOI through lOOL are 30 averaged to form the smooth ~rame llOC. Since each .
frame represents eight k parameters, each of the eight j parameters of the resulting smooth frames :
llOA-C has a value which corresponds to an average of `~
the corresponding k parameters in each of the four ,:
, ::~ . : . : : . . - . . : : - . . : . , , - - .. , ~ .. .
.. . . ...
: . ; . .. ~ : : . .
W092/~585 PCT/US91tO4321 ~ 9 5 -2Z-: ' ' individual frames, i.e. llOA-D, lOOE-H, and lOOI-L.
The resultant smooth frames ~l~ Y2, and Y3, as so derived from utterances, are then evaluated against a vocabulary of stored word models as described in 5 detail below.
A predetermined, or "trained~, vocabulary of words resides in the system. The triaining of the system as to thè vocabulary, and the manipulating of lO such words into word models is described in more detail in U.S. Patent No. 4,837,831. To reduce the total number of words required to be stored in a system, and to reduce the number of individual words against which an utterance is to be matched, the word lS models for each word are grouped into a plurality of cluster sets, or clusters. In the preferred embodiment, these clusters are wordstart clusters, indicating that they are identified based on the beginning of each word.
The term ~cluster~ refers to a ~et in which a group of probabilistic models, such as the multipIè
æmooth frame models described above, are divided into subgroups, or clusters, of relatively similar 25 models. More specifically, it involves the set of acoustic probability distribution models which have associated therewith similar likelihoods of generating a predetermined acoustic description. A
wordstart cluster of the illustrated embodiment 30 consists of a collection of M acoustically similar words over the period of w frames from their beginnings, i.e. over the span of the window length.
: . . . . .
~ . , ~ . . . . " ..................... , .: ,,- ,. . :
.:. ~ : .. . . . . . ,.,: . :. :, -Wog~/00585 ~ 5 PCT/US91/04321 -23- `~
A word may appear in several different wordstart clusters, depending upon its speech contest. For esample, four word contexts may be demonstrated as:
silence -- word - silence silence -- word -- speech speech -- word -- silence speech -- word -- speech -~
Since the contest of a spoken word may drastically in~luence the acoustics, wordstart clusters may include the same or different words in different speech contests, yet still have acoustically similar lS descriptions. ~ ~
:- ' " ' .
Each wordstart cluster also consists of a ;
seguence of acoustic models for smooth fra~es that might be generated from that wordstart cluster. In 20 the preferred embodiment, a wordstart cluster is represented by r probability densities, or nodes, where l<r<s: fl, f2,...fr. Each smooth frame model, fi, being a distribution in k dimensional space.
To assess the evidence for whether the ;
current seguence of smooth frames, Yl,...,YS, represents words in a particular wordstart cluster, a cluster score Sy is computed for each sequence:
Sy= l/r i~l -log fi (Yi) ';'.,';
: ` : .. ,. .. : ., . : : : ,, . ~ :
W092/~585 PCT/US91/~321 2~ 3~5 -24-This represents the average negative-log likelihood of the smooth frame models fi evaluated using the probability model r for the wordstart cluster against which the smooth frame model is being compared. The ;~
5 score Sy will be computed for each of M wordstart clusters, Sl,... SM.
In the preferred embodiment, it is assumed . ~;~
that each probability density r is the product oÇ k lO univariate probability densities. That is, it is assumed that the k elements of each Y are independent. Furthermore, it is assumed that each Y
univariate density is a double e~ponential distribution, such that: .
k f~Y) ~.n ~}/2c;) exp tl~Yti] ~ j ]
~= ' 20 if y , (y~ Yt2)~.~.y~k))~ Thus, a particular ~i in a wordstart cluster is specified by two k parameters~ 2~ -~k~ with ~ representing mean values of that cluster; and k mean absolute deviationS: ~ 2~ - k The amount of computation required to identify wordstart clusters is reduced by having a typical smooth frame model, ftyp~ appear in multiple different wordstart clusters. In that instance, the 30 only computation required for a given Y and f is the negative-log probability: -(logftyp)(y). This value only needs to be calculated once, and the outcome is then added into diff~rent wordstart cluster scores having frame ftyp-.
.. : .. ~ .......... .. ..... . . .
: , ~.. ... .: : - . :. :: . .:: : ::., , ... , . : .
W092/0U58~ gl~35 PCT/US91/~4321 -25~
Once smooth frame scores Syi, for Yl through YM, are computed for all M wordstart clusters in a vocabulary, all wordstart clusters having Sy greater than an absolute acousti~ threshold value, Tl, are 5 disregarded from further consideration by the system. Prior art systems have used a relative threshold to derive a set of words, the threshold being set relative to the best word. In the present system, an absolute threshold is set, independent o l0 the best word. The threshold may be adjusted to accommodate the requirements of the recipient recognizer, to either reduce or increase the number of possible words upon which the recognizer must ~-~
perform a more lengthy word match. For e~ample, the 15 clustering threshold may be set to produce the smallest number of clusters which provide a relatively high level of performance for the æpeech recognition method of the present invention.
Once the wordstart clusters to be disregarded are removed ~rom consideration, the remaininq wordstart clusters form the preliminary cluster set. Each wordstart cluster in the preliminary cluster set is then unpacked so that the 25 individual words in each cluster may further be considered. For each word W from the unpacked wordstart clusters, the following score is derived:
SW = Sy + SL
in which Sw is the score for each unpacked word from each considered wordstart cluster, Sy is the wordstart cluster s~ore for each cluster from which - , , WO 92/00585 PClr/US91/04321
: . .
~ONTI~UOUS SPE~C~ PRO~ESSING S~S~
'' . ~.
5 BACKGROUN~ OF~E ~ISCLOSURE
While machines which recognize discrete, or isolated, words are well-known in the art, there is on-going research and development in constructing 10 large vocabulary systems for recognizing continuous speech. Esamples of discrete speech recognition systems are described in U.S. Patent ~oi 4,783,803 (Baker et al., Nov. 8, 19~3) and U.S. Patent No~
4,837,831 (Gillick et al., Jun. 6, 1989), both of 15 which are assigned to the assignee of the present application and are herein incorporated by reference. Generally, most speech recognition systems match an acoustic description of words, or parts of words, in a predetermined vocabulary against 20 a representation of the acoustic signal gen4rated by the utterance of the word to be recognized. One method for establishing the vocabulary is through the incorporation of a training process, by which a user ~$rains~ the computer to identify a certain word 25 having a specific acoustic segment.
A large number of calculations are reguired to identify a spoken word from a given large vocabulary in a speech recognition system. The 30 number of calculations would effectively prevent real-time identification of spoken words in such a ; speech recognition system. Pre-filtering is one means of identifying a preliminary set of word models :
. .
.
`,' . '' ' ' ' . ~ ', ' , '; ' - ' ' ',' : ` " ' ' ' ' : ~ .' ' .. ',;, ~ ' . ''' ' '. `' ' ' . . ' .' ' : ' ' '' ' :~ ' ." ' ' , ' `'' ' . ' ' ~ , .
W092/00585 PCT/US91/~321 ~ 9~ -2-a~ainst which an acoustic model may ba compared.
Pre-fil~ering enablqs such a speech recognition system to id~ntify spoken words in real-time.
S Present pre-filtering systems used in certain prior art discrete word recognition systems rely upon identification of the beginning of a word.
One esample, as described in detail in U.S. Patent ~;
No. 4,837,831, involves establishing an anchor for 10 each utterance of each word, which anchor then forms the starting point of calculations. That patent -discloses a system in which each vocabulary wora is represented by a sequence of statistical node models. Each such node model is a multi-dimensional 15 probability distribution, each dimension of which represents the probability distribution for the values of a given ~rame parameter if its associated frame belongs to the class of sounds represented by the node model. Each dimension of the probab~lity 20 distribution is represented by two statistics, an estimated espected value, or mu, and an estimated absolute deviation, or sigma. A method for deriving statistical models of a basic type is disclosed in U.S. Patent No. 4,903,305 (Gillick et al., Feb. 20, 25 1990), which is assigned to the assignee of the - present application and which is herein incorporated by reference.
Patent No. 4,903,305 discloses dividing the 30 nodes from many words into groups of nodes with similar statistical acoustic models, forming ;~
cluster~, and calculating a statistical acoustic ~ ; -model for each such cluster. The model for a given cluster is then used in place of the individual node ~ ~
.~ "
;':: . :'.'. ' , , , ' ' , ;' ' W092/0058~ PCT/US91/04321 _3_ 2~ 9~
mcdels from different words which have been grouped into that cluster, greatly reducing the number of models which have to be stored. One use of such cluster models is found in U.S. Patent No. 4,837j831 5 (Gillick et al., Jun. 6, 1989), cited above. In that patent, the acoustic description of the utterance to be recognized includes a succession of acoustic descriptions, representing a sequence of sounds associated with that utterance. A successio~ of the 10 acoustic representations from the utterance to be recognized are compared against the succession of acoustic models associated with each cluster model to produce a cluster likelihood score for each such cluster. These cluster models are ~wordstart~
15 models, that is, models which normally represent the initial portion of vocabulary words. The likelihood score produced for a given wordstart cluster model is used as an initial prefiltering score for each of its corresponding words. Extra steps are included which 20 compare acoustic models from portions of each such word following that re~resented by its wordstart model against acoustic descriptions from the ut~Erance to be recoqnized. Vocabulary words having the worst scoring wordstart models are pruned from 25 further consideration before performing extra prefilter scoring steps. The comparison between the succession of acoustic descriptions associated with the utterance to be recognized and the succession of acoustic models in such cluster model are performed 30 using linear time alignment. The acoustic description of the utterance to be recognized comprises a sequence of individual frames, ea~h describing the utterance during a brief period of time, and a series of smoothed frames, each derived ... ; ~ ~ :: ;
.. ~ ... .. -, ~ ~ .
~, . . . .. . .. .... . ..
W092/00~85 PCT/US91/04321 ~ ~S 9 ~ 5 _4_ f om a weighted average of a plurality of individual frames, is used in the comparison against the cluster ;~
model. ~ -Other methods for reducing the size of a set against which utterances are to ~e identified by the system include pruning, and le~ical retriev~l. U.S.
Patent No. 4,837,83l, cited above, discloses a method ~
of prefiltering which compares a sequence of models ,!
l0 from the speech to be recognized against corresponding se~uences of models which are associated with the beginning of one or more vocabulary ~ords. This method compensates for its use of linea~ time alignment by combining its 15 prefilter score produced by linear time alignment with another prefilter score which is calculated in a manner that is forgiving of changes in speaking rate or improper insertion or deletion of speech sounds.
The statistical method of hidden Mar~ov modeling, as incorporated into a continuous speech recognition system, is described in detail in U.S.
Patent No. 4,803,729 (Baker et al., Feb. 7, 1989), which is assigned to the assignee of this 25 application, and which is herein incorporated by reference. In that patent, use of the hidden Markov -model as a technique for determining which phonetic label should be associated with each frame is disclosed. That stochastic model, utilizing the 30 Markov assumption, greatly reduces the amount of computation required to solve complex statistical probability equations such as are necessary for word recognition systems. Although the hidden Markov ~-model increases the speed of such speech recognition - . . :: . -. '. ' . : . : : : ,: ,: ' : : : ~ , : . , , :
;;.. , . . .. ~.. ~ ., .. , ., :;, , ,...... .. , : -,, . . ~ . . : -. , W092/0~85 PCTtUS91/~321 systems, the problem remains in applying such a statistical method to continuous word recognition where the beginning of each word is contained in a continuous sequence of utterances.
Many discrete speech recognition systems use some form of a ~dynamic programming~ algorithm.
Dynamic programming is an algorithm for implementing certain calculations to which a hidden Markov ~odel lO leads. In the conte~t of ~peech recognition systems, dynamic programming perorms calculations to `~
det~ermine the probabilities that a hidden Markov Model would assign to given data.
Typically, speech recognition systems using dynamic programming represent speech as a seguence of frames, each of which represents the speech during a brief perioa of time, e.g., fiftieth or hundredth of a second. Such systems normally model each 20 vocabulary word with a sequence of node models which represent the seguence of differe~t frames associated with that word. Rouqhly speaking, the effect of dynamic programming, at the time of reco~nition, is to slide, or espand and contract, an operating 25 region, or window, relative to the frames of speech so as to align those frames with the node models of each vocabulary word to find a relatively optimal time alignment between those frames and those nodes.
The dynamic programming in effect calculatss the 30 probability that a given sequence of frames matches a given word model as a function of how well each such frame matcheæ the node mode~ with which it has been time-aligned. The word model which has the highest probability score is selected as corresponding to the .... ~ ......................................................... .
.: . . : . .. ;:: .. .:
W092t~85 PCT/US~1/04321 ~ 5 -6-speech. Dynamic programming obtains relative}y optimal time alignment between the speech to be recognized and the nodes of each worcl model, which compensates for the unavoidable differences in 5 speaking rates which occur in different utteranees of the same word. In addition, since dynamic programming scores words as a functic)n o~ the fit between word models and the speech over many frames, it usually gives the correct word the best s~ore, l0 even if the word has been slightly misspoken or obscursd by background sound. This is important, because humans often mispronounce words either by deleting or mispronouncing proper sounds, or by inserting sounds which do not belong. Even absent 15 any background sound, there is an inherent variability to human speech which must be considered in a speech recognition system.
Dynamic programming requires a tremendous 20 amount of computation. In order for it to find the optimal time alignment between a sequence of frames ~;~
and a sequence of node models, it must compare most frames against a plurality of node models. One method of reducing the amount of computation required 25 for dynamic programming is to use pruning. Pruning terminates the dynamic programming of a given portion of speech against a given word model if the partial probability score for that comparison drops below a given threshold. This greatly reduces computation, 30 since the dynamic programming of a given portion of speech against most words pro~uces poor dynamic programming scores rather guickly, enabling most words to be pruned after only a small percent of their comparison has been performed. Unfortunately, .
..... . . : . . .
. ,~ -.: . . . :
~ ": - . . - . ,,. . -W092/00585 P~T~US9ltQ4321 -7- ~
however, even with such pruning, the amount of computat-on required in large vocabulary systems of the type necessary to transcribe normal dictation. .
Continuous speech computational requirements are even greater. In continuous speech, the type of which humans normally speak, words are run together, without pauses or other simple cues to indicate where one word ends and the nest beyins. When a mechanical - l0 speech recognition system attempts to recognize continuous speech, it initially has no way of identifying-those portions of speech which correspond to individual words. Speakers of English apply a -host of duration and coarticulation rules when 15 combining phonemes into words and sentencPs, employing the same rules in recognizing spo~en -~
language. A speaker of English, given a phonemic spelling of an unfamiliar word from a dictionary, can pronounce the word recognizably or recognize the word 20 when it is spoken. On the other hand, it is impossible to put together an Ualphabet~ of recorded phonemes which, when concatenated, will sound like natural English words. It comes as a surprise to most speakers, for example, to discover that the 25 vowels in ~will~ and ~kick~, which are identical according to dictionary pronunciations, are as different in ~heir spectral characteristics as the vowels in ~not~ and ~nut", or that the vowel in ~size~ has more than twice the duration of the same 30 vowel in ~seismograph~.
One approach to this problem of recognizing discrete words in continous speech is to treat each successive frame of the speech as the possible -~
.
W092/00585 PCT/US9t/04321 ~?~ ~ ~S -8-, beginning of a new word, and to begin dynamic programming at each such frame against the start of each vocabulary word. However, this approach requires a tremendous amount of computation. A more 5 efficient method used in the prior art begins dynamic programming against new words only at those frames for which the dynamic programming indicates that the speaking of a previous word has just ended. Although this latter method is a considerable improvement, 10 there remains a need to further reduce computation by reducing the number of words against which dynamic programming is started when there is indication that a prior word has ended. ~
~. :
; 15 One such method of reducing the number of vocabulary words against which dynamic prograrnming is started in continuous speech recognition associates a phonetic label with each frame of the speech to be recognized. The phonetic label identiEies which ones 20 of a plurality of phonetic frame models compares most closely to a given frame of speech. The system then divides the speech into segments of successive frames associated with a single phonetic label. For each given segment, the system takes the sequence of five 25 phonetic labels associated with that segment plus the ne~t four segments, and refers to a look-up table to find the set o~ vocabulary words which pre~iously have been determined to have a reasonable probability of starting with that sequence of phonetic labels.
30 As referred to above, this is known as a ~wordstart cluster~. The system then limits the words against which dynamic programming could start in the g:;ven segment to words in that cluster or set.
. ~: ,: : .. . . .
:... . .. . .
' .: . . : ': . : : .
,: . ~. ... . ~ :~ .
WO 92/0ûS85 PCT/~JS~1/04321 _ 9 ~ ~ r~
A method for handling continuous speech recognition is described in U.S. Patent No. 4,805,219 (Baker et al., Feb. 14, 1989), which is assigned to the assignee o this application, and which is herein 5 incorporated by reference. In that patent, both the speech to be recognized and a plurality of speech pattern models ar~ time-aligned against a common time-aligning model. The resulting time-aligned speech model is then compared against each of the 10 resulting time-aligned pattern models. The time-alignment against a common time~alignment model causes the comparisons between the speech model and each of the pattern models to compensate for variations in the rate at which the portion of speech 15 is spoken, without requiring each portion of speech to be separately time-aligned ayainst each pattern model.
One method of continuous speech recognition 20 is described in U.S. Patent ~o. 4,803,729, cited above. In that patent, once the speech to be recognized is converted into a sequence of acoustic frsmes, the nest step consists of ~smooth frame labelling~. This smooth frame labelling method 25 associates a phonetic frame label with each frame of the speech to be labelled as a function Of: tl) the closeness with which the given frame compares to each of a plurality of the acoustic phonetic frame models;
(2) an indication of which one or ~ore of the 30 phonetic frame models most probably correspond with the frames which precede and follow the given frame, and; t3) the transition probability which indicates for the phonetic models associated with those W092/0058~ PCT/US91/~321 --10-- :
neighboring frames which phonetic models are most likely associated with the given frame.
Up to this time, no pre-fi'Ltering system has 5 been implemented which provides the desired speed and accuracy in a large vocabulary continuous speech `-~
recognition system. Thus, there remains a need for an improved continuous speech reco~11ition system which rapidly and accurately recognizes words ~;~
; lO contained in a sequence of continuous utterances.
- It is thus an object of the present invention to provide a continuous speech pre-filtering system for use in a continuous speech 15 recognition computer system.
~ ....... ... . .... .
W092/00~85 ~$~ PCT/US91/04321 SuMMARy OF THE INVE~TION
The system of the present invention relates to continuous speech processing systems for use in 5 large vocabulary continuous speech recognition systems.
Briefly, the system includes a stored ~ocabulary of word models. Utterances are temporally l0 segmented, and at least two non-successive segments are processed with respect to the vocabulary.
subset of word models is generated from the stored vocabulary based on p-redetermined criteria. The subset of word models defines a list of candidate 15 words which are represented by a signal generated by the system of the invention.
In one form, the system of the invention generates a succession of frame aata sets which begin 20 at a frame start time. Each of the frame data sets represents successive acoustic segments of utterances for a specified frame period. The frame data sets are each smoothed to generate a smooth frame data set, or smooth frame model, over a predetermined 25 number of frames.
The system also includes a vocabulary, trained into the system by the user, which may be stored within the system as clusters. In the 30 preferred embodiment of the invention, these clusters are wordstart clusters. Each cluster includes a plurality of word models which are acoustically similar over a succession of frame periods. Each word model includes nodes represPnting the .--.. . .. :., .. .. . ,......... . . ........ .... : . .
.. ~ .: . . . . . .... . . . .
. , . ., , ~ . . . . ...
W092/005~5 PCT/US91/~321 ~ 5 -12-probability distribution for the occurrence of a selected acoustic segment from that word model in a segment of the speech to be recognize!d.
The system generates a eluster score which represen~s the average negative-log likelihood of the smooth frames, from the previously identified smooth frame data set, evaluated using the probability model for the cluster against which the smooth frame model -l0 is being compared. Cluster sets having cluster scores above a predetermined acoustic threshold are removed from further consideration.
The cluster sets not removed from further 15 consideration are then unpacked to identify the individual words from each identified cluster. At this point, the system genarates a word score for each unpacked word resulting ~rom the first filter.
The word w ore represents the sum of the clust~r 20 score ~or the cluster ~rom which the word was unpacked, and a language model score. This word score is used to identify those words which are below a second combined threshold to form a word list. The word list generated by the system of the invention is 2~ then sent to a recognizer for a more lengthy word match.
One important aspect of the invention is in a controler which enables the system to initialize 30 times corresponding to the frame start time ~or each of a given frame data set. As a result of this -controler, the system enabl~s the identification of a preliminary list of candidate words for a continuous utterance. In prior art discrete speech systems, W092/00585 P~T/US9l/~4321 ~3~3 ~
such pre-filtering systems rely upon the occurrence of silence to mark the beginning of a spoken word.
The system of the present invention generates a preliminary candidate word list, by pre-filtering at 5 arbitrarily selected times, not necessarily for successive acoustic segments, and without identification of an anchor, such as silence.
W092/00~85 PC~/US91/~321 BRIEF DE$~RIpTIOr~Q F THE ~ L~9~
The foregoing and other objects of thisinvention, the various features thereof, as well as 5 the invention itself, may be more ful:Ly understood from the following description, when read together with accompanying drawings in which:
FIGUXE l is a schematic flow diagram of a lO preferred embodiment of a continuous speech recognition system according to the present invention;
~ :
FIGURE 2 is a schematic block diagram of the hardware used in a preferred embodiment of a 15 continuous speech recognition system according to the present invention; and FIGURE 3 is a schematic representation of ;, .~.
the smooth frame system of the present invention.
: .. . - - .- . . . : . - ~ . . : . -. -::.. : ~ : : . . .. . . .. . . .. .. .
. ~ . : .. -,: .. , . . : .. :: - . :, :,: . - :
:,. . ... . .: ,- ~ .. .
: , - .
:.
WO92J00585 ~ ~ PCT~US91/04321 DESÇR~TION ~F THE pREFERRED EMaQnlMEE~
The preferred embodiment of the present invention is a computer system designed to recognize 5 continuous speech input, in the form of utterances, by a user. The system may include a wide variety of computers and computer languages, provided that the computer includes the capacity to convert speech into digital signals, which signals may then be processed l0 by the computer. A specific version of the invention which has already been tested by the inventor is run on a Compaq Deskpro 386 personal computer manufactured by the Compaq Computer Company of Houston, Texas~ and is written in the C programming l5 language.
, FIGURE l is a general flow diagram showing the flow of information or data of the pr~sent invention. As shown, Phase I involves the flow of 20 data from the user, in the ~orm of utterances UT, through a series of transformers into transform data TR. The ~ransform data is concurrently sent to a recognizer R and a processing, or pre-filter system PF. While the recognizer R processes the transform 25 data TR, it queries the pre-filter system PF for data. Phase II involves the flow of transform data TR to the pre-filter system PF. Phase III then involves data flow of pre-filter data to a recognizer R upon query by the recognizer R. User U reGeives 30 recognizer data in the form of a monitor word display on a monitor M. Each phase will separately be discussed below. The system of the present invention is used during Phase II for converting transform data into pre-filter data whi~h is then sent for more 35 lengthy filtering at a recognizer (Phase III).
., .: , ::
:~ : .. : -: :
.:
~ 3~ -l6-Phase I
As shown in FIGURE 2, the present system of 5 the invention includes hardware for detecting utterances of spoken words by a user, and for converting the utterances into digital signals. The ~ ;
hardware for detecting utterances may include a microphone 40, which in the preferred embodiment is a lO head-mount microphone for easy user access. The hardware further inc}udes an A~D converter 42, a peak amplitude detector 44, a fast-fourier-transform (or ~FFTn) 46, an~ an utterance detector 48. The signals produced by each of these devices are supplied to a ~-15 programmable computer 50, such as a Compag model '386 computer, or its equivalent. A monitor 52, keyboard 54, and the computer interfaces 80 and B2, respectively, are generally of the type commonly used with such personal computers.
The,output of microphone 40 is connected to the input of the ~D converter 42. The A/D converter converts the analog signal produced by the microphone -40 into a sequence of digital values representing the 25 amplitude of the signal produced by the microphone 40 at a sequence of evenly spaced times. For purposes of the present invention, it is sufficient if the A/D
converter 42 is a codec chip giving 8-bit ~-law samples at a sample rate o 12000 hertz. These 30 samples are converted to 14-bit signed linearized samples which are supplied to the inputs of the peak-amplitude detector 44 and the FFT 4S.
- . .
W092~0585 ~ PCT/~S91/04321 FFT is well known in the art of digital signal processing. Such a transform converts a time domain signal, which is amplitude over time, into a frequency domain spectrum, which e~presses the 5 frequency content of the time domain signal~ In the preferred embodiment, the FFT 46 converts the output of the A~D converter 42 into a sequence of frames, each of which indicates the spectrum of the ~ignal supplied by the A~D converter in each of eight - lO different frequency bands. In the preferred embodiment FFT 46 produces one such frame every fiftieth of a secona.
The ~F~ 46 thus produces a vector of values lS corresponding to the energy amplitude in each of sisteen frequency bands. The FFT 46 converts each of these sisteen energy amplitude values into a sixteen-bit logarithmic value. This reduc~s subsequent computation since the sixteen-bit 20 logarithmic values are more simple to perform calculations on than the longer linear energy amplitude values produced by the FFT, while representing the same dynamic range. Ways $or improving logarithmic conversions are well known in 25 ~he art, one of the simplest bein~ use of a look-up table.
In addition, the FFT 46 modifies its output to simplify computations based on the amplitude of a 30 given frame. This modification is made by deriving an average value of the logarithms of the amplitudes for all sixteen bands. This average value is then subtracted from each of a predetermined group of -logarithms, representative of a predetermined group ~........ . , , . - , - , . . . ~ .
- ,, , : , ~ . , : :
:::- ~ : . : : :
,:,. .. . ,. .. ~ ~
WO 92/00~85 PCl~/US9~/04321 J~r- -18-of frequencies. In the preferred embodiment, the predetermined group consists of the first seven ;
logarithmic values, representing each of the f irst ~ `:
seven frequency bands.
S .
Thus, utterances are convert;ed from acoustic data to a sequence of vectors o k dimensions, each seguence of vectors identified as an acoustic frame. ~ ;
In the preferred embodiment, each frame represents 20 10 milliseconds of utterance, or duration, and k = 8.
Other devices, systems, and methods of transforming utterances received from a user i~nto data upon which pre-filtering systems may act are contemplated as being within the scope of this invention.
Phas~_Il The primary function of a pre-filter system in a speech recognition system is reduction of the 20 size of the vocabulary of words against which an utterance is compared. In a large vocabulary system, over 30,000 words may be contained in a predetermined vocabulary. The time required to test each acoustic segment of an utterance against each of those 30,000 25 words essentially prohibits real-time speech recognition. Thus, Phase II of the present invention involves the reduction of the vocabulary against which utterances are checked, in conjunction with reducing the number of acoustic segments which are 30 checked against the vocabulary to correctly identify the spoken word. The resulting pre-filter data consists of a preliminary word list which, during Phase III, is involved in a more lengthy word match.
~s :
' :. , , ~ . , . : :
.......... .. . . . . .. . ..
W092/00585 ;~ 3 ~ pcr/us91/o43~l --19-- ~ ~
Th~ prefiltering system of the present invention consi~ts of a rapid match, similar to that described in U.S. Patent No. 4,783,803. The purpose of the rapid match computation is to select, from a 5 relatively large initial vocabulary, a smaller, originally active, vocabulary of words judged most likely to correspond to a given spoken utterance.
Rapid match conserves computation by providing a preliminary word list upon which a more lengthy word 10 match is performed. The preliminary word list only includes words which have a reasonable chance of corresponding to the given utterance, as determined by the rapid match system.
At any given time t, the system of the invention provides a short list of words that might begin at that time, based on analysis o~ the sequence of W frame vectors:
vt, vt~l~.. -vt~w-l where w is the number of frames over which the system performs an evaluation, or the window width, and each v is a vector having k values for its associated 25 frame. In the preferred embodiment, window width w = 12. Although in the present embodiment the periods for each frame are the same, those values may differ in other embodiments.
From the sequence of frame vectors, the system generates s smooth frames Yl,... Ys wh ch are based on the vectors in the window, each of which are determined in accordance with one of the following:
WV92/0058~ PCT/USgltO4321 ~ 395 -20~
.
b-l ~
Y~ aivt~i ~' ~''' `
l s O ~, b-l Y2 = ~ aivt+c~i i-O
b-l Ys ~ i~o aiVt~(s-l)c~
wherein the indes b (e.g., b=4) is the smooth frame window width, coefficients ai are smoothing weights, and s is the number of smooth frames in the ~mooth frame data set (e.g., s~3). In the preferred 20 embodiment, ai are positive, and all ai = l/b such that the sum of all ai = l. In one aspect of the invention, the smooth frame window width may be e~pressed such that w - b ~ (s-l)c. In other aspects, window lengths may be variable. Variable c 25 is the offset of each of said smooth frame windows which may be set so that current windows are either overlapping or non-overlapping. In the preferred embodiment of the invention, c = 4, so that the current windows are non-overlapping. In other 30 aspects, the offsets of successive windows may be variable.
Other systems of data reduction may be included, instead of smoothing. For e2ample, the 35 data may be reduced by fitting it to a predetermined model. The type of data reduction system used will depend upon the computer system, the form of data manipulated, and the desired speed of the system.
.,' ~
~ ~, ., .. , . ~ . : .
WO 92/~K85 ~ ~ PCT/US9~/~321 ..
In addition, and as schematically shown in FIGURE 3, smooth frame data sets Yl (llOA), Y2 (110~) :
and Y3 tllOc) are established from frame vectors Vo ~ :
5 (lOOA), ...V11 ~lOOL), where WQ12~ b 4, ai.l, S=3, and c=4, the frame vectors have dimension k and the .
smooth frame data sets have dimension j.
In the illustrated embodiment, j~k=8, but in 10 other embodiments j may be less than k. The k ~alues for each of the transform data TR are identified in FIGU~E 3 as parameters P(l) through P(AMP). The first seven parameters of each frame, parameters P(l) - P(7), represent the logarithm of the energy in 15 their associated frequency band. The eighth ~:~
parameter, P~AMP), corresponds to the logarithm of the total energy contained in all eiyht frequency bands used to calculate each frame.
In the illustrated embodiment, the eight parameters of each smooth frame 110 are calculated from the respective eight parameters of four n~ .
individual frames 100. According to this process, ~ :
four sequential ~rames 100 are averagPd, to form one 25 smooth frame 110. In the illustrated embodiment, individual frames lOOA through lOOD are averaged to ~ : :
form the smooth frame llOA, individual frames lOOE
through lOOH are averaged to form the smooth rame 1103, and individual frames lOOI through lOOL are 30 averaged to form the smooth ~rame llOC. Since each .
frame represents eight k parameters, each of the eight j parameters of the resulting smooth frames :
llOA-C has a value which corresponds to an average of `~
the corresponding k parameters in each of the four ,:
, ::~ . : . : : . . - . . : : - . . : . , , - - .. , ~ .. .
.. . . ...
: . ; . .. ~ : : . .
W092/~585 PCT/US91tO4321 ~ 9 5 -2Z-: ' ' individual frames, i.e. llOA-D, lOOE-H, and lOOI-L.
The resultant smooth frames ~l~ Y2, and Y3, as so derived from utterances, are then evaluated against a vocabulary of stored word models as described in 5 detail below.
A predetermined, or "trained~, vocabulary of words resides in the system. The triaining of the system as to thè vocabulary, and the manipulating of lO such words into word models is described in more detail in U.S. Patent No. 4,837,831. To reduce the total number of words required to be stored in a system, and to reduce the number of individual words against which an utterance is to be matched, the word lS models for each word are grouped into a plurality of cluster sets, or clusters. In the preferred embodiment, these clusters are wordstart clusters, indicating that they are identified based on the beginning of each word.
The term ~cluster~ refers to a ~et in which a group of probabilistic models, such as the multipIè
æmooth frame models described above, are divided into subgroups, or clusters, of relatively similar 25 models. More specifically, it involves the set of acoustic probability distribution models which have associated therewith similar likelihoods of generating a predetermined acoustic description. A
wordstart cluster of the illustrated embodiment 30 consists of a collection of M acoustically similar words over the period of w frames from their beginnings, i.e. over the span of the window length.
: . . . . .
~ . , ~ . . . . " ..................... , .: ,,- ,. . :
.:. ~ : .. . . . . . ,.,: . :. :, -Wog~/00585 ~ 5 PCT/US91/04321 -23- `~
A word may appear in several different wordstart clusters, depending upon its speech contest. For esample, four word contexts may be demonstrated as:
silence -- word - silence silence -- word -- speech speech -- word -- silence speech -- word -- speech -~
Since the contest of a spoken word may drastically in~luence the acoustics, wordstart clusters may include the same or different words in different speech contests, yet still have acoustically similar lS descriptions. ~ ~
:- ' " ' .
Each wordstart cluster also consists of a ;
seguence of acoustic models for smooth fra~es that might be generated from that wordstart cluster. In 20 the preferred embodiment, a wordstart cluster is represented by r probability densities, or nodes, where l<r<s: fl, f2,...fr. Each smooth frame model, fi, being a distribution in k dimensional space.
To assess the evidence for whether the ;
current seguence of smooth frames, Yl,...,YS, represents words in a particular wordstart cluster, a cluster score Sy is computed for each sequence:
Sy= l/r i~l -log fi (Yi) ';'.,';
: ` : .. ,. .. : ., . : : : ,, . ~ :
W092/~585 PCT/US91/~321 2~ 3~5 -24-This represents the average negative-log likelihood of the smooth frame models fi evaluated using the probability model r for the wordstart cluster against which the smooth frame model is being compared. The ;~
5 score Sy will be computed for each of M wordstart clusters, Sl,... SM.
In the preferred embodiment, it is assumed . ~;~
that each probability density r is the product oÇ k lO univariate probability densities. That is, it is assumed that the k elements of each Y are independent. Furthermore, it is assumed that each Y
univariate density is a double e~ponential distribution, such that: .
k f~Y) ~.n ~}/2c;) exp tl~Yti] ~ j ]
~= ' 20 if y , (y~ Yt2)~.~.y~k))~ Thus, a particular ~i in a wordstart cluster is specified by two k parameters~ 2~ -~k~ with ~ representing mean values of that cluster; and k mean absolute deviationS: ~ 2~ - k The amount of computation required to identify wordstart clusters is reduced by having a typical smooth frame model, ftyp~ appear in multiple different wordstart clusters. In that instance, the 30 only computation required for a given Y and f is the negative-log probability: -(logftyp)(y). This value only needs to be calculated once, and the outcome is then added into diff~rent wordstart cluster scores having frame ftyp-.
.. : .. ~ .......... .. ..... . . .
: , ~.. ... .: : - . :. :: . .:: : ::., , ... , . : .
W092/0U58~ gl~35 PCT/US91/~4321 -25~
Once smooth frame scores Syi, for Yl through YM, are computed for all M wordstart clusters in a vocabulary, all wordstart clusters having Sy greater than an absolute acousti~ threshold value, Tl, are 5 disregarded from further consideration by the system. Prior art systems have used a relative threshold to derive a set of words, the threshold being set relative to the best word. In the present system, an absolute threshold is set, independent o l0 the best word. The threshold may be adjusted to accommodate the requirements of the recipient recognizer, to either reduce or increase the number of possible words upon which the recognizer must ~-~
perform a more lengthy word match. For e~ample, the 15 clustering threshold may be set to produce the smallest number of clusters which provide a relatively high level of performance for the æpeech recognition method of the present invention.
Once the wordstart clusters to be disregarded are removed ~rom consideration, the remaininq wordstart clusters form the preliminary cluster set. Each wordstart cluster in the preliminary cluster set is then unpacked so that the 25 individual words in each cluster may further be considered. For each word W from the unpacked wordstart clusters, the following score is derived:
SW = Sy + SL
in which Sw is the score for each unpacked word from each considered wordstart cluster, Sy is the wordstart cluster s~ore for each cluster from which - , , WO 92/00585 PClr/US91/04321
3 r;
the unpacked words are derived, and SL is the language score.
The score for each unpacked word, Sw, is 5 identified based on a variation of Bayes's Theorem which may be expressed as:
p(Wi¦A) = [P~Alwi?P~wi)~ ~ p~A~
' wherein the terms represent the following: p(Wi ¦A) represents the posterior probability of a word Wi, given the acoustics, or spoken word, A; p~A¦Wi) represents the probability of a spoken word, given a 15 word Wi; p(Wi) represents the p~obability of occurrence of a word Wi based on the frequence of occurrence of the word in the language; and, p~A) represents the probability of the ocurrence o the acoustics, A, in the utterance, averaged over all 20 possible words. Each Wi i8 identi~ied for which p(Wi¦A) is biggest, which is equivalent to each Wi for which p~A¦Wi)p(Wi) is biggest, which is eguivalent to each Wi for which lo~p~A¦Wi)~
log~p(Wi) is biggest. This may ~inally be e~pressed 25 as each Wi for which the following is smallest:
SWi = -log~p~Alwi~] - K logtp¢wi)]
wherein -log~p(A¦Wi)] is representative of the 30 acous~ic model score, and -X logtp(Wi)~ is representative of the language model score. The scale factor K is included to accommodate for the fact that each frame is not probabilistically independent. Thus, in the preferred embodiment a 35 scale factor X is incorporated in order to correctly ., :: . :... .. .. ; : :: :. ,. ~. .
, .... .. .. . . . .. ..
W092/00585 ~ 5 P~T/US~1/04321 add language model score, SL. In various forms of the invention, X may be preset based on ~mpirical data, or K may be dynamically determined by way of an error rate feedback loop.
Once a score Swi is derived for each word W
in each of the unpac~ed preliminary clusters, it is compared against a combined threshold T2. This threshold, similar to Tl, is empirically derived.
l0 Alternatively, it may be dynamically adjusted, and is dependent upon the trade-off between accuracy and speed. In the preferred embodiment, all words havin~
a score Sw which is less than the threshold T2 are not further considered. That is, words having the 15 best score, relative to the ~econd threshold, become part of the word list sent to the recognizer for more lengthy consideration. This second threshold is set to produce the smallest feasible number of words, as determined either by the recognizer, the user, or the 20 system designer. Obviously, the larger the word list, the slower will be the overall ~ystem, since a - lengthy word match is performed for each word in the word list. on the other hand, if the word list is too small, the true match word may be omitted. In 25 the preferred embodiment, for e~ample, if the threshold T2 is set such that the word list contains not more than Wx words that begin at frame ft, the system then identifies the ~est scorinq W~ words from the unpacked clusters. However, if there are fewer 30 than Wx words which meet the T2 criteria, all words will be returned from the wordstart clusters meeting the T2 threshold criteria.
.
.
,...... . .: . . . .: . :
W092/00~85 PCT/V~9t~04321 ~ 95 -28-An important aspect of the present invention is that at any time frame t, a word list may be derived by the system. Thus, in operation, the system may operate on a ~sliding window" basis, 5 deriving a word list for any predetermined g intervals. For e~ample, for g=4, the system will identify a candidate word list for reporting to the recognizer at every fourth frame. This sliding window enables the system to recognize discrete words l0 within a continuous utterance, since dynamic programming acts independent of the identification of an anchor.
In the preferred embodiment, the system 15 described above removes any duplicate words which may occur in the word list prior to sending the word list to the recognizer. Duplicates can occur, as e~plained above, since a given word may occur in several different wordstart clusters. This may be 20 accomplished by any means well known and available to those skilled in the art. One esample is by recognizing words from different clusters which have the same identification.
Using the system described above, pre-filter data PF is identified to be sent to a recognizer.
The pre-filter data is essentially the result of the implementation of two filters: one for deriving a preliminary cluster data set of wordstart clusters 30 satisfying a specified absolute threshold criteria;
and, one for deriving a word list of words satisfying a specified combined threshold criteria. Thus, the pre-filter data comprises words upon which a more lengthy match may be performed.
; . .
.. . ., . . , . ~ .
PHASE III
Once pre-filter data PF, in the form of a word list, is available from the system described above, a recognizer then performs a more lengthy word match to identify the single best word from the available word list which most closely matches the acoustic, or spoken, word. Continuous speech recognizers are described in detail in U.S. Patent No. 4,783,803, cited above. As in that patent, the speech recognition system of the preferred embodiment utilizes dynamic programming in performing the more intensive word matching.
In addition to dynamic programming, further processing may be performed during this phase. For example, a system for determining the feasibility of a selected word model, or endscores may be included.
Other filtering systems may also be included during this phase. , PHASE IV -- ;
This phase of the system involves reporting the candidate word model which has the highest probability of matching the utterance at a time t.
In the preferred embodiment, reporting involves displaying the selected word on a monitor to be viewed by user who uttered the recognized speech.
Standard monitors presently commercially available are all appropriate reporting devices.
W092/~585 P~T/~S~I/W321 -3~-9~
The system may also include means ~or enabling a user to correct or modify tAe word selection of the system. This may include a menu showing a predetermined number of alternate 5 selections for a given word or phrase. The system may include means for the user to actively edit the selected word or phrase, or otherwise manipulate the selected word or phrase identified by the system.
The invention may be embodiled in other specific forms without departing from the spirit or essential characteristics thereof. The present embodiments are therefore to be considered in all respects as illustrative and not restrictive, the 15 scope of the invention being indicated by the apppended claims rather than by the foregoing description, and all changes which come within the meaning and range of equivalency of the claims are therefore intended to be embraced therein.
What is claimed is:
~ . . . . . .
, :. ~. ,: ~ ~ .
-: : , . -: .
the unpacked words are derived, and SL is the language score.
The score for each unpacked word, Sw, is 5 identified based on a variation of Bayes's Theorem which may be expressed as:
p(Wi¦A) = [P~Alwi?P~wi)~ ~ p~A~
' wherein the terms represent the following: p(Wi ¦A) represents the posterior probability of a word Wi, given the acoustics, or spoken word, A; p~A¦Wi) represents the probability of a spoken word, given a 15 word Wi; p(Wi) represents the p~obability of occurrence of a word Wi based on the frequence of occurrence of the word in the language; and, p~A) represents the probability of the ocurrence o the acoustics, A, in the utterance, averaged over all 20 possible words. Each Wi i8 identi~ied for which p(Wi¦A) is biggest, which is equivalent to each Wi for which p~A¦Wi)p(Wi) is biggest, which is eguivalent to each Wi for which lo~p~A¦Wi)~
log~p(Wi) is biggest. This may ~inally be e~pressed 25 as each Wi for which the following is smallest:
SWi = -log~p~Alwi~] - K logtp¢wi)]
wherein -log~p(A¦Wi)] is representative of the 30 acous~ic model score, and -X logtp(Wi)~ is representative of the language model score. The scale factor K is included to accommodate for the fact that each frame is not probabilistically independent. Thus, in the preferred embodiment a 35 scale factor X is incorporated in order to correctly ., :: . :... .. .. ; : :: :. ,. ~. .
, .... .. .. . . . .. ..
W092/00585 ~ 5 P~T/US~1/04321 add language model score, SL. In various forms of the invention, X may be preset based on ~mpirical data, or K may be dynamically determined by way of an error rate feedback loop.
Once a score Swi is derived for each word W
in each of the unpac~ed preliminary clusters, it is compared against a combined threshold T2. This threshold, similar to Tl, is empirically derived.
l0 Alternatively, it may be dynamically adjusted, and is dependent upon the trade-off between accuracy and speed. In the preferred embodiment, all words havin~
a score Sw which is less than the threshold T2 are not further considered. That is, words having the 15 best score, relative to the ~econd threshold, become part of the word list sent to the recognizer for more lengthy consideration. This second threshold is set to produce the smallest feasible number of words, as determined either by the recognizer, the user, or the 20 system designer. Obviously, the larger the word list, the slower will be the overall ~ystem, since a - lengthy word match is performed for each word in the word list. on the other hand, if the word list is too small, the true match word may be omitted. In 25 the preferred embodiment, for e~ample, if the threshold T2 is set such that the word list contains not more than Wx words that begin at frame ft, the system then identifies the ~est scorinq W~ words from the unpacked clusters. However, if there are fewer 30 than Wx words which meet the T2 criteria, all words will be returned from the wordstart clusters meeting the T2 threshold criteria.
.
.
,...... . .: . . . .: . :
W092/00~85 PCT/V~9t~04321 ~ 95 -28-An important aspect of the present invention is that at any time frame t, a word list may be derived by the system. Thus, in operation, the system may operate on a ~sliding window" basis, 5 deriving a word list for any predetermined g intervals. For e~ample, for g=4, the system will identify a candidate word list for reporting to the recognizer at every fourth frame. This sliding window enables the system to recognize discrete words l0 within a continuous utterance, since dynamic programming acts independent of the identification of an anchor.
In the preferred embodiment, the system 15 described above removes any duplicate words which may occur in the word list prior to sending the word list to the recognizer. Duplicates can occur, as e~plained above, since a given word may occur in several different wordstart clusters. This may be 20 accomplished by any means well known and available to those skilled in the art. One esample is by recognizing words from different clusters which have the same identification.
Using the system described above, pre-filter data PF is identified to be sent to a recognizer.
The pre-filter data is essentially the result of the implementation of two filters: one for deriving a preliminary cluster data set of wordstart clusters 30 satisfying a specified absolute threshold criteria;
and, one for deriving a word list of words satisfying a specified combined threshold criteria. Thus, the pre-filter data comprises words upon which a more lengthy match may be performed.
; . .
.. . ., . . , . ~ .
PHASE III
Once pre-filter data PF, in the form of a word list, is available from the system described above, a recognizer then performs a more lengthy word match to identify the single best word from the available word list which most closely matches the acoustic, or spoken, word. Continuous speech recognizers are described in detail in U.S. Patent No. 4,783,803, cited above. As in that patent, the speech recognition system of the preferred embodiment utilizes dynamic programming in performing the more intensive word matching.
In addition to dynamic programming, further processing may be performed during this phase. For example, a system for determining the feasibility of a selected word model, or endscores may be included.
Other filtering systems may also be included during this phase. , PHASE IV -- ;
This phase of the system involves reporting the candidate word model which has the highest probability of matching the utterance at a time t.
In the preferred embodiment, reporting involves displaying the selected word on a monitor to be viewed by user who uttered the recognized speech.
Standard monitors presently commercially available are all appropriate reporting devices.
W092/~585 P~T/~S~I/W321 -3~-9~
The system may also include means ~or enabling a user to correct or modify tAe word selection of the system. This may include a menu showing a predetermined number of alternate 5 selections for a given word or phrase. The system may include means for the user to actively edit the selected word or phrase, or otherwise manipulate the selected word or phrase identified by the system.
The invention may be embodiled in other specific forms without departing from the spirit or essential characteristics thereof. The present embodiments are therefore to be considered in all respects as illustrative and not restrictive, the 15 scope of the invention being indicated by the apppended claims rather than by the foregoing description, and all changes which come within the meaning and range of equivalency of the claims are therefore intended to be embraced therein.
What is claimed is:
~ . . . . . .
, :. ~. ,: ~ ~ .
-: : , . -: .
Claims (27)
1. A system for processing continuous speech, said speech including a succession of utterances, comprising:
A. means for storing a plurality of word models;
B. means for identifying a succession of temporal segments of said utterances;
C. means selectively operable within at least two non-successive ones of said segments for identifying a subset of said plurality of word models meeting predetermined criteria, said subset defining a list of candidate words; and D. means for generating a signal representative of said list of candidate words.
A. means for storing a plurality of word models;
B. means for identifying a succession of temporal segments of said utterances;
C. means selectively operable within at least two non-successive ones of said segments for identifying a subset of said plurality of word models meeting predetermined criteria, said subset defining a list of candidate words; and D. means for generating a signal representative of said list of candidate words.
2. A system for processing continuous speech, said speech including a succession of utterances, comprising:
A. cluster data storage means for storing a plurality of M cluster data sets, C1,...,CM, where M is an integer greater than 1, each of said cluster data sets including data representative of a plurality of word models;
B. frame data means for generating a succession of w frame data sets vt, vt+1,...vt+w-1, beginning at a frame start time t during said succession of utterances, where w is an integer greater than 1, said succession of frame data sets being representative of a corresponding succession of temporal segments of said utterances, each of said frame data sets including k values representative of different frame parameters, where k ? 1;
C. data reduction means selectively operable on said w frame data sets for generating s reduced frame data sets where s < w, each of said reduced frame data sets being related to an associated plurality of said frame data sets and including j values representative of different reduced frame data set parameters;
D. scoring means for evaluating each of said reduced frame data sets against succession of said cluster data sets to generate a cluster score SYi for each of said cluster data sets, where i = 1,...,M;
E. selectively operable identifying means for identifying each of said word models of said cluster data sets having a cluster score bearing a predetermined relation to at least one threshold score T, said identified word models defining a candidate word list;
F. control means for determining said frame start times t, where successive start times t are spaced apart by more than the duration of the intervening temporal segment; and G. means for generating a signal representative of said candidate word list.
A. cluster data storage means for storing a plurality of M cluster data sets, C1,...,CM, where M is an integer greater than 1, each of said cluster data sets including data representative of a plurality of word models;
B. frame data means for generating a succession of w frame data sets vt, vt+1,...vt+w-1, beginning at a frame start time t during said succession of utterances, where w is an integer greater than 1, said succession of frame data sets being representative of a corresponding succession of temporal segments of said utterances, each of said frame data sets including k values representative of different frame parameters, where k ? 1;
C. data reduction means selectively operable on said w frame data sets for generating s reduced frame data sets where s < w, each of said reduced frame data sets being related to an associated plurality of said frame data sets and including j values representative of different reduced frame data set parameters;
D. scoring means for evaluating each of said reduced frame data sets against succession of said cluster data sets to generate a cluster score SYi for each of said cluster data sets, where i = 1,...,M;
E. selectively operable identifying means for identifying each of said word models of said cluster data sets having a cluster score bearing a predetermined relation to at least one threshold score T, said identified word models defining a candidate word list;
F. control means for determining said frame start times t, where successive start times t are spaced apart by more than the duration of the intervening temporal segment; and G. means for generating a signal representative of said candidate word list.
3. A system according to claim 2 wherein said cluster data storage means and said frame data means are adapted whereby each of said frame data sets are associated with duration D1, and wherein said cluster data sets are each associated with duration D2, such that:
D1 ? D2.
D1 ? D2.
4. A system according to claim 2 wherein said cluster data storage means is adapted whereby said cluster data sets are wordstart cluster data sets.
5. A system according to claim 2 wherein said cluster data storage means is adapted whereby said word models of each of said cluster data sets correspond to acoustically similar utterances over a succession of no more than w frame data sets.
6. A system according to claim 2 wherein said data reduction means includes smooth frame means for processing said frame data sets whereby said reduced frame data sets are smoothed frame data sets.
7. A system according to claim 2 wherein said cluster data storage means is adapted whereby each of said word models includes r node data vectors f1,..., fr, where r ? s, each of said node data vectors being representative of a characteristic related to the occurrence of a selected one of acoustic segments from each word of a set of words associated with said cluster data sets in said acoustic segment of said utterances.
8. A system according to claim 2 wherein said cluster data storage means is adapted whereby said characteristic related to the occurrence of a selected one of acoustic segments comprises a probability distribution.
9. A system according to claim 2 wherein said data reduction means is adapted whereby j = k.
10. A system according to claim 2 wherein said identifying means includes a first identifying means for identifying each of said cluster data sets having a cluster score measured with respect to a first predetermined threshold T1, said identified cluster data sets defining preliminary cluster data sets.
11. A system according to claim 2 wherein said identifying means further comprises a selectively operable second identifying means for identifying each of said word models of each of said preliminary cluster data sets having a word score SW measured with respect to a second threshold T2, said word score being representative of the sum of said cluster score Sy of said cluster data set associated with each of said word models and a language model score SL, said sum represented by:
SW = Sy + SL.
SW = Sy + SL.
12. A system according to claim 6 wherein said smooth frame means is adapted whereby each of said smooth frame data sets is associated with b of said frame data sets and being determined in accordance with:
;
wherein said b are integers greater than 1, ai are predetermined weighting coefficients, c defines an offset of each smooth frame data set with respect to the nest previous smooth frame data set.
;
wherein said b are integers greater than 1, ai are predetermined weighting coefficients, c defines an offset of each smooth frame data set with respect to the nest previous smooth frame data set.
13. A system according to claim 12 wherein said data reduction means is adapted whereby w = b + (s-1)c.
14. A system according to claim 13 wherein said data reduction means is adapted whereby w=12.
15. A system according to claim 13 wherein said data reduction means is adapted whereby k=8.
16. A system according to claim 15 wherein said data reduction means is adapted whereby s=3.
17. A system according to claim 16 wherein said data reduction means is adapted whereby b-4.
18. A system according to claim 17 wherein said data reduction means is adapted whereby c=4.
19. A system according to claim 12 wherein said data reduction means is adapted whereby said weighting coefficients ai - 1/b.
20. A system according to claim 13 wherein w=12, k=8, s=3, b=4, c=4, and ai=1/b.
21. A system according to claim 8 wherein said cluster data storage means is adapted whereby said cluster score Sy corresponds to:
wherein Yi are ones of Y1, Y2,...,Ys, and r are said node data vectors f1,...,fr for corresponding ones of said word models.
wherein Yi are ones of Y1, Y2,...,Ys, and r are said node data vectors f1,...,fr for corresponding ones of said word models.
22. A continuous speech processing method comprising the steps of:
A. storing a plurality of M cluster data sets, C1,...,CM, where M is an integer greater than 1, each of said cluster data sets including data representative of a plurality of word models;
B. generating a succession of w frame data sets vt, vt+1,...vt+w-1, beginning at a frame start time t during said succession of utterances, where w is an integer greater than 1, each of said frame data sets being representative of successive acoustic segments of utterances for a frame period, each of said frame data sets including k values representative of different frame parameters where k ? 1;
C. reducing w of said frame data sets to generate s reduced frame data sets, where s < w, each of said reduced frame data sets being related to an associated plurality of said frame data sets an including j values related to the k values of said associated frame data sets, where j ? k;
D. evaluating said reduced frame data sets with a succession of said cluster data sets to generate a cluster score Syi for each of said cluster data sets, where i=1,...,M;
E. identifying each of said word models having a cluster score bearing a predetermined relation to at least one threshold score T, said identified word models defining a word list;
F. determining said frame start times t, where successive start times t are spaced apart bymore than the duration of the intervening temporal segments; and G. generating a signal representative of said candidate word list.
A. storing a plurality of M cluster data sets, C1,...,CM, where M is an integer greater than 1, each of said cluster data sets including data representative of a plurality of word models;
B. generating a succession of w frame data sets vt, vt+1,...vt+w-1, beginning at a frame start time t during said succession of utterances, where w is an integer greater than 1, each of said frame data sets being representative of successive acoustic segments of utterances for a frame period, each of said frame data sets including k values representative of different frame parameters where k ? 1;
C. reducing w of said frame data sets to generate s reduced frame data sets, where s < w, each of said reduced frame data sets being related to an associated plurality of said frame data sets an including j values related to the k values of said associated frame data sets, where j ? k;
D. evaluating said reduced frame data sets with a succession of said cluster data sets to generate a cluster score Syi for each of said cluster data sets, where i=1,...,M;
E. identifying each of said word models having a cluster score bearing a predetermined relation to at least one threshold score T, said identified word models defining a word list;
F. determining said frame start times t, where successive start times t are spaced apart bymore than the duration of the intervening temporal segments; and G. generating a signal representative of said candidate word list.
23. A method according to claim 22 wherein said reducing step C further comprises the substep of smoothing said frame data sets.
24. A method according to claim 22 wherein said smoothing substep includes smoothing in accordance with:
wherein said b are integers greater than 1, ai are predetermined weighting coefficients, c defines an offset of each smooth frame data set with respect to the nest previous smooth frame data set.
wherein said b are integers greater than 1, ai are predetermined weighting coefficients, c defines an offset of each smooth frame data set with respect to the nest previous smooth frame data set.
25. A method according to claim 24 wherein said smoothing substep includes setting w=12, k=8, s=3, b=4, c=4, and ai-1/b.
26. A method according to claim 22 wherein said evaluating step includes the substep of determining said cluster score Sy in accordance with:
wherein Yi are ones of Y1, Y2,...,Ys, and r are said node data vectors f1...,fr for corresponding ones of said word models.
wherein Yi are ones of Y1, Y2,...,Ys, and r are said node data vectors f1...,fr for corresponding ones of said word models.
27. A method for processing continuous speech, said speech including a succession of utterances, comprising the steps of:
A. storing a plurality of word models;
B. identifying a succession of temporal segments of said utterances;
C. within at least two non-successive ones of said segments, selectively identifying a subset of said plurality of word models meeting predetermined criteria, said subset defining a list of candidate words; and D. generating a signal representative of said list of candidate words.
A. storing a plurality of word models;
B. identifying a succession of temporal segments of said utterances;
C. within at least two non-successive ones of said segments, selectively identifying a subset of said plurality of word models meeting predetermined criteria, said subset defining a list of candidate words; and D. generating a signal representative of said list of candidate words.
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US542,520 | 1990-06-22 | ||
US07/542,520 US5202952A (en) | 1990-06-22 | 1990-06-22 | Large-vocabulary continuous speech prefiltering and processing system |
Publications (1)
Publication Number | Publication Date |
---|---|
CA2085895A1 true CA2085895A1 (en) | 1991-12-23 |
Family
ID=24164175
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CA002085895A Abandoned CA2085895A1 (en) | 1990-06-22 | 1991-06-17 | Continuous speech processing system |
Country Status (7)
Country | Link |
---|---|
US (2) | US5202952A (en) |
EP (1) | EP0535146B1 (en) |
JP (1) | JPH06501319A (en) |
AT (1) | ATE158889T1 (en) |
CA (1) | CA2085895A1 (en) |
DE (1) | DE69127818T2 (en) |
WO (1) | WO1992000585A1 (en) |
Families Citing this family (242)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5202952A (en) * | 1990-06-22 | 1993-04-13 | Dragon Systems, Inc. | Large-vocabulary continuous speech prefiltering and processing system |
US5682464A (en) * | 1992-06-29 | 1997-10-28 | Kurzweil Applied Intelligence, Inc. | Word model candidate preselection for speech recognition using precomputed matrix of thresholded distance values |
GB9223066D0 (en) * | 1992-11-04 | 1992-12-16 | Secr Defence | Children's speech training aid |
US5983179A (en) * | 1992-11-13 | 1999-11-09 | Dragon Systems, Inc. | Speech recognition system which turns its voice response on for confirmation when it has been turned off without confirmation |
US5463641A (en) * | 1993-07-16 | 1995-10-31 | At&T Ipm Corp. | Tailored error protection |
CA2126380C (en) * | 1993-07-22 | 1998-07-07 | Wu Chou | Minimum error rate training of combined string models |
JPH0793370A (en) * | 1993-09-27 | 1995-04-07 | Hitachi Device Eng Co Ltd | Gene data base retrieval system |
US5699456A (en) * | 1994-01-21 | 1997-12-16 | Lucent Technologies Inc. | Large vocabulary connected speech recognition system and method of language representation using evolutional grammar to represent context free grammars |
JP2775140B2 (en) * | 1994-03-18 | 1998-07-16 | 株式会社エイ・ティ・アール人間情報通信研究所 | Pattern recognition method, voice recognition method, and voice recognition device |
US5606643A (en) * | 1994-04-12 | 1997-02-25 | Xerox Corporation | Real-time audio recording system for automatic speaker indexing |
DE69525178T2 (en) * | 1994-10-25 | 2002-08-29 | British Telecomm | ANNOUNCEMENT SERVICES WITH VOICE INPUT |
GB2328055B (en) * | 1995-01-26 | 1999-04-21 | Apple Computer | System and method for generating and using context dependent sub-syllable models to recognize a tonal language |
NZ302748A (en) * | 1995-03-07 | 1999-04-29 | British Telecomm | Speech recognition using a priori weighting values |
US5745875A (en) * | 1995-04-14 | 1998-04-28 | Stenovations, Inc. | Stenographic translation system automatic speech recognition |
DE69622565T2 (en) * | 1995-05-26 | 2003-04-03 | Speechworks Int Inc | METHOD AND DEVICE FOR DYNAMICALLY ADJUSTING A LARGE VOCABULARY LANGUAGE IDENTIFICATION SYSTEM AND USING RESTRICTIONS FROM A DATABASE IN A VOICE LABELING LANGUAGE IDENTIFICATION SYSTEM |
US5680511A (en) * | 1995-06-07 | 1997-10-21 | Dragon Systems, Inc. | Systems and methods for word recognition |
US5719996A (en) * | 1995-06-30 | 1998-02-17 | Motorola, Inc. | Speech recognition in selective call systems |
US5903864A (en) * | 1995-08-30 | 1999-05-11 | Dragon Systems | Speech recognition |
US5852801A (en) * | 1995-10-04 | 1998-12-22 | Apple Computer, Inc. | Method and apparatus for automatically invoking a new word module for unrecognized user input |
US5970457A (en) * | 1995-10-25 | 1999-10-19 | Johns Hopkins University | Voice command and control medical care system |
US5765132A (en) * | 1995-10-26 | 1998-06-09 | Dragon Systems, Inc. | Building speech models for new words in a multi-word utterance |
US5732265A (en) * | 1995-11-02 | 1998-03-24 | Microsoft Corporation | Storage optimizing encoder and method |
US5799276A (en) * | 1995-11-07 | 1998-08-25 | Accent Incorporated | Knowledge-based speech recognition system and methods having frame length computed based upon estimated pitch period of vocalic intervals |
US6601027B1 (en) | 1995-11-13 | 2003-07-29 | Scansoft, Inc. | Position manipulation in speech recognition |
US5799279A (en) * | 1995-11-13 | 1998-08-25 | Dragon Systems, Inc. | Continuous speech recognition of text and commands |
US5794189A (en) * | 1995-11-13 | 1998-08-11 | Dragon Systems, Inc. | Continuous speech recognition |
US6064959A (en) * | 1997-03-28 | 2000-05-16 | Dragon Systems, Inc. | Error correction in speech recognition |
US7207804B2 (en) * | 1996-03-27 | 2007-04-24 | Michael Hersh | Application of multi-media technology to computer administered vocational personnel assessment |
AU730985B2 (en) | 1996-03-27 | 2001-03-22 | Michael Hersh | Application of multi-media technology to psychological and educational assessment tools |
EP0800158B1 (en) * | 1996-04-01 | 2001-06-27 | Hewlett-Packard Company, A Delaware Corporation | Word spotting |
US5870706A (en) * | 1996-04-10 | 1999-02-09 | Lucent Technologies, Inc. | Method and apparatus for an improved language recognition system |
US5819225A (en) * | 1996-05-30 | 1998-10-06 | International Business Machines Corporation | Display indications of speech processing states in speech recognition system |
US5822730A (en) * | 1996-08-22 | 1998-10-13 | Dragon Systems, Inc. | Lexical tree pre-filtering in speech recognition |
US5995928A (en) * | 1996-10-02 | 1999-11-30 | Speechworks International, Inc. | Method and apparatus for continuous spelling speech recognition with early identification |
US6151575A (en) * | 1996-10-28 | 2000-11-21 | Dragon Systems, Inc. | Rapid adaptation of speech models |
US5884258A (en) * | 1996-10-31 | 1999-03-16 | Microsoft Corporation | Method and system for editing phrases during continuous speech recognition |
US5829000A (en) * | 1996-10-31 | 1998-10-27 | Microsoft Corporation | Method and system for correcting misrecognized spoken words or phrases |
US5899976A (en) * | 1996-10-31 | 1999-05-04 | Microsoft Corporation | Method and system for buffering recognized words during speech recognition |
US5950160A (en) * | 1996-10-31 | 1999-09-07 | Microsoft Corporation | Method and system for displaying a variable number of alternative words during speech recognition |
US6122613A (en) * | 1997-01-30 | 2000-09-19 | Dragon Systems, Inc. | Speech recognition using multiple recognizers (selectively) applied to the same input sample |
US6224636B1 (en) * | 1997-02-28 | 2001-05-01 | Dragon Systems, Inc. | Speech recognition using nonparametric speech models |
US6029124A (en) * | 1997-02-21 | 2000-02-22 | Dragon Systems, Inc. | Sequential, nonparametric speech recognition and speaker identification |
US5946654A (en) * | 1997-02-21 | 1999-08-31 | Dragon Systems, Inc. | Speaker identification using unsupervised speech models |
US6167377A (en) * | 1997-03-28 | 2000-12-26 | Dragon Systems, Inc. | Speech recognition language models |
US6212498B1 (en) | 1997-03-28 | 2001-04-03 | Dragon Systems, Inc. | Enrollment in speech recognition |
US6374219B1 (en) * | 1997-09-19 | 2002-04-16 | Microsoft Corporation | System for using silence in speech recognition |
US6076056A (en) * | 1997-09-19 | 2000-06-13 | Microsoft Corporation | Speech recognition system for recognizing continuous and isolated speech |
US5983177A (en) * | 1997-12-18 | 1999-11-09 | Nortel Networks Corporation | Method and apparatus for obtaining transcriptions from multiple training utterances |
US8855998B2 (en) | 1998-03-25 | 2014-10-07 | International Business Machines Corporation | Parsing culturally diverse names |
US6963871B1 (en) * | 1998-03-25 | 2005-11-08 | Language Analysis Systems, Inc. | System and method for adaptive multi-cultural searching and matching of personal names |
US8812300B2 (en) | 1998-03-25 | 2014-08-19 | International Business Machines Corporation | Identifying related names |
US6243678B1 (en) * | 1998-04-07 | 2001-06-05 | Lucent Technologies Inc. | Method and system for dynamic speech recognition using free-phone scoring |
US6163768A (en) * | 1998-06-15 | 2000-12-19 | Dragon Systems, Inc. | Non-interactive enrollment in speech recognition |
US6195635B1 (en) | 1998-08-13 | 2001-02-27 | Dragon Systems, Inc. | User-cued speech recognition |
US6266637B1 (en) * | 1998-09-11 | 2001-07-24 | International Business Machines Corporation | Phrase splicing and variable substitution using a trainable speech synthesizer |
US7263489B2 (en) * | 1998-12-01 | 2007-08-28 | Nuance Communications, Inc. | Detection of characteristics of human-machine interactions for dialog customization and analysis |
KR100828884B1 (en) | 1999-03-05 | 2008-05-09 | 캐논 가부시끼가이샤 | Database annotation and retrieval |
US7058573B1 (en) * | 1999-04-20 | 2006-06-06 | Nuance Communications Inc. | Speech recognition system to selectively utilize different speech recognition techniques over multiple speech recognition passes |
US7120582B1 (en) | 1999-09-07 | 2006-10-10 | Dragon Systems, Inc. | Expanding an effective vocabulary of a speech recognition system |
JP4067716B2 (en) * | 1999-09-13 | 2008-03-26 | 三菱電機株式会社 | Standard pattern creating apparatus and method, and recording medium |
US7310600B1 (en) | 1999-10-28 | 2007-12-18 | Canon Kabushiki Kaisha | Language recognition using a similarity measure |
JP3689670B2 (en) * | 1999-10-28 | 2005-08-31 | キヤノン株式会社 | Pattern matching method and apparatus |
US6882970B1 (en) | 1999-10-28 | 2005-04-19 | Canon Kabushiki Kaisha | Language recognition using sequence frequency |
US8645137B2 (en) | 2000-03-16 | 2014-02-04 | Apple Inc. | Fast, language-independent method for user authentication by voice |
GB0011798D0 (en) * | 2000-05-16 | 2000-07-05 | Canon Kk | Database annotation and retrieval |
GB0015233D0 (en) * | 2000-06-21 | 2000-08-16 | Canon Kk | Indexing method and apparatus |
GB2364814A (en) * | 2000-07-12 | 2002-02-06 | Canon Kk | Speech recognition |
GB0023930D0 (en) | 2000-09-29 | 2000-11-15 | Canon Kk | Database annotation and retrieval |
GB0027178D0 (en) | 2000-11-07 | 2000-12-27 | Canon Kk | Speech processing system |
GB0028277D0 (en) | 2000-11-20 | 2001-01-03 | Canon Kk | Speech processing system |
ITFI20010199A1 (en) | 2001-10-22 | 2003-04-22 | Riccardo Vieri | SYSTEM AND METHOD TO TRANSFORM TEXTUAL COMMUNICATIONS INTO VOICE AND SEND THEM WITH AN INTERNET CONNECTION TO ANY TELEPHONE SYSTEM |
DE10207895B4 (en) | 2002-02-23 | 2005-11-03 | Harman Becker Automotive Systems Gmbh | Method for speech recognition and speech recognition system |
US8229957B2 (en) | 2005-04-22 | 2012-07-24 | Google, Inc. | Categorizing objects, such as documents and/or clusters, with respect to a taxonomy and data structures derived from such categorization |
US7797159B2 (en) * | 2002-09-16 | 2010-09-14 | Movius Interactive Corporation | Integrated voice navigation system and method |
US8849648B1 (en) * | 2002-12-24 | 2014-09-30 | At&T Intellectual Property Ii, L.P. | System and method of extracting clauses for spoken language understanding |
US8818793B1 (en) | 2002-12-24 | 2014-08-26 | At&T Intellectual Property Ii, L.P. | System and method of extracting clauses for spoken language understanding |
AU2003273357A1 (en) * | 2003-02-21 | 2004-09-17 | Harman Becker Automotive Systems Gmbh | Speech recognition system |
US7480615B2 (en) * | 2004-01-20 | 2009-01-20 | Microsoft Corporation | Method of speech recognition using multimodal variational inference with switching state space models |
US20070005586A1 (en) * | 2004-03-30 | 2007-01-04 | Shaefer Leonard A Jr | Parsing culturally diverse names |
DE102004055230B3 (en) * | 2004-11-16 | 2006-07-20 | Siemens Ag | Method for speech recognition from a predefinable vocabulary |
EP1734509A1 (en) * | 2005-06-17 | 2006-12-20 | Harman Becker Automotive Systems GmbH | Method and system for speech recognition |
US8677377B2 (en) | 2005-09-08 | 2014-03-18 | Apple Inc. | Method and apparatus for building an intelligent automated assistant |
US7633076B2 (en) | 2005-09-30 | 2009-12-15 | Apple Inc. | Automated response to and sensing of user activity in portable devices |
ATE405088T1 (en) | 2006-08-30 | 2008-08-15 | Research In Motion Ltd | METHOD, COMPUTER PROGRAM AND APPARATUS FOR CLEARLY IDENTIFYING A CONTACT IN A CONTACT DATABASE THROUGH A SINGLE VOICE UTTERANCE |
US9318108B2 (en) | 2010-01-18 | 2016-04-19 | Apple Inc. | Intelligent automated assistant |
WO2008033439A2 (en) | 2006-09-13 | 2008-03-20 | Aurilab, Llc | Robust pattern recognition system and method using socratic agents |
US9830912B2 (en) | 2006-11-30 | 2017-11-28 | Ashwin P Rao | Speak and touch auto correction interface |
US8977255B2 (en) | 2007-04-03 | 2015-03-10 | Apple Inc. | Method and system for operating a multi-function portable electronic device using voice-activation |
US9053089B2 (en) | 2007-10-02 | 2015-06-09 | Apple Inc. | Part-of-speech tagging using latent analogy |
US8620662B2 (en) | 2007-11-20 | 2013-12-31 | Apple Inc. | Context-aware unit selection |
US10002189B2 (en) | 2007-12-20 | 2018-06-19 | Apple Inc. | Method and apparatus for searching using an active ontology |
US9330720B2 (en) | 2008-01-03 | 2016-05-03 | Apple Inc. | Methods and apparatus for altering audio output signals |
EP2081185B1 (en) | 2008-01-16 | 2014-11-26 | Nuance Communications, Inc. | Speech recognition on large lists using fragments |
US8065143B2 (en) | 2008-02-22 | 2011-11-22 | Apple Inc. | Providing text input using speech data and non-speech data |
US8996376B2 (en) | 2008-04-05 | 2015-03-31 | Apple Inc. | Intelligent text-to-speech conversion |
US10496753B2 (en) | 2010-01-18 | 2019-12-03 | Apple Inc. | Automatically adapting user interfaces for hands-free interaction |
US8464150B2 (en) | 2008-06-07 | 2013-06-11 | Apple Inc. | Automatic language identification for dynamic text processing |
US20100030549A1 (en) | 2008-07-31 | 2010-02-04 | Lee Michael M | Mobile device having human language translation capability with positional feedback |
US8768702B2 (en) | 2008-09-05 | 2014-07-01 | Apple Inc. | Multi-tiered voice feedback in an electronic device |
US8898568B2 (en) | 2008-09-09 | 2014-11-25 | Apple Inc. | Audio user interface |
JP2012502325A (en) * | 2008-09-10 | 2012-01-26 | ジュンヒュン スン | Multi-mode articulation integration for device interfacing |
US8583418B2 (en) | 2008-09-29 | 2013-11-12 | Apple Inc. | Systems and methods of detecting language and natural language strings for text to speech synthesis |
US8712776B2 (en) | 2008-09-29 | 2014-04-29 | Apple Inc. | Systems and methods for selective text to speech synthesis |
US8676904B2 (en) | 2008-10-02 | 2014-03-18 | Apple Inc. | Electronic devices with voice command and contextual data processing capabilities |
US8645131B2 (en) * | 2008-10-17 | 2014-02-04 | Ashwin P. Rao | Detecting segments of speech from an audio stream |
US9922640B2 (en) | 2008-10-17 | 2018-03-20 | Ashwin P Rao | System and method for multimodal utterance detection |
WO2010067118A1 (en) | 2008-12-11 | 2010-06-17 | Novauris Technologies Limited | Speech recognition involving a mobile device |
US8862252B2 (en) | 2009-01-30 | 2014-10-14 | Apple Inc. | Audio user interface for displayless electronic device |
EP2221806B1 (en) * | 2009-02-19 | 2013-07-17 | Nuance Communications, Inc. | Speech recognition of a list entry |
US8380507B2 (en) | 2009-03-09 | 2013-02-19 | Apple Inc. | Systems and methods for determining the language to use for speech generated by a text to speech engine |
US9858925B2 (en) | 2009-06-05 | 2018-01-02 | Apple Inc. | Using context information to facilitate processing of commands in a virtual assistant |
US10241752B2 (en) | 2011-09-30 | 2019-03-26 | Apple Inc. | Interface for a virtual digital assistant |
US10241644B2 (en) | 2011-06-03 | 2019-03-26 | Apple Inc. | Actionable reminder entries |
US10706373B2 (en) | 2011-06-03 | 2020-07-07 | Apple Inc. | Performing actions associated with task items that represent tasks to perform |
US10540976B2 (en) | 2009-06-05 | 2020-01-21 | Apple Inc. | Contextual voice commands |
US9431006B2 (en) | 2009-07-02 | 2016-08-30 | Apple Inc. | Methods and apparatuses for automatic speech recognition |
US8682649B2 (en) | 2009-11-12 | 2014-03-25 | Apple Inc. | Sentiment prediction from textual data |
US8600743B2 (en) | 2010-01-06 | 2013-12-03 | Apple Inc. | Noise profile determination for voice-related feature |
US8311838B2 (en) | 2010-01-13 | 2012-11-13 | Apple Inc. | Devices and methods for identifying a prompt corresponding to a voice input in a sequence of prompts |
US8381107B2 (en) | 2010-01-13 | 2013-02-19 | Apple Inc. | Adaptive audio feedback system and method |
US10705794B2 (en) | 2010-01-18 | 2020-07-07 | Apple Inc. | Automatically adapting user interfaces for hands-free interaction |
US10553209B2 (en) | 2010-01-18 | 2020-02-04 | Apple Inc. | Systems and methods for hands-free notification summaries |
US10679605B2 (en) | 2010-01-18 | 2020-06-09 | Apple Inc. | Hands-free list-reading by intelligent automated assistant |
US10276170B2 (en) | 2010-01-18 | 2019-04-30 | Apple Inc. | Intelligent automated assistant |
WO2011089450A2 (en) | 2010-01-25 | 2011-07-28 | Andrew Peter Nelson Jerram | Apparatuses, methods and systems for a digital conversation management platform |
US20110184736A1 (en) * | 2010-01-26 | 2011-07-28 | Benjamin Slotznick | Automated method of recognizing inputted information items and selecting information items |
US8682667B2 (en) | 2010-02-25 | 2014-03-25 | Apple Inc. | User profiling for selecting user specific voice input processing information |
US8713021B2 (en) | 2010-07-07 | 2014-04-29 | Apple Inc. | Unsupervised document clustering using latent semantic density analysis |
US8719006B2 (en) | 2010-08-27 | 2014-05-06 | Apple Inc. | Combined statistical and rule-based part-of-speech tagging for text-to-speech synthesis |
US8719014B2 (en) | 2010-09-27 | 2014-05-06 | Apple Inc. | Electronic device with text error correction based on voice recognition data |
US10762293B2 (en) | 2010-12-22 | 2020-09-01 | Apple Inc. | Using parts-of-speech tagging and named entity recognition for spelling correction |
US10515147B2 (en) | 2010-12-22 | 2019-12-24 | Apple Inc. | Using statistical language models for contextual lookup |
US8781836B2 (en) | 2011-02-22 | 2014-07-15 | Apple Inc. | Hearing assistance system for providing consistent human speech |
US9262612B2 (en) | 2011-03-21 | 2016-02-16 | Apple Inc. | Device access using voice authentication |
US10057736B2 (en) | 2011-06-03 | 2018-08-21 | Apple Inc. | Active transport based notifications |
US20120310642A1 (en) | 2011-06-03 | 2012-12-06 | Apple Inc. | Automatically creating a mapping between text data and audio data |
US8812294B2 (en) | 2011-06-21 | 2014-08-19 | Apple Inc. | Translating phrases from one language into another using an order-based set of declarative rules |
US8706472B2 (en) | 2011-08-11 | 2014-04-22 | Apple Inc. | Method for disambiguating multiple readings in language conversion |
US8994660B2 (en) | 2011-08-29 | 2015-03-31 | Apple Inc. | Text correction processing |
US8914288B2 (en) | 2011-09-01 | 2014-12-16 | At&T Intellectual Property I, L.P. | System and method for advanced turn-taking for interactive spoken dialog systems |
US8762156B2 (en) | 2011-09-28 | 2014-06-24 | Apple Inc. | Speech recognition repair using contextual information |
US10134385B2 (en) | 2012-03-02 | 2018-11-20 | Apple Inc. | Systems and methods for name pronunciation |
US9483461B2 (en) | 2012-03-06 | 2016-11-01 | Apple Inc. | Handling speech synthesis of content for multiple languages |
US9280610B2 (en) | 2012-05-14 | 2016-03-08 | Apple Inc. | Crowd sourcing information to fulfill user requests |
US8775442B2 (en) | 2012-05-15 | 2014-07-08 | Apple Inc. | Semantic search using a single-source semantic model |
US10417037B2 (en) | 2012-05-15 | 2019-09-17 | Apple Inc. | Systems and methods for integrating third party services with a digital assistant |
US9721563B2 (en) | 2012-06-08 | 2017-08-01 | Apple Inc. | Name recognition system |
WO2013185109A2 (en) | 2012-06-08 | 2013-12-12 | Apple Inc. | Systems and methods for recognizing textual identifiers within a plurality of words |
US9495129B2 (en) | 2012-06-29 | 2016-11-15 | Apple Inc. | Device, method, and user interface for voice-activated navigation and browsing of a document |
US9576574B2 (en) | 2012-09-10 | 2017-02-21 | Apple Inc. | Context-sensitive handling of interruptions by intelligent digital assistant |
US9547647B2 (en) | 2012-09-19 | 2017-01-17 | Apple Inc. | Voice-based media searching |
US8935167B2 (en) | 2012-09-25 | 2015-01-13 | Apple Inc. | Exemplar-based latent perceptual modeling for automatic speech recognition |
CN113470640B (en) | 2013-02-07 | 2022-04-26 | 苹果公司 | Voice trigger of digital assistant |
US10652394B2 (en) | 2013-03-14 | 2020-05-12 | Apple Inc. | System and method for processing voicemail |
US10572476B2 (en) | 2013-03-14 | 2020-02-25 | Apple Inc. | Refining a search based on schedule items |
US9733821B2 (en) | 2013-03-14 | 2017-08-15 | Apple Inc. | Voice control to diagnose inadvertent activation of accessibility features |
US9977779B2 (en) | 2013-03-14 | 2018-05-22 | Apple Inc. | Automatic supplementation of word correction dictionaries |
US9368114B2 (en) | 2013-03-14 | 2016-06-14 | Apple Inc. | Context-sensitive handling of interruptions |
US10642574B2 (en) | 2013-03-14 | 2020-05-05 | Apple Inc. | Device, method, and graphical user interface for outputting captions |
US10078487B2 (en) | 2013-03-15 | 2018-09-18 | Apple Inc. | Context-sensitive handling of interruptions |
WO2014144579A1 (en) | 2013-03-15 | 2014-09-18 | Apple Inc. | System and method for updating an adaptive speech recognition model |
US10748529B1 (en) | 2013-03-15 | 2020-08-18 | Apple Inc. | Voice activated device for use with a voice-based digital assistant |
CN110096712B (en) | 2013-03-15 | 2023-06-20 | 苹果公司 | User training through intelligent digital assistant |
CN105027197B (en) | 2013-03-15 | 2018-12-14 | 苹果公司 | Training at least partly voice command system |
US9390708B1 (en) * | 2013-05-28 | 2016-07-12 | Amazon Technologies, Inc. | Low latency and memory efficient keywork spotting |
WO2014197336A1 (en) | 2013-06-07 | 2014-12-11 | Apple Inc. | System and method for detecting errors in interactions with a voice-based digital assistant |
US9582608B2 (en) | 2013-06-07 | 2017-02-28 | Apple Inc. | Unified ranking with entropy-weighted information for phrase-based semantic auto-completion |
WO2014197334A2 (en) | 2013-06-07 | 2014-12-11 | Apple Inc. | System and method for user-specified pronunciation of words for speech synthesis and recognition |
WO2014197335A1 (en) | 2013-06-08 | 2014-12-11 | Apple Inc. | Interpreting and acting upon commands that involve sharing information with remote devices |
US10176167B2 (en) | 2013-06-09 | 2019-01-08 | Apple Inc. | System and method for inferring user intent from speech inputs |
KR101922663B1 (en) | 2013-06-09 | 2018-11-28 | 애플 인크. | Device, method, and graphical user interface for enabling conversation persistence across two or more instances of a digital assistant |
KR101809808B1 (en) | 2013-06-13 | 2017-12-15 | 애플 인크. | System and method for emergency calls initiated by voice command |
DE112014003653B4 (en) | 2013-08-06 | 2024-04-18 | Apple Inc. | Automatically activate intelligent responses based on activities from remote devices |
US10296160B2 (en) | 2013-12-06 | 2019-05-21 | Apple Inc. | Method for extracting salient dialog usage from live data |
US9620105B2 (en) | 2014-05-15 | 2017-04-11 | Apple Inc. | Analyzing audio input for efficient speech and music recognition |
US10592095B2 (en) | 2014-05-23 | 2020-03-17 | Apple Inc. | Instantaneous speaking of content on touch devices |
US9502031B2 (en) | 2014-05-27 | 2016-11-22 | Apple Inc. | Method for supporting dynamic grammars in WFST-based ASR |
JP6596924B2 (en) * | 2014-05-29 | 2019-10-30 | 日本電気株式会社 | Audio data processing apparatus, audio data processing method, and audio data processing program |
US9785630B2 (en) | 2014-05-30 | 2017-10-10 | Apple Inc. | Text prediction using combined word N-gram and unigram language models |
TWI566107B (en) | 2014-05-30 | 2017-01-11 | 蘋果公司 | Method for processing a multi-part voice command, non-transitory computer readable storage medium and electronic device |
US10170123B2 (en) | 2014-05-30 | 2019-01-01 | Apple Inc. | Intelligent assistant for home automation |
US9633004B2 (en) | 2014-05-30 | 2017-04-25 | Apple Inc. | Better resolution when referencing to concepts |
US10078631B2 (en) | 2014-05-30 | 2018-09-18 | Apple Inc. | Entropy-guided text prediction using combined word and character n-gram language models |
US9842101B2 (en) | 2014-05-30 | 2017-12-12 | Apple Inc. | Predictive conversion of language input |
US9760559B2 (en) | 2014-05-30 | 2017-09-12 | Apple Inc. | Predictive text input |
US9430463B2 (en) | 2014-05-30 | 2016-08-30 | Apple Inc. | Exemplar-based natural language processing |
US9715875B2 (en) | 2014-05-30 | 2017-07-25 | Apple Inc. | Reducing the need for manual start/end-pointing and trigger phrases |
US9734193B2 (en) | 2014-05-30 | 2017-08-15 | Apple Inc. | Determining domain salience ranking from ambiguous words in natural speech |
US10289433B2 (en) | 2014-05-30 | 2019-05-14 | Apple Inc. | Domain specific language for encoding assistant dialog |
US10659851B2 (en) | 2014-06-30 | 2020-05-19 | Apple Inc. | Real-time digital assistant knowledge updates |
US9338493B2 (en) | 2014-06-30 | 2016-05-10 | Apple Inc. | Intelligent automated assistant for TV user interactions |
US10446141B2 (en) | 2014-08-28 | 2019-10-15 | Apple Inc. | Automatic speech recognition based on user feedback |
US9818400B2 (en) | 2014-09-11 | 2017-11-14 | Apple Inc. | Method and apparatus for discovering trending terms in speech requests |
US10789041B2 (en) | 2014-09-12 | 2020-09-29 | Apple Inc. | Dynamic thresholds for always listening speech trigger |
US10074360B2 (en) | 2014-09-30 | 2018-09-11 | Apple Inc. | Providing an indication of the suitability of speech recognition |
US10127911B2 (en) | 2014-09-30 | 2018-11-13 | Apple Inc. | Speaker identification and unsupervised speaker adaptation techniques |
US9886432B2 (en) | 2014-09-30 | 2018-02-06 | Apple Inc. | Parsimonious handling of word inflection via categorical stem + suffix N-gram language models |
US9646609B2 (en) | 2014-09-30 | 2017-05-09 | Apple Inc. | Caching apparatus for serving phonetic pronunciations |
US9668121B2 (en) | 2014-09-30 | 2017-05-30 | Apple Inc. | Social reminders |
US10552013B2 (en) | 2014-12-02 | 2020-02-04 | Apple Inc. | Data detection |
US9711141B2 (en) | 2014-12-09 | 2017-07-18 | Apple Inc. | Disambiguating heteronyms in speech synthesis |
US9865280B2 (en) | 2015-03-06 | 2018-01-09 | Apple Inc. | Structured dictation using intelligent automated assistants |
US10567477B2 (en) | 2015-03-08 | 2020-02-18 | Apple Inc. | Virtual assistant continuity |
US9721566B2 (en) | 2015-03-08 | 2017-08-01 | Apple Inc. | Competing devices responding to voice triggers |
US9886953B2 (en) | 2015-03-08 | 2018-02-06 | Apple Inc. | Virtual assistant activation |
US9899019B2 (en) | 2015-03-18 | 2018-02-20 | Apple Inc. | Systems and methods for structured stem and suffix language models |
US9842105B2 (en) | 2015-04-16 | 2017-12-12 | Apple Inc. | Parsimonious continuous-space phrase representations for natural language processing |
US10083688B2 (en) | 2015-05-27 | 2018-09-25 | Apple Inc. | Device voice control for selecting a displayed affordance |
US10127220B2 (en) | 2015-06-04 | 2018-11-13 | Apple Inc. | Language identification from short strings |
US10101822B2 (en) | 2015-06-05 | 2018-10-16 | Apple Inc. | Language input correction |
US11025565B2 (en) | 2015-06-07 | 2021-06-01 | Apple Inc. | Personalized prediction of responses for instant messaging |
US10255907B2 (en) | 2015-06-07 | 2019-04-09 | Apple Inc. | Automatic accent detection using acoustic models |
US10186254B2 (en) | 2015-06-07 | 2019-01-22 | Apple Inc. | Context-based endpoint detection |
US10671428B2 (en) | 2015-09-08 | 2020-06-02 | Apple Inc. | Distributed personal assistant |
US10747498B2 (en) | 2015-09-08 | 2020-08-18 | Apple Inc. | Zero latency digital assistant |
US9697820B2 (en) | 2015-09-24 | 2017-07-04 | Apple Inc. | Unit-selection text-to-speech synthesis using concatenation-sensitive neural networks |
US10366158B2 (en) | 2015-09-29 | 2019-07-30 | Apple Inc. | Efficient word encoding for recurrent neural network language models |
US11010550B2 (en) | 2015-09-29 | 2021-05-18 | Apple Inc. | Unified language modeling framework for word prediction, auto-completion and auto-correction |
US11587559B2 (en) | 2015-09-30 | 2023-02-21 | Apple Inc. | Intelligent device identification |
US10691473B2 (en) | 2015-11-06 | 2020-06-23 | Apple Inc. | Intelligent automated assistant in a messaging environment |
US10049668B2 (en) | 2015-12-02 | 2018-08-14 | Apple Inc. | Applying neural network language models to weighted finite state transducers for automatic speech recognition |
US10223066B2 (en) | 2015-12-23 | 2019-03-05 | Apple Inc. | Proactive assistance based on dialog communication between devices |
US10446143B2 (en) | 2016-03-14 | 2019-10-15 | Apple Inc. | Identification of voice inputs providing credentials |
US9934775B2 (en) | 2016-05-26 | 2018-04-03 | Apple Inc. | Unit-selection text-to-speech synthesis based on predicted concatenation parameters |
US9761227B1 (en) | 2016-05-26 | 2017-09-12 | Nuance Communications, Inc. | Method and system for hybrid decoding for enhanced end-user privacy and low latency |
US9972304B2 (en) | 2016-06-03 | 2018-05-15 | Apple Inc. | Privacy preserving distributed evaluation framework for embedded personalized systems |
US10249300B2 (en) | 2016-06-06 | 2019-04-02 | Apple Inc. | Intelligent list reading |
US10049663B2 (en) | 2016-06-08 | 2018-08-14 | Apple, Inc. | Intelligent automated assistant for media exploration |
DK179588B1 (en) | 2016-06-09 | 2019-02-22 | Apple Inc. | Intelligent automated assistant in a home environment |
US10490187B2 (en) | 2016-06-10 | 2019-11-26 | Apple Inc. | Digital assistant providing automated status report |
US10586535B2 (en) | 2016-06-10 | 2020-03-10 | Apple Inc. | Intelligent digital assistant in a multi-tasking environment |
US10192552B2 (en) | 2016-06-10 | 2019-01-29 | Apple Inc. | Digital assistant providing whispered speech |
US10509862B2 (en) | 2016-06-10 | 2019-12-17 | Apple Inc. | Dynamic phrase expansion of language input |
US10067938B2 (en) | 2016-06-10 | 2018-09-04 | Apple Inc. | Multilingual word prediction |
DK201670540A1 (en) | 2016-06-11 | 2018-01-08 | Apple Inc | Application integration with a digital assistant |
DK179049B1 (en) | 2016-06-11 | 2017-09-18 | Apple Inc | Data driven natural language event detection and classification |
DK179343B1 (en) | 2016-06-11 | 2018-05-14 | Apple Inc | Intelligent task discovery |
DK179415B1 (en) | 2016-06-11 | 2018-06-14 | Apple Inc | Intelligent device arbitration and control |
US10593346B2 (en) | 2016-12-22 | 2020-03-17 | Apple Inc. | Rank-reduced token representation for automatic speech recognition |
DK179745B1 (en) | 2017-05-12 | 2019-05-01 | Apple Inc. | SYNCHRONIZATION AND TASK DELEGATION OF A DIGITAL ASSISTANT |
DK201770431A1 (en) | 2017-05-15 | 2018-12-20 | Apple Inc. | Optimizing dialogue policy decisions for digital assistants using implicit feedback |
US10650812B2 (en) | 2018-08-13 | 2020-05-12 | Bank Of America Corporation | Deterministic multi-length sliding window protocol for contiguous string entity |
Family Cites Families (15)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US4489434A (en) * | 1981-10-05 | 1984-12-18 | Exxon Corporation | Speech recognition method and apparatus |
US4720863A (en) * | 1982-11-03 | 1988-01-19 | Itt Defense Communications | Method and apparatus for text-independent speaker recognition |
US4860358A (en) * | 1983-09-12 | 1989-08-22 | American Telephone And Telegraph Company, At&T Bell Laboratories | Speech recognition arrangement with preselection |
US4718094A (en) * | 1984-11-19 | 1988-01-05 | International Business Machines Corp. | Speech recognition system |
US4783803A (en) * | 1985-11-12 | 1988-11-08 | Dragon Systems, Inc. | Speech recognition apparatus and method |
DE3690416T1 (en) * | 1986-04-16 | 1988-03-10 | ||
US4903305A (en) * | 1986-05-12 | 1990-02-20 | Dragon Systems, Inc. | Method for representing word models for use in speech recognition |
US4829578A (en) * | 1986-10-02 | 1989-05-09 | Dragon Systems, Inc. | Speech detection and recognition apparatus for use with background noise of varying levels |
US4837831A (en) * | 1986-10-15 | 1989-06-06 | Dragon Systems, Inc. | Method for creating and using multiple-word sound models in speech recognition |
US4829576A (en) * | 1986-10-21 | 1989-05-09 | Dragon Systems, Inc. | Voice recognition system |
US4914703A (en) * | 1986-12-05 | 1990-04-03 | Dragon Systems, Inc. | Method for deriving acoustic models for use in speech recognition |
US4803729A (en) * | 1987-04-03 | 1989-02-07 | Dragon Systems, Inc. | Speech recognition method |
US4805218A (en) * | 1987-04-03 | 1989-02-14 | Dragon Systems, Inc. | Method for speech analysis and speech recognition |
US4805219A (en) * | 1987-04-03 | 1989-02-14 | Dragon Systems, Inc. | Method for speech recognition |
US5202952A (en) * | 1990-06-22 | 1993-04-13 | Dragon Systems, Inc. | Large-vocabulary continuous speech prefiltering and processing system |
-
1990
- 1990-06-22 US US07/542,520 patent/US5202952A/en not_active Expired - Fee Related
-
1991
- 1991-06-17 JP JP3512095A patent/JPH06501319A/en active Pending
- 1991-06-17 CA CA002085895A patent/CA2085895A1/en not_active Abandoned
- 1991-06-17 DE DE69127818T patent/DE69127818T2/en not_active Expired - Lifetime
- 1991-06-17 WO PCT/US1991/004321 patent/WO1992000585A1/en active IP Right Grant
- 1991-06-17 AT AT91912811T patent/ATE158889T1/en not_active IP Right Cessation
- 1991-06-17 EP EP91912811A patent/EP0535146B1/en not_active Expired - Lifetime
-
1993
- 1993-04-09 US US08/045,991 patent/US5526463A/en not_active Expired - Lifetime
Also Published As
Publication number | Publication date |
---|---|
WO1992000585A1 (en) | 1992-01-09 |
US5526463A (en) | 1996-06-11 |
ATE158889T1 (en) | 1997-10-15 |
US5202952A (en) | 1993-04-13 |
EP0535146A4 (en) | 1995-01-04 |
DE69127818T2 (en) | 1998-04-30 |
DE69127818D1 (en) | 1997-11-06 |
EP0535146A1 (en) | 1993-04-07 |
JPH06501319A (en) | 1994-02-10 |
EP0535146B1 (en) | 1997-10-01 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CA2085895A1 (en) | Continuous speech processing system | |
US6442519B1 (en) | Speaker model adaptation via network of similar users | |
US5794197A (en) | Senone tree representation and evaluation | |
CA2130218C (en) | Data compression for speech recognition | |
US4903305A (en) | Method for representing word models for use in speech recognition | |
US6529866B1 (en) | Speech recognition system and associated methods | |
US5497447A (en) | Speech coding apparatus having acoustic prototype vectors generated by tying to elementary models and clustering around reference vectors | |
CA2089786C (en) | Context-dependent speech recognizer using estimated next word context | |
US5809462A (en) | Method and apparatus for interfacing and training a neural network for phoneme recognition | |
US5025471A (en) | Method and apparatus for extracting information-bearing portions of a signal for recognizing varying instances of similar patterns | |
EP0302663B1 (en) | Low cost speech recognition system and method | |
JPH0585916B2 (en) | ||
JP3110948B2 (en) | Speech coding apparatus and method | |
JPH06175696A (en) | Device and method for coding speech and device and method for recognizing speech | |
EP0458859A1 (en) | Text to speech synthesis system and method using context dependent vowell allophones. | |
JP2700143B2 (en) | Voice coding apparatus and method | |
JPH01202798A (en) | Voice recognizing method | |
JPS59216242A (en) | Voice recognizing response device | |
JPH0774960B2 (en) | Method and system for keyword recognition using template chain model | |
US20020016709A1 (en) | Method for generating a statistic for phone lengths and method for determining the length of individual phones for speech synthesis | |
EP0681729B1 (en) | Speech synthesis and recognition system | |
JPH09160585A (en) | System and method for voice recognition | |
JPH1165589A (en) | Voice recognition device | |
JPH04271397A (en) | Voice recognizer | |
Dongre et al. | Speech Recognition Based Web Browsing |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
FZDE | Discontinued |