CA1299752C - Optimal decoding technique for speech recognition

Optimal decoding technique for speech recognition

Info

Publication number
CA1299752C
CA1299752C
Authority
CA
Canada
Prior art keywords
frames
template
frame
word
representative
Legal status
Expired - Lifetime
Application number
CA000525298A
Other languages
French (fr)
Inventor
Ira Alan Gerson
Brett Louis Lindsley
Philip Jerome Smanski
Current Assignee
Motorola Solutions Inc
Original Assignee
Motorola Inc
Application filed by Motorola Inc
Application granted
Publication of CA1299752C

Classifications

    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L13/00 Speech synthesis; Text to speech systems
    • G10L15/00 Speech recognition
    • G10L15/06 Creation of reference templates; Training of speech recognition systems, e.g. adaptation to the characteristics of the speaker's voice
    • G10L15/063 Training

Abstract


Described herein is an arrangement and method for processing speech information in a speech recognition system. The present invention is best employed in a system where the speech information is depicted as words, each word represented by a sequence of frames, and where the recognition system has means for comparing present input speech to a word template, the word template stored in template memory and derived from one or more previous input words. The invention describes combining contiguous acoustically similar frames derived from the previous input word or words into representative frames to form a corresponding reduced word template, storing the reduced word template in template memory in an efficient manner, and comparing frames of the present input speech to the representative frames of the reduced word template according to the number of frames combined in the representative frames of the reduced word template. In doing so, a measure of similarity between the present input speech and the word template is generated.

Description


WORD RECOGNITION IN A SPEECH RECOGNITION SYSTEM
USING DATA REDUCED WORD TEMPLATES

BACKGROUND

The present invention relates to word recognition for a speech recognition system and, more particularly, to word recognition using word templates having a data reduced format.
Typically, speech recognition systems represent spoken words as word templates stored in system memory.
When a system user speaks into the system, the system must digitally represent the speech for comparison to the word templates stored in memory.
Two particular aspects of such an implementation have received a great deal of attention. The first aspect pertains to the amount of memory which is required to store the word templates. The representation of speech is such that the data used for matching to an input word typically requires a significant amount of memory to be dedicated for each particular word.
Moreover, a large vocabulary causes extensive computation time to be consumed for the match. In general, the computation time increases linearly with the amount of memory required for the template memory. Practical implementation in real time requires that this computation time be reduced. Of course, a faster processor architecture could be employed to reduce this computation time, but due to cost considerations, it is preferred that the data representing the word templates be reduced to reduce the computation.
The second aspect pertains to the particular matching techniques used in the system. Most word recognition techniques have been directed to the accuracy of the recognition process for a particular type of feature data used to represent the speech. Typically, channel bank information or LPC parameters represent the speech. When using feature data of a reduced format, the word recognition process must be sensitive to the format for an effective implementation.
The speech recognition system described herein clusters frames within the word templates to reduce the representative data, so the word recognition technique requires special consideration for the combined frames. Data reduced word templates represent spoken words in a compacted form. Matching an incoming word to a reduced word template without adequately compensating for its compacted form will result in degraded recognizer performance. An obvious method of compensating for data reduced word templates would be uncompacting the reduced data before matching. Unfortunately, uncompacting the reduced data defeats the purpose of data reduction.
Hence, a word recognition method is needed which allows reduced data to be directly matched against an incoming spoken word without degrading the word recognition process.
OBJECTS AND SUMMARY OF THE PRESENT INVENTION

Accordingly, an object of the present invention is to provide a system of word recognition which reduces template data and recognizes the reduced data in an efficient manner.
The present invention teaches an arrangement and method for processing speech information in a speech recognition system. In a system where the information is depicted as words, each word represented by a sequence of frames, and where the recognition system has means for comparing present input speech to a word template, the word template stored in template memory and derived from one or more previous input words, the processing method includes (1) combining contiguous acoustically similar frames derived from the previous input words into representative frames to form a corresponding reduced word template, (2) storing the reduced word template in template memory in an efficient manner, and (3) comparing frames of the present input speech to the representative frames of the reduced word template according to the number of frames combined in the representative frames of the reduced word template to produce a measure of similarity between the present input speech and the word template.

Brief Description of the Drawings

Additional objects, features, and advantages in accordance with the present invention will be more clearly understood by reference to the following description taken in connection with the accompanying drawings, in the several figures of which like reference numerals identify like elements, and in which:
Figure 1 is a general block diagram illustrating the technique of synthesizing speech from speech recognition templates according to the present invention;
Figure 2 is a block diagram of a speech communications device having a user-interactive control system employing speech recognition and speech synthesis in accordance with the present invention;
Figure 3 is a detailed block diagram of the preferred embodiment of the present invention illustrating a radio transceiver having a hands-free speech recognition/speech synthesis control system;
Figure 4a is an expanded block diagram of the data reducer block 322 of Figure 3;
Figure 4b is a flowchart showing the sequence of steps performed by the energy normalization block 410 of Figure 4a;
Figure 4c is a detailed block diagram of the particular hardware configuration of the segmentation/compression block 420 of Figure 4a;
Figure 5a is a graphical representation of a spoken word segmented into frames for forming a cluster according to the present invention;
Figure 5b is a diagram exemplifying output clusters being formed for a particular word template, according to the present invention;

Figure 5c is a table showing the possible formations of an arbitrary partial cluster path according to the present invention;
Figures 5d and 5e show a flowchart illustrating a basic implementation of the data reduction process performed by the segmentation/compression block 420 of Figure 4a;
Figure 5f is a detailed flowchart of the traceback and output clusters block 582 of Figure 5e, showing the formation of a data reduced word template from previously determined clusters;
Figure 5g is a traceback pointer table illustrating a clustering path for 24 frames, according to the present invention, applicable to partial traceback;
Figure 5h is a graphical representation of the traceback pointer table of Figure 5g illustrated in the form of a frame connection tree;
Figure 5i is a graphical representation of Figure 5h showing the frame connection tree after three clusters have been output by tracing back to common frames in the tree;
Figures 6a and 6b comprise a flowchart showing the sequence of steps performed by the differential encoding block 430 of Figure 4a;
Figure 6c is a generalized memory map showing the particular data format of one frame of the template memory 160 of Figure 3;
Figure 7a is a graphical representation of frames clustered into average frames, each average frame represented by a state in a word model, in accordance with the present invention;
Figure 7b is a detailed block diagram of the recognition processor 120 of Figure 3, illustrating its relationship with the template memory 160;


Figure 7c is a flowchart illustrating one embodiment of the sequence of steps required for word decoding according to the present invention;
Figures 7d and 7e comprise a flowchart illustrating one embodiment of the steps required for state decoding according to the present invention;
Figure 8a is a detailed block diagram of the data expander block 346 of Figure 3;
Figure 8b is a flowchart showing the sequence of steps performed by the differential decoding block 802 of Figure 8a;
Figure 8c is a flowchart showing the sequence of steps performed by the energy denormalization block 804 of Figure 8a;
Figure 8d is a flowchart showing the sequence of steps performed by the frame repeating block 806 of Figure 8a;
Figure 9a is a detailed block diagram of the channel bank speech synthesizer 340 of Figure 3;
Figure 9b is an alternate embodiment of the modulator/bandpass filter configuration 980 of Figure 9a;
Figure 9c is a detailed block diagram of the preferred embodiment of the pitch pulse source 920 of Figure 9a;
Figure 9d is a graphic representation illustrating various waveforms of Figures 9a and 9c.

Description of the Preferred Embodiment

1. System Configuration

Referring now to the accompanying drawings, Figure 1 shows a general block diagram of user-interactive control system 100 of the present invention. Electronic device 150 may include any electronic apparatus that is sophisticated enough to warrant the incorporation of a speech recognition/speech synthesis control system. In the preferred embodiment, electronic device 150 represents a speech communications device such as a mobile radiotelephone.
User-spoken input speech is applied to microphone 105, which acts as an acoustic coupler providing an electrical input speech signal for the control system.
Acoustic processor 110 performs acoustic feature extraction upon the input speech signal. Word features, defined as the amplitude/frequency parameters of each user-spoken input word, are thereby provided to speech recognition processor 120 and to training processor 170.
Acoustic processor 110 may also include a signal conditioner, such as an analog-to-digital converter, to interface the input speech signal to the speech recognition control system. Acoustic processor 110 will be further described in conjunction with Figure 3.
Training processor 170 manipulates this word feature information from acoustic processor 110 to provide word recognition templates to be stored in template memory 160. During the training procedure, the incoming word features are arranged into individual words by locating their endpoints. If the training procedure is designed to accommodate multiple training utterances for word feature consistency, then the multiple utterances may be averaged to form a single word template. Furthermore, since most speech recognition systems do not require all of the speech information to be stored as a template, some type of data reduction is often performed by training processor 170 to reduce the template memory requirements. The word templates are stored in template memory 160 for use by speech recognition processor 120 as well as by speech synthesis processor 140. The exact training procedure utilized by the preferred embodiment of the present invention may be found in the description accompanying Figure 2.
In the recognition mode, speech recognition processor 120 compares the word feature information provided by acoustic processor 110 to the word recognition templates provided by template memory 160.
If the acoustic features of the present word feature information derived from the user-spoken input speech sufficiently match the acoustic features of a particular prestored word template derived from the template memory, then recognition processor 120 provides device control data to device controller 130 indicative of the particular word recognized. A further discussion of an appropriate speech recognition apparatus, and how the preferred embodiment incorporates data reduction into the training process, may be found in the description accompanying Figures 3 through 5.
Device controller 130 interfaces the entire control system to electronic device 150. Device controller 130 translates the device control data provided by recognition processor 120 into control signals adaptable for use by the particular electronic device. These control signals direct the device to perform specific operating functions as instructed by the user. (Device controller 130 may also perform additional supervisory functions related to other elements shown in Figure 1.) An example of a device controller known in the art and suitable for use with the present invention is a microcomputer. Refer to Figure 3 for further details of the hardware implementation.
Device controller 130 also provides device status data representing the operating status of electronic device 150. This data is applied to speech synthesis processor 140, along with word recognition templates from template memory 160. Synthesis processor 140 utilizes the status data to determine which word recognition template is to be synthesized into user-recognizable reply speech. Synthesis processor 140 may also include an internal reply memory, also controlled by the status data, to provide "canned" reply words to the user. In either case, the user is informed of the electronic device operating status when the speech reply signal is output via speaker 145.
Thus, Figure 1 illustrates how the present invention provides a user-interactive control system utilizing speech recognition to control the operating parameters of an electronic device, and how a speech recognition template may be utilized to generate reply speech to the user indicative of the operating status of the device.
Figure 2 illustrates in more detail the application of the user-interactive control system to a speech communications device comprising a part of any radio or landline voice communications system, such as, for example, a two-way radio system, a telephone system, an intercom system, etc. Acoustic processor 110, recognition processor 120, template memory 160, and device controller 130 are the same in structure and in operation as the corresponding blocks of Figure 1.

However, control system 200 illustrates the internal structure of speech communications device 210. Speech communications terminal 225 represents the main electronic network of device 210, such as, for example, a telephone terminal or a communications console. In this embodiment, microphone 205 and speaker 245 are incorporated into the speech communications device itself. A typical example of this microphone/speaker arrangement would be a telephone handset. Speech communications terminal 225 interfaces operating status information of the speech communications device to device controller 130. This operating status information may comprise functional status data of the terminal itself (e.g., channel data, service information, operating mode messages, etc.), user-feedback information of the speech recognition control system (e.g., directory contents, word recognition verification, operating mode status, etc.), or may include system status data pertaining to the communications link (e.g., loss-of-line, system busy, invalid access code, etc.).
In either the training mode or the recognition mode, the features of user-spoken input speech are extracted by acoustic processor 110. In the training mode, which is represented in Figure 2 by position "A" of switch 215, the word feature information is applied to word averager 220 of training processor 170. As previously mentioned, if the system is designed to average multiple utterances together to form a single word template, the averaging is performed by word averager 220. Through the use of word averaging, the training processor can take into account the minor variances between two or more utterances of the same word, thereby producing a more reliable word template.
Numerous word averaging techniques may be used. For example, one method would be to combine only the similar word features of all training utterances to produce a "best" set of features for the word template. Another technique may be to simply compare all training utterances to determine which one provides the "best" template. Still another word averaging technique is described by L.R. Rabiner and J.G. Wilpon in "A Simplified Robust Training Procedure for Speaker Trained, Isolated Word Recognition Systems", Journal of the Acoustical Society of America, vol. 68 (November 1980).

Data reducer 230 then performs data reduction upon either the averaged word data from word averager 220 or upon the word feature signals directly from acoustic processor 110, depending upon the presence or absence of a word averager. In either case, the reduction process consists of segmenting this "raw" word feature data and combining the data in each segment. The storage requirements for the template are then further reduced by differential encoding of the segmented data to produce "reduced" word feature data. This specific data reduction technique of the present invention is fully described in conjunction with Figures 4 and 5. To summarize, data reducer 230 compresses the raw word data to minimize the template storage requirements and to reduce the speech recognition computation time.
The reduced word feature data provided by training processor 170 is stored as word recognition templates in template memory 160. In the recognition mode, which is illustrated by position "B" of switch 215, recognition processor 120 compares the incoming word feature signals to the word recognition templates. Upon recognition of a valid command word, recognition processor 120 may instruct device controller 130 to cause a corresponding speech communications device control function to be executed by speech communications terminal 225. Terminal 225 may respond to device controller 130 by sending operating status information back to controller 130 in the form of terminal status data. This data can be used by the control system to synthesize the appropriate speech reply signal to inform the user of the present device operating status. This sequence of events will be more clearly understood by referring to the subsequent example.
Synthesis processor 140 is comprised of speech synthesizer 240, data expander 250, and reply memory 260.
A synthesis processor of this configuration is capable of generating "canned" replies to the user from a prestored vocabulary (stored in reply memory 260), as well as generating "template" responses from a user-generated vocabulary (stored in template memory 160). Speech synthesizer 240 and reply memory 260 are further described in conjunction with Figure 3, and data expander 250 is fully described in the text accompanying Figure 8a. In combination, the blocks of synthesis processor 140 generate a speech reply signal to speaker 245. Accordingly, Figure 2 illustrates the technique of using a single template memory for both speech recognition and speech synthesis.
The simplified example of a "smart" telephone terminal employing voice-controlled dialing from a stored telephone number directory is now used to describe the operation of the control system of Figure 2. Initially, an untrained speaker-dependent speech recognition system cannot recognize command words. Therefore, the user must manually prompt the device to begin the training procedure, perhaps by entering a particular code into the telephone keypad. Device controller 130 then directs switch 215 to enter the training mode (position "A").
Device controller 130 then instructs speech synthesizer 240 to respond with the predefined phrase TRAINING VOCABULARY ON, which is a "canned" response obtained from reply memory 260. The user then begins to build a command word vocabulary by uttering command words, such as STORE or RECALL, into microphone 205. The features of the utterance are first extracted by acoustic processor 110, and then applied to either word averager 220 or data reducer 230. If the particular speech recognition system is designed to accept multiple utterances of the same word, word averager 220 produces a set of averaged word features representing the best representation of that particular word. If the system does not have word averaging capabilities, the single utterance word features (rather than the multiple utterance averaged word features) are applied to data reducer 230. The data reduction process removes unnecessary or duplicate feature data, compresses the remaining data, and provides template memory 160 with "reduced" word recognition templates. A similar procedure is followed for training the system to recognize digits.
Once the system is trained with the command word vocabulary, the user must continue the training procedure by entering telephone directory names and numbers. To accomplish this task, the user utters the previously-trained command word ENTER. Upon recognition of this utterance as a valid user command, device controller 130 instructs speech synthesizer 240 to reply with the "canned" phrase DIGITS PLEASE? stored in reply memory 260. Upon entering the appropriate telephone number digits (e.g., 555-1234), the user says TERMINATE and the system replies NAME PLEASE? to prompt user-entry of the corresponding directory name (e.g., SMITH). This user-interactive process continues until the telephone number directory is completely filled with the appropriate telephone names and digits.


To place a phone call, the user simply utters the command word RECALL. When the utterance is recognized as a valid user command by recognition processor 120, device controller 130 directs speech synthesizer 240 to generate the verbal reply NAME? via synthesizing information provided by reply memory 260. The user then responds by speaking the name in the directory index corresponding to the telephone number that he desires to dial (e.g., JONES). The word will be recognized as a valid directory entry if it corresponds to a predetermined name index stored in template memory 160. If valid, device controller 130 directs data expander 250 to obtain the appropriate reduced word recognition template from template memory 160 and perform the data expansion process for synthesis. Data expander 250 "unpacks" the reduced word feature data and restores the proper energy contour for an intelligible reply word. The expanded word template data is then fed to speech synthesizer 240.
Using both the template data and the reply memory data, speech synthesizer 240 generates the phrase JONES... (from template memory 160 through data expander 250) ...FIVE-FIVE-FIVE, SIX-SEVEN-EIGHT-NINE (from reply memory 260).
The user then says the command word SEND which, when recognized by the control system, instructs device controller 130 to send telephone number dialing information to speech communications terminal 225.
Terminal 225 outputs this dialing information via an appropriate communications link. When the telephone connection is made, speech communications terminal 225 interfaces microphone audio from microphone 205 to the appropriate transmit path, and receive audio from the appropriate receive audio path to speaker 245. If a proper telephone connection cannot be made, terminal controller 225 provides the appropriate communications link status information to device controller 130.
Accordingly, device controller 130 instructs speech synthesizer 240 to generate the appropriate reply word corresponding to the status information provided, such as the reply word SYSTEM BUSY. In this manner, the user is informed of the communications link status, and user-interactive voice-controlled directory dialing is achieved.
The above operational description is merely one application of synthesizing speech from speech recognition templates according to the present invention.
Numerous other applications of this novel technique to a speech communications device are contemplated, such as, for example, a communications console, a two-way radio, etc. In the preferred embodiment, the control system of the present invention is used with a mobile radiotelephone.
Although speech recognition and speech synthesis allow a vehicle operator to keep both eyes on the road, the conventional handset or hand-held microphone prohibits him from keeping both hands on the steering wheel or from executing proper manual (or automatic) transmission shifting. For this reason, the control system of the preferred embodiment incorporates a speakerphone to provide hands-free control of the speech communications device. The speakerphone performs the transmit/receive audio switching function, as well as the received/reply audio multiplexing function.
Referring now to Figure 3, control system 300 utilizes the same acoustic processor block 110, training processor block 170, recognition processor block 120, template memory block 160, device controller block 130, and synthesis processor block 140 as the corresponding blocks of Figure 2. However, microphone 302 and speaker 375 are not an integral part of the speech communications terminal. Instead, the input speech signal from microphone 302 is directed to radiotelephone 350 via speakerphone 360. Similarly, speakerphone 360 also controls the multiplexing of the synthesized audio from the control system and the receive audio from the communications link. A more detailed analysis of the switching/multiplexing configuration of the speakerphone will be described later. Additionally, the speech communications terminal is now illustrated in Figure 3 as a radiotelephone having a transmitter and a receiver to provide the appropriate communications link via radio frequency (RF) channels. A detailed description of the radio blocks is also provided later.
Microphone 302, which is typically remotely-mounted at a distance from the user's mouth (e.g., on the automobile sun visor), acoustically couples the user's voice to control system 300. This speech signal is usually amplified by preamplifier 304 to provide input speech signal 305. This audio input is directly applied to acoustic processor 110, and is switched by speakerphone 360 before being applied to radiotelephone 350 via switched microphone audio line 315.
As previously mentioned, acoustic processor 110 extracts the features of the user-spoken input speech to provide word feature information to both training processor 170 and recognition processor 120. Acoustic processor 110 first converts the analog input speech into digital form by analog-to-digital (A/D) converter 310.
This digital data is then applied to feature extractor 312, which digitally performs the feature extraction function. Any feature extraction implementation may be utilized in block 312, but the present embodiment utilizes a particular form of "channel bank" feature extraction. Under the channel bank approach, the audio input signal frequency spectrum is divided into individual spectral bands by a bank of bandpass filters, and the appropriate word feature data is generated according to an estimate of the amount of energy present in each band. A feature extractor of this type is described in the article: "The Effects of Selected Signal Processing Techniques on the Performance of a Filter Bank Based Isolated Word Recognizer", B.A. Dautrich, L.R. Rabiner, and T.B. Martin, Bell System Technical Journal, vol. 62, no. 5, (May-June 1983), pp. 1311-1335. An appropriate digital filter algorithm is described in Chapter 4 of L.R. Rabiner and B. Gold, Theory and Application of Digital Signal Processing, (Prentice-Hall, Englewood Cliffs, N.J., 1975).
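As a rough illustration of the channel bank idea, the following Python sketch estimates the log energy in each spectral band for one frame of samples. It is an assumption-laden stand-in, not the patent's filter design: an FFT replaces the true bandpass filter bank, and the sample rate, band edges, and window are invented for the example (only the 14-channel count comes from the text).

```python
import numpy as np

def channel_bank_features(frame, sample_rate=8000, num_channels=14):
    """Estimate the log energy in each of num_channels frequency bands
    for one frame of audio samples (FFT-based stand-in for a bank of
    bandpass filters)."""
    windowed = frame * np.hamming(len(frame))
    spectrum = np.abs(np.fft.rfft(windowed)) ** 2
    freqs = np.fft.rfftfreq(len(frame), d=1.0 / sample_rate)
    # Illustrative band edges: 200-3400 Hz split into equal-width channels.
    edges = np.linspace(200.0, 3400.0, num_channels + 1)
    energies = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        band = spectrum[(freqs >= lo) & (freqs < hi)]
        energies.append(10.0 * np.log10(band.sum() + 1e-10))  # log energy
    return np.array(energies)
```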
Training processor 170 utilizes this word feature data to generate word recognition templates to be stored in template memory 160. First of all, endpoint detector 318 locates the appropriate beginning and end locations of the user's words. These endpoints are based upon the time-varying overall energy estimate of the input word feature data. An endpoint detector of this type is described by L.R. Rabiner and M.R. Sambur in "An Algorithm for Determining the Endpoints of Isolated Utterances", Bell System Technical Journal, vol. 54, no. 2, (February 1975), pp. 297-315.
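A minimal sketch of energy-based endpoint detection in the spirit of the cited reference follows; the threshold and minimum-duration parameters are illustrative, and the real algorithm also uses adaptive thresholds and zero-crossing information that are omitted here.

```python
def find_endpoints(frame_energies, threshold_db=-40.0, min_word_frames=5):
    """Locate a word's beginning and end as the first and last frames
    whose overall energy rises above a threshold (a crude stand-in for
    the Rabiner-Sambur endpoint algorithm)."""
    above = [i for i, energy in enumerate(frame_energies)
             if energy > threshold_db]
    if len(above) < min_word_frames:
        return None  # no word detected in this utterance
    return above[0], above[-1]
```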
Word averager 320 then combines the several utterances of the same word spoken by the user to provide a more reliable template. As previously described in Figure 2, any appropriate word averaging scheme may be utilized, or the word averaging function may be entirely omitted.
Data reducer 322 utilizes the "raw" word feature data from word averager 320 to generate "reduced" word feature data for storage in template memory 160 as reduced word recognition templates. The data reduction process basically consists of normalizing the energy data, segmenting the word feature data, and combining the data in each segment. After the combined segments have been generated, the storage requirements are further reduced by differential encoding of the filter data. The actual normalization, segmentation, and differential encoding steps of data reducer 322 are described in detail in conjunction with Figures 4 and 5. For a general memory map illustrating the reduced data format of template memory 160, refer to Figure 6c.
Endpoint detector 318, word averager 320, and data reducer 322 comprise training processor 170. In the training mode, training control signal 325, from device controller 130, instructs these three blocks to generate new word templates for storage in template memory 160.
However, in the recognition mode, training control signal 325 directs these blocks to suspend the process of generating new word templates, since this function is not desired during speech recognition. Hence, training processor 170 is only used in the training mode.
Template memory 160 stores word recognition templates to be matched to the incoming speech in recognition processor 120. Template memory 160 is typically comprised of a standard Random Access Memory (RAM), which may be organized in any desired address configuration. A general purpose RAM which may be used with a speech recognition system is the Toshiba 5565 8k x 8 static RAM. However, a non-volatile RAM is preferred such that word templates are retained when the system is turned off. In the present embodiment, an EEPROM (electrically-erasable, programmable read-only memory) functions as template memory 160.
Word recognition templates, stored in template memory 160, are provided to speech recognition processor 120 and speech synthesis processor 140. In the recognition mode, recognition processor 120 compares these previously stored word templates against the input word features provided by acoustic processor 110. In the present embodiment, recognition processor 120 may be thought of as being comprised of two distinct blocks -- template decoder 328 and speech recognizer 326. Template decoder 328 interprets the reduced feature data provided by the template memory, such that speech recognizer 326 can perform its comparison function. Briefly described, template decoder 328 implements an efficient "nibble-mode access technique" of obtaining the reduced data from template storage, and performs differential decoding on the reduced data such that speech recognizer 326 can utilize the information. Template decoder 328 is described in detail in the text accompanying Figure 7b. Hence, the technique of implementing data reducer 322 to compress the feature data into a reduced data format for storage in template memory 160, and the use of template decoder 328 to decode the reduced word template information, allows the present invention to minimize template storage requirements.
Speech recognizer 326, which performs the actual speech recognition comparison process, may use one of several speech recognition algorithms. The recognition algorithm of the present embodiment incorporates near-continuous speech recognition, dynamic time warping, energy normalization, and a Chebyshev distance metric to determine a template match. Refer to Figure 7a et seq. for a detailed description. Prior art recognition algorithms, such as described in J.S. Bridle, M.D. Brown, and R.M. Chamberlain, "An Algorithm for Connected Word Recognition," IEEE International Conference on Acoustics, Speech, and Signal Processing, May 3-5, 1982, vol. 2, pp. 899-902, may also be used.
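The Chebyshev metric named above is simply the largest per-channel difference between two feature frames; a one-line illustration (a hypothetical helper, not code from the patent):

```python
def chebyshev_distance(frame_a, frame_b):
    """Chebyshev (L-infinity) distance between two feature frames:
    the largest absolute difference over the channel energies."""
    return max(abs(a - b) for a, b in zip(frame_a, frame_b))
```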

In the present embodiment, an 8-bit microcomputer performs the function of speech recognizer 326.
Moreover, several other control system blocks of Figure 3 are implemented in part by the same microcomputer with the aid of a CODEC/FILTER and a DSP (Digital Signal Processor). An alternate hardware configuration for speech recognizer 326, which may be used in the present invention, is described in an article by J. Peckham, J. Green, J. Canning, and P. Stevens, entitled "A Real-Time Hardware Continuous Speech Recognition System," IEEE International Conference on Acoustics, Speech, and Signal Processing, (May 3-5, 1982), vol. 2, pp. 863-866, and the references contained therein.
Hence, the present invention is not limited to any specific hardware or any specific type of speech recognition. More particularly, the present invention contemplates the use of isolated or continuous word recognition, and a software-based or hardware-based implementation.
Device controller 130, consisting of control unit 334 and directory memory 332, serves to interface speech recognition processor 120 and speech synthesis processor 140 to radiotelephone 350 via two-way interface busses.
Control unit 334 is typically a controlling microprocessor which is capable of interfacing data from radio logic 352 to the other blocks of the control system. Control unit 334 also performs operational control of radiotelephone 350, such as: unlocking the control head; placing a telephone call; ending a telephone call; etc. Depending on the particular hardware interface structure to the radio, control unit 334 may incorporate other sub-blocks to perform specific control functions such as DTMF dialing, interface bus multiplexing, and control-function decision-making.
Moreover, the data-interfacing function of control unit 334 can be incorporated into the existing hardware of radio logic 352. Hence, a hardware-specific control program would typically be provided for each type of radio or for each kind of electronic device application.
Directory memory 332, an EEPROM, stores the plurality of telephone numbers, thereby permitting directory dialing. Stored telephone number directory information is sent from control unit 334 to directory memory 332 during the training process of entering telephone numbers, while this directory information is provided to control unit 334 in response to the recognition of a valid directory dialing command.
Depending on the particular device used, it may be more economical to incorporate directory memory 332 into the telephone device itself. In general, however, controller block 130 performs the telephone directory storage function, the telephone number dialing function, and the radio operational control function.
Controller block 130 also provides different types of status information, representing the operating status of the radiotelephone, to speech synthesis processor 140. This status information may include information as to the telephone numbers stored in directory memory 332 ("555-1234", etc.), directory names stored in template memory 160 ("Smith", "Jones", etc.), directory status information ("Directory Full", "Name?", etc.), speech recognition status information ("Ready", "User Number?", etc.), or radiotelephone status information ("Call Dropped", "System Busy", etc.).
Hence, controller block 130 is the heart of the user-interactive speech recognition/speech synthesis control system.
Speech synthesis processor block 140 performs the voice reply function. Word recognition templates, stored in template memory 160, are provided to data expander 346 whenever speech synthesis from a template is required.
As previously mentioned, data expander 346 "unpacks" the reduced word feature data from template memory 160 and provides "template" voice response data for channel bank speech synthesizer 340. Refer to Figure 8a et seq. for a detailed explanation of data expander 346.
If the system controller determines that a "canned" reply word is desired, reply memory 344 supplies voice reply data to channel bank speech synthesizer 340.
In the preferred embodiment an EPROM (such as the TD27256 EPROM available from Intel Corporation) is used as reply memory 344.
Using either the "canned" or "template" voice reply data, channel bank speech synthesizer 340 synthesizes these reply words, and outputs them to digital-to-analog (D/A) converter 342. The voice reply is then routed to the user. In the present embodiment, channel bank speech synthesizer 340 is the speech synthesis portion of a 14-channel vocoder. An example of such a vocoder may be found in J.N. Holmes, "The JSRU
Channel ~ocoder", IEE PROC., vol. 127, pt. F, no. 1, (February, 1980), pp. 53-60. The information provided to a channel bank synthesizer normally includes whether the input speech should be voiced or unvoiced, the pitch rate if any, and the gain of each of the 14 filters. However, as will be obvious to those skllled in the art, any type of speech synthesizer may be utilized to perform the basic speech synthesis function. The particular configuration of channel bank speech synthesizer 340 is fully described in conjunction with Figure 9a et seq.
As we have seen, the present invention teaches the implementation of speech synthesis from a speech recognition template to provide a user-interactive control system for a speech communications device. In the present embodiment, the speech communications device is a radio transceiver, such as a cellular mobile radiotelephone. However, any speech communications device warranting hands-free user-interactive operation may be used. For example, any simplex radio transceiver requiring hands-free control may also take advantage of the improved control system of the present invention.
Referring now to radiotelephone block 350 of Figure 3, radio logic 352 performs the actual radio operational control function. Specifically, it directs frequency synthesizer 356 to provide channel information to transmitter 353 and receiver 357. The function of frequency synthesizer 356 may also be performed by crystal-controlled channel oscillators. Duplexer 354 interfaces transmitter 353 and receiver 357 to a radio frequency (RF) channel via antenna 359. In the case of a simplex radio transceiver, the function of duplexer 354 may be performed by an RF switch. For a more detailed explanation of representative radiotelephone circuitry, refer to Motorola Instruction Manual 68P81066E40 entitled "DYNA T.A.C. Cellular Mobile Telephone."
Speakerphone 360, also termed a vehicular speakerphone in the present application, provides hands-free acoustic coupling of: the user-spoken audio to the control system and to the radiotelephone transmitter audio; the synthesized speech reply signal to the user; and the received audio from the radiotelephone to the user. As previously noted, preamplifier 304 may perform amplification upon the audio signal provided by microphone 302 to produce input speech signal 305 to acoustic processor 110. This input speech signal is also applied to transmit audio switch 362, which routes input signal 305 to radio transmitter 353 via transmit audio 315. Transmit switch 362 is controlled by signal detector 364. Signal detector 364 compares input signal 305 amplitude against that of receive audio 355 to perform the speakerphone switching function.
When the mobile radio user is talking, signal detector 364 provides a positive control signal via detector output 361 to close transmit audio switch 362, and a negative control signal via detector output 363 to open receive audio switch 368. Conversely, when the landline party is talking, signal detector 364 provides the opposite polarity signals to close receive audio switch 368, while opening transmit audio switch 362.
When the receive audio switch is closed, receiver audio 355 from radiotelephone receiver 357 is routed through receive audio switch 368 to multiplexer 370 via switched receive audio output 367. In some communications systems, it may prove advantageous to replace audio switches 362 and 368 with variable gain devices that provide equal but opposite attenuations in response to the control signals from the signal detector.
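The signal detector's decision can be pictured as a simple amplitude comparison. The sketch below is a hypothetical software analogue of signal detector 364; the hysteresis factor is an invented guard value, not a parameter from the patent.

```python
def update_voice_switch(mic_level, receive_level, hysteresis=1.5):
    """Choose which audio path to open by comparing microphone amplitude
    against receive-audio amplitude, as signal detector 364 does.
    Returns 'transmit' (close switch 362, open 368) or 'receive'
    (close switch 368, open 362)."""
    if mic_level > receive_level * hysteresis:
        return 'transmit'
    return 'receive'
```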
Multiplexer 370 switches between voice reply audio 345 and switched receive audio 367 in response to multiplex signal 335 from control unit 334. Whenever the control unit sends status information to the speech synthesizer, multiplexer signal 335 directs multiplexer 370 to route the voice reply audio to the speaker. Speakerphone audio 365 is usually amplified by audio amplifier 372 before being applied to speaker 375. It is to be noted that the vehicle speakerphone embodiment described herein is only one of numerous possible configurations which can be used in the present invention.
In summary, Figure 3 illustrates a radiotelephone having a hands-free user-interactive speech-recognizing control system for controlling radiotelephone operating parameters upon a user-spoken command. The control system provides audible feedback to the user via speech synthesis from speech recognition template memory or a "canned" response reply memory. The vehicle speakerphone provides hands-free acoustic coupling of the user-spoken input speech to the control system and to the radio transmitter, the speech reply signal from the control system to the user, and the receiver audio to the user.
The implementation of speech synthesis from recognition templates significantly improves the performance and versatility of the radiotelephone's speech recognition control system.


2. Data Reduction and Template Storage

Referring to Figure 4a, an expanded block diagram of data reducer 322 is shown. As previously stated, data reducer block 322 utilizes raw word feature data from word averager 320 to generate reduced word feature data for storage in template memory 160. The data reduction function is performed in three steps: (1) energy normalization block 410 reduces the range of stored values for channel energies by subtracting the average value of the channel energies; (2) segmentation/compression block 420 segments the word feature data and combines acoustically similar frames to form "clusters";
and (3) differential encoding block 430 generates the differences between adjacent channels for storage, rather than the actual channel energy data, to further reduce storage requirements. When all three processes have been performed, the reduced data format for each frame is stored in only nine bytes as shown in Figure 6c. In short, data reducer 322 "packs" the raw word data into a reduced data format to minimize storage requirements.
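Step (3) can be illustrated with a short sketch: the first channel is kept as-is, and each later channel is stored as its difference from the previous channel. The bit widths, nibble packing, and any clamping implied by the nine-byte frame format of Figure 6c are omitted; this is an illustration of the idea, not the patent's encoder.

```python
def differential_encode(channel_energies):
    """Keep channel 1 absolutely; store every later channel as the
    difference from its lower-numbered neighbor."""
    return [channel_energies[0]] + [
        channel_energies[i] - channel_energies[i - 1]
        for i in range(1, len(channel_energies))
    ]

def differential_decode(encoded):
    """Invert differential_encode by accumulating the differences."""
    energies = [encoded[0]]
    for diff in encoded[1:]:
        energies.append(energies[-1] + diff)
    return energies
```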
The flowchart of Figure 4b illustrates the sequence of steps performed by energy normalization block 410 of the previous figure. Upon starting at block 440, block 441 initializes the variables which will be used in later calculations. Frame count FC is initialized to one to correspond to the first frame of the word to be data reduced. Channel total CT is initialized to the total number of channels corresponding to those of the channel bank feature extractor 312. In the preferred embodiment, a 14-channel feature extractor is used.
Next, the frame total FT is calculated in block 442. Frame total FT is the total number of frames per word to be stored in the template memory. This frame total information is available from training processor 170. To illustrate, say that the acoustic features of a 500 millisecond duration input word are (digitally) sampled every 10 milliseconds. Each 10 millisecond time segment is called a frame. The 500 millisecond word then comprises 50 frames. Thus, FT would equal 50.
Block 443 tests to see if all the frames of the word have been processed. If the present frame count FC is greater than the frame total FT, no frames of the word would be left to normalize, so the energy normalization process for that word will end at block 444. If, however, FC is not greater than FT, the energy normalization process continues with the next frame of the word. Continuing with the above example of a 50-frame word, each frame of the word is energy normalized in blocks 445 through 452, the frame count FC is incremented in block 453, and FC is tested in block 443. After the 50th frame of the word has been energy normalized, FC will be incremented to 51 in block 453.
When a frame count FC of 51 is compared to the frame total FT of 50, block 443 will terminate the energy normalization process at block 444.
The actual energy normalization procedure is accomplished by subtracting the average value of all of the channels from each individual channel to reduce the range of values stored in the template memory. In block 445, the average frame energy (AVGENG) is calculated according to the formula:

AVGENG = [ CH(1) + CH(2) + ... + CH(CT) ] / CT,

where CH(i) is the individual channel energy for channel i, and where CT equals the total number of channels. It should be noted that in the present embodiment, energies are stored as log energies and the energy normalization process actually subtracts the average log energy from the log energy of each channel.
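A sketch of blocks 445 through 450 in Python might look as follows, assuming the channel energies are already in log form; it returns both the normalized channels and the average that is stored alongside them:

```python
def energy_normalize_frame(channel_energies):
    """Subtract the average (log) channel energy from every channel
    and return the normalized channels plus the average itself."""
    avg_energy = sum(channel_energies) / len(channel_energies)  # AVGENG
    normalized = [ch - avg_energy for ch in channel_energies]
    return normalized, avg_energy
```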
The average frame energy AVGENG is output in block 446 to be stored at the end location of the channel data for each frame. (See Figure 6c, byte 9.) In order to efficiently store the average frame energy in four bits, AVGENG is normalized to the peak energy value of the entire template, and then quantized to 3 dB steps. When the peak energy is assigned a value of 15 (the four-bit maximum), the total energy variation within a template would be 16 steps x 3 dB/step = 48 dB. In the preferred embodiment, this average energy normalization/quantization is performed after the differential encoding of channel 14 (Figure 6a) to permit higher precision calculations during the segmentation/compression process (block 420).
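A hypothetical rendering of that four-bit quantization, assuming log energies in dB and the 3 dB step size stated above (the peak maps to 15, and anything far enough below the peak clamps to 0):

```python
def quantize_average_energy(avg_energy, peak_energy, step_db=3.0):
    """Quantize a frame's average energy to a 4-bit value relative to
    the template's peak energy, in 3 dB steps."""
    steps_below_peak = int(round((peak_energy - avg_energy) / step_db))
    return max(0, min(15, 15 - steps_below_peak))
```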
Block 447 sets the channel count CC to one.
Block 448 reads the channel energy addressed by the channel counter CC into an accumulator. Block 449 subtracts the average energy calculated in block 445 from the channel energy read in block 448. This step generates normalized channel energy data, which is then output (to segmentation/compression block 420) in block 450. Block 451 increments the channel counter, and block 452 tests to see if all channels have been normalized.
If the new channel count is not greater than the channel total, then the process returns to block 448 where the next channel energy is read. If, however, all channels of the frame have been normalized, the frame count is incremented in block 453 to obtain the next frame of data. When all frames have been normalized, the energy normalization process of data reducer 322 ends at block 444.


Referring now to Figure 4c, shown is a block diagram illustrating an implementation of the segmentation/compression block 420. The input feature data is stored in frames in initial frame storage, block 502. The memory used for storage is preferably RAM. A segmentation controller, block 504, is used to control and to designate which frames will be considered for clustering.
A number of microprocessors can be used for this purpose, such as the Motorola type 6805 microprocessor.
The present invention requires that incoming frames be considered for averaging by first calculating a distortion measure associated with the frames to determine the similarity between the frames before averaging. The calculation is preferably made by a microprocessor, similar to, or the same as, that used in block 504. Details of the calculation are subsequently discussed.
Once it has been determined which frames will be combined, the frame averager, block 508, combines the frames into a representative average frame. Again, similar type processing means, as in block 504, can be used for combining the specified frames for averaging.
To effectively reduce the data, the resulting word templates should occupy as little template storage as possible without being distorted to the point that the recognition process is degraded. In other words, the amount of information representing the word templates should be minimized while, at the same time, maximizing the recognition accuracy. Although the two extremes are contradictory, the word template data can be minimized if a minimal level of distortion is allowed for each cluster.
Figure 5a illustrates a method for clustering frames for a given level of distortion. Speech is depicted as feature data grouped in frames 510. The five center frames 510 form a cluster 512. The cluster 512 is combined into a representative average frame 514. The average frame 514 can be generated by any number of known averaging methods according to the particular type of feature data used in the system. To determine whether a cluster meets the allowable distortion level, a prior art distortion test can be used. However, it is preferred that the average frame 514 be compared to each of the frames 510 in the cluster 512 for a measure of similarity. The distance between the average frame 514 and each frame 510 in the cluster 512 is indicated by distances D1-D5. If one of these distances exceeds the allowable distortion level, the threshold distance, the cluster 512 is not considered for the resulting word template. If the threshold distance is not exceeded, the cluster 512 is considered as a possible cluster represented as the average frame 514.
This technique for determining a valid cluster is referred to as a peak distortion measure. The present embodiment uses two types of peak distortion criteria: peak energy distortion and peak spectral distortion.
Mathematically, this is stated as follows:

D = max [D1, D2, D3, D4, D5], where D1-D5, as discussed above, represent each distance.

These distortion measures are used as local constraints for restricting which frames may be combined into an average frame. If D exceeds a predetermined distortion threshold for either energy or spectral distortion, the cluster is rejected. By maintaining the same constraints for all clusters, a relative quality of the resulting word template is realized.
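Putting the preceding pieces together, a candidate cluster's validity test might be sketched as below. This illustration applies the peak test with a Chebyshev frame distance and shows only the spectral criterion; the peak energy criterion would be computed analogously and checked against its own threshold.

```python
def cluster_peak_distortion(frames):
    """Average the member frames channel-by-channel, then return the
    peak (worst-case) distance between the average frame and any member,
    mirroring D = max [D1, D2, ..., Dn]."""
    avg = [sum(channel) / len(frames) for channel in zip(*frames)]
    return max(
        max(abs(a - b) for a, b in zip(avg, frame))  # Chebyshev distance
        for frame in frames
    )

def cluster_is_valid(frames, spectral_threshold):
    """Reject the cluster if its peak distortion exceeds the threshold."""
    return cluster_peak_distortion(frames) <= spectral_threshold
```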

This clustering technique is used with dynamic programming to optimally reduce the data representing the word template. The principle of dynamic programming can be mathematically stated as follows:

Y0 = 0 and Yj = min [Yi + Cij] over all i,

where Yj is the cost of the least cost path from node 0 to node j, and Cij is the cost incurred in moving from node i to node j. The integer values of i and j range over the possible number of nodes.
To apply this principle to the reduction of word templates in accordance with the present invention, several assumptions are made. They are:

The information in the templates is in the form of a series of frames, spaced equally in time;
A suitable method of combining frames into an average frame exists;
A meaningful distortion measure exists for comparing an average frame to an original frame;
and Frames may be combined only with adjacent frames.

The end objective of the present invention is to find the minimal set of clusters representing the template, subject to the constraint that no cluster exceeds a predetermined distortion threshold.
The following definitions allow the principle of dynamic programming to be applied to data reduction according to the present invention.

Yj is the combination of clusters for the first j frames;
Y0 is the null path, meaning there are no clusters at this point;
Cij = 1 if the cluster of frames, i + 1 through j, meets the distortion criteria; Cij = infinity otherwise.

The clustering method generates optimal cluster paths starting at the first frame of the word template.
The cluster paths assigned at each frame within the template are referred to as partial paths since they do not completely define the clustering for the entire word.
The method begins by initializing the null path, associated with 'frame 0', to 0, i.e. Y0 = 0. This indicates that a template with zero frames has zero clusters associated with it. A total path distortion is assigned to each path to describe its relative quality.
Although any total distortion measure can be used, the implementation described herein uses the maximum of the peak spectral distortions from all the clusters defining the current path. Accordingly, the null path, Y0, is assigned zero total path distortion, TPD.
To find the first partial path or combination of clusters, partial path Y1 is defined as follows:

Y1 (partial path at frame 1) = Y0 + C0,1

This states that the allowable clusters of one frame can be formed by taking the null path, Y0, and appending all frames up to frame 1. Hence, the total cost for partial path Y1 is one cluster and the total path distortion is zero, since the average frame is identical to the actual frame.


The formation of the second partial path, Y2, requires that two possibilities be considered. They are:

Y2 = min [Y0 + C0,2;
Y1 + C1,2].

The first possibility is the null path, Y0, with frames 1 and 2 combined into one cluster. The second possibility is the first frame as a cluster, partial path Y1, plus the second frame as the second cluster.
The first possibility has a cost of one cluster while the second has a cost of two clusters. Since the object in optimizing the reduction is to obtain the fewest clusters, the first possibility is preferred. The total cost for the first possibility is one cluster. Its TPD is equal to the peak distortion between each frame and the average of the two frames. In the instance that the first possibility has a local distortion which exceeds the predetermined threshold, the second possibility is chosen.
To form partial path Y3, three possibilities exist:

Y3 = min [Y0 + C0,3;
Y1 + C1,3;
Y2 + C2,3].

The formation of partial path Y3 depends upon which path was chosen during the formation of partial path Y2. One of the first two possibilities is not considered, since partial path Y2 was optimally formed. Hence, the path that was not chosen at partial path Y2 need not be considered for partial path Y3. In carrying out this technique for large numbers of frames, a globally optimal solution is realized without searching paths that will never become optimum. Accordingly, the computation time required for data reduction is substantially reduced.
Figure 5b illustrates an example of forming the optimal partial path in a four-frame word template. Each partial path, Y1 through Y4, is shown in a separate row.
The frames to be considered for clustering are underlined. The first partial path, defined as Y0 + C0,1, has only one choice, 520. The single frame is clustered by itself.
For partial path Y2, the optimal formation includes a cluster with the first two frames, choice 522.
In this example, assume the local distortion threshold is exceeded; therefore, the second choice 524 is taken. The X over these two combined frames 522 indicates that combining these two frames will no longer be held as a consideration for a viable average frame. Hereinafter, this is referred to as an invalidated choice. The optimal cluster formation up to frame 2 comprises two clusters, each with one frame 524.
For partial path Y3, there are three sets of choices. The first choice 526 is the most desirable but it would typically be rejected since combining the first two frames 522 of partial path Y2 exceeds the threshold.
It should be noted that this is not always the case. A truly optimal algorithm would not immediately reject this combination based solely on the invalidated choice 522 of partial path Y2. The inclusion of additional frames into a cluster which already exceeds the distortion threshold occasionally causes the local distortion to decrease.
However, this is rare. In this example, such an inclusion is not considered. Larger combinations of an invalidated combination will also be invalidated. Choice 530 is invalidated because choice 522 was rejected.

Accordingly, an X is depicted over the first and third choices 526 and 530, indicating an invalidation of each.
Hence, the third partial path, Y3, has only two choices, the second 528 and the fourth 532. The second choice 528 is more optimal (fewer clusters) and, in this example, is found not to exceed the local distortion threshold.
Accordingly, the fourth choice 532 is invalidated since it is not optimal. This invalidation is indicated by the XX over the fourth choice 532. The optimal cluster formation up to frame 3 comprises two clusters 528. The first cluster contains only the first frame. The second cluster contains frames 2 and 3.
The fourth partial path, Y4, has four conceptual sets from which to choose. The X indicates that choices 534, 538, 542 and 548 are invalidated as a consequence of choice 522, from the second partial path, Y2, being invalidated. This results in consideration of only choices 536, 540, 544 and 546. Since choice 546 is known to be a non-optimal choice, because the optimal clustering up to Y3 is 528 and not 532, it is invalidated, as indicated by XX. Choice 536, of the remaining three choices, is selected next, since it minimizes the number of representative clusters. In this example, choice 536 is found not to exceed the local distortion threshold.
Therefore, the optimal cluster formation for the entire word template comprises only two clusters. The first cluster contains only the first frame. The second cluster contains frames 2 through 4. Partial path Y4 represents the optimally reduced word template.
Mathematically, this optimal partial path is defined as:
Y4 = Y1 + C1,4.
The above path forming procedure can be improved upon by selectively ordering the cluster formations for each partial path. The frames can be clustered from the last frame of the partial path toward the first frame of the partial path. For example, in forming a partial path Y10, the order of clustering is: Y9 + C9,10; Y8 + C8,10;
Y7 + C7,10; etc. The cluster consisting of frame 10 is considered first. Information defining this cluster is saved and frame 9 is added to the cluster, C8,10. If clustering frames 9 and 10 exceeds the local distortion threshold, then the information defining cluster C8,10 is not considered as an additional cluster appended to partial path Y8. If clustering frames 9 and 10 does not exceed the local distortion threshold, then cluster C8,10 is considered. Frames are added to the cluster until the threshold is exceeded, at which time the search for partial paths at Y10 is completed. Then, the optimal partial path, the path with the fewest clusters, is chosen from all the preceding partial paths for Y10. This selective order of clustering limits the testing of potential cluster combinations, thereby reducing computation time.
In general, at an arbitrary partial path Yj, a maximum of j cluster combinations are tested. Figure 5c illustrates the selective ordering for such a path. The optimal partial path is mathematically defined as:

Yj = min [Yj-1 + Cj-1,j; . . . ; Y1 + C1,j; Y0 + C0,j],
where min selects the minimum number of clusters in a cluster path that satisfies the distortion criteria. Marks are placed on the horizontal axis of Figure 5c, depicting each frame. The rows shown vertically are cluster formation possibilities for partial path Yj. The lowest set of brackets, cluster possibility number 1, determines the first potential cluster formation. This formation includes the single frame, j, clustered by itself and the optimal partial path Yj-1. To determine if a path exists with a lower cost, possibility two is tested. Since partial path Yj-2 is optimal up to frame j-2, clustering frames j and j-1 determines if another formation exists up to frame j.
Frame j is clustered with additional adjacent frames until the distortion threshold is exceeded. When the distortion threshold is exceeded, the search for partial path Yj is completed and the path with the fewest clusters is taken as Yj.
Ordering the clustering in this manner forces only frames immediately adjacent to frame j to be clustered. An additional benefit is that invalidated choices are not used in determining which frames should be clustered. Hence, for any single partial path, a minimum number of frames are tested for clustering and only information defining one clustering per partial path is stored in memory.
The information defining each partial path includes three parameters:
1) The total path cost, i.e., the number of clusters in the path.
2) A trace-back pointer indicating the previous path formed. For example, if partial path Y6 is defined as (Y3 + C3,6), then the trace-back pointer for Y6 points to partial path Y3.
3) The total path distortion (TPD) for the current path, reflecting the overall distortion of the path.
The traceback pointers define the clusters within the path.
The total path distortion reflects the quality of the path. It is used to determine which of two possible path formations, each having equal minimal cost (number of clusters), is the most desirable.

The following example illustrates an application of these parameters.
Let the following combinations exist for partial path Y8:
Y8 = Y3 + C3,8 or Y5 + C5,8.

Let the cost of partial path Y3 and partial path Y5 be equal and let clusters C3,8 and C5,8 both pass the local distortion constraints.

The desired optimal formation is that which has the least TPD. Using the peak distortion test, the optimal formation for partial path Y8 is determined as:

min[max[Y3 TPD; peak distortion of cluster 4-8]; max[Y5 TPD; peak distortion of cluster 6-8]].
The trace-back pointer would be set to either Y3 or Y5, depending on which formation has the least TPD.
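Under the four assumptions listed above, the entire search can be sketched in a few lines of Python. This is an illustrative sketch only: the frame representation, the averaging routine and the distortion measure are placeholders, and the roughly 20% grace margin discussed later with block 580 is omitted.

    # Sketch of optimal clustering by dynamic programming.  Each partial
    # path Yj keeps three parameters: cost (number of clusters), a
    # trace-back pointer (TBP) and the total path distortion (TPD).
    # `frames` is a list of equal-length feature vectors; `distortion`
    # is a placeholder local distortion measure.
    def average(frames):
        return [sum(col) / len(frames) for col in zip(*frames)]

    def peak_distortion(frames, distortion):
        avg = average(frames)
        return max(distortion(avg, f) for f in frames)

    def reduce_template(frames, distortion, threshold):
        N = len(frames)
        INF = float("inf")
        cost = [0] + [INF] * N       # Y0 is the null path: zero clusters
        tbp = [None] * (N + 1)       # trace-back pointers
        tpd = [0.0] + [INF] * N      # total path distortion of each path
        for j in range(1, N + 1):
            # Selective ordering: grow the final cluster backward from frame j
            for k in range(j - 1, -1, -1):
                d = peak_distortion(frames[k:j], distortion)
                if d > threshold and k < j - 1:
                    break            # larger clusters are taken as invalid too
                new_cost, new_tpd = cost[k] + 1, max(tpd[k], d)
                # Prefer fewer clusters; break ties on the lower TPD
                if new_cost < cost[j] or (new_cost == cost[j] and new_tpd < tpd[j]):
                    cost[j], tbp[j], tpd[j] = new_cost, k, new_tpd
        return cost, tbp, tpd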

Now referring to Figure 5d, shown is a flowchart illustrating the formation of partial paths for a sequence of j frames. Discussion of this flowchart pertains to a word template having 4 frames, i.e. N=4.
The resulting data-reduced template is the same as in the example from Figure 5b, where Y4 = Y1 + C1,4.
The null path, partial path Y0, is initialized along with the cost, the traceback pointers and the TPD, block 550. It should be noted that each partial path has its own set of values for TPD, cost and TBP. A frame pointer, j, is initialized to 1, indicating the first partial path, Y1, block 552. Continuing on to the second part of the flowchart, at Figure 5e, a second frame pointer, k, is initialized to 0, block 554. The second frame pointer is used to specify how far back frames are considered for clustering in the partial path.
Hence, the frames to be considered for clustering are specified from k+1 to j.
These frames are averaged, block 556, and a cluster distortion is generated, block 558. A test is performed to determine if the first cluster of the partial path is being formed, block 562. In this instance, the first partial path is being formed. Therefore, the cluster is defined in memory by setting the necessary parameters, block 564. Since this is the first cluster in the first partial path, the traceback pointer (TBP) is set to the null path, the cost is set to 1 and the TPD remains at 0.
The cost for the path ending at frame j is set as the cost of the path ending at frame k (the number of clusters in path Yk) plus one for the new cluster being added.
Testing for a larger cluster formation begins by decrementing the second frame pointer, k, depicted in block 566. At this point, since k is decremented to -1, a test is performed to prevent invalid frame clusters, block 568. A positive result from the test performed at block 568 indicates that all partial path formations for the current frame have been formed and tested for optimality. The first partial path is mathematically defined as Y1 = Y0 + C0,1. It is comprised of one cluster containing the first frame. The test illustrated in block 570 determines whether all frames have been clustered. There are three frames yet to cluster. The next partial path is initialized by incrementing the first frame pointer j, block 572. The second frame pointer is initialized to one frame before j, block 554. Accordingly, j points to frame 2 and k points to frame 1.
Frame 2 is averaged by itself at block 556.
The test performed at block 562 determines that j is equal to k+1 and flow proceeds to block 564 to define the first partial path Y2. The pointer k is decremented at block 566 for the next cluster consideration.
Frames 1 and 2 are averaged to form Y0 + C0,2, block 556, and a distortion measure is generated, block 558. Since this is not the first path being formed, block 562, flow proceeds to block 560. The distortion measure is compared to the threshold, block 560. In this example, combining frames 1 and 2 exceeds the threshold.
Thus, the previously saved partial path, i.e., Y1 + C1,2, is saved for partial path Y2 and the flowchart branches to block 580.
The step depicted in block 580 performs a test to determine whether any additional frames should be clustered with these frames that have exceeded the threshold, block 580. Typically, due to the nature of most data, adding additional frames at this point will also result in an exceeded distortion threshold.
However, it has been found that if the generated distortion measure does not exceed the threshold by more than about 20%, additional frames may cluster without exceeding the distortion threshold. If further clustering is desired, the second frame pointer is decremented to specify the new cluster, block 566.
Otherwise, the test is performed to indicate whether all frames have been clustered, block 570.
The next partial path is initialized with j set equal to 3, block 572. The second frame pointer is initialized to 2. Frame 3 is averaged by itself, block 556, and a distortion measure is generated, block 558.


Since this is the first path formed for Y3, this new path is defined and saved in memory, block 564. The second frame pointer is decremented, block 566, to specify a larger cluster. The larger cluster comprises frames 2 and 3.
These frames are averaged, block 556, and a distortion is generated, block 558. Since this is not the first path formed, block 562, flow proceeds to block 560. In this example, the threshold is not exceeded, block 560. Since this path Y1 + C1,3 is more optimal, with two clusters, than path Y2 + C2,3, with three clusters, path Y1 + C1,3 replaces the previously saved path Y2 + C2,3 as partial path Y3. A larger cluster is specified as k is decremented to 0, block 566.
Frames 1 through 3 are averaged, block 556, and another distortion measure is generated, block 558. In this example, the threshold is exceeded, block 560. No additional frames are clustered, block 580, and the test is again performed to determine whether all the frames have been clustered, block 570. Since frame 4 is still not yet clustered, j is incremented for the next partial path, Y4. The second frame pointer is set at frame 3 and the clustering process repeats.
Frame 4 is averaged by itself, block 556. Again, this is the first path formed, in block 562, and the path is defined for Y4, block 564. This partial path Y3 + C3,4 has a cost of 3 clusters. A larger cluster is specified, block 566, and frames 3 and 4 are clustered.
Frames 3 and 4 are averaged, block 556. In this example their distortion measure does not exceed the threshold, block 560. This partial path Y2 + C2,4 has a cost of 3 clusters. Since this has the same cost as the previous path (Y3 + C3,4), flow proceeds through blocks 574 and 576 to block 578, and the TPD is examined to determine which path has the least distortion. If the current path (Y2 + C2,4) has a lower TPD, block 578, than the previous path (Y3 + C3,4), then it will replace the previous path, block 564; otherwise flow proceeds to block 566. A larger cluster is specified, block 566, and frames 2 through 4 are clustered.
Frames 2 through 4 are averaged, block 556. In this example, their distortion measure again does not exceed the threshold. This partial path Y1 + C1,4 has a cost of 2 clusters. Since this is a more optimal path for partial path Y4, block 574, than the previous, the path is defined in place of the previous, block 564. A larger cluster is specified, block 566, and frames 1 through 4 are clustered.
Averaging frames 1 through 4, in this example, exceeds the distortion threshold, block 560. Clustering is stopped, block 580. Since all the frames have been clustered, block 570, the stored information defining each cluster defines the optimal path for this 4-frame data reduced word template, block 582, mathematically defined as Y4 = Y1 + C1,4.
This example illustrates the formation of the optimal data reduced word template from Figure 3. The flowchart illustrates clustering tests for each partial path in the following order:
Y1:  1 2 3 4
Y2:  1 2 3 4    *1 2 3 4
Y3:  1 2 3 4    1 2 3 4    *1 2 3 4
Y4:  1 2 3 4    1 2 3 4    1 2 3 4    *1 2 3 4
The numbers indicating the frames are underlined for each cluster test. Those clusters that exceed the threshold are indicated as such by a preceding '*'.
In this example, 10 cluster paths are searched.
In general, using this procedure requires at most [N(N + 1)]/2 cluster paths to search for the optimal cluster formation, where N is the number of frames in the word template. For a 15 frame word template, this procedure would require searching at most 120 paths, compared to 16,384 paths for a search attempting to try all possible combinations. Consequently, by using such a procedure in accordance with the present invention, an enormous reduction in computation time is realized.
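The quoted figures follow directly from the bound; a quick check (illustrative arithmetic only):

    # Worst-case paths searched: N(N + 1)/2 for the dynamic-programming
    # search versus 2**(N - 1) possible segmentations for an exhaustive
    # search of an N-frame template.
    N = 15
    print(N * (N + 1) // 2)   # 120
    print(2 ** (N - 1))       # 16384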
Even further reduction in computation time can be realized by modifying blocks 552, 568, 554, 562, and 580 of Figures 5d and 5e. Block 568 illustrates a limit being placed on the second frame pointer, k. In the example, k is limited only by the null path, partial path Y0, at frame 0. Since k is used to define the length of each cluster, the number of frames clustered can be constrained by constraining k. For any given distortion threshold, there will almost always be a number of frames that, when clustered, will cause a distortion that exceeds the distortion threshold. On the other extreme, there is always a minimal cluster formation that will never cause a distortion that exceeds the distortion threshold. Therefore, by defining a maximum cluster size, MAXCS, and minimum cluster size, MINCS, the second frame pointer, k, can be constrained.
MINCS would be employed in blocks 552, 554 and 562. For block 552, j would be initialized to MINCS.
For block 554, rather than subtracting one from k in this step, MINCS would be subtracted. This forces k back a certain number of frames for each new partial path.
Consequently, clusters with fewer frames than MINCS will not be averaged. It should also be noted that to accommodate MINCS, block 562 should depict the test of j = k + MINCS rather than j = k + 1.
MAXCS would be employed in block 568. The limit becomes either frames before 0 (k < 0) or frames before that, designated by MAXCS (k < j - MAXCS). This prevents testing clusters that are known to exceed MAXCS.
According to the notation used with Figure 5e, these constraints can be mathematically expressed as follows:
k >= j - MAXCS and k >= 0; and k <= j - MINCS and j >= MINCS.
For example, let MAXCS = 5 and MINCS = 2 for a partial path Y15. Then the first cluster consists of frames 15 and 14. The last cluster consists of frames 15 through 11. The constraint that j has to be greater than or equal to MINCS prevents clusters from forming within the first MINCS frames.
Notice (block 562) that clusters at size MINCS are not tested against the distortion threshold (block 560). This ensures that a valid partial path will exist for all Yj, j >= MINCS.
By utilizing such constraints in accordance with the present invention, the number of paths that are searched is reduced according to the difference between MAXCS and MINCS.
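These bounds translate into a simple range for the second frame pointer; a sketch (the function name and ordering are illustrative):

    # Values of k searched for partial path Yj under the cluster-size
    # constraints, searched from the smallest cluster outward.
    def k_range(j, mincs, maxcs):
        if j < mincs:
            return []    # no clusters form within the first MINCS frames
        return list(range(j - mincs, max(j - maxcs, 0) - 1, -1))

    # Example from the text: MAXCS = 5, MINCS = 2, partial path Y15
    print(k_range(15, 2, 5))   # [13, 12, 11, 10] -> clusters 14-15 ... 11-15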
Now referring to Figure 5f, block 582 from Figure 5e is shown in further detail. Figure 5f illustrates a method to generate output clusters after data reduction by using the trace back pointer (TBP in block 564 of Figure 5e) from each cluster in reverse direction. Two frame pointers, TB and CF, are initialized, block 590. TB is initialized to the trace back pointer of the last frame. CF, the current end frame pointer, is initialized to the last frame of the word template. In the example from Figures 5d and 5e, TB would point at frame 1 and CF would point at frame 4. Frames TB+1 through CF are averaged to form an output frame for the resulting word template, block 592. A variable for each averaged frame, or cluster, stores the number of frames combined. It is referred to as "repeat count" and can be calculated from CF-TB. See Figure 6c, infra. A test is then performed to determine whether all clusters have been output, block 594. If not, the next cluster is pointed at by setting CF equal to TB and setting TB to the trace back pointer of new frame CF. This procedure continues until all clusters are averaged and output to form the resultant word template.
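The output step itself is a short backward walk over the trace-back pointers; a sketch (reusing the placeholder `average` routine from the earlier sketch):

    # Walk the trace-back pointers from the last frame to frame 0, then
    # emit the averaged clusters in forward order with their repeat counts.
    def output_clusters(frames, tbp):
        cf = len(frames)                     # current end frame pointer, CF
        clusters = []
        while cf > 0:
            tb = tbp[cf]                     # trace back pointer, TB
            avg = average(frames[tb:cf])     # frames TB+1 through CF
            clusters.append((avg, cf - tb))  # repeat count = CF - TB
            cf = tb                          # point at the next cluster back
        clusters.reverse()
        return clusters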
Figures 5g, 5h and 5i illustrate a unique application of the trace back pointers. The trace back pointers are used in a partial trace back mode for outputting clusters from data with an indefinite number of frames, generally referred to as infinite length data.
This is different from the examples illustrated in Figures 3 and 5, since those examples used a word template with a finite number of frames, 4.
Figure 5g illustrates a series of 24 frames, each assigned a trace back pointer defining the partial paths.
In this example MINCS has been set to 2 and MAXCS has been set at 5. Applying partial trace back to infinite length data requires that clustered frames be output continuously to define portions of the input data.
Hence, by employing the trace back pointers in a scheme of partial trace back, continuous data can be reduced.
Figure 5h illustrates all partial paths, ending at frames 21-24, converging at frame 10. Frames 1-4, 5-7 and 8-10 were found to be optimal clusters and since the convergence point is frame 10, they can be output.

Figure 5i shows the remaining tree after frames 1-4, 5-7 and 8-10 have been output. Figures 5g and 5h show the null pointer at frame 0. After the formation of Figure 5i, the convergence point of frame 10 designates the location of the new null pointer. By tracing back through to the convergence point and outputting frames through that point, infinite length data can be accommodated.
In general, at frame n, the points from which to start trace back are n, n-1, n-2, ..., n-MAXCS, since these paths are still active and can be combined with more incoming data.
The flowchart of Figures 6a and 6b illustrates the sequence of steps performed by differential encoding block 430 of Figure 4a. Starting with block 660, the differential encoding process reduces template storage requirements by generating the differences between adjacent channels for storage rather than each channel's actual energy data. The differential encoding process operates on a frame-by-frame basis as described in Figure 4b. Hence, initialization block 661 sets the frame count FC to one and the channel total CT to 14.
Block 662 calculates the frame total FT as before. Block 663 tests to see if all frames of the word have been encoded. If all frames have been processed, the differential encoding ends with block 664.
Block 665 begins the actual differential encoding procedure by setting the channel count CC equal to 1.
The energy normalized data for channel one is read into the accumulator in block 666. Block 667 quantizes the channel one data into 1.5 dB steps for reduced storage.
The channel data from feature extractor 312 is initially represented as 0.375 dB per step utilizing 8 bits per byte. When quantized into 1.5 dB increments, only 6 bits are required to represent a 96 dB energy range (2^6 x 1.5 dB). The first channel is not differentially encoded so as to form a basis for determining adjacent channel differences.
A significant quantization error could be introduced into the differential encoding process of block 430 if the quantized and limited values of the channel data are not used for calculating the channel differentials. Therefore, an internal variable RQV, the reconstructed quantized value of the channel data, is introduced inside the differential encoding loop to take this error into account. Block 668 forms the channel one RQV for later use by simply assigning it a value of the channel one quantized data, since channel one is not differentially encoded. Block 675, discussed below, forms the RQV for the remaining channels. Hence, the quantized channel one data is output (to template memory 160) in block 669.
The channel counter is incremented in block 670, and the next channel data is read into the accumulator at block 671. Block 672 quantizes the energy of this channel data at 1.5 dB per step. Since differential encoding stores the differences between channels rather than the actual channel values, block 673 determines the adjacent channel differences according to the equation:
Channel(CC)differential = CH(CC)data - CH(CC-1)RQV

where CH(CC-1)RQV is the reconstructed quantized value of the previous channel formed in block 675 of the previous loop, or in block 668 for CC=2.
Block 674 limits this channel differential bit value to a -8 to +7 maximum. By restricting the bit value and quantizing the energy value, the range of adjacent channel differences becomes -12 dB/+10.5 dB.

Although different applications may require different quantization values or bit limits, our results indicate these values are sufficient for our application.
Furthermore, since the limited channel difference is a four-bit signed number, two values per byte may be stored. Hence, the limiting and quantization procedures described here substantially reduce the amount of required data storage.
However, if the limited and quantized values of each differential were not used to form the next channel differential, a significant reconstruction error could result. Block 675 takes this error into account by reconstructing each channel differential from quantized and limited data before forming the next channel differential. The internal variable RQV is formed for each channel by the equation:

Channel(CC)RQV = CH(CC-1)RQV + CH(CC)differential

where CH(CC-1)RQV is the reconstructed quantized value of the previous channel. Hence, the use of the RQV variable inside the differential encoding loop prevents quantization errors from propagating to subsequent channels.
Block 676 outputs the quantized/limited channel differential to the template memory such that the difference is stored in two values per byte (see Figure 6c). Block 677 tests to see if all the channels have been encoded. If channels remain, the procedure repeats with block 670. If the channel count CC equals the channel total CT, the frame count FC is incremented in block 678 and tested in block 663 as before.
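The encoding loop for one frame can be sketched as follows (the quantization step and limits are as given in the text; the function name and data layout are illustrative):

    # Differentially encode one frame of 14 energy-normalized channel
    # values, in dB.  Channel 1 is quantized and kept as-is; channels
    # 2-14 become 4-bit differences against the reconstructed quantized
    # value (RQV) of the previous channel, so errors do not propagate.
    STEP_DB = 1.5

    def encode_frame(channels_db):
        q = [round(c / STEP_DB) for c in channels_db]  # 1.5 dB steps
        out = [q[0]]                 # channel 1: not differentially encoded
        rqv = q[0]                   # RQV of the previous channel
        for cc in range(1, len(q)):
            diff = max(-8, min(7, q[cc] - rqv))  # limit to 4-bit signed range
            out.append(diff)
            rqv += diff              # reconstruct just as the decoder will
        return out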
The following calculations illustrate the reduced data rate that can be achieved with the present invention. Feature extractor 312 generates an 8-bit logarithmic channel energy value for each of the 14 channels, wherein the least significant bit represents three-eighths of a dB. Hence, one frame of raw word data applied to data reducer block 322 comprises 14 bytes of data, at 8 bits per byte, at 100 frames per second, which equals 11,200 bits per second.
After the energy normalization and segmentation/compression procedures have been performed, 16 bytes of data per frame are required. (One byte for each of the 14 channels, one byte for the average frame energy AVGENG, and one byte for the repeat count.) Thus, the data rate can be calculated as 16 bytes of data at 8 bits per byte, at 100 frames per second, which, assuming an average of 4 frames per repeat count, gives 3200 bits per second.
After the differential encoding process of block 430 is completed, each frame of template memory 160 appears as shown in the reduced data format of Figure 6c.
The repeat count is stored in byte 1. The quantized, energy-normalized channel one data is stored in byte 2.
Bytes 3 through 9 have been divided such that two channel differences are stored in each byte. In other words, the differentially encoded channel 2 data is stored in the upper nibble of byte 3, and that of channel 3 is stored in the lower nibble of the same byte. The channel 14 differential is stored in the upper nibble of byte 9, and the average frame energy, AVGENG, is stored in the lower nibble of byte 9. At 9 bytes per frame of data, at 8 bits per byte, at 100 frames per second, and assuming an average repeat count of 4, the data rate now equals 1800 bits per second.
Hence, differential encoding block 430 has reduced 16 bytes of data into 9. If the repeat count values lie between 2 and 15, then the repeat count may also be stored in a four-bit nibble. One may then rearrange the repeat count data format to further reduce storage requirements to 8.5 bytes per frame. Moreover, the data reduction process has also reduced the data rate by at least a factor of six (11,200 to 1,800).
Consequently, the complexity and storage requirements of the speech recognition system are dramatically reduced, thereby allowing for an increase in speech recognition vocabulary.
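The quoted rates can be reproduced directly (illustrative arithmetic only):

    # Data rates at 100 frames per second, assuming an average repeat
    # count of 4 frames per stored frame.
    raw        = 14 * 8 * 100        # 11200 b/s from the feature extractor
    compressed = 16 * 8 * 100 // 4   # 3200 b/s after segmentation/compression
    encoded    = 9 * 8 * 100 // 4    # 1800 b/s after differential encoding
    print(raw, compressed, encoded, raw / encoded)   # reduction factor > 6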


3. Decoding Algorithm

Referring to Figure 7a, shown is an improved word model having frames 720 combined into 3 average frames 722, as discussed with block 420 in Figure 4a. Each average frame 722 is depicted as a state in a word model.
Each state contains one or more substates. The number of substates is dependent on the number of frames combined to form the state. Each substate has an associated distance accumulator for accumulating similarity measures, or distance scores, between input frames and the average frames. Implementation of this improved word model is subsequently discussed with Figure 7b.
Figure 7b shows block 120 from Figure 3 expanded to show specific detail including its relationship with template memory 160. The speech recognizer 326 is expanded to include a recognizer control block 730, a word model decoder 732, a distance ram 734, a distance calculator 736 and a state decoder 738. The template decoder 328 and template memory are discussed immediately following discussion of the speech recognizer 326.
The recognizer control block 730 is used to coordinate the recognition process. Coordination includes endpoint detection (for isolated word recognition), tracking best accumulated distance scores of the word models, maintenance of link tables used to link words (for connected or continuous word recognition), special distance calculations which may be required by a specific recognition process and initializing the distance ram 734. The recognizer control may also buffer data from the acoustic processor.
For each frame of input speech, the recognizer updates all active word templates in the template memory.
Specific requirements of the recognizer control 730 are discussed by Bridle, Brown and Chamberlain in a paper entitled "An Algorithm for Connected Word Recognition", Proceedings of the 1982 IEEE Int. Conf. on Acoustics, Speech and Signal Processing, pp. 899-902. A corresponding control processor used by the recognizer control block is described by Peckham, Green, Canning and Stephens in a paper entitled "A Real-Time Hardware Continuous Speech Recognition System", Proceedings of the 1982 IEEE Int. Conf. on Acoustics, Speech and Signal Processing, pp. 863-866.
The distance ram 734 contains accumulated distances used for all substates current to the decoding process. If beam decoding is used, as described by B. Lowerre in "The Harpy Speech Recognition System", Ph.D. Dissertation, Computer Science Dept., Carnegie-Mellon University, 1977, then the distance ram 734 would also contain flags to identify which substates are currently active. If a connected word recognition process is used, as described in "An Algorithm for Connected Word Recognition", supra, then the distance ram 734 would also contain a linking pointer for each substate.
The distance calculator 736 calculates the distance between the current input frame and the state being processed. Distances are usually calculated according to the type of feature data used by the system to represent the speech. Bandpass filtered data may use Euclidean or Chebychev distance calculations as described in "The Effects of Selected Signal Processing Techniques on the Performance of a Filter-Bank-Based Isolated Word Recognizer", B.A. Dautrich, L.R. Rabiner, T.B. Martin, Bell System Technical Journal, Vol. 62, No. 5, May-June 1983, pp. 1311-1336. LPC data may use a log-likelihood ratio distance calculation, as described by F. Itakura in "Minimum Prediction Residual Principle Applied to Speech Recognition", IEEE Trans. Acoustics, Speech and Signal Processing, vol. ASSP-23, pp. 67-72, Feb. 1975. The present embodiment uses filtered data, also referred to as channel bank information, hence either Chebychev or Euclidean calculations would be appropriate.
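For channel-bank data, either calculation is only a few lines; a sketch (the vectors are lists of channel energies and the names are illustrative):

    # Chebychev (maximum per-channel difference) and Euclidean distances
    # between an input frame and a template state.
    def chebychev(a, b):
        return max(abs(x - y) for x, y in zip(a, b))

    def euclidean(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5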
The state decoder 738 updates the distance ram for each currently active state during the processing of the input frame. In other words, for each word model processed by the word model decoder 732, the state decoder 738 updates the required accumulated distances in the distance ram 734. The state decoder also makes use of the distance between the input frame and the current state determined by the distance calculator 736 and, of course, the template memory data representing the current state.
In Figure 7c, steps performed by the word model decoder 732, for processing each input frame, are shown in flowchart form. A number of word searching techniques can be used to coordinate the decoding process, including a truncated searching technique, such as Beam Decoding, described by B. Lowerre in "The Harpy Speech Recognition System", Ph.D. Dissertation, Computer Science Dept., Carnegie-Mellon University, 1977. It should be noted that implementing a truncated search technique requires the speech recognizer control 730 to keep track of threshold levels and best accumulated distances.
At block 740 of Figure 7c, three variables are extracted from the recognizer control (block 730 of Figure 7b). The three variables are PCAD, PAD and Template PTR. Template PTR is used to direct the word model decoder to the correct word template. PCAD
represents the accumulated distance from the previous state. This is the distance which is accumulated, exiting from the previous state of the word model, in sequence.

PAD represents the previous accumulated distance, although not necessarily from the previous contiguous state. PAD may differ from PCAD when the previous state has a minimum dwell time of 0, i.e., when the previous state may be skipped altogether.
In an isolated word recognition system PAD and PCAD would typically be initialized to 0 by the recognizer control. In a connected or continuous word recognition system the initial values of PAD and PCAD
may be determined from outputs of other word models.
In block 742 of Figure 7c, the state decoder performs the decoding function for the first state of a particular word model. The data representing the state is identified by the Template PTR provided from the recognizer control. The state decoder block is discussed in detail with Figure 7d.
A test is performed in block 744 to determine if all states of the word model have been decoded. If not, flow returns back to the state decoder, block 742, with an updated Template PTR. If all states of the word model have been decoded, then accumulated distances, PCAD
and PAD, are returned to the recognizer control at block 748. At this point, the recognizer control would typically specify a new word model to decode. Once all word models have been processed it should start processing the next frame of data from the acoustic processor. For an isolated word recognition system when the last frame of input is decoded, PCAD returned by the word model decoder for each word model would represent the total accumulated distance for matching the input utterance to that word model. Typically, the word model with the lowest total accumulated distance would be chosen as the one represented by the utterance which was recognized. Once a template match has been determined, this information is passed to control unit 334.

Now referring to Figure 7d, shown is a flowchart for performing the actual state decoding for each state of each word model, i.e., block 742 of Figure 7c expanded. The accumulated distances, PCAD and PAD, are passed along to block 750. At block 750, the distance from the word model state to the input frame is computed and stored as a variable called IFD, for input frame distance.
The maxdwell for the state is transferred from template memory, block 751. The maxdwell is determined from the number of frames which are combined in each average frame of the word template and is equivalent to the number of substates in the state. In fact, this system defines the maxdwell as the number of frames which are combined. This is because during word training, the feature extractor (block 310 of Figure 3) samples the incoming speech at twice the rate it does during the recognition process. Setting maxdwell equal to the number of frames averaged allows a spoken word to be matched to a word model when the word spoken during recognition is up to twice the time length of the word represented by the template.
The mindwell for each state is determined during the state decoding process. Since only the state's maxdwell is passed to the state decoder algorithm, mindwell is calculated as the integer part of maxdwell divided by 4 (block 752). This allows a spoken word to be matched to a word model when the word spoken during recognition is half the time length of the word represented by the template.
A dwell counter, or substate pointer, i, is initialized in block 754 to indicate the current dwell count being processed. Each dwell count is referred to as a substate. The maximum number of substates for each state is defined according to maxdwell, as previously discussed. In this embodiment, the substates are processed in reverse order to facilitate the decoding process. Accordingly, since maxdwell is defined as the total number of substates in the state, "i" is initially set equal to maxdwell.
In block 756, a temporary accumulated distance, TAD, is set equal to substate i's accumulated distance, referred to as IFAD(i), plus the current input frame distance, IFD. The accumulated distance is presumed to have been updated from the previously processed input frame, and stored in distance ram, block 734 from Figure 7b. IFAD is set to 0 prior to the initial input frame of the recognition process for all substates of all word models.
The substate pointer is decremented at block 758.
If the pointer has not reached 0, block 760, the substate's new accumulated distance, IFAD(i+1), is set equal to the accumulated distance for the previous substate, IFAD(i), plus the current input frame distance, IFD, block 762. Otherwise, flow proceeds to block 768 of Figure 7e.
A test is performed in block 764 to determine whether the state can be exited from the current substate, i.e., if "i" is greater than or equal to mindwell.
Until "i" is less than mindwell, the temporary accumulated distance, TAD, is updated to the minimum of either the previous TAD or IFAD(i+1), block 766. In other words, TAD is defined as the best accumulated distance leaving the current state.
Continuing on to block 768 of Figure 7e, the accumulated distance for the first substate is set to the best accumulated distance entering the state, which is PAD.
A test is then performed to determine if mindwell for the current state is 0, block 770. A mindwell of zero indicates that the current state may be skipped over to yield a more accurate match in the decoding of this word template. If mindwell for the state is not zero, PAD is set equal to the temporary accumulated distance, TAD, since TAD contains the best accumulated distance out of this state, block 772. If mindwell is zero, PAD is set as the minimum of either the previous state's accumulated distance out, PCAD, or the best accumulated distance out of this state, TAD, block 774. PAD represents the best accumulated distance allowed to enter the next state.
In block 776, the previous contiguous accumulated distance, PCAD, is set equal to the best accumulated distance leaving the current state, TAD.
This variable is needed to complete PAD for the following state if that state has a mindwell of zero. Note, the minimum allowed maxdwell is 2, so that 2 adjacent states can never both be skipped.
Finally, the distance ram pointer for the current state is updated to point to the next state in the word model, block 778. This step is required since the substates are decoded from end to beginning for a more efficient algorithm.
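The per-state update of blocks 750 through 778 can be sketched as follows. This is illustrative only; `ifad` stands in for the distance ram as a 1-indexed list (index 0 unused) of the state's substate accumulators, updated in place.

    # One state update for one input frame.  ifad[1..maxdwell] holds the
    # accumulated distance of each substate from the previous input frame;
    # substates are processed in reverse order so each old value is read
    # before being overwritten.  `ifd` is the input frame distance.
    def decode_state(ifad, ifd, maxdwell, pcad, pad):
        mindwell = maxdwell // 4             # block 752
        i = maxdwell
        tad = ifad[i] + ifd                  # block 756
        i -= 1                               # block 758
        while i > 0:                         # block 760
            ifad[i + 1] = ifad[i] + ifd      # block 762
            if i >= mindwell:                # block 764: state may exit here
                tad = min(tad, ifad[i + 1])  # block 766
            i -= 1
        ifad[1] = pad                        # block 768: best distance in
        if mindwell == 0:
            pad = min(pcad, tad)             # block 774: state may be skipped
        else:
            pad = tad                        # block 772
        pcad = tad                           # block 776
        return pcad, pad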
The table shown in Appendix A illustrates the flowchart of Figures 7c, 7d and 7e applied in an example where an input frame is processed through a word model (similar to Fig. 7a) with 3 states, A, B and C. In the example, it is presumed that previous frames have already been processed. Hence, the table includes a column showing "old accumulated distances (IFAD)" for each substate in states A, B and C.
Above the table, information is provided which will be referenced as the example develops. The 3 states have maxdwells of 3, 8 and 4 respectively for A, B and C.
The mindwells for each state are shown in the table as 0, 2 and 1 respectively. It should be noted that these have been calculated, according to block 752 of Figure 7d, as the integer part of maxdwell/4. Also provided at the top of the table is the input frame distance (IFD) for each state according to block 750 of Figure 7d. This information could as well have been shown in the table, but it has been excluded to shorten the table and simplify the example. Only pertinent blocks are shown at the left side of the table.
The example begins at block 740 of Figure 7c.
The previous accumulated distances, PCAD and PAD, and the template pointer, which points to the first state of the word template being decoded, are received from the recognizer control. Accordingly, in the first row of the table, state A is recorded along with PCAD and PAD.
Moving on to Figure 7d, the distance (IFD) is calculated, maxdwell is retrieved from template memory, mindwell is calculated and the substate pointer, "i", is initialized. Only the initialization of the pointer needs to be shown in the table since maxdwell, mindwell and IFD information is already provided above the table.
The second line shows i set equal to 3, the last substate, and the previous accumulated distance is retrieved from the distance ram.
At block 756, the temporary accumulated distance, TAD, is calculated and recorded on the third line of the table.
The test performed at block 760 is not recorded in the table, but the fourth line of the table shows flow moving to block 762 since all substates have not been processed.
The fourth line of the table shows both the decrement of the substate pointer, block 758, and the calculation of the new accumulated distance, block 762.
Hence, recorded is i=2, the corresponding old IFAD and the new accumulated distance set at 14, i.e. the previous accumulated distance for the current substate plus the input frame distance for the state.
The test performed at block 764 results in the affirmative. The fifth line of the table shows the temporary accumulated distance, TAD, updated as the minimum of either the current TAD or IFAD(3). In this case, it is the latter, TAD = 14.
Flow returns to block 758. The pointer is decremented and the accumulated distance for the second substate is calculated. This is shown on line six.
The first substate is processed similarly, at which point i is detected as equal to 0, and flow proceeds from block 760 to block 768. At block 768, IFAD is set for the first substate according to PAD, the accumulated distance into the current state.
At block 770, the mindwell is tested against zero. If it equals zero, flow proceeds to block 774 where PAD is determined from the minimum of the temporary accumulated distance, TAD, or the previous accumulated distance, PCAD, since the current state can be skipped due to the zero mindwell. Since mindwell = 0 for state A, PAD is set to the minimum of 9 (TAD) and 5 (PCAD), which is 5.
PCAD is subsequently set equal to TAD, block 776.
Finally, the first state is completely processed with the distance ram pointer updated to the next state in the word model, block 778.
Flow returns to the flowchart in Figure 7c to update the template pointer and back to Figure 7d, block 750, for the next state of the word model. This state is processed in a similar manner as the former, with the exceptions that PAD and PCAD, 5 and 9 respectively, are passed from the former state, mindwell for this state is not equal to zero, and block 766 will not be executed for all substates. Hence, block 772 is processed rather than block 774.
The third state of the word model is processed along the same lines as the first and second. After completing the third state, the flowchart of Figure 7c is returned to with the new PAD and PCAD variables for the recognizer control.
In summary, each state of the word model is updated one substate at a time in reverse order. Two variables are used to carry the most optimal distance from one state to the next. The first, PCAD, carries the minimum accumulated distance from the previous contiguous state. The second variable, PAD, carries the minimum accumulated distance into the current state and is either the minimum accumulated distance out of the previous state (same as PCAD) or, if the previous state has a mindwell of 0, the minimum of the minimum accumulated distance out of the previous state and the minimum accumulated distance out of the second previous state.
To determine how many substates to process, mindwell and maxdwell are calculated according to the number of frames which have been combined in each state.
The flowcharts of Figures 7c, 7d and 7e allow for an optimal decoding of each data reduced word template. By decoding the designated substates in reverse order, processing time is minimized. However, since real time processing requires that each word template must be accessed quickly, a special arrangement is required to readily extract the data reduced word templates.
The template decoder 328 of Figure 7b is used to extract the specially formatted word templates from the template memory 160 in a high speed fashion. Since each frame is stored in template memory in the differential form of Figure 6c, the template decoder 328 utilizes a special accessing technique to allow the word model decoder 732 to access the encoded data without excessive overhead.
The word model decoder 732 addresses the template memory 160 to specify the appropriate template to decode. The same information is provided to the template decoder 328, since the address bus is shared by each. The address specifically points to an average frame in the template. Each frame represents a state in the word model. For every state requiring decoding, the address typically changes.
Referring again to the reduced data format of Figure 6c, once the address of a word template frame is sent out, the template decoder 328 accesses bytes 3 through 9 in a nibble access. Each byte is read as 8 bits and then separated. The lower four bits are placed in a temporary register with sign extension. The upper four bits are shifted to the lower four bits with sign extension and are stored in another temporary register.
Each of the differential bytes is retrieved in this manner. The repeat count and the channel one data are retrieved in a normal 8-bit data bus access and temporarily stored in the template decoder 328. The repeat count (maxdwell) is passed directly to the state decoder while the channel one data and channel 2-14 differential data (separated and expanded to 8 bits as just described) are differentially decoded according to the flowchart in Figure 8b, infra, before being passed to distance calculator 736.
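A sketch of this unpacking for one stored frame follows (the byte layout is that of Figure 6c; the function names are illustrative, and treating AVGENG as a signed nibble is an assumption):

    # Unpack one 9-byte stored frame: byte 1 repeat count, byte 2 channel-1
    # data, bytes 3-9 two 4-bit signed values each (channel 2-14
    # differentials, then the average frame energy in the final nibble).
    def sign_extend4(nibble):
        return nibble - 16 if nibble & 0x8 else nibble

    def unpack_frame(b):
        repeat_count, ch1 = b[0], b[1]
        nibbles = []
        for byte in b[2:9]:
            nibbles.append(sign_extend4(byte >> 4))    # upper nibble first
            nibbles.append(sign_extend4(byte & 0x0F))  # then lower nibble
        diffs = nibbles[:13]      # channels 2 through 14
        avgeng = nibbles[13]      # average frame energy, AVGENG (assumed signed)
        return repeat_count, ch1, diffs, avgeng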

4. Data Expansion and Speech Synthesis

Referring now to Figure 8a, a detailed block diagram of data expander 346 of Figure 3 is illustrated.
As will be shown below, data expansion block 346 performs the reciprocal function of data reduction block 322 of Figure 3. Reduced word data, from template memory 160, is applied to differential decoding block 802. The decoding function performed by block 802 is essentially the inverse algorithm performed by differential encoding block 430 of Figure 4a. Briefly stated, the differential decoding algorithm of block 802 "unpacks" the reduced word feature data stored in template memory 160 by adding the present channel difference to the previous channel data. This algorithm is fully described in the flowchart of Figure 8b.
Next, energy denormalization block 804 restores the proper energy contour to the channel data by effecting the inverse algorithm performed in energy normalization block 410 of Figure 4a. The denormalization procedure adds the average energy value of all channels to each energy-normalized channel value stored in the template. The energy denormalization algorithm of block 804 is fully described in the detailed flowchart of Figure 8c.
Finally, frame repeating block 806 determines the number of frames compressed into a single frame by segmentation/compression block 420 of Figure 4a, and performs a frame repeat function to compensate accordingly. As the flowchart of Figure 8d illustrates, frame repeating block 806 outputs the same frame data "R" number of times, where R is the prestored repeat count obtained from template memory 160. Hence, reduced word data from the template memory is expanded to form "unpacked" word data which can be interpreted by the speech synthesizer.
The flowchart of Figure 8b illustrates the steps performed by differential decoding block 802 of data expander 346. Following start block 810, block 811 initializes the variables to be used in later steps.
Frame count FC is initialized to one to correspond to the first frame of the word to be synthesized, and channel total CT is initialized to the total number of channels in the channel-bank synthesizer (14 in the present embodiment).
Next, the frame total FT is calculated in block 812. Frame total FT is the total number of frames in the word obtained from the template memory. Block 813 tests whether all frames of the word have been differentially decoded. If the present frame count FC is greater than the frame total FT, no frames of the word would be left to decode, so the decoding process for that word will end at block 814. If, however, FC is not greater than FT, the differential decoding process continues with the next frame of the word. The test of block 813 may alternatively be performed by checking a data flag (sentinel) stored in the template memory to indicate the end of all channel data.
The actual differential decoding process of each frame begins with block 815. First, the channel count CC is set equal to one in block 815, to determine the channel data to be read first from template memory 160.
Next, a full byte of data corresponding to the normalized energy of channel 1 is read from the template in block 816. Since channel 1 data is not differentially encoded, this single channel data may be output (to energy denormalization block 804) immediately via block 817.
The channel counter CC is then incremented in block 818 to point to the location of the next channel data. Block 819 reads the differentially encoded channel data (differential) for channel CC into an accumulator. Block 820 then performs the differential decoding function of forming channel CC data by adding channel CC-1 data to the channel CC differential. For example, if CC=2, then the equation of block 820 is:

Channel 2 data = Channel 1 data + Channel 2 differential.
Block 821 then outputs this channel CC data to energy denormalization block 804 for further processing.
Block 822 tests to see whether the present channel count CC is equal to the channel total CT, which would indicate the end of a frame of data. If CC is not equal to CT, then the channel count is incremented in block 818 and the differential decoding process is performed upon the next channel. If all channels have been decoded (when CC equals CT), then the frame count FC is incremented in block 823 and compared in block 813 to perform an end-of-data test. When all frames have been decoded, the differential decoding process of data expander 346 ends at block 814.
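The decoding loop for one frame then collapses to a few lines; a sketch (names illustrative, operating on the values produced by the earlier encoding sketch):

    # Differentially decode one frame: channel 1 is stored directly and
    # each later channel is the previous channel plus its stored difference.
    def decode_frame(ch1, diffs):
        channels = [ch1]
        for d in diffs:
            channels.append(channels[-1] + d)
        return channels           # quantized data for channels 1 through 14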
Figure 8c illustrates the sequence of steps performed by energy denormalization block 804. After starting at block 825, initialization of the variables takes place in block 826. Again, the frame count FC is initialized to one to correspond to the first frame of the word to be synthesized, and the channel total CT is initialized to the total number of channels in the channel bank synthesizer (14 in this case). The frame total FT is calculated in block 827 and the frame count is tested in block 828, as previously done in blocks 812 and 813. If all frames of the word have been processed (FC greater than FT), the sequence of steps ends at block 829. If, however, frames still need to be processed (FC not greater than FT), then the energy denormalization function is performed.
In block 830, the average frame energy AVGENG is obtained from the template for frame FC. Block 831 then sets the channel count CC equal to one. The channel data, formed from the channel differential in differential decoding block 802 (block 820 of Figure 8b), is now read in block 832. Since the frame is normalized by subtracting the average energy from each channel in energy normalization block 410 (Figure 4), it is similarly restored (denormalized) by adding the average energy back to each channel. Hence, the channel is denormalized in block 833 according to the formula shown.
If, for example, CC=1, then the equation of block 833 is:

Channel 1 energy = Channel 1 data + average energy.

This denormalized channel energy is then output (to frame repeating block 806) via block 834. The next channel is obtained by incrementing the channel count in block 835, and testing the channel count in block 836 to see if all channels have been denormalized. If all channels have not yet been processed (CC not greater than CT), then the denormalization procedure repeats starting with block 832. If all channels of the frame have been processed (CC greater than CT), then the frame count is incremented in block 837, and tested in block 828 as before. In review, Figure 8c illustrates how the channel energies are denormalized by adding the average energy back to each channel.
Referring now to Figure 8d, the sequence of steps performed by frame repeating block 806 of Figure 8a is illustrated in the flowchart. Again, the process starts at block 840 by first initializing the frame count FC to one and the channel total CT to 14 at block 841. In block 842, the frame total, FT, representing the number of frames in the word, is calculated as before.
This repeat count RC corre~ponds to the number o~ ~rames 10 combined into a single frame from the data compression algorithm performed in segmentation/compresslon block 420 of Figure 4. In other words, the RC is the l'maxdwell" of each frame. The repeat count is now utilized to output the particular frame "RC" number of times.
Block 845 outputs all the channel energies CH(1-14)ENG of frame FC to the speech synthe~izer. This represent~ the first time the "unpacked" channel energy data is output. The repeat count RC is then decremented by one in block 846. For example, i~ frame FC wa6 not prQviously co~bined, the stored value of RC would equal one, and the d~cremented value of RC would equal zero.
Block 847 then tests the repeat count. If RC is not equal to zero, then the particular frame of channel energies is again output in block 845. RC would again be decremented in block 846, and again tested in block 847.
When RC is decremented to zero, the next frame of channel data is obtained. Thus, the repeat count RC represents the number of times the same frame is output to the synthesizer.
To obtain the next frame, the frame count FC is incremented in block 848, and tested in block 849. If all the frames o~ the word have been processed, the sequence of steps corresponding to frame repeating block 806 ends at block 850. If more frames need to be processed, the fxame repeating function continues with block 843.


As we have seen, data expander block 346 essentially performs the inverse function of "unpacking" the stored template data which has been "packed" by data reduction block 322. It is to be noted that the separate functions of blocks 802, 804, and 806 may also be performed on a frame-by-frame basis, instead of the word-by-word basis illustrated in the flowcharts of Figures 8b, 8c, and 8d. In either case, it is the combination of data reduction, reduced template format, and data expansion techniques which allows the present invention to synthesize intelligible speech from speech recognition templates at a low data rate.
As illustrated in Figure 3, both the "template" word voice reply data, provided by data expander block 346, and the "canned" word voice reply data, provided by reply memory 344, are applied to channel bank speech synthesizer 340. Speech synthesizer 340 selects one of these data sources in response to a command signal from control unit 334. Both data sources 344 and 346 contain prestored acoustic feature information corresponding to the word to be synthesized.
This acoustic feature in~ormation comprises a plurality of channel gain values (channel energies), each representative o~ tha acoustic energy in a specified frequency bandwidth, corresponding to the bandwidths of feature extractor 312. There is, however, no provision in the reduced template memory ~ormat to store other speech synthesizer parameters such as voicing or pitch information. This is due to the fact that voicing and pitch information is not normally provided to speech recognition processor 120. Therefore, this information iB usually not retained primarily to reduce template memory requirements. Depending on the parkicular hardware configuration, reply memory 344 may or may not provide voicing and pitch information. The ~ollowing 7~ -- ~8 -channal bank ~ynthesizer de~cription a~suma~ that voicing and pitch information are not stored in either memory.
Hence, channel bank speech synthesizer 340 must synthesize words from a data source which lacks voicing and pitch information. One important aspect of the present invention directly addresses this problem.
Figure 9a illustrates a detailed block diagram of channel bank speech synthesizer 340 having N channels.
Channel data inputs 912 and 914 represent the channel data outputs of reply memory 344 and data expander 346, respectively. Accordingly, switch array 910 represents the "data source decision" provided by device controller unit 334. For example, if a "canned" word is to be synthesized, channel data inputs 912 from reply memory 344 are selected as channel gain values 915. If a template word is to be synthesized, channel data inputs 914 from data expander 346 are selected. In either case, channel gain values 915 are routed to low-pass filters 940.
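As a minimal sketch of this data source decision, assuming both sources deliver per-frame sequences of channel gains (the names are illustrative):

    def select_channel_gains(canned, reply_memory_frames, expander_frames):
        # Switch array 910: "canned" words come from reply memory 344,
        # template words come from data expander 346.
        return reply_memory_frames if canned else expander_frames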
Low-pass filters 940 function to smooth the step discontinuities in frame-to-frame channel gain changes before feeding them to the modulators. These gain smoothing filters are typically configured as second-order Butterworth lowpass filters. In the present embodiment, lowpass filters 940 have a -3 dB cutoff frequency of approximately 28 Hz.
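A sketch of this smoothing stage follows, assuming an 8 kHz sample rate and each frame's gain held constant over the frame before filtering; the frame length and all names are assumptions, not taken from the patent:

    import numpy as np
    from scipy.signal import butter, lfilter

    FS = 8000            # assumed sampling rate, Hz
    FRAME_SAMPLES = 120  # assumed samples per frame (15 ms at 8 kHz)

    # Second-order Butterworth low-pass with a -3 dB cutoff near 28 Hz.
    b, a = butter(2, 28.0 / (FS / 2.0))

    def smooth_channel_gains(frame_gains):
        """Smooth the step discontinuities of one channel's gain track."""
        held = np.repeat(frame_gains, FRAME_SAMPLES)  # step-wise gains
        return lfilter(b, a, held)                    # smoothed gains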
Smoothed channel gain values 945 are then applied to channel gain modulators 950. The modulators serve to adjust the gain of an excitation signal in response to the appropriate channel gain value. In the present embodiment, modulators 950 are divided into two predetermined groups: a first predetermined group (numbered 1 through M) having a first excitation signal input; and a second group of modulators (numbered M+1 through N) having a second excitation signal input. As can be seen from Figure 9a, the first excitation signal 925 is output from pitch pulse source 920, and the second excitation signal 935 is output from noise source 930.
These excitation sources will be described in further detail in the following figures.
Speech synthesizer 340 employs a technique called "split voicing" in accordance with the present invention. This technique allows the speech synthesizer to reconstruct speech from externally-generated acoustic feature information, such as channel gain values 915, without using external voicing information. The preferred embodiment does not utilize a voicing switch to select between the pitch pulse source (voiced excitation) and the noise source (unvoiced excitation) in generating a single voiced/unvoiced excitation signal for the modulators. Instead, the present invention "splits" the acoustic feature information provided by the channel gain values into two predetermined groups. The first predetermined group, usually corresponding to the low frequency channels, modulates the voiced excitation signal 925. The second predetermined group of channel gain values, normally corresponding to the high frequency channels, modulates the unvoiced excitation signal 935.
Together, the low frequency and high frequency channel gain values are individually bandpass filtered and combined to generate a high quality speech signal.
It has been found that a "9/5 split" (M = 9) for a 14-channel synthesizer (N = 14) provides excellent results for improving the quality of speech. However, it will be apparent to those skilled in the art that the voiced/unvoiced channel "split" can be varied to maximize the voice quality characteristics in particular synthesizer applications.
Modulators 1 through N serve to amplitude modulate the appropriate excitation signal in response to the acoustic feature information of that particular channel. In other words, the pitch pulse (buzz) or noise (hiss) excitation signal for channel M is multiplied by the channel gain value for channel M. The amplitude modification performed by modulators 950 can readily be implemented in software using digital signal processing (DSP) techniques. Alternatively, modulators 950 may be implemented by analog linear multipliers as known in the art.
Both groups of modulated excitation signals 955 (1 through M, and M+1 through N) are then applied to bandpass filters 960 to reconstruct the N speech channels. As previously noted, the present embodiment utilizes 14 channels covering the frequency range 250 Hz to 3400 Hz. Additionally, the preferred embodiment utilizes DSP techniques to digitally implement in software the function of bandpass filters 960. Appropriate DSP algorithms are described in chapter 11 of L.R. Rabiner and B. Gold, Theory and Application of Digital Signal Processing (Prentice-Hall, Englewood Cliffs, N.J., 1975).
The filtered channel outputs 965 are then combined at summation circuit 970. Again, the summing function of the channel combiner may be implemented either in software, using DSP techniques, or in hardware, utilizing a summation circuit, to combine the N channels into a single reconstructed speech signal 975.
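Putting the pieces together, the modulate/filter/sum chain with split-voicing excitation routing can be sketched as below. This is only an illustration under stated assumptions (8 kHz sampling, evenly spaced channel edges, second-order Butterworth bandpass filters); the patent's feature extractor defines the actual channel bandwidths:

    import numpy as np
    from scipy.signal import butter, lfilter

    FS = 8000        # assumed sampling rate, Hz
    N_CHANNELS = 14  # channels covering roughly 250-3400 Hz
    M_SPLIT = 9      # "9/5 split": first M channels take the voiced source

    # Evenly spaced channel edges are an assumption for illustration.
    edges = np.linspace(250.0, 3400.0, N_CHANNELS + 1)

    def synthesize(gains, buzz, hiss):
        """gains: (n_samples, N_CHANNELS) smoothed per-sample channel gains;
        buzz, hiss: voiced/unvoiced excitation, each n_samples long."""
        out = np.zeros(len(buzz))
        for ch in range(N_CHANNELS):
            source = buzz if ch < M_SPLIT else hiss  # split voicing
            modulated = gains[:, ch] * source        # modulator 950
            b, a = butter(2, [edges[ch] / (FS / 2), edges[ch + 1] / (FS / 2)],
                          btype="band")              # bandpass filter 960
            out += lfilter(b, a, modulated)          # summation 970
        return out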
An alternate embodiment of the modulator/bandpass filter configuration 980 is shown in Figure 9b. This figure illustrates that it is functionally equivalent to first apply excitation signal 935 (or 925) to bandpass filter 960, and then amplitude modulate the filtered excitation signal by channel gain value 945 in modulator 950. This alternate configuration 980' produces the equivalent channel output 965, since the function of reconstructing the channels is still achieved.

Noise source 930 produces unvoiced excitation signal 935, called "hiss". The noise source output is typically a series of random amplitude pulses of a constant average power, as illustrated by waveform 935 of Figure 9d. Conversely, pitch pulse source 920 generates a pulse train of voiced excitation pitch pulses, also of a constant average power, called "buzz". A typical pitch pulse source would have its pitch pulse rate determined by an external pitch period f0. This pitch period information, determined from an acoustic analysis of the desired synthesizer speech signal, is normally transmitted along with the channel gain information in a vocoder application, or would be stored, along with the voiced/unvoiced decision and channel gain information, in a "canned" word memory. However, as noted above, there is no provision in the reduced template memory format of the preferred embodiment to store all of these speech synthesizer parameters, since they are not all required for speech recognition. Hence, another aspect of the present invention is directed toward providing a high quality synthesized speech signal without prestored pitch information.
Pitch pulse source 920 of the preferred embodiment is shown in greater detail in Figure 9c. It has been found that a significant improvement in synthesized voice quality can be achieved by varying the pitch pulse period such that the pitch pulse rate decreases over the length of the word synthesized.
Therefore, excitation signal 925 is preferably comprised of pitch pulses of a constant average power and of a predetermined variable rate. This variable rate is determined as a function of the length of the word to be synthesized, and as a function of empirically-determined constant pitch rate changes. In the present embodiment, the pitch pulse rate decreases linearly on a frame-by-frame basis over the length of the word. However, in other applications, a different variable rate may be desired to produce other speech sound characteristics.
Referring now to Figure 9c, pitch pulse source 920 is comprised of pitch rate control unit 940, pitch rate generator 942, and pitch pulse generator 944. Pitch rate control unit 940 determines the variable rate at which the pitch period is changed. In the preferred embodiment, the pitch rate decrease is determined from a pitch change constant, initialized from a pitch start constant, to provide pitch period information 922. The function of pitch rate control unit 940 may be performed in hardware by a programmable ramp generator, or in software by the controlling microcomputer. The operation of control unit 940 is fully described in conjunction with the next figure.
Pitch rate generator 942 utilizes this pitch period information to generate pitch rate signal 923 at regularly spaced intervals. This signal may be impulses, rising edges, or any other type of signal conveying the pitch pulse period. Pitch rate generator 942 may be a timer, a counter, or a crystal clock oscillator which provides a pulse train matching pitch period information 922. Again, in the present embodiment, the function of pitch rate generator 942 is performed in software.
Pitch rate signal 923 is used by pitch pulse generator 944 to create the desired waveform for pitch pulse excitation signal 925. Pitch pulse generator 944 may be a hardware waveshaping circuit, a monoshot clocked by pitch rate signal 923, or, as in the present embodiment, a ROM look-up table holding the desired waveform information. Excitation signal 925 may exhibit the waveform of impulses, a chirp (frequency-swept sine wave), or any other broadband waveform. Hence, the nature of the pulse is dependent upon the particular excitation signal desired.
Since excitation signal 925 must be of a constant average power, pitch pulse generator 944 also utilizes the pitch rate signal 923, or the pitch period 922, as an amplitude control signal. The amplitude of the pitch pulses is scaled by a factor proportional to the square root of the pitch period to obtain a constant average power. Again, the actual amplitude of each pulse is dependent upon the nature of the desired excitation signal.
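The square-root scaling follows from a one-line power argument: if each pulse carries energy proportional to the square of its amplitude A, and one pulse occurs every T sample times, the average power is proportional to A squared divided by T, so holding power constant requires A proportional to the square root of T. A minimal sketch, with illustrative names:

    import math

    def pulse_amplitude(period, ref_period=59, ref_amplitude=1.0):
        # Scale amplitude with sqrt(period) so that amplitude**2 / period,
        # i.e. the average power, stays constant as the period grows.
        return ref_amplitude * math.sqrt(period / ref_period)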
The following discussion of Figure 9d, as applied to pitch pulse source 920 of Figure 9c, describes the sequence of steps taken in the preferred embodiment to produce the variable pitch pulse rate. First, the word length WL for the particular word to be synthesized is read from the template memory. This word length is the total number of frames of the word to be synthesized. In the preferred embodiment, WL is the sum of all repeat counts for all frames of the word template. Second, the pitch start constant PSC and pitch change constant PCC are read from a predetermined memory location in the synthesizer controller. Third, the number of word divisions is calculated by dividing the word length WL by the pitch change constant PCC. The word division WD indicates how many consecutive frames will have the same pitch value. For example, waveform 921 illustrates a word length of 3 frames, a pitch start constant of 59, and a pitch change constant of 3. Thus, the word division, in this simple example, is calculated by dividing the word length (3) by the pitch change constant (3), to set the number of frames between pitch changes equal to one. As a more complicated example, if WL = 24 and PCC = 4, the word divisions would occur every 6 frames.


The pitch start constant of 59 represents the number of sample times between pitch pulses. For example, at an 8 kHz sampling rate, there would be 59 sample times (each 125 microseconds in duration) between pitch pulses. Therefore, the pitch period would be 59 x 125 microseconds = 7.375 milliseconds, or approximately 135.6 Hz. After each word division, the pitch start constant is incremented by one (i.e., 60 = 133.3 Hz, 61 = 131.1 Hz) such that the pitch rate decreases over the length of the word. If the word length were longer, or the pitch change constant smaller, several consecutive frames would have the same pitch value. This pitch period information is represented in Figure 9d by waveform 922. As waveform 922 illustrates, the pitch period information may be represented in a hardware sense by changing voltage levels, or in software by different pitch period values.
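The schedule just described reduces to a few lines of arithmetic. The sketch below uses the text's example constants and an 8 kHz sample rate; the function and variable names are illustrative:

    def pitch_periods(word_length, pitch_start=59, pitch_change=3):
        """Pitch period, in sample times, for each frame of the word."""
        # Word division: how many consecutive frames share a pitch value.
        word_division = max(1, word_length // pitch_change)
        periods, period = [], pitch_start
        for frame in range(word_length):
            if frame > 0 and frame % word_division == 0:
                period += 1  # longer period, so the pitch rate decreases
            periods.append(period)
        return periods

    # The text's example: WL = 3, PSC = 59, PCC = 3 -> a change per frame.
    assert pitch_periods(3) == [59, 60, 61]
    # At 8 kHz, a 59-sample period is 7.375 ms, i.e. about 135.6 Hz.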
When pitch period information 922 is applied to pitch rate generator 942, pitch rate signal waveform 923 is produced. Waveform 923 generally illustrates, in a simplified manner, that the pitch rate is decreasing at a rate determined by the variable pitch period. When the pitch rate signal 923 is applied to pitch pulse generator 944, excitation waveform 925 is produced. Waveform 925 is simply a waveshaped variation of waveform 923 having a constant average power. Waveform 935, representing the output of noise source 930 (hiss), illustrates the difference between periodic voiced and random unvoiced excitation signals.
As we have seen, the present invention provides a method and apparatus for synthesizing speech without voicing or pitch information. The speech synthesizer of the present invention employs the technique of "split voicing" and the technique of varying the pitch pulse period such that the pitch pulse rate decreases over the length of the word. Although either technique may be used by itself, the combination of split voicing and variable pitch pulse rate allows natural-sounding speech to be generated without external voicing or pitch information.
While specific embodiments of the present invention have been shown and described herein, further modifications and improvements may be made by those skilled in the art. All such modifications which retain the basic underlying principles disclosed and claimed herein are within the scope of this invention.
What is claimed is:


APPENDIX

Processing of one input frame for 3 states of a word model, states A, B and C.

State A: Maxdwell = 3, Mindwell = 0 (752-Fig. 7(d)), IFD = 7 (750-Fig. 7(d))
State B: Maxdwell = 8, Mindwell = 2 (752-Fig. 7(d)), IFD = 3 (750-Fig. 7(d))
State C: Maxdwell = 4, Mindwell = 1 (752-Fig. 7(d)), IFD = 5 (750-Fig. 7(d))

                                IN           Old          IFAD(i)    NEW
    Block(s)   State/Substate   PAD   PCAD   PAD   PCAD   (Given)    IFAD(i+1)       IAD
    740/7(c)   A                5     5
    754/7(d)   i=3                                        8 (3)
    756                                                              15 = 7+8
    758,762    i=2                                        7 (2)      14 (3) = 7+7    14
    758,762    i=1                                        2 (1)      9 (2) = 2+7
    758        i=0
    768                                                   5 (1)
    774,776                                  5     9
    754        B: i=8                                     5 (8)
    756                                                              8 = 3+5
    758,762    i=7                                        9 (7)      12 (8) = 9+3    3
    758,762    i=6                                        3 (6)      6 (7) = 3+3     8
    758,762    i=5                                        8 (5)      11 (6) = 8+3    6
    758,762    i=4                                        4 (4)      7 (5) = 4+3     6
    758,762    i=3                                        4 (3)      7 (4) = 4+3     6
    758,762    i=2                                        5 (2)      8 (3) = 5+3     6
    758,762    i=1                                        2 (1)      5 (2) = 2+3
    758        i=0
    768                                                   5 (1)
    772,776                                  6     6
    754        C: i=4                                     10 (4)
    756                                                              15 = 5+10
    758,762    i=3                                        8 (3)      13 (4) = 8+5    13
    758,762    i=2                                        6 (2)      11 (3) = 6+5    11
    758,762    i=1                                        9 (1)      14 (2) = 9+5    11
    758        i=0
    768                                                   6 (1)
    772,776                                  11    11
    744/7(c)

Claims (29)

1. A method for processing speech information in a speech recognition system, wherein the information is represented by a sequence of frames, said speech recognition system being capable of comparing a given frame set to a template, and having template memory to store said template, said processing method comprising the steps of:
a) combining contiguous acoustically similar frames of a previous frame set into representative frames to form a reduced template;
b) storing said reduced template in template memory; and c) comparing frames of said given frame set to said representative frames of said reduced template according to the number of frames combined in said representative frames of said reduced template to produce a measure of similarity between the given frame set and the template.
2. The method of claim 1, wherein combining further includes the steps of generating a distortion measure for each representative frame and comparing each said distortion measure to a predetermined distortion threshold.
3. The method of claim 1, wherein combining further includes the step of determining a distortion measure corresponding to said reduced template.
4. The method of claim 1, further including the step of determining the number of frames combined in each representative frame.
5. The method of claim 1, wherein storing further includes the step of storing the number of frames combined into each representative frame.
6. The method of claim 1, further including the step of determining the number of representative frames representing said reduced template.
7. The method of claim 1, wherein storing further includes the step of storing a first element of frame data corresponding to the difference between a second element of frame data and said first element of frame data.
8. The method of claim 1, wherein storing further includes the step of storing an energy measure for each representative frame.
9. The method of claim 1, wherein comparing further includes the step of accumulating distance measures corresponding to the difference between the given input word and the reduced template.
10. A method for generating a measure of similarity for speech information in a speech recognition system, wherein the information is represented by a sequence of frames, said speech recognition system being capable of comparing a given input frame set to a template, said method comprising the steps of:
combining contiguous acoustically similar frames of a previous frame set into representative frames to form a reduced template;
comparing frames of said given frame set to said representative frames of said reduced template by accumulating a set of distance measures for each representative frame, each said set having a total number of accumulated distance measures corresponding to the number of frames combined in each said representative frame; and determining a measure of similarity between said given frame set and said template based on said accumulated distance measures.
11. The method of claim 10, wherein comparing further includes the step of defining a maximum number of accumulations for a particular representative frame corresponding to the number of frames combined into said particular representative frame.
12. The method of claim 11, wherein comparing further includes the step of defining a minimum number of accumulations proportional to said maximum number of accumulations for said particular representative frame.
13. The method of claim 10, wherein comparing further includes the step of sequentially accumulating similarity measures corresponding to two representative frames, two said representative frames being separated by at least one other said representative frame, but without accumulating a similarity measure from said other representative frame.
14. An arrangement for processing speech information in a speech recognition system, wherein the information is represented by a sequence of frames, said speech recognition system being capable of comparing a given frame set to a template, and having template memory to store said template, said processing arrangement comprising:
a) means for combining contiguous acoustically similar frames of a previous frame set into representative frames to form a reduced template;
b) means for storing said reduced template in template memory; and c) means for comparing frames of said given frame set to said representative frames of said reduced template according to the number of frames combined in said representative frames of said reduced template to produce a measure of similarity between the given frame set and the template.
15. The arrangement of claim 14, wherein means for combining further includes means for generating a distortion measure for each representative frame and comparing each said distortion measure to a predetermined distortion threshold.
16. The arrangement of claim 14, wherein means for combining further includes means for determining a distortion measure corresponding to said reduced template.
17. The arrangement of claim 14, further including means for determining the number of frames combined in each representative frame.
18. The arrangement of claim 14, wherein means for storing further includes means for storing the number of frames combined into each representative frame.
19. The arrangement of claim 14, further including means for determining the number of representative frames representing said reduced template.
20. The arrangement of claim 14, wherein means for storing further includes means for storing a first element of frame data corresponding to the difference between a second element of frame data and said first element of frame data.
21. The arrangement of claim 14, wherein means for storing further includes means for storing an energy measure for each representative frame.
22. The arrangement of claim 14, wherein means for comparing further includes means for accumulating distance measures corresponding to the difference between the given input word and the reduced template.
23. An arrangement for generating a measure of similarity for speech information in a speech recognition system, wherein the information is represented by a sequence of frames, said speech recognition system being capable of comparing a given input frame set to a template, said arrangement comprising:
means for combining contiguous acoustically similar frames of a previous frame set into representative frames to form a reduced template;
means for comparing frames of said given frame set to said representative frames of said reduced template by accumulating a set of distance measures for each representative frame, each said set having a total number of accumulated distance measures corresponding to the number of frames combined in each said representative frame; and means for determining a measure of similarity between said given frame set and said template based on said accumulated distance measures.
24. The arrangement of claim 23, wherein means for comparing further includes means for defining a maximum number of accumulations for a particular representative frame corresponding to the number of frames combined into said particular representative frame.
25. The arrangement of claim 24, wherein means for comparing further includes means for defining a minimum number of accumulations proportional to said maximum number of accumulations for said particular representative frame.
26. The arrangement of claim 23, wherein means for comparing further includes means for sequentially accumulating similarity measures corresponding to two representative frames, two said representative frames being separated by at least one other said representative frame, but without accumulating a similarity measure from said other representative frame.
27. The method for processing speech information in a speech recognition system, wherein the information is represented by a sequence of frames, said speech recognition system being capable of comparing a given frame-set to a template using a word model comprising a plurality of states, and having template memory to store said template, said processing method comprising the steps of:
(a) combining contiguous acoustically similar frames of a previous frame-set into representative frames;
(b) forming a word model having a plurality of states, each state corresponding to one of said representative frames; and (c) comparing at least a predetermined minimum number of frames of the given frame-set to a state of the word model according to the number of frames combined in said representative frames to produce a measure of similarity between the given frame-set and the template.
28. The method for processing speech information in a speech recognition system, wherein the information is represented by a sequence of frames, said speech recognition system being capable of comparing a given frame-set to a template using a word model comprising a plurality of states, and having template memory to store said template, said processing method comprising the steps of:
(a) combining contiguous acoustically similar frames of a previous frame-set into representative frames;
(b) forming a word model having a plurality of states, each state corresponding to one of said representative frames; and (c) comparing not more than a predetermined maximum number of frames of the given frame-set to a state of the word model according to the number of frames combined in said representative frames to produce a measure of similarity between the given frame-set and the template.
29. The method for processing speech information in a speech recognition system, wherein the information is represented by a sequence of frames, said speech recognition system being capable of comparing a given frame set to a template using a word model comprising a plurality of states, and having template memory to store said template, said processing method comprising the steps of:
(a) combining contiguous acoustically similar frames of a previous frame-set into representative frames;
(b) forming a word model having a plurality of states, each state corresponding to one of said representative frames; and (c) comparing at least a predetermined minimum number of frames, but not more than a predetermined maximum number of frames, of the given frame-set to a state of the word model according to the number of frames combined in said representative frames to produce a measure of similarity between the given frame-set and the template.
CA000525298A 1986-01-03 1986-12-15 Optical decoding technique for speech recognition Expired - Lifetime CA1299752C (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US06/816,161 US4797929A (en) 1986-01-03 1986-01-03 Word recognition in a speech recognition system using data reduced word templates
US816,161 1986-01-03

Publications (1)

Publication Number Publication Date
CA1299752C true CA1299752C (en) 1992-04-28

Family

ID=25219835

Family Applications (1)

Application Number Title Priority Date Filing Date
CA000525298A Expired - Lifetime CA1299752C (en) 1986-01-03 1986-12-15 Optical decoding technique for speech recognition

Country Status (8)

Country Link
US (1) US4797929A (en)
EP (1) EP0249635B1 (en)
JP (1) JP3173001B2 (en)
KR (1) KR950008540B1 (en)
CA (1) CA1299752C (en)
DE (1) DE3688614T2 (en)
HK (1) HK40496A (en)
WO (1) WO1987004291A1 (en)

Families Citing this family (46)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5136653A (en) * 1988-01-11 1992-08-04 Ezel, Inc. Acoustic recognition system using accumulate power series
WO1990000298A1 (en) * 1988-06-30 1990-01-11 Motorola, Inc. Method and apparatus for programming devices to recognize voice commands
US5117460A (en) * 1988-06-30 1992-05-26 Motorola, Inc. Voice controlled pager and programming techniques therefor
KR900702504A (en) * 1988-06-30 1990-12-07 빈센트 죠셉 로너 Voice Controlled Phaser
US5040212A (en) * 1988-06-30 1991-08-13 Motorola, Inc. Methods and apparatus for programming devices to recognize voice commands
US5301227A (en) * 1989-04-17 1994-04-05 Sanyo Electic Co., Ltd. Automatic dial telephone
US5136654A (en) * 1989-10-19 1992-08-04 Kurzweil Applied Intelligence, Inc. Vocabulary partitioned speech recognition apparatus
US5258739A (en) * 1989-11-09 1993-11-02 Motorola, Inc. Efficient message storage within a selective call receiver
US5175759A (en) * 1989-11-20 1992-12-29 Metroka Michael P Communications device with movable element control interface
US5220639A (en) * 1989-12-01 1993-06-15 National Science Council Mandarin speech input method for Chinese computers and a mandarin speech recognition machine
US5020107A (en) * 1989-12-04 1991-05-28 Motorola, Inc. Limited vocabulary speech recognition system
JP2570448B2 (en) * 1989-12-28 1997-01-08 日本電気株式会社 Standard pattern learning method
EP1239456A1 (en) * 1991-06-11 2002-09-11 QUALCOMM Incorporated Variable rate vocoder
US5682464A (en) * 1992-06-29 1997-10-28 Kurzweil Applied Intelligence, Inc. Word model candidate preselection for speech recognition using precomputed matrix of thresholded distance values
US5566229A (en) * 1992-08-24 1996-10-15 At&T Voice directed communications system employing shared subscriber identifiers
US5671328A (en) * 1992-12-30 1997-09-23 International Business Machines Corporation Method and apparatus for automatic creation of a voice recognition template entry
CN1063554C (en) * 1994-03-03 2001-03-21 李琳山 Chinese phenetics input system and its method for Chinese computer
TW271524B (en) * 1994-08-05 1996-03-01 Qualcomm Inc
US5742734A (en) * 1994-08-10 1998-04-21 Qualcomm Incorporated Encoding rate selection in a variable rate vocoder
DE19521258A1 (en) * 1995-06-10 1996-12-12 Philips Patentverwaltung Speech recognition system
US5835570A (en) * 1996-06-26 1998-11-10 At&T Corp Voice-directed telephone directory with voice access to directory assistance
US5751901A (en) * 1996-07-31 1998-05-12 Qualcomm Incorporated Method for searching an excitation codebook in a code excited linear prediction (CELP) coder
FR2761848B1 (en) * 1997-04-04 2004-09-17 Parrot Sa RADIOTELEPHONE VOICE COMMAND DEVICE, ESPECIALLY FOR USE IN A MOTOR VEHICLE
US7630895B2 (en) * 2000-01-21 2009-12-08 At&T Intellectual Property I, L.P. Speaker verification method
US6012027A (en) * 1997-05-27 2000-01-04 Ameritech Corporation Criteria for usable repetitions of an utterance during speech reference enrollment
US6219641B1 (en) * 1997-12-09 2001-04-17 Michael V. Socaciu System and method of transmitting speech at low line rates
KR100277105B1 (en) * 1998-02-27 2001-01-15 윤종용 Apparatus and method for determining speech recognition data
US6141641A (en) * 1998-04-15 2000-10-31 Microsoft Corporation Dynamically configurable acoustic model for speech recognition system
US6321195B1 (en) * 1998-04-28 2001-11-20 Lg Electronics Inc. Speech recognition method
US6208969B1 (en) 1998-07-24 2001-03-27 Lucent Technologies Inc. Electronic data processing apparatus and method for sound synthesis using transfer functions of sound samples
US6073093A (en) * 1998-10-14 2000-06-06 Lockheed Martin Corp. Combined residual and analysis-by-synthesis pitch-dependent gain estimation for linear predictive coders
US6571210B2 (en) * 1998-11-13 2003-05-27 Microsoft Corporation Confidence measure system using a near-miss pattern
US6691084B2 (en) 1998-12-21 2004-02-10 Qualcomm Incorporated Multiple mode variable rate speech coding
JP4067716B2 (en) * 1999-09-13 2008-03-26 三菱電機株式会社 Standard pattern creating apparatus and method, and recording medium
US6539353B1 (en) 1999-10-12 2003-03-25 Microsoft Corporation Confidence measures using sub-word-dependent weighting of sub-word confidence scores for robust speech recognition
US6735563B1 (en) * 2000-07-13 2004-05-11 Qualcomm, Inc. Method and apparatus for constructing voice templates for a speaker-independent voice recognition system
GB2373670B (en) 2001-03-20 2005-09-21 Mitel Knowledge Corp Method and apparatus for extracting voiced telephone numbers and email addresses from voice mail messages
US8521529B2 (en) * 2004-10-18 2013-08-27 Creative Technology Ltd Method for segmenting audio signals
JP5141687B2 (en) * 2007-07-31 2013-02-13 富士通株式会社 Collation rule learning system for speech recognition, collation rule learning program for speech recognition, and collation rule learning method for speech recognition
US8352249B2 (en) * 2007-11-01 2013-01-08 Panasonic Corporation Encoding device, decoding device, and method thereof
JP5229234B2 (en) * 2007-12-18 2013-07-03 富士通株式会社 Non-speech segment detection method and non-speech segment detection apparatus
JP5883014B2 (en) * 2010-10-29 2016-03-09 科大訊飛股▲分▼有限公司iFLYTEK Co., Ltd. Method and system for automatic detection of end of recording
KR101904423B1 (en) * 2014-09-03 2018-11-28 삼성전자주식회사 Method and apparatus for learning and recognizing audio signal
KR200483534Y1 (en) 2015-08-11 2017-05-25 유근준 Round fan
WO2021034786A1 (en) 2019-08-21 2021-02-25 Dolby Laboratories Licensing Corporation Systems and methods for adapting human speaker embeddings in speech synthesis
CN113571064B (en) * 2021-07-07 2024-01-30 肇庆小鹏新能源投资有限公司 Natural language understanding method and device, vehicle and medium

Family Cites Families (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US3582559A (en) * 1969-04-21 1971-06-01 Scope Inc Method and apparatus for interpretation of time-varying signals
US3812291A (en) * 1972-06-19 1974-05-21 Scope Inc Signal pattern encoder and classifier
DE2657365C2 (en) * 1976-12-17 1978-11-02 Siemens Ag, 1000 Berlin Und 8000 Muenchen Method and circuit arrangement for carrying out the method for frame synchronization of a time division multiplex system
US4227177A (en) * 1978-04-27 1980-10-07 Dialog Systems, Inc. Continuous speech recognition method
US4227176A (en) * 1978-04-27 1980-10-07 Dialog Systems, Inc. Continuous speech recognition method
US4181813A (en) * 1978-05-08 1980-01-01 John Marley System and method for speech recognition
US4412098A (en) * 1979-09-10 1983-10-25 Interstate Electronics Corporation Audio signal recognition computer
US4449233A (en) * 1980-02-04 1984-05-15 Texas Instruments Incorporated Speech synthesis system with parameter look up table
US4513436A (en) * 1980-09-16 1985-04-23 Oki Electric Industry, Co., Ltd. Speech recognition system
JPS585224U (en) * 1981-07-01 1983-01-13 三菱電機株式会社 gas butsing
US4489434A (en) * 1981-10-05 1984-12-18 Exxon Corporation Speech recognition method and apparatus
US4489435A (en) * 1981-10-05 1984-12-18 Exxon Corporation Method and apparatus for continuous word string recognition
US4415767A (en) * 1981-10-19 1983-11-15 Votan Method and apparatus for speech recognition and reproduction
US4449190A (en) * 1982-01-27 1984-05-15 Bell Telephone Laboratories, Incorporated Silence editing speech processor

Also Published As

Publication number Publication date
JPS63502303A (en) 1988-09-01
KR950008540B1 (en) 1995-07-31
JP3173001B2 (en) 2001-06-04
HK40496A (en) 1996-03-15
DE3688614D1 (en) 1993-07-29
EP0249635A4 (en) 1988-08-29
WO1987004291A1 (en) 1987-07-16
EP0249635A1 (en) 1987-12-23
DE3688614T2 (en) 1993-10-07
US4797929A (en) 1989-01-10
KR880700987A (en) 1988-04-13
EP0249635B1 (en) 1993-06-23

Similar Documents

Publication Publication Date Title
CA1299752C (en) Optical decoding technique for speech recognition
US4905288A (en) Method of data reduction in a speech recognition
CA1324833C (en) Method and apparatus for synthesizing speech without voicing or pitch information
US5873059A (en) Method and apparatus for decoding and changing the pitch of an encoded speech signal
EP0235127B2 (en) Hands-free control system for a radiotelephone
EP0680033A2 (en) Speech-rate modification for linear-prediction based analysis-by-synthesis speech coders
EP0255523B1 (en) Method and apparatus for synthesizing speech from speech recognition templates
EP1141939A1 (en) System and method for segmentation and recognition of speech signals
EP1076895B1 (en) A system and method to improve the quality of coded speech coexisting with background noise
JPS6262399A (en) Highly efficient voice encoding system
Dankberg et al. Development of a 4.8-9.6 kbps RELP Vocoder
US5715362A (en) Method of transmitting and receiving coded speech
Lecomte et al. Medium band speech coding for mobile radio communications
GB2266213A (en) Digital signal coding
Keiser et al. Parametric and Hybrid Coding

Legal Events

Date Code Title Description
MKLA Lapsed