CA1180812A - Method and apparatus for speech recognition and reproduction - Google Patents

Method and apparatus for speech recognition and reproduction

Info

Publication number
CA1180812A
CA1180812A (application CA000413666A)
Authority
CA
Canada
Prior art keywords
sequence
amplitudes
spectral
signal
time
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired
Application number
CA000413666A
Other languages
French (fr)
Inventor
Stephen P. Gill
Lawrence F. Wagner
Gregory G. Frye
Klaus-Peter A. Bantowsky
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
VOTAN
Original Assignee
VOTAN
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by VOTAN
Application granted
Publication of CA1180812A
Status: Expired

Classifications

    • G: PHYSICS
    • G10: MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L: SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00: Speech recognition
    • G10L15/02: Feature extraction for speech recognition; Selection of recognition unit

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computational Linguistics (AREA)
  • Health & Medical Sciences (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • Acoustics & Sound (AREA)
  • Multimedia (AREA)
  • Complex Calculations (AREA)

Abstract

METHOD AND APPARATUS FOR SPEECH RECOGNITION AND REPRODUCTION

Abstract of the Disclosure

A system capable of learning and subsequently recognizing a vocabulary of spoken words and synthetically reproducing such words as an audible voice when activated by an electronic command signal. Voice signals in the form of a time sequence of binary digits are processed by a digital spectrum analyzer in a "training" mode to provide the digital information that forms a voiceprint, which is stored in a memory. The digital spectrum analyzer comprises an arithmetic logic unit (ALU) that processes incoming data using only digital logic, without filter banks, in combination with a sequence or timing controller, an input/output controller and various internal memory components. These components may be provided on a separate board or by an integrated circuit, which may be combined with other external memory and external control circuitry to form the complete system. With voiceprints provided either from system training or from external storage, and with the system in the recognition mode, pattern matching of spoken words is accomplished using the spectrum analyzer. The arithmetic logic unit, using basic logic functions, is programmed to provide a unique statistical analysis of the digital voice data so that accurate and rapid pattern matching, and hence word recognition, is accomplished even in the presence of relatively high background noise. Voice synthesis of words may be accomplished using the same methods and apparatus as used for recognition, by essentially reversing the operation of the spectrum analyzer to synthesize voice signals from previously stored voiceprints.

Description

SPECIFICATION

Background of the Invention

This invention relates generally to waveform analysis and synthesis apparatus, and more specifically to a method and system capable of learning a vocabulary of spoken words, subsequently recognizing these words when they are spoken, and synthetically reproducing these words as an audible voice when activated by an electronic command signal.
Recognition of human speech is extremely difficult for a machine to accomplish. The perceptual qualities and complexity of the human ear and brain far exceed the capabilities of any known or contemplated apparatus. One basic problem in speech recognition is that of extracting recognizable features from the acoustic waveform. The most widely accepted means for feature extraction is to decompose the waveform into a spectrum of audible frequencies, creating a spectrogram or "voiceprint" of voice energy as a function of both frequency and time.
Heretofore, spectrum analyzers were difficult and costly to implement on LSI (large scale integration) semiconductor chips. Prior art devices used analog electronic circuit components (such as resistors, capacitors, transistor amplifiers, detectors, etc.) to construct a bank of audio frequency filters. Each analog filter provided information on the acoustic energy in a specified frequency range. For example, Brodes (U.S. Patent No. 3,812,291) required sixteen such analog filters, and Herscher et al (U.S. Patent No. 3,588,363) used fourteen such analog filters. Browning et al (U.S. Patent No. 4,087,630) disclosed a method for using a digital spin register in conjunction with a single analog filter to provide multiple channel spectrum analysis.
Another problem in word recognition involves data compaction and digital storage of the voiceprint. Brodes et al (U.S. Patent No. 3,812,291) disclosed a binary digital data encoder depending on spectral slopes (i.e., rate of change of acoustic energy as a function of frequency). Herscher et al (U.S. Patent No. 3,588,363) also disclosed an encoding technique depending on spectral slopes. The present invention differs from the prior art in both the substance and the form of the encoding technique by providing a binary encoding of voiceprint data which preserves amplitude information in all spectral channels, together with time rate of change of amplitude.
Pattern matching, or the comparison of one voiceprint with another, is an essential element of word recognition. This is also a difficult problem, because differences between similar words must be distinguished, while at the same time accepting the normal variations between various utterances of the same word. Normal variations include: (a) differences in amplitude due to speaking loudly or softly or moving the microphone; (b) differences in duration or tempo due to speaking slowly or rapidly; (c) differences in spectral qualities due to head colds or variations in microphone response; and (d) background noise due to nearby conversation, machine noise, poor telephone connections, or other causes.
There have been many prior art means for pattern matching designed to provide the most effective balance between discrimination of different words and acceptance of variations of the same word. A widely used means for eliminating amplitude effects is to use a logarithmic or decibel energy scale for the acoustic energy in a channel. Spectral slope, i.e., the difference between signal levels in selected frequency channels, is independent of the amplitude or loudness of the signal. An increase in amplitude, for example, by holding the microphone closer, causes each channel to increase its level by the same logarithmic amount as measured in decibels; by utilizing only spectral differences between channels, the effect of an increased number of decibels in each channel is subtracted out. This method is used, for example, by Herscher et al (U.S. Patent No. 3,588,363) and Brodes et al (U.S. Patent No. 3,812,291). In the present invention an improved statistical method is used to retain information on overall signal amplitude that is normally lost by the spectral slope method.
Accounting for variations in speech tempo created yet another speech recognition problem. Prior art speech recognition techniques suitable for low cost implementation used a time-division method, whereby word start and word end are determined, and voice data is collected at fractional intervals within the word. This method accounted in a crude way for variation of the total duration of the word, but did not take into account variations in timing and tempo of syllables within a word. A far more effective technique, which is difficult to implement in a low cost system, is the method known as dynamic programming or dynamic time warping. Dynamic programming is a complicated pattern recognition technique which warps the time axis to provide an optimum match between words; for example, the technique arranges to match words, syllable for syllable, even when the syllables occur at different relative locations in the word. A description of this method may be found in an article entitled "Dynamic Programming Algorithm Optimization for Spoken Word Recognition" (IEEE Transactions on Acoustics, Speech, and Signal Processing, Vol. ASSP-26, No. 1, February 1978, pp. 43-49). Prior art implementation of dynamic programming in digital computers is taught in Sakoe et al (U.S. Patent No. 3,816,722). The present invention is an improvement on the prior art method and means of dynamic programming in several areas: (a) use of a novel spectral feature comparison means to improve discrimination, noise immunity and calculation speed; (b) an optimal search technique that provides for effective pattern matching and word recognition even in the presence of noise signals comparable to the speech signals; and (c) a means for implementing the method in low cost LSI semiconductor chips.
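For orientation, a minimal Python sketch of textbook dynamic time warping follows. It illustrates only the generic time-axis warping described above, not the patent's optimized search or its feature comparison means; the per-frame distance function is left abstract.

    def dtw_distance(seq_a, seq_b, dist):
        """Textbook dynamic time warping: minimum accumulated frame-to-frame
        mismatch over all monotonic alignments of two feature sequences,
        so that syllables may shift in time."""
        inf = float("inf")
        rows, cols = len(seq_a), len(seq_b)
        D = [[inf] * (cols + 1) for _ in range(rows + 1)]
        D[0][0] = 0.0
        for i in range(1, rows + 1):
            for j in range(1, cols + 1):
                cost = dist(seq_a[i - 1], seq_b[j - 1])
                D[i][j] = cost + min(D[i - 1][j],      # stretch seq_a
                                     D[i][j - 1],      # stretch seq_b
                                     D[i - 1][j - 1])  # advance both
        return D[rows][cols]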
Word recognition performance in the presence of background noise, such as conversations or machine noise, has also been a major problem with prior art word recognizers. Most systems failed when the background noise was comparable to unvoiced speech sounds in the word to be recognized. The present invention has greatly reduced and in many circumstances eliminated this problem.
A general object of the present invention is to provide an improved speech or word recognition system that solves the aforesaid problems of prior art word recognition systems and methods.

Another object of the invention is to provide a word recognition system that accomplishes spectrum analysis of voice input without the need for analog filters and may be implemented on integrated circuit semiconductor (LSI) chips.

Yet another object of the invention is to provide a speech recognition system that also provides a speech synthesis capability, since it utilizes a digital process for converting an acoustic waveform into spectral components that may be reversed.

Another object of the present invention is to provide a word recognition system that is easily "trained" and requires only one entry of the spoken word, although other entries may be made for improvements in discrimination or noise immunity.

Still another object of the invention is to provide a word recognition system that is particularly effective for speaker identification and verification based on voiceprints. Since spectral channels in the present invention are based on digital means, they may be readily changed to suit the need for recognizing one word from many words spoken by the same speaker, or for identifying one speaker from many individuals speaking the same word. Analog filter banks in the prior art were adapted for accomplishing this only with considerable difficulty, usually requiring complicated circuit modifications.
Summary of the Invention

The aforesaid and other objects of the invention are accomplished by a circuit comprised of digital processing components which function together to: (1) provide a spectral analysis of each spoken word in digital form; (2) store the encoded digital representation of the word in a memory; (3) perform a pattern matching survey to identify the digitized word form; and (4) initiate a response when the identification has been made. In broad terms, the circuit comprises an analog to digital converter for receiving the analog waveform voice input, which is continuously converted to varying amplitudes of signal at evenly spaced apart time intervals. Within the circuit are bus lines to which are connected the components that process the digitized data input. The circuit is operated by a central timing system that controls the various components in a repetitive four-phase arrangement. An arithmetic logic unit (ALU) in combination with memory, such as a two-port register file, is provided to accomplish standard logic functions in the processing of data. The control and order of the various calculation functions of the circuit are maintained by a sequence control section and an input/output control subcircuit. Associated with these latter components are RAM control sections for controlling the storage and retrieval of data from external memory devices during circuit operation.

In the operation of the system, a spoken word of finite length is divided into time frames, each comprised of a preselected number of digitized data points having a particular amplitude that may be identified by 8-bit encoding. From the amplitude vs. time domain for each frame, the ALU is controlled to make calculations that convert the digitized data samples to spectral lines or frequency range coefficients. Further processing by the ALU and its related memory units transforms the spectral coefficients of each frame to a lesser number of frequency channels by a selective summation of groups of contiguous spectra. For each such frame of frequency channels a mean average (x̄) of the logarithmic amplitude is determined, and from this average value the deviation of actual amplitude is measured for each channel. The processing components also measure the instantaneous slope of the mean value for each channel for pairs of adjacent frames. All of the aforesaid measured characteristics of each frame, namely the mean value, the slope of the mean value relative to a contiguous frame, and the deviations from the mean values for the various channels, are combined with digital encoding to form a feature ensemble for each pair of adjacent frames. The total number of feature ensembles comprising a template for an entire word is stored in the external memory.
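As an informal illustration of the per-frame processing just summarized, the Python sketch below derives the mean log amplitude, the per-channel deviations from that mean, and the frame-to-frame slope of the mean for a pair of adjacent frames. The base-2 logarithm and the floor of 1 are assumptions; the patent does not fix them at this point.

    import math

    def frame_features(channels):
        """Log-amplitude channels -> (mean, per-channel deviations)."""
        logs = [math.log2(max(c, 1)) for c in channels]  # assumed log base/floor
        mean = sum(logs) / len(logs)
        return mean, [v - mean for v in logs]

    def feature_ensemble(prev_channels, curr_channels):
        """Combine a pair of adjacent frames into one feature ensemble:
        mean log amplitude, slope of the mean between frames, and the
        deviations of each channel from the mean."""
        prev_mean, _ = frame_features(prev_channels)
        curr_mean, deviations = frame_features(curr_channels)
        slope = curr_mean - prev_mean  # time-rate-of-change of the mean
        return curr_mean, slope, deviations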
Matching a voiceprint to a stored template in accordance with the present invention is accomplished by a novel feature comparison combined with a dynamic programming optimization technique.
Thus, in accordance with one broad aspect of the invention, there is provided a method for providing a spectral analysis of an analog signal waveform comprising the steps of: dividing the total incoming analog signal into time frames of equal duration; converting the analog signal to a sequence of discrete signal amplitudes at equally spaced time intervals in each frame; transforming the sequence of discrete signal amplitudes to a sequence of complex spectral amplitudes, each such spectral amplitude representing the magnitude and phase of a function V(n,k) defined as:

V(n,k) = \exp\left[ j\pi \left( \sum_{r=0}^{p} \sum_{t=0}^{m} n_{p-r}\, k_{r-t}\, 2^{-t} + \phi \right) \right]

wherein:
k = time sequence index
n = frequency sequence index
r, t = integer summation indexes
m = time function parameter defining the number of retained bits
φ = phase adjustment function

and the subscripts (p-r) and (r-t) for n and k refer to bit locations in their binary representation, with bit locations ranging from 0 to the maximum value p and subscript values outside this range representing vanishing values.
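The function defined above can be evaluated directly from the bits of n and k. The following Python sketch is an illustration of the structure of the expression under the stated convention that subscripts outside the range 0 to p vanish; passing the phase adjustment φ as a constant argument is an assumption, since φ is specified separately.

    import cmath

    def V(n, k, p, m, phi=0.0):
        """V(n,k) = exp[j*pi*(sum_{r,t} n_{p-r} k_{r-t} 2^-t + phi)], where
        n_i and k_i are bits of n and k, taken as zero outside 0..p."""
        def bit(x, i):
            return (x >> i) & 1 if 0 <= i <= p else 0
        s = sum(bit(n, p - r) * bit(k, r - t) * 2.0 ** (-t)
                for r in range(p + 1) for t in range(m + 1))
        return cmath.exp(1j * cmath.pi * (s + phi))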
In accordance with another broad aspect of the invention there is provided a method for producing an analog signal waveform comprising the steps of: providing a predetermined series of digital signals representing a sequence of complex spectral amplitudes; transforming the sequence of complex spectral amplitudes to a sequence of discrete time waveform amplitudes, each such spectral amplitude representing the magnitude and phase of a function V(n,k) defined as:

V(n,k) = \exp\left[ j\pi \left( \sum_{r=0}^{p} \sum_{t=0}^{m} n_{p-r}\, k_{r-t}\, 2^{-t} + \phi \right) \right]

wherein:
k = time sequence index
n = frequency sequence index
r, t = integer summation indexes
m = time function parameter defining the number of retained bits
φ = phase adjustment function;

and converting the transformed digital data into an analog output signal.
In accordance with another broad aspect of the invention there is provided a method for producing audio analog output comprising the steps of: providing a predetermined series of encoded digital signals representing the analog output to be produced; decoding the encoded signals to provide a sequence of complex spectral amplitudes; transforming the sequence of complex spectral amplitudes to a sequence of discrete time waveform amplitudes, each such spectral amplitude representing the magnitude and phase of a function V(n,k) defined as:

V(n,k) = \exp\left[ j\pi \left( \sum_{r=0}^{p} \sum_{t=0}^{m} n_{p-r}\, k_{r-t}\, 2^{-t} + \phi \right) \right]

wherein:
k = time sequence index
n = frequency sequence index
r, t = integer summation indexes
m = time function parameter defining the number of retained bits
φ = phase adjustment function;

and converting the transformed digital data into an analog output signal.
In accordance with another broad aspect of the invention there is provided a method for producing a voiceprint template for recognition of an analog waveform signal comprising the steps of: dividing the total signal into time frames of equal duration; converting the analog signal to a sequence of discrete signal amplitudes at equally spaced time intervals in each said frame; transforming the discrete signal amplitudes of each frame to a preselected number of spectral amplitudes representing values of various frequency components of the said series of signal amplitudes; compacting and converting the spectral amplitudes of each frame to a lesser number of channels, each channel being comprised of an energy summation of amplitudes within a designated frequency range expressed in logarithmic amplitudes, and allocated on the basis of predetermined acoustic significance; deriving a mean amplitude value for all of said channels of each frame; measuring a deviation from said mean value for each separate channel amplitude in each frame; determining a feature ensemble for a plurality of successive frames of said total waveform signal; and storing a digital representation of said feature ensembles for said total waveform signal to form a digital coded template thereof.
In accordance with another broad aspect of the invention there is provided a word recognition method comprising the steps of: providing a digital data template representing preselected acoustic features of a spoken word which include time-rates-of-change of spectral amplitudes; receiving a spoken word to be compared and performing a spectral analysis thereof to determine data representing its acoustic features including time-rates-of-change of spectral amplitudes; comparing the template with the received spoken word spectral analysis data to determine a degree of similarity between features given by the metric function

d = \frac{ b\,(\bar{x} - \bar{y})^{2} + \sum_{j=0}^{J} (\Delta x_{j} - \Delta y_{j})^{2} }{ 1 + a\,(\dot{x} - \dot{y})^{2} }

where:
d = degree of similarity
j = channel index
a = a scaling factor to account for normal rates of speech
b = a parameter for improving recognition performance
x̄ = mean amplitude value of spoken word template
ȳ = mean amplitude value of stored word template
ẋ = time-rate-of-change of spoken word template
ẏ = time-rate-of-change of stored word template
Δx_j = deviation of channel amplitude from mean value in spoken word template
Δy_j = deviation of channel amplitude from mean value in stored word template;

and producing an output in response to a predetermined degree of similarity between said template and said spoken word data.
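A literal transcription of this metric into Python might read as follows. The layout of a feature ensemble as a (mean, rate, deviations) triple and the sample values of a and b are assumptions, since the patent treats both parameters as tunable.

    def feature_distance(spoken, stored, a=0.5, b=1.0):
        """d per the metric above; each ensemble is (mean, rate, deviations).
        The values of a and b are placeholders, not taken from the patent."""
        x_mean, x_rate, x_dev = spoken
        y_mean, y_rate, y_dev = stored
        numerator = b * (x_mean - y_mean) ** 2 + sum(
            (dx - dy) ** 2 for dx, dy in zip(x_dev, y_dev))
        return numerator / (1.0 + a * (x_rate - y_rate) ** 2)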
In accordance with another broad aspect of the invention there is provided a voice recognition system for producing a voiceprint template of an analog waveform signal comprising: means for converting an incoming analog signal to a sequence of discrete digital signals; voice processor means including a timing generator for producing repetitive series of timing cycles, counter means for dividing the total incoming signal into time frames of equal length, sequence control means connected to said timing generator including ROM means for providing operating instructions for the processor during said timing cycles, and an arithmetic logic unit for performing a spectral analysis of the received digital signals in response to instructions from said ROM means, said ROM means including instructions for: transforming the discrete signal amplitudes to a preselected number of spectral amplitudes representing values of various frequency components of the said series of signal amplitudes; compacting and converting the spectral amplitudes of each frame to a lesser number of channels, each channel being comprised of a summation of amplitudes within a designated frequency range allocated on the basis of predetermined acoustic significance; deriving a mean amplitude value for all of said channels of each frame; measuring a deviation from said mean value for each separate channel amplitude in each frame; and determining a feature ensemble for each pair of successive frames of said total waveform signal; and external memory means for storing a digital representation of said feature ensembles for said total waveform signal comprising a digital coded template thereof.
In accordance with another broad aspect of the invention there is provided a voice recognition system for producing a voiceprint template of an analog waveform signal comprising: means for converting an incoming analog signal to a sequence of discrete digital signals; voice processor means including a timing generator for producing repetitive series of timing cycles, counter means for dividing the total incoming analog signal into time frames of equal length, sequence control means connected to said timing generator including ROM means for providing operating instructions for the processor during said timing cycles, and means including an arithmetic logic unit for performing a spectral analysis of the received analog signal in response to instructions from said ROM means, said ROM means including instructions for transforming the discrete signal amplitudes of each frame to a sequence of complex spectral amplitudes, each representing the magnitude and phase of a function V(n,k) defined as:

V(n,k) = \exp\left[ j\pi \left( \sum_{r=0}^{p} \sum_{t=0}^{m} n_{p-r}\, k_{r-t}\, 2^{-t} + \phi \right) \right]

wherein:
k = time sequence index
n = frequency sequence index
r, t = integer summation indexes
m = time function parameter defining the number of retained bits
φ = phase adjustment function;

said ROM means also including instructions for: compacting and converting the spectral amplitudes of each frame to a lesser number of channels, each channel being comprised of a summation of signal amplitudes within a designated frequency range allocated on the basis of predetermined acoustic significance; deriving a mean amplitude value for all of said channels of each frame; measuring a deviation from said mean value for each separate channel amplitude in each frame; and determining a feature ensemble for each pair of successive frames of said total waveform signal; and external memory means for storing a digital representation of said feature ensembles for said total waveform signal comprising a digital coded template thereof.
In accordance with another broad aspect of the invention there is provided a voice synthesis device comprising: means providing a predetermined series of digital signals representing a sequence of preselected complex spectral amplitudes; means for transforming said sequence of complex spectral amplitudes to a sequence of discrete time waveform amplitudes, each such spectral amplitude representing the magnitude and phase of a function V(n,k) defined as:

V(n,k) = \exp\left[ j\pi \left( \sum_{r=0}^{p} \sum_{t=0}^{m} n_{p-r}\, k_{r-t}\, 2^{-t} + \phi \right) \right]

wherein:
k = time sequence index
n = frequency sequence index
r, t = integer summation indexes
m = time function parameter defining the number of retained bits
φ = phase adjustment function;

and means for converting the transformed digital data into an analog output signal.
Other objects, advantages and features of the invention will become apparent from the following description presented in conjunction with the accompanying drawing.


Brief Description of the Drawing

Fig. 1 is a general block diagram of a voice recognition and voice synthesis system embodying principles of the present invention;
Fig. 2 is a block diagram of a voice recognition circuit according to the present invention;
Fig. 2-A is a block diagram of a modified voice recognition circuit system similar to Fig. 2;
Fig. 2-B is a block diagram of another modified form of voice recognition circuit using discrete components;
Fig. 3 is a more detailed block diagram (on two sheets) of the voice recognition circuit depicted in Fig. 2, showing further features of the present invention;
Fig. 4 is a series of timing diagrams for the voice recognition system according to the invention;
Fig. 5 is a diagram showing the designation of bits for the micro-code word according to the invention;
Figs. 6-12 are a series of diagrams illustrating the processing of a typical spoken word to form a template of a voiceprint in accordance with the principles of the invention;
Fig. 13 is a diagram showing a typical word template comprised of a series of feature ensembles, with one ensemble enlarged to show its data content according to one embodiment of the invention;
Fig. 14 is a diagram illustrating the difference between metric concepts used for voiceprint feature comparison.

Description of the Preferred Embodiment

With reference to the drawing, Fig. 1 shows in block diagram form a typical word recognition system 20 embodying principles of the present invention, including provisions for external control 22 and external equipment 24. The latter may be connected to various components operable by or capable of using speech signals, or to a host computer (not shown) capable of storing or transmitting voiceprint data. Also connected to the external control is a speech synthesis output path through a digital to analog (D-A) converter 26 and an amplifier 28 to a speaker 30. As depicted in general terms, the system's word recognition capabilities may be utilized with various components connected to the external equipment, such as robotic devices, display devices, and data retrieval and communication equipment.

The voice input to the system is applied through a microphone 32 which supplies the voice signals in analog electrical form to an amplifier 34 and thence to an analog to digital (A-D) converter 36. The latter converts the analog signals to a time sequence of binary digits by providing a binary representation of the analog voice signal at discrete sampling intervals. In one embodiment of the invention the analog voice signal is sampled 8000 times per second with a 256-level (8-bit) A-D converter; 128 samples are collected to form a frame of 16 milliseconds duration. Obviously, each spoken word will have a large multiplicity of frames.
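To make the arithmetic concrete: 128 samples at 8000 samples per second span exactly 16 ms. A minimal Python framing sketch follows; the handling of a trailing partial frame is an assumption.

    SAMPLE_RATE = 8000     # samples per second
    FRAME_SAMPLES = 128    # 128 / 8000 s = 16 ms per frame

    def split_into_frames(samples):
        """Group a digitized word into complete 16 ms frames; any trailing
        partial frame is dropped (an assumption, not from the patent)."""
        n_frames = len(samples) // FRAME_SAMPLES
        return [samples[i * FRAME_SAMPLES:(i + 1) * FRAME_SAMPLES]
                for i in range(n_frames)]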
Digital information from the A-D converter 36 is fed to a voice processor 38, which is represented by a box in Fig. 1 and which will be described in greater detail with respect to Figs. 2 and 3. Within the processor 38, binary logic and arithmetic functions are performed on the frames of digital data, which are analyzed in accordance with predetermined or programmed instructions to provide digital information on the frequency spectrum of the voice signal. Thus, the voice signal (signal amplitude as a function of time) is converted to a voiceprint (frequency content in the voice signal as a function of time).

The voiceprint contains in digital form the information required to distinguish one word from another; it also serves to identify the particular speaker, since the voiceprint of a word is unique to the person who speaks it. Voiceprints are well known to those versed in the art and have long been used for both recognition and speaker identification. The present invention provides a digital means for establishing and reproducing a voiceprint.
The voice processor 38 is connected to an external memory bank 40 which may comprise one or more random access memory devices (RAMs) connected in parallel. The external control subcircuit 22 is connected by an 8-bit data line 44 to the voice processor. As previously described, an external equipment interface circuit 24 is connected by a two-way data path through a conductor 45. This interface circuit can be adapted to connect with a host computer for supplying outside data, such as preformed voiceprints, or to other equipment using speech commands, such as robotic devices, display devices, or data retrieval and communication equipment.
In Fig. 2 is shown a block diagram of the voice processor 38, which forms an important component of the present invention. Physically, it can be made of discrete elements mounted on a printed circuit board in the conventional manner, but it can also be made as an integrated circuit semiconductor device. As shown diagrammatically, an incoming lead 50 transmitting analog data is supplied to the analog to digital converter 36. In this embodiment, the A-D converter is provided as part of the voice processor circuit adaptable for implementation as a single integrated circuit device.
Within the voice processor 38 are two conductor buses, namely the D-bus 52 and the Y-bus 54, and all of the voice processor components are connected to either one or both of these buses. The output of the A-D converter is connected to the D-bus. An arithmetic logic unit (ALU) 56, a main subcomponent of the voice processor, receives data from the D-bus and/or the register file, and supplies an output to the Y-bus after performing one of 16 arithmetic/logic operations. Associated with the ALU is a register file 58 which is essentially a two-port memory that receives input from the Y-bus and provides output to the ALU. Similarly, an input-output (I/O) control subcircuit 60 and a random access memory (RAM) control subcircuit 62 are provided for controlling the storage and retrieval of voiceprint data. Each of these latter subcircuits has an input from the Y-bus and an output to the D-bus, and both have data paths 64 and 66, respectively, that are connected to the common 8-bit data path 42 which extends from the voice processor to the external control circuit 22 and memory 40. In addition, request, acknowledge and grant, and output ready lines 68 and 70 extend to and from the I/O control to external control, while data and control lines 72 and 74 (SI00, SI01 and RAS, CAS0, CAS1 and WE) extend from the RAM control 62 to external memory (RAM) 40. A macro read only memory (ROM) 76, which includes computation tables and macro instructions, is also connected to the D-bus and provides additional memory within the voice processor circuit.
As indicated diagrammatically by the dotted lead line 80 in Fig. 2, all of the aforesaid components are interconnected, and timing control of the circuit is maintained by a sequence controller subcircuit 82 that includes a micro-ROM 84.
In Fig. 2-A is shown a somewhat modified circuit for a voice processor 38a wherein the ADC 36a is furnished external to the chip. In this embodiment, one 8-bit bus 86 is dedicated exclusively to the transfer of RAM address data to the external memory or RAM bank 40, while an additional 8-bit bus 88, called the system bus, provides the data path between the voice processor and the external control circuit 22. This latter bus 88 may also serve as the data path between the external ADC 36a and the voice processor chip. Three control lines 90, 92, and 94 (WR, RD, and CS) are provided from the I/O control 60a to the external ADC 36a. In all other respects, the voice processor 38a, using the external ADC, is substantially the same as the processor 38 with its on-chip ADC 36.
As shown in Fig. 2-B, the invention may also be embodied in an arrangement wherein a voice processor 38b is comprised of separate discrete components rather than being implemented in the form of an integrated circuit. Such a circuit, as shown, comprises three major sections, namely a high speed computation section 96, a macro and I/O control section 98, and a common memory section 100. The high speed computation section is comprised of a micro ROM 102 connected to a sequence controller 104, a register file 106, and an ALU 108. In a typical implementation these latter two components may be comprised of four identical high speed bit-slice microprocessor elements, plus their support components. The high speed computation components are interconnected by two buses 110 and 112 (D and Y) which also provide interconnection with the common memory section 100.

The macro and I/O control section 98 comprises a microprocessor 114 and associated system components, including a macro ROM 116 and a volatile (scratchpad) RAM 118, which are interconnected by a pair of buses 120 and 122 (CD and CA) and a plurality of control lines indicated by the dotted line 124. Also connected to the buses CD and CA are an analog to digital converter (ADC) 36b and other external equipment 22b adapted to interface with external using apparatus or devices.

The CA and CD buses also provide a means for accessing the common memory 100, which is comprised of a RAM control circuit 126 and a main memory 128, such as a 32K RAM. As previously described, the RAM control is also connected to the computation section 96 through the D and Y buses. In all other functional respects, the circuit of Fig. 2-B is the same as those of Figs. 2 and 2-A.
Turning to Fig. 3, the voice processor 38 will now be described in greater detail, with an explanation of the relationship and function of the components. The components indicated by a single block in Fig. 2 are expanded in Fig. 3 and each is surrounded by a dotted line to include subcomponents.
The A/D converter 36 is connected to and receives an input from a real time clock interrupt (RTC) 130. The A/D output is supplied to a register (HOLD 2) 132 whose output passes through a switch (SRC 1) 134 to a branch of the D-bus 52.

In the sequence controller 82 a micro program counter (MPC) 136 presents an address to the micro-ROM 84 to specify the next micro code word that is to be fetched. As shown in Fig. 5, a micro code word 137 consisting of a specified number of bits of information (e.g., 43 bits) is provided to control the operation of the voice processor during one cycle, and is described in greater detail below. The counter 136 may be incremented, or parallel loaded from the output of a multiplexor 138. Under micro program control, this multiplexor passes either a real time clock (RTC) vector 140 or the contents of the D/Y bus to the micro program counter 136. The output of this counter is also connected to a holding register (HOLD 1) 142 in which the current value of the counter may be temporarily saved. The output of register 142 connects with the D-bus via a bus switch 144. The output of the micro-ROM
84 is gated through a logic network (MASK 1) 146 into a PIPE register 148. Another path through MASK 1 into the PIPE register originates at another logic network (DECOD) 150, which decodes the macro instruction contained in a register (IREG) 152. The IREG register is loaded from the Y-bus through a switch (DST 10) 153.

The contents of the PIPE register control the operation of the system by way of specific control fields and their associated decoders. These decoders (not shown) generate control signals for all the system components, such control signals being indicated by the letter "Z". Micro-code flow control is effected by means of another dedicated field in the micro code word. The contents of this latter field are either (a) logically combined with the output of the macro instruction decoder (DECOD) via a logic network (MASK 2) 154, or (b) brought directly out through MASK 2 without modification and onto the D-bus 52.
The macro-ROM block 76 comprises a ROM Hi register 156 and a ROM Lo register 158, both receiving inputs from the ALU 56 via the Y-bus 54. The outputs of the ROM Hi and ROM Lo registers are both furnished to a macro-ROM subcircuit 160, which is connected through a switch (SCR0) 162 to the D-bus.
The register file 58 is essentially a 2-port random access memory whose input is from the Y-bus. An A-port specifies the register whose contents are to be presented to an R-multiplexor 164, and the B-port specifies the register whose contents are to be presented to both multiplexors. The D-multiplexor 166 is also connected to the D-bus 52. The D and R multiplexors each have outputs that are connected to the arithmetic logic unit (ALU) 56, which comprises circuitry to perform the basic logic arithmetic functions for the system. The output of the ALU is connected to a logic network that performs one or more shift operations (L/R circuit) 168, whose output in turn is connected to the Y-bus. Another output from the ALU is connected to a status device 170 which provides an output through switch SRC 12 and also receives an input from either the ALU or the Y-bus.
The I/O control 60 and its parallel I/O port (PIO) 172 are components that control the flow of data to and from the external memory. The I/O control comprises a multiplexor 174 whose output is connected to a buffer 176, whose output in turn is connected to an 8-bit I/O bus 178. This latter bus is also supplied to a parallel input (PIN) circuit 180 of the parallel I/O port, whose output is supplied through a switch (SRC 10) 182 to the D-bus. The parallel I/O port also has a POUT circuit 184 whose input is from the Y-bus and whose output is furnished to the multiplexor 174. The parallel I/O port also is connected to a 4-bit I/O control line 186.
The multiplexor 174 also receives inputs from ROW and COL registers 188 and 190 in the first section 192 of the RAM control circuit 62. These ROW and COL registers are each connected to the Y-bus so as to receive inputs from the ALU.
A second section 193 of the RAM control 62 comprises two 12-bit shift registers 194 and 196, a demultiplex network (DEMUX) 198 for loading the shift registers from the Y-bus, and a multiplex network (MUX) 200 for unloading the shift registers onto the D-bus through a switch (SCR 3-9) 202. The shift registers are connected with the RAM array by serial input/output lines (SI00 and SI01) 204 and 206. The manner in which these components are interconnected permits information transfer between the voice processor and the RAM array 40 in several different formats. For example, the contents of the two shift registers may be treated as three 8-bit quantities or four 6-bit quantities. Each 6-bit quantity may, in turn, be treated as two 3-bit quantities at the time such a 6-bit quantity is unloaded from the shift registers through the MUX 200 onto the D-bus. These formats are related to the requirements of the voice processing algorithms, described in detail elsewhere.
To synchronize the multiplicity of events throughout the voice processor, a timing generation network (SYS TIMING), designated block 208, is provided. It comprises a master oscillator (OSC) 210 that operates at 16 MHz and drives several counter and decoder stages (TMG) 212 with appropriate timing output (T) leads 214.
The voice processor 38, as shown in Fig. 3 and as just described, can be readily implemented as a single semiconductor chip integrated circuit using known integrated circuit technology such as CMOS, N-channel MOS, P-channel MOS or bipolar type design rules.
The operation of the voice processor 38 will now be described relative to the various components, which are interconnected by the D-bus, the Y-bus, a dedicated D/Y bus terminating in the sequence controller 82, and a variety of timing and control signals collectively identified by (T) and (Z), respectively.
As shown in the timing diagram of Fig. 4, the (TMG) stages 212 generate four non-overlapping, 25% duty cycle T-states in an endlessly repeating timing chain (T0, T1, T2, and T3). The rising edge of T0 defines the beginning, and the falling edge of T3 the end, of a basic machine cycle (microcycle). The various T durations and the rising and falling edges of T0, T1, T2 and T3 define the time boundaries, within every microcycle, which signify the beginning, duration or termination of discrete intra-cycle events. The shaded areas indicate time periods when the data is in transition and may not be stable.

As indicated on the lower portion of Fig. 4, the rising edge of T0 signifies the start of information transfer from the output of the micro-ROM to the PIPE, and the falling edge of T0 the completion of this transfer. The rising edge of T1 signifies the start of micro-ROM access. The interval from this edge until the falling edge of T3 is the micro-ROM access time. Data sourcing components begin gating data onto the D-bus sometime during T1 and keep gating this data onto the D-bus until the rising edge of the next T0. The ALU performs its operation(s) on the data being presented to its inputs starting sometime during late T0 or early T1, and produces a stable output on the Y-bus by not later than mid-T3. The falling edge of T3 clocks the contents of the Y-bus into the specified destination latch. This completes the sequence of intra-cycle events.
In synchronism with the aforementioned system timing, the sequence control block specifies the information flow which takes place between the several component blocks of the system. This is accomplished by both the code pattern of each micro code word and the sequence in which these words are executed. During any one machine cycle, called a microcycle, the micro code word currently contained in the PIPE register 148 is executed while the next word is being fetched (accessed) from the micro code ROM 84. At the end of a microcycle, the new word emerging from the micro code ROM is latched into the PIPE, to be executed during the following microcycle. The micro code word contains a number of control fields, each comprising a specified number of bits. These fields are decoded during the execution of the micro code word to provide the necessary control impulses throughout the processor 38.

The start of a sequence of micro code words, as well as the particular series in which several sub-sequences are to be executed, may be specified by a macro instruction. Such a macro instruction is fetched from the macro-ROM 76 and held in the IREG 152 for the duration of the execution of all the micro code words which comprise the entire sequence that effects the operational intent of the macro instruction. By means of the DECOD, MASK 1 and MASK 2 logic, the sequence controller 82 is paced through the appropriate sequence implied by the macro instruction currently residing in the IREG.
Information flow between the several voice processor components transpires over the data/address buses, except in the case of register file to ALU transfers. In the latter instance, dedicated data paths are provided. All sources for information transfers gate such information onto the D-bus, except in transfers from register file to ALU. All destinations for information transfers receive such information off the Y-bus, with the exception of the micro program counter 136, which receives such information off the D/Y-bus. The latter bus may be viewed as an extension of either the D-bus or the Y-bus, as the case may be, during information transfers involving the micro program counter.

All information transfers from one source to a destination, including transfers from the register file 58 to some destination or back into the register file, are routed through the ALU. The only exception to this rule is a transfer from the D-bus, via the D/Y-bus, to the MPC. The ALU may be directed to merely "pass through" the contents of the D-bus to the Y-bus without performing a logical or arithmetic operation on the information in transit, or it may be directed to perform a logical or arithmetic operation on such information in transit and output the result of said operation to the Y-bus. The ALU performs such operations on two 8-bit quantities presented to it by the outputs of the D-MUX 166 and the R-MUX 164. In turn, the D-MUX may be directed to select either the D-bus or the B-port of the register file as its information source, while the R-MUX may be directed to select the output of either the A-port or the B-port of the register file. The result of the ALU operation is output onto the Y-bus, whence it is routed to its destination.
The external dynamic RAM array 40 provides the mass memory in which all the voice processing information is held during the spectral analysis, template packing, and word recognition phases. This RAM array is interconnected by means of the two serial I/O lines 72, which provide the data path, and the I/O bus, over which the address information is output to the array. Data is exchanged between the two 12-bit shift registers 194 and 196 and the RAM array, while addresses are set up via the ROW and COL registers 188 and 190. During a typical voice processor to RAM array transfer, the shift registers are loaded with the information that is to be sent to the RAM, and then the ROW and COL registers are loaded with the starting address for the impending transfer. The ROW address is sent first, followed by the COL address. RAM CNTL 62 and I/O CNTL 60 then transfer the ROW and COL addresses to the RAM array and activate the requisite array control lines (i.e., WE, RAS, CAS0, and CAS1) to effect the actual double bit serial information transfer.
A RAM array to voice processor transfer is largely a repeat of the aforementioned operation, with a few exceptions. ROW and COL are set up as before, and information is clocked from the RAM array into the voice processor shift registers. From there the information is gated onto the D-bus and routed through the ALU, where it is operated upon in accordance with the voice processing algorithm before being transferred to the register file for temporary storage. The information being gathered in the register file is, in turn, operated upon in conjunction with additional information having been input from the RAM array at some other time, and the resultant transformed information is again sent to the RAM array.

This is an iterative, highly recursive process, both during spectrum analysis and pattern match operations. Thus, the hardware structure in RAM CNTL 62 (A&B) and I/O CNTL 60, as well as the data structure underlying the location of all the information in the RAM array, has been tailored to optimize throughput.
The I/O bus over which COL and ROW address information is output to the RAM array 40 also serves as a general purpose I/O port through which the voice processor may communicate with an external controller. PIO bus access contention is resolved through the use of a fully interlocked, asynchronous handshake protocol implemented through the I/O CNTL signals (BREQ, GRT, ORDY, ACK). For purposes of this type of PIO transaction, PIN serves as an input latch and POUT as an output latch for the information being transferred.
The original source of the digital information, which undergoes transformation as a result of the operations described above, is the analog to digital converter (ADC) 36. This converter samples the analog waveform input to the voice processor at precise intervals and converts these samples into digital notation corresponding to the instantaneous amplitude of the sampled waveform at the time the sample was taken. The interval between samples is controlled by the real time clock (RTC) circuitry.
The RTC logic interrupts the sequence control logic and causes the RTC interrupt service routine to be executed. This routine is responsible for saving machine context, accessing the ADC 36 via HOLD 2, transferring the latest conversion result into RAM, and restoring machine context so that the previously preempted background task may resume execution.
Each conversion result is transferred to the RAM array in accordance with the rules governing the data structures in the array.

During the time interval in which the current samples are being taken, converted into digital form and collected in the RAM array, all of which involves the periodic foreground activation of the RTC interrupt service routine, the collection of samples from the previous interval is being processed by a background task which performs a time to frequency domain transformation and subsequent voiceprint feature extraction. The processes which are responsible for this transformation and feature extraction are described in detail in the following section.

Digital Spectrum Analysis

The major components of the voice processor 38, as described in the previous section, function to process voice signals in the form of a time sequence of binary digits to provide digital information on the frequency spectrum of the voice signal. Thus, the voice signal (signal amplitude as a function of time) is transformed into a voiceprint (frequency content in the voice signal as a function of time). The voiceprint contains in digital form the information required to distinguish one word from another; it also serves to identify the particular speaker, since the voiceprint of a word is unique to the person who speaks it. Voiceprints are well known to those versed in the state of the art and have long been used for both recognition and speaker identification. The present invention provides a digital means for obtaining the voiceprint.

The analog-to-digital converter 36 provides a binary representation of the analog voice signal at discrete sampling intervals; a collection of sampled voice signal data in binary form is aggregated into a frame. In the preferred embodiment of the invention the analog voice signal is sampled 8000 times per second with a 256-level (8-bit) A-D converter; 128 samples are collected to form a frame of 16 milliseconds duration.
To help explain the method of digital spectrum analysis according to the invention, a series of representative diagrams is provided to show the processing steps for a single word. Thus, Fig. 6 represents a highly idealized analog signal waveform plot of amplitude vs. time for a typical spoken word having a finite length of 640 milliseconds and comprised of 40 frames of 16 milliseconds each.
The number N of samples in the frame is taken to be a power of two:

N = 2^{p+1}   (1)

In the preferred embodiment N = 128 and p = 6. The sequential number of a voice signal sample within the frame may be expressed as a binary number k which is p+1 binary digits long:

k = k_p 2^{p} + k_{p-1} 2^{p-1} + k_{p-2} 2^{p-2} + \cdots + k_0   (2)

Here k_p, k_{p-1}, ..., k_0 are binary digits, either 0 or 1, representing in aggregate the number k expressed in binary form.
In Fig. 7 one frame of data is shown covering 16 milliseconds of time divided into 128 equal increments of 125 microseconds each. At each time increment is an amplitude value of the voice signal at that instant, represented by an 8-bit digital signal. As indicated, these amplitude values may vary either positively or negatively from a base level during the time period of the frame, depending on the voice characteristics of the speaker and the word being spoken.
The digital processing method of the present invention serves to convert the voice signal data to a sequence of spectral amplitudes, as shown graphically in Fig. 8. Each amplitude, which may be represented as a complex number, describes the magnitude and phase of a particular frequency component of the voice signal. Each spectral component is represented by new oscillating time functions closely resembling conventional sine and cosine functions, but having simplified binary representations. These new functions allow a substantial reduction in the digital processing steps required to transform from voice signal data to spectral amplitude data.
The new oscillating time functions may be represented as complex operations on the binary digits (k_p, k_{p-1}, ..., k_0) representing the time sequence k and the binary digits (n_p, n_{p-1}, ..., n_0) representing the frequency sequence n. In general, the functions are given by

V(n,k) = \exp\left[ j\pi \left( \sum_{r=0}^{p} \sum_{t=0}^{m} n_{p-r}\, k_{r-t}\, 2^{-t} + \phi \right) \right]   (3)

The parameter m may range from 0 to p; each choice provides a selection of spectral time functions. The lowest values of m require the minimum amount of data processing at the cost of some degradation in spectral purity. The phase correction term φ, which may be zero, is symmetrically dependent on k and n. Elements of expression (3) may be defined as follows:

m = parameter (0 to p)
r = an index for the summation
t = an index for the summation
p = top of range (6)
k = time sequence index
n = frequency sequence index

The preferred choice of time function parameters providing the most satisfactory compromise between spectral purity and computation speed for the preferred embodiment is m = 3 and:

\phi = 2^{-m} \sum_{r=0}^{p} n_{p-r}\, k_{r-m-1}   (4)

The transformation from voice signal data to spectral data is accomplished by methods similar to those known in the art as "fast Fourier transforms" (see, for example, E. O. Brigham, The Fast Fourier Transform, Prentice-Hall, 1974), except that the new functions require computations which may be accomplished using only the operations of add, subtract, and transformation by table look-up. The resulting spectral analysis is substantially faster than a fast Fourier transform, and may be implemented in low cost LSI since general multiplication logic is not required.
The processing operations are most conveniently represented as complex arithmetic operations on a complex data array A; this array is a sequence of N memory locations, each location comprising a 16-bit real number and a 16-bit imaginary number.
The first step in the spectral analysis is to transfer the voice signal data to the processing array:

A^{0}(k_p, k_{p-1}, \ldots, k_0) = Z(k_p, k_{p-1}, \ldots, k_0)   (5)

Here Z represents the voice data, which is a sequence of N real numbers, and the superscript 0 indicates that A^{0} is the original or starting point of the process. Starting from the original sequence of voice samples, one bit of the spectral sequence n is substituted for one bit of the time sequence k. The process takes p+1 steps, corresponding to the number of bits needed to describe the sequences. Each step in the process is based on the results of the prior step, and may be most conveniently represented by complex arithmetic operations:
A^{r+1}(n_0, n_1, \ldots, n_r;\ k_{p-r-1}, \ldots, k_0) = \sum_{k_{p-r}=0}^{1} A^{r}(n_0, \ldots, n_{r-1};\ k_{p-r}, \ldots, k_0)\, \exp\left[ j\pi\, n_r \left( \sum_{t=0}^{m} k_{p-r-t}\, 2^{-t} + k_{p-r-m-1}\, 2^{-m} \right) \right]   (6)

The last step of the process consists of transferring the contents of the processing array in bit-reversed order to the desired sequence S of complex spectral amplitudes:
21 P' p~ nO) = A (nO, nl,...n ) (7) ~2 23 In the p~eferred embodiment, the operations described 24 above reduce to addition, sub-traction, and multi.plication by 25 three quan~ities: sin (~5), sin (22.5), and sin (67.5).
Since these multiplications are by fixed quantities and there are so few of them, the multiplications are accomplished in the preferred embodiment by table look-up. Other multiplication techniques, such as pre-compiled shift-and-add operations, may also be used. These operations are extremely fast compared to the multiplication processes required in the fast Fourier transform methods, and are also simpler to implement in digital logic.
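To make the table look-up idea concrete, the sketch below builds signed look-up tables for the three fixed multipliers named above. This is a minimal Python illustration, not the patent's ROM contents; the 8-bit operand width and the table layout are assumptions.

    import numpy as np

    # Fixed multipliers used in the preferred embodiment (degrees).
    FIXED_ANGLES = (22.5, 45.0, 67.5)

    # One 256-entry table per multiplier: table[x + 128] = round(x * sin(angle))
    # for signed 8-bit operands x in [-128, 127]. (Operand width is assumed.)
    SIN_TABLES = {
        angle: np.round(np.arange(-128, 128) * np.sin(np.radians(angle))).astype(np.int16)
        for angle in FIXED_ANGLES
    }

    def mul_by_sin(x: int, angle: float) -> int:
        """Multiply a signed 8-bit value by sin(angle) using only a table look-up."""
        return int(SIN_TABLES[angle][x + 128])

    # Example: 100 * sin(45°) ≈ 71, with no multiplication at run time.
    assert mul_by_sin(100, 45.0) == round(100 * np.sin(np.radians(45.0)))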
When the bit substitution process is complete, the voice signal sequence is transformed into a sequence of 128 spectral amplitudes as shown in Fig. 8. This process is repeated for each 16-millisecond frame in the voice signal to generate a voiceprint comprising a series of spectral amplitudes. Each frame represents 16 milliseconds of time duration and 128 spectral amplitudes; this collection of voiceprint data is shown graphically in Fig. 9.
The digital processing means described above for obtaining the spectrum of a voice signal is reversible. As described, the method processes a voice signal in the form of a time sequence to provide a sequence of spectral amplitudes. It may be shown that if the same process is used on the sequence of spectral amplitudes, the original voice signal in the form of a time sequence is reconstituted.
The reversed processing operations are performed in the same manner as the spectrum analysis process, using the complex data array A. The first step in the process is to transfer the provided sequence S of complex spectral amplitudes to the processing array:

A^0(n_p, n_{p-1}, ..., n_0) = S*(n_p, n_{p-1}, ..., n_0)    (8)

Here S* represents the complex conjugate of the provided sequence S. Starting from the original sequence of spectral amplitudes, one bit of the time sequence k is substituted for one bit of the frequency sequence n. Each step in the process is based on the results of the prior step:

A^{r+1}(k_0, k_1, ..., k_r; n_{p-r-1}, ..., n_0) = Σ_{n_{p-r}} A^r(k_0, ..., k_{r-1}; n_{p-r}, ..., n_0) · exp[jπ k_r (Σ_{t=0}^{m} 2^{-t} n_{p-r-t} + 2^{-m} n_{p-r-m-1})]    (9)

The process takes p+1 steps, corresponding to the number of bits needed to describe the sequences. The last step of the process consists of transferring the contents of the processing array in bit-reversed order to the desired sequence Z of real-valued time waveform amplitudes:

Z(k_p, k_{p-1}, ..., k_0) = Re A^{p+1}(k_0, k_1, ..., k_p)    (10)

The reconstituted voice signal may be converted to an analog signal by means of a digital-to-analog (D/A) converter. By the addition of the D/A converter 26 to the system as shown in Fig. 1, it is therefore possible to combine voice synthesis capability with voice recognition capability. This combination of voice input and voice output, using shared digital processing means, is a unique feature of this invention.
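The reversibility property can be checked numerically. The sketch below substitutes a conventional FFT for the simplified-kernel transform described above (purely an illustrative assumption): applying the same forward process to the conjugated spectrum and keeping the real part, as in equations (8) through (10), reconstitutes the original frame up to the scale factor N.

    import numpy as np

    rng = np.random.default_rng(0)
    N = 256                      # frame length; 128 spectral amplitudes would
                                 # correspond to a 256-sample real frame
    z = rng.standard_normal(N)   # a frame of real-valued "voice" samples

    # Forward: time sequence -> spectral amplitudes.
    S = np.fft.fft(z)

    # Reverse: apply the SAME forward process to the conjugated spectrum,
    # then keep the real part (the structure of equations (8)-(10)).
    z_rec = np.real(np.fft.fft(np.conj(S))) / N

    assert np.allclose(z, z_rec)  # the original frame is reconstituted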
Voiceprint Feature Extraction for Recognition

In the preferred embodiment the voice signal is decomposed into 128 spectral amplitudes for each 16-millisecond frame. This degree of refinement of spectral information is more than required for most voice recognition or synthesis applications, and voiceprint storage memory requirements may be reduced by effective feature extraction and data compaction.

Methods of voiceprint data compaction differ depending on whether the voiceprint is to be used for voice recognition or voice synthesis. The problem associated with data compaction for voice recognition is to preserve those features of the voiceprint necessary for accurate voice recognition while ignoring those qualities relating to speaker variations in tempo and amplitude. The method must also be robust in the presence of background noise. The present invention substantially exceeds the prior art in recognition accuracy in the presence of noise.
Voiceprint data from the preferred embodiment of the voice processor 38 is in the form of 128 spectral amplitudes. These amplitudes are collected together into spectral channels selected on the basis of psychoacoustic information content as determined by experiment and by cost/performance goals. In the preferred embodiment 16 channels are selected for general-purpose recognition. Allocation of spectral data to a particular channel is accomplished on the basis of spectral energy content. That is, the amplitudes are squared by means of a binary look-up table in which x is replaced by x², and then summed together to provide the total spectral energy in the channel. This energy value is then converted to a decibel scale, known by those skilled in the art to be most suitable for representation of voice spectral information.

As shown in Fig. 10, the amplitude vs. frequency data of each frame is compacted; that is, the 128 spectral lines are reduced to 16 channels by summation of groups of contiguous spectra, and the amplitude values are converted to a decibel scale.
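A minimal sketch of this compaction step follows. The equal-width grouping of the 128 lines into 16 bands is an assumption made for brevity; the preferred embodiment allocates lines to channels on psychoacoustic grounds, so the real band edges would be unequal.

    import numpy as np

    def compact_frame(spectral_amplitudes: np.ndarray) -> np.ndarray:
        """Reduce 128 spectral amplitudes to 16 channel energies on a dB scale.

        Equal-width bands are used here for simplicity; the preferred
        embodiment groups lines by psychoacoustic information content.
        """
        assert spectral_amplitudes.shape == (128,)
        energy = spectral_amplitudes.astype(float) ** 2      # x -> x^2 (a table look-up in hardware)
        channels = energy.reshape(16, 8).sum(axis=1)         # sum contiguous groups of 8 lines
        return 10.0 * np.log10(np.maximum(channels, 1e-12))  # convert to decibels

    frame_db = compact_frame(np.abs(np.fft.fft(np.random.randn(256))[:128]))
    print(frame_db.shape)  # (16,)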
At this point, the digital voiceprint data in the preferred embodiment comprises 16 channels of spectral energy data per 16-msec frame of voice signal, expressed on a decibel scale. The data is then time smoothed, using well known prior art digital smoothing techniques. The smoothed voiceprint data is denoted by x_k^j, where j represents the spectral channel index (ranging from 0 to 15) and k represents the frame index (incremented every 16 msec). Every other frame (that is, every 32 msec in the preferred embodiment) the time average spectral amplitude x̄ and the time rate-of-change ẋ of each spectral amplitude are extracted:

x̄_k^j = (x_{k+1}^j + 2x_k^j + x_{k-1}^j) / 4    (11)

ẋ_k^j = (x_{k+1}^j − x_{k-1}^j) / 2    (12)
Further reduction in the number of binary bits required to store the voiceprint feature data may be accomplished by well known techniques of encoding, such as storing the spectral mean and the deviations of each channel from the mean. Thus, we may have:

x̄_k^j = x̄_k + Δx_k^j    (13)

ẋ_k^j = ẋ_k + Δẋ_k^j    (14)

The spectrum averages are defined as:

x̄_k = (1/16) Σ_{j=0}^{15} x̄_k^j    (15)

ẋ_k = (1/16) Σ_{j=0}^{15} ẋ_k^j    (16)

Deviations of each feature from the average, Δx_k^j and Δẋ_k^j, require fewer bits to store than the original feature.
Amplitude normalization is required for effective voice recognition. Variations in overall voice amplitude, as for example from speaking loudly or softly, or from moving a microphone closer or farther, are ignored in human conversations. In the decibel scale, a variation in the overall amplitude of the speech level is represented by an additive constant in the spectral amplitudes. Whenever data is processed by means of subtracting spectral amplitudes, the constant is removed, and the resultant is automatically independent of speech level. Thus, the time rate-of-change features ẋ_k^j and the spectral difference features Δx_k^j and Δẋ_k^j are automatically normalized with respect to variations in speech level. The only voiceprint data in which voice level remains is the spectrum amplitude average x̄_k. This invention provides a normalized average x̃_k, normalized by means of the peak amplitude P of the word:

P = max_k {x̄_k}    (17)

x̃_k = x̄_k − P    (18)

Since the normalized spectral amplitude x̃_k is represented as a difference between peak level and actual level, it is automatically independent of speech level. The normalizing parameter P, being based on averages both in frequency and time, is insensitive to statistical fluctuations in spectral amplitude.
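Equations (17) and (18) amount to subtracting the word-level peak from the frame averages, as in this short sketch (variable names are illustrative):

    import numpy as np

    def normalize_word(xbar: np.ndarray) -> np.ndarray:
        """xbar: spectrum-average amplitude per frame, in dB, for one word."""
        P = xbar.max()      # (17) peak amplitude of the word
        return xbar - P     # (18) peak-relative amplitudes, <= 0, level-independent

    print(normalize_word(np.array([30.0, 42.0, 38.0])))  # [-12.  0. -4.]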
We will assume in the remainder of this disclosure that the acoustic features x̄_k and ẋ_k, as described in equations (13) and (14), have been normalized and hence are independent of speech level.

Fig. 11 shows a diagram for a single frame illustrating the feature ensemble domain, part 1, wherein the amplitude values of Fig. 10 have been used to determine a normalized channel mean value (x̄), and a deviation from this mean value (Δx_j) is obtained for each channel.

In Fig. 12, a three-dimensional plot illustrating the feature ensemble domain, part 2, is shown wherein the successive frames for the word (such as shown in Fig. 6) are arranged in order according to their time sequence. Now, for each channel, the maximum amplitude value at the midpoint of each frame is connected to that of the adjacent frame, and the instantaneous slope of the mean value x̄ (i.e., ẋ) is determined for each frame. This feature ensemble domain is compressed to occupy a 32-millisecond slice in the time domain.
Word Recognition

Digital processing means as described above are used to convert a voice signal into a voiceprint. The voiceprint comprises a time sequence (data every 32 msec in the preferred embodiment) of time-averaged spectral amplitude and time-rate-of-change of spectral amplitude in each of 16 spectral channels.
A person trains the unit by creating and storing digital voiceprints. Each voiceprint incorporates the unique spectral characteristics of both the speaker and the word being spoken. A minimum of one training voiceprint, called a template, is required for each word to be recognized. One template per word is adequate for many recognition purposes, for example, practiced speakers in a relatively quiet environment. Increased robustness of recognition accuracy may be achieved for novice speakers with highly variable voiceprints, or for recognition in an adverse noisy background, by providing several templates per word. It has been found experimentally that two templates per word suffice for all but the most critical applications.
Thus, Fig. 13 shows a word template comprised of a set of feature ensembles (X) which together characterize the word of Fig. 1. Each feature ensemble consists of 56 bits of data which represent the salient information derived from 2048 bits of ADC sampling data (2 × 128 × 8). These 56 bits are comprised of the mean value x̄ (5 bits), the instantaneous mean value slope ẋ (3 bits), and the 16 deviations from the mean values ΔX_0 through ΔX_15 (3 bits each). This data for each word template is ultimately stored in the external RAMs for the system.
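As a worked illustration of the 56-bit budget, the sketch below packs one feature ensemble into a single integer. The field order and the quantization are assumptions; only the field widths (5 + 3 + 16 × 3 = 56 bits) come from the text.

    def pack_ensemble(mean5: int, slope3: int, devs3: list[int]) -> int:
        """Pack x̄ (5 bits), ẋ (3 bits) and 16 deviations (3 bits each) into 56 bits."""
        assert 0 <= mean5 < 32 and 0 <= slope3 < 8 and len(devs3) == 16
        word = (mean5 << 51) | (slope3 << 48)     # high fields: mean, then slope
        for i, d in enumerate(devs3):             # 16 three-bit deviation fields
            assert 0 <= d < 8
            word |= d << (45 - 3 * i)
        return word                               # occupies bit positions 0..55

    packed = pack_ensemble(17, 3, [4] * 16)
    assert packed.bit_length() <= 56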
To recognize a word, a digital voiceprint is created and is compared to each of the templates in the vocabulary storage memory. The best match, subject to an acceptance criterion, is selected as the recognized word. Recognition accuracy and robustness (i.e., maintaining accuracy under adverse conditions) are strongly dependent on the word matching process, which in turn depends critically on the acoustic features and the means of comparison.
Matching a voiceprint to a stored template is accomplished in our invention by a novel feature comparison combined with a dynamic programming optimization technique.
The incoming voiceprint is defined by a sequence of acoustic features, which are time-averaged spectral amplitudes and time-rates-of-change of spectral amplitudes. The templates are defined similarly. We shall consider first the comparison of a single feature of the incoming word comprising the spectral sequence (x_j, ẋ_j), and a single feature of the template (y_j, ẏ_j). The measure of the degree of similarity is given by a novel metric function which is a feature of our invention:

d = Σ_{j=0}^{15} (x_j − y_j)² / [1 + a²(ẋ_j + ẏ_j)²]    (19)

Here "a" is a scaling factor to account for normal rates of speech. In the preferred embodiment it is taken to be 6 msec/dB.
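A direct transcription of equation (19), assuming NumPy arrays holding the 16 channel amplitudes and their time-rates-of-change (names are illustrative):

    import numpy as np

    A = 6.0  # scaling factor "a", msec/dB, per the preferred embodiment

    def topological_distance(x, xdot, y, ydot):
        """Equation (19): amplitude differences de-weighted where slopes are high."""
        return np.sum((x - y) ** 2 / (1.0 + A ** 2 * (xdot + ydot) ** 2))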
The metric d differs from prior art in the use of time-rates-of-change of spectral amplitudes. The effect of this usage is to provide a topological (i.e., continuous) metric that is insensitive to high rates of amplitude variation within a speech signal, and which provides an important element of noise immunity.
Prior art metrics for estimating the similarity of acoustic features depend upon the instantaneous value of the spectral amplitudes, and do not include time-rates-of-change. For example, the prior art Euclidean metric may be defined as:

d_E = Σ_{j=0}^{15} (x_j − y_j)²    (20)

In Fig. 14 is shown graphically the difference between the metric concepts in the case of a rapidly changing speech signal with a slight time misregistration between the word and the template. The Euclidean distance d_E between word and template in a region of high slope may be quite large due to even a small time misregistration. The topological metric d of this invention may be represented as the diameter of a ball between the two curves, not the vertical distance. Consequently, a small misregistration of timing leads to a correspondingly small distance. This topological metric using time-rates-of-change provides a consistently better measure of similarity between acoustic features than the Euclidean metric, which is sensitive to high rates of change.
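The effect is easy to see numerically. Below, two identical steep ramps are offset by one sample: the Euclidean distance grows with the square of the slope, while the topological distance of equation (19) stays bounded near δ²/(4a²). The signal shape and numbers are illustrative only.

    import numpy as np

    a = 6.0
    slope = 5.0                  # dB per msec: a rapidly changing segment
    t = np.arange(16)
    x = slope * t                # word feature values
    y = slope * (t - 1)          # same curve, misregistered by one sample

    xdot = np.full(16, slope)    # both curves have the same rate of change
    ydot = np.full(16, slope)

    d_euclid = np.sum((x - y) ** 2)
    d_topo = np.sum((x - y) ** 2 / (1 + a ** 2 * (xdot + ydot) ** 2))

    print(d_euclid)  # 400.0 -- grows as slope squared
    print(d_topo)    # ~0.11 -- nearly independent of slope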
A further advance of the topological metric over prior art is its contribution to noise immunity. To achieve a close measure of similarity, not only must the spectral amplitudes match, but also the time-rates-of-change of the spectral amplitudes. It is highly unlikely for noise signals to match both conditions at once.
Those skilled in the art will recognize that the means to achieve topological smoothness of the metric with regard to time registration in highly fluctuating speech may also be applied to other metrics, for example, the Chebyshev metric. Thus, we may have as an alternate to Equation (19):

d = Σ_{j=0}^{15} |x_j − y_j| / [1 + a²(ẋ_j + ẏ_j)²]^(1/2)    (21)

The essential feature of this invention is to provide a means for reducing apparent differences in spectral amplitude in regions of high rates of change by utilizing corrections based on time-rates-of-change.
The major advantages of the topological metric may be preserved and computation greatly reduced by storing template data in terms of average amplitude and spectral differences, and by using the average time-rate-of-change to provide the topological correction. The formula used in the preferred embodiment is:

d = [b(x̄ − ȳ)² + Σ_{j=0}^{15} (Δx_j − Δy_j)²] / [1 + a²(ẋ + ẏ)²]    (22)

Here b is a constant which may be 16 for closest equivalence to Equation (19), or may be varied as a further parameter in improving recognition performance. In the preferred embodiment b=8.
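In code, equation (22) operates on the stored ensemble form (spectrum mean, mean slope, per-channel deviations) rather than on the raw channels; a sketch under the same naming assumptions as before:

    import numpy as np

    A, B = 6.0, 8.0  # "a" and "b" of the preferred embodiment

    def ensemble_distance(xbar, xdot, dx, ybar, ydot, dy):
        """Equation (22): compare two feature ensembles (mean, slope, deviations)."""
        num = B * (xbar - ybar) ** 2 + np.sum((dx - dy) ** 2)
        return num / (1.0 + A ** 2 * (xdot + ydot) ** 2)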
The topological metric of Equation (22) is computed in the preferred embodiment by means of a series of table look-ups (in which a value X is replaced by its square X²), additions, and a table look-up to perform the slope correction.
Prior art dynamic programming optimization techniques, well known to those versed in the art, may be used to achieve optimum time registration between the voiceprint of the incoming word and the template under comparison.
The topological metric of this invention provides two improvements over prior art speech recognizers based on dynamic programming: 1) substantial reduction in calculational effort; and 2) improvement in noise immunity. Reduction in calculational effort is achieved from the fact that the topological metric is able to compare acoustic features representing longer periods of time, even in the presence of rapidly varying speech patterns. Dynamic programming calculations are reduced in inverse proportion to the square of the time period; for example, a doubling of the period reduces calculations by a factor of four. A further benefit is a reduction in template storage as the time period covered by the data increases. In the preferred embodiment the time period is 32 msec, representing information from two 16-msec frames of spectral data from the spectrum analyzer.
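The time-registration step itself can be any standard dynamic programming recursion; the disclosure relies on methods known in the art rather than spelling one out, so the classic dynamic time warping form below is a stand-in, using the ensemble distance of equation (22) as the local cost:

    import numpy as np

    def dtw_distance(word_ensembles, template_ensembles, local_cost):
        """Classic DTW over two sequences of feature ensembles.

        local_cost(w, t) -> nonnegative distance between two ensembles,
        e.g. ensemble_distance from equation (22).
        """
        n, m = len(word_ensembles), len(template_ensembles)
        D = np.full((n + 1, m + 1), np.inf)
        D[0, 0] = 0.0
        for i in range(1, n + 1):
            for j in range(1, m + 1):
                c = local_cost(word_ensembles[i - 1], template_ensembles[j - 1])
                D[i, j] = c + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
        return D[n, m]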
Noise immunity in the preferred embodiment is further improved by the elimination of word boundary considerations. Prior art use of dynamic programming techniques for word recognition requires identification of word start and word stop. Since words frequently start and stop on sibilants or other low-energy unvoiced segments of speech, noise is particularly troublesome for prior art word boundary algorithms.

This invention eliminates word boundary considerations by assigning an arbitrary start (200 msec before the first appearance of voiced speech) and an arbitrary stop (200 msec after the last appearance of voiced speech) in the preferred embodiment. Accurate time registration is achieved by means of dynamic programming methods known to those versed in the art, combined with the highly effective topological metric. By these means accurate recognition is achieved even in the presence of noise levels which are comparable to the low-energy unvoiced components of speech; there is degradation of accuracy as the noise level is increased, but there is no catastrophic cessation of recognition as occurs in prior art word recognizers relying on word boundary algorithms.

Voice Reproduction

Voice reproduction is a substantially simpler task than voice recognition, and is accomplished in this invention using only a portion of the digital processing capability.

A person trains the unit for voice reproduction by creating and storing digital voiceprints. Each stored voiceprint comprises a time sequence of spectral amplitudes, as shown in Fig. 8, which may be reduced in data content for compact storage in an external memory, i.e., RAM 40.

To reproduce speech, the spectral amplitudes are processed by the voice processor 38 previously described. It is a feature of this invention that the digital spectrum analysis method is reversible, and a frame of spectral amplitudes may be processed to yield a frame of reconstituted voice signals in the form of digital amplitudes.

The reconstituted voice signal amplitudes are passed through the digital-to-analog converter 26 and amplified to create an audible sound in a loudspeaker, telephone, or other audio apparatus.

Voiceprint Feature Extraction for Voice Reproduction
The voiceprint features most suitable for voice reproduction do not necessarily coincide with the voiceprint features most suitable for voice recognition. This results from the fact that people expect qualities in reproduced voice that have nothing to do with recognition; for example, whether the speaker is male or female, the emotional state of the speaker, and so forth. Absence of these qualities tends toward a machine-like or robotic quality which many people find objectionable. The additional features required for quality voice reproduction tend to increase the number of bits in the digitally stored voiceprint.

Another feature of this invention is the ability to create and store voiceprints for both recognition and reproduction purposes.
In the preferred embodiment of the invention the voice signal to be stored for later reproduction is spectrally analyzed on a frame-by-frame basis exactly as is done for recognition. However, the feature extraction process is different. In the preferred embodiment the spectral amplitudes below a threshold magnitude are discarded by providing suitable instructions within the macro-ROM of the voice processor 38. The remaining amplitudes above the desired level are represented by a limited number of bits. The voiceprint data thus consists of a bit-reduced sequence of spectral amplitudes.
Quality of the reproduced voice depends directly on the number of bits preserved in the voiceprint. For a typical word consisting of 40 frames of 16 milliseconds each, or a total of 640 milliseconds, the initial number of bits is 40,960 (40 × 128 × 8). Excellent quality is preserved when the voiceprint data is reduced to 8,000 bits; the word remains adequately recognizable, though with a robotic quality, at 1,000 bits.
To those skilled in the art to which this invention relates, many changes in construction and widely differing embodiments and applications of the invention will suggest themselves without departing from the spirit and scope of the invention. The disclosures and the description herein are purely illustrative and are not intended to be in any sense limiting.

We claim:

Claims (28)

1. A method for providing a spectral analysis of an analog signal waveform comprising the steps of:
dividing the total incoming analog signal into time frames of equal duration;
converting the analog signal to a sequence of discrete signal amplitudes at equally spaced time intervals in each frame;
transforming the sequence of discrete signal amplitudes to a sequence of complex spectral amplitudes, each such spectral amplitude representing the magnitude and phase of a function V(n,k) defined as:
V(n,k) = exp[jπ(Σ (r=0 to p) Σ (t=0 to m) 2^(-t) np-r kr-t + φ)]
wherein
k = time sequence index
n = frequency sequence index
r,t = integer summation indexes
m = time function parameter defining the number of retained bits
φ = phase adjustment function
and the subscripts (p-r) and (r-t) for n and k refer to bit locations in their binary representation, with bit locations ranging from 0 to the maximum value p and subscript values outside this range representing vanishing values.
2. The method of claim 1 wherein the phase adjustment function φ is defined as:
φ = 2^(-m) Σ (r=0 to p) np-r kr-m-1
3. The method of claim 1 wherein the phase adjustment function φ is zero.
4. The method of claim 1 wherein the transformation from a sequence of discrete signal amplitudes to a sequence of complex spectral amplitudes is accomplished by establishing a processing array; transferring the signal amplitude data to the array in accordance with the expression A°(kp, kp-1, ... k0) = Z(kp, kp-1, ... k0) wherein A° represents the starting values of the array and Z represents the signal data in the form of binary digits;
starting from the original sequence of signal data substituting one bit of the spectral sequence n for one bit of the time sequence k in accordance with the expression:
Ar+1(n0, n1, ... nr; kp-r-1, ... k0) = Σ (over kp-r) Ar(n0, ... nr-1; kp-r, ... k0) exp[jπ nr(Σ (t=0 to m) 2^(-t) kp-r-t + 2^(-m) kp-r-m-1)]
wherein Ar = results of the rth step of processing, beginning at r=0 and ending at r=p+1;
determining the sequence of complex spectral amplitudes from the final step of the processing array in accordance with the formula:
S(np, np-1, ... n0) = A^(p+1)(n0, n1, ... np) wherein S = the desired sequence of complex spectral amplitudes.
5. A method for producing an analog signal waveform comprising the steps of:
providing a predetermined series of digital signals representing a sequence of complex spectral amplitudes;
transforming the sequence of complex spectral amplitudes to a sequence of discrete time waveform amplitudes, each such spectral amplitude representing the magnitude and phase of a function V(n,k) defined as:
V(n,k) = exp[jπ(Σ (r=0 to p) Σ (t=0 to m) 2^(-t) np-r kr-t + φ)]
wherein
k = time sequence index
n = frequency sequence index
r,t = integer summation indexes
m = time function parameter defining the number of retained bits
φ = phase adjustment function
converting the transformed digital data into an analog output signal.
6. The method of claim 5 wherein the phase adjustment function φ is defined as:
φ = 2^(-m) Σ (r=0 to p) np-r kr-m-1
7. The method of claim 5 wherein the phase adjustment function φ is zero.
8. The method of claim 5 wherein the transformation from a sequence of complex spectral amplitudes to a sequence of discrete time waveform amplitudes is accomplished by establishing a processing array, transferring the complex conjugate of the spectral amplitude data to the array in accordance with the expression A°(np, np-1, ... n0) = S*(np, np-1, ... n0) wherein A° represents the starting values of the array and S* represents the complex conjugate of the spectral amplitude data in the form of binary digits;
starting from the original sequence of spectral amplitude data, one bit of the time sequence k is substituted for one bit of the spectral sequence n in accordance with the formula:

Ar+1(k0, k1, ... kr; np-r-1, ... n0) = Σ (over np-r) Ar(k0, ... kr-1; np-r, ... n0) exp[jπ kr(Σ (t=0 to m) 2^(-t) np-r-t + 2^(-m) np-r-m-1)]

wherein Ar = results of the rth step of processing, beginning at r=0 and ending at r=p+1.
determining the sequence of time waveform amplitudes from the final step of the processing array in accordance with the formula:

Z(kp, kp-1, ... k0) = Re A^(p+1)(k0, ... kp)
wherein
Z = the desired sequence of time waveform amplitudes
Re A^(p+1) = the real part of the complex values representing the final stage of processing.
9. A method for producing audio analog output comprising the steps of:
providing a predetermined series of encoded digital signals representing the analog output to be produced;
decoding the encoded signals to provide a sequence of complex spectral amplitudes;
transforming the sequence of complex spectral amplitudes to a sequence of discrete time waveform amplitudes each such spectral amplitude representing the magnitude and phase of a function V(n,k) defined as:
V(n,k) = exp[jπ(Σ (r=0 to p) Σ (t=0 to m) 2^(-t) np-r kr-t + φ)]
wherein
k = time sequence index
n = frequency sequence index
r,t = integer summation indexes
m = time function parameter defining the number of retained bits
φ = phase adjustment function;
converting the transformed digital data into an analog output signal.
converting the transformed digital data into an analog output signal.
10. The method of claim 9 wherein the encoded digital signals representing the analog output are provided from an external memory bank.
11. The method of claim 9 wherein the encoded digital signals representing the analog output are provided by performing a spectral analysis of an analog signal input to produce a digital voiceprint.
12. The method of claim 11 wherein the spectral analysis includes the steps of:
dividing the total signal into time frames of equal duration;
converting the analog signal to a sequence of discrete signal amplitudes at equally spaced time intervals in each frame;
transforming the discrete signal amplitudes of each frame to a preselected number of spectral amplitudes representing values of various frequency components of the said series of signal amplitudes;
reducing the number of spectral coefficients of each frame by comparing the magnitude of each coefficient to a predetermined threshold value, and eliminating coefficients which are below the threshold;
reducing the number of bits describing each remaining coefficient to a predetermined maximum.
13. A method for producing a voiceprint template for recognition of an analog waveform signal comprising the steps of:
dividing the total signal into time frames of equal duration;
converting the analog signal to a sequence of discrete signal amplitudes at equally spaced time intervals in each said frame;
transforming the discrete signal amplitudes of each frame to a preselected number of spectral amplitudes representing values of various frequency components of the said series of signal amplitudes;
compacting and converting the spectral amplitudes of each frame to a lesser number of channels, each channel being comprised of an energy summation of amplitudes within a designated frequency range expressed in logarithmic amplitudes, and allocated on the basis of predetermined acoustic significance;

deriving a mean amplitude value for all of said channels of each frame;
measuring a deviation from said mean value for each separate channel amplitude in each frame;
determining a feature ensemble for a plurality of successive frames of said total waveform signal; and storing a digital representation of said feature ensembles for said total waveform signal to form a digital coded template thereof.
14. The method of claim 13 wherein each said feature ensemble is comprised of a pair of adjacent successive frames of the total waveform signal.
15. The method of claim 14 wherein each said feature ensemble is comprised of the average mean amplitude value of each frame pair, the slope of the difference in mean values of the same channel in the adjacent pair of frames, and the average amplitude deviation from the mean values for each channel of each frame pair.
16. A word recognition method comprising the steps of:
providing a digital data template representing preselected acoustic features of a spoken word which include time-rates-of-change of spectral amplitudes;
receiving a spoken word to be compared and performing a spectral analysis thereof to determine data representing its acoustic features including time-rates-of-change of spectral amplitudes;
comparing the template with the received spoken word spectral analysis data to determine a degree of similarity between features given by the metric function:
d = [b(x̄ − ȳ)² + Σ (j=0 to 15) (Δxj − Δyj)²] / [1 + a²(ẋ + ẏ)²]
where:
d = degree of similarity
j = channel index
a = a scaling factor to account for normal rates of speech
b = a parameter for improving recognition performance
x̄ = mean amplitude value of spoken word template
ȳ = mean amplitude value of stored word template
ẋ = time-rate-of-change of spoken word template
ẏ = time-rate-of-change of stored word template
Δxj = deviation of channel amplitude from mean value in spoken word template
Δyj = deviation of channel amplitude from mean value in stored word template;
and producing an output in response to a predetermined degree of similarity between said template and said spoken word data.
17. The method of claim 16 wherein said digital data template is retrieved from an external memory storage.
18. The method of claim 16 wherein said digital data template is established by providing an initial training word; performing a spectral analysis of said training word to produce said template; and temporarily storing said training word template before comparing it with the subsequently said received spoken word.
19. The method of claim 16 wherein the step of producing an output includes the sub step of providing stored digital data representing predetermined analog signals; and synthesizing said stored data to produce the analog signals.
20. A voice recognition system for producing a voice-print template of an analog waveform signal comprising:
means for converting an incoming analog signal to a sequence of discrete digital signals;
voice processor means including a timing generator for producing repetitive series of timing cycles, counter means for dividing the total incoming signal into time frames of equal length, sequence control means connected to said timing generator including ROM means for providing operating instructions for the processor during said timing cycles, an arithmetic logic unit for performing a spectral analysis of the received digital signals in response to instructions from said ROM means, said ROM means including instructions for:
transforming the discrete signal amplitudes to a preselected number of spectral amplitudes representing values of various frequency components of the said series of signal amplitudes;
compacting and converting the spectral amplitudes of each frame to a lesser number of channels, each channel being comprised of a summation of amplitudes within a designated frequency range allocated on the basis of predetermined acoustic significance; deriving a mean amplitude value for all of said channels of each frame; measuring a deviation from said mean value for each separate channel amplitude in each frame, and determining a feature ensemble for each pair of successive frames of said total waveform signal; and
external memory means for storing a digital representation of said feature ensembles for said total waveform signal comprising a digital coded template thereof.
21. A voice recognition system for producing a voiceprint template of an analog waveform signal comprising:
means for converting an incoming analog signal to a sequence of discrete digital signals;
voice processor means including a timing generator for producing repetitive series of timing cycles, counter means for dividing the total incoming analog signal into time frames of equal length, sequence control means connected to said timing generator including ROM means for providing operating instructions for the processor during said timing cycles, means including an arithmetic logic unit for performing a spectral analysis of the received analog signal in response to instructions from said ROM means, said ROM means including instructions for transforming the discrete signal amplitudes of each frame to a sequence of complex spectral amplitudes each representing the magnitude and phase of a function V(n,k) defined as:
V(n,k) = exp[jπ(Σ (r=0 to p) Σ (t=0 to m) 2^(-t) np-r kr-t + φ)]
wherein:
k = time sequence index
n = frequency sequence index
r,t = integer summation indexes
m = time function parameter defining the number of retained bits
φ = phase adjustment function
said ROM means also including instructions for: compacting and converting the spectral amplitudes of each frame to a lesser number of channels, each channel being comprised of a summation of signal amplitudes within a designated frequency range allocated on the basis of predetermined acoustic significance; deriving a mean amplitude value for all of said channels of each frame; measuring a deviation from said mean value for each separate channel amplitude in each frame, and determining a feature ensemble for each pair of successive frames of said total waveform signal; and
external memory means for storing a digital representation of said feature ensembles for said total waveform signal comprising a digital coded template thereof.
22. The voice recognition system as described in claim 20 wherein said ROM means includes means providing instructions for transforming a sequence of discrete signal amplitudes to a sequence of complex spectral amplitudes by establishing a processing array and transferring the signal amplitude data to the array in accordance with the expression:

A°(kp, kp-1, ... k0) = Z(kp, kp-1, ... k0) wherein A° represents the starting values of the array and Z
represents the signal data in the form of binary digits;
said ROM means including further instructions for substituting one bit of the spectral sequence n for one bit of the time sequence k, starting from the original sequence of signal data, in accordance with the expression:
Ar+1(n0, n1, ... nr; kp-r-1, ... k0) = Σ (over kp-r) Ar(n0, ... nr-1; kp-r, ... k0) exp[jπ nr(Σ (t=0 to m) 2^(-t) kp-r-t + 2^(-m) kp-r-m-1)]
wherein:
Ar = results of the rth step of processing, beginning at r=0 and ending at r=p+1;
said ROM means including further instructions for determining the sequence of complex spectral amplitudes from the processing array in accordance with the expression:
S(np, np-1, ... n0) = A^(p+1)(n0, n1, ... np) wherein:
S = the desired sequence of complex spectral amplitudes
23. The voice recognition system as described in claim 22 wherein said voice processor includes means for comparing the voice template developed by spectral analysis of the analog signal with a second template stored in said external memory means.
24. The voice recognition system as described in claim 23 wherein said means for comparing includes ROM
instruction means for determining a degree of similarity between features of the developed voice template and said second template in accordance with the function:
d = [b(x̄ − ȳ)² + Σ (j=0 to 15) (Δxj − Δyj)²] / [1 + a²(ẋ + ẏ)²]
25. The voice recognition system as described in claim 21 wherein said voice processor is in the form of an integrated circuit semiconductor device.
26. The voice recognition system as described in claim 21 wherein said voice processor is in the form of an integrated circuit semiconductor device that also includes said means for converting the incoming analog signal to digital signals.
27. A voice synthesis device comprising:
means providing a predetermined series of digital signals representing a sequence of preselected complex spectral amplitudes;
means for transforming said sequence of complex spectral amplitudes to a sequence of discrete time waveform amplitudes, each such spectral amplitude representing the magnitude and phase of a function V(n,k) defined as:
V(n,k) = exp[jπ(Σ (r=0 to p) Σ (t=0 to m) 2^(-t) np-r kr-t + φ)]
wherein:

k = time sequence index
n = frequency sequence index
r,t = integer summation indexes
m = time function parameter defining the number of retained bits
φ = phase adjustment function
and means for converting the transformed digital data into an analog output signal.
28. The voice synthesis device of claim 27 wherein said means for transforming includes:
means for establishing a processing array and there-after transferring the complex conjugate of the spectral amplitude data to the array in accordance with the expression:

A°(np, np-1, ... n0) = S*(np, np-1, ... n0) wherein A° represents the starting values of the array and S*
represents the complex conjugate of the spectral amplitude data in the form of binary digits; and also including means for determining the sequence of time waveform amplitudes from the final processing array in accordance with the formula:

Z(kp, kp-1, ... k0) = Re A^(p+1)(k0, ... kp) wherein:
Z = the desired sequence of time waveform amplitudes
Re A^(p+1) = the real part of the complex values representing the final stage of processing;

means for substituting one bit of the time sequence k for one bit of the spectral sequence n, starting from the original sequence of spectral amplitude data in accordance with the formula:
Ar+1(k0, k1, ... kr; np-r-1, ... n0) = Σ (over np-r) Ar(k0, ... kr-1; np-r, ... n0) exp[jπ kr(Σ (t=0 to m) 2^(-t) np-r-t + 2^(-m) np-r-m-1)]
wherein:

Ar = results of the rth step of processing, beginning at r=0 and ending at r=p+1.