|Publication number||US3624302 A|
|Publication date||Nov 30, 1971|
|Filing date||Oct 29, 1969|
|Priority date||Oct 29, 1969|
|Inventors||Atal Bishnu S|
|Original Assignee||Bell Telephone Labor Inc|
|Patent Citations (4), Referenced by (55), Classifications (11)|
United States Patent — Bishnu S. Atal, Murray Hill, N.J.; assignor to Bell Telephone Laboratories, Incorporated.

SPEECH ANALYSIS AND SYNTHESIS BY THE USE OF THE LINEAR PREDICTION OF A SPEECH WAVE — 10 Claims, 4 Drawing Figs.

Primary Examiner: Kathleen H. Claffy. Assistant Examiner: Jon Bradford Leaheey. Attorneys: R. J. Guenther and William L. Keefauver.

ABSTRACT: A short-time spectral analysis of a nonstationary signal, such as a speech signal, does not ordinarily yield control signal information sufficient for subsequent synthesis. However, more reliable control signals for a speech synthesizer can be obtained by making use of natural constraints, applicable to a speech wave, in the analysis procedure. For frequencies below 5 kHz., the human vocal tract can be modeled as an acoustic tube in which only plane waves propagate. Thus, for vowels and vowellike sounds, the output of the vocal tract at any instant of time can be assumed to be a weighted sum of its past values and the input to the vocal tract at that instant of time. In the described invention, a speech wave is represented by the output of a linear filter excited by a combination of quasi-periodic pulses and white noise. The parameters of this filter are derived from the speech wave such that the mean-squared error between the synthetic speech samples at the output of the filter and the input speech samples is minimum.

UNITED STATES PATENTS
2,817,707  12/1957  179/1 SA
3,020,344  2/1962  Prestigiacomo  179/1 SA
3,158,685  11/1964  Gerstman  179/1 SA
3,328,525  6/1967  Kelly  179/1 SA

[FIG. 1 block labels recovered from the drawing: time-varying filter 24, prediction parameter computer 14, transversal filter 25, pitch pulse position computer 15, pulse generator 19, combiner 21, clock 13, parameter computer 16, noise generator 22.]
SPEECH ANALYSIS AND SYNTHESIS BY THE USE OF THE LINEAR PREDICTION OF A SPEECH WAVE BACKGROUND OF THE INVENTION This invention relates to the artificial production of speech or similar complex waves from control signals, and particularly to the derivation of control signals from an original speech wave that can be accommodated by storage or transmission facilities with limited channel capacity.
The principal object of the invention is to reduce, as far as possible, the channel capacity, or bit rate in the case of a digital channel, required for the storage or transmission of speech control signals without, however, a sacrifice of intelligibility or the introduction of an objectionable unnatural quality into the reconstructed speech.
1. Field of the Invention Conventional speech communication systems, for example, commercial telephone systems, typically convey human speech by transmitting an electrical facsimile of the acoustic waveform produced by a human speaker. Because of the redundancy of human speech, however, facsimile transmission is a relatively inefficient way to transmit this information. Consequently, a number of arrangements for compressing or reducing the channel capacity required for the transmission of speech information have been proposed. One of the best known of these arrangements is the so-called vocoder. More recently, techniques for removing inherent signal redundancy in the speech wave through the use of a linear predictor have been utilized.
2. Description of the Prior Art Production of good quality synthetic speech is a necessary corollary to limited channel capacity transmission systems of whatever sort. However, the speech obtained from previously known synthesizers generally lacks naturalness and exhibits an undesirable quality, even when the synthesizer control signals are derived from the original speech at closely spaced intervals. There are a number of reasons for the poor quality of such synthetic speech. Consider, for example, the case of a formant synthesizer, this being a part of another typical system for the narrow band transmission of speech. Most formant analyzers attempt to isolate peaks due to various formants in the speech spectra. This is a difficult task, even for low-pitched male voices, since formants do not always show up as distinct peaks in the spectra, and the spectral peaks do not always result from the formants. Such methods usually break down completely for female speech. Further, satisfactory operation of a formant synthesizer often depends upon the correct ordering of the various formants. This, too, is difficult to achieve.
SUMMARY OF THE INVENTION To avoid many of these problems, a different approach to speech analysis and synthesis is followed in the present invention. Speech parameter signals are continuously developed at a transmitter station using the constraint that the applied speech wave at any instant of time is a weighted sum of its past values; that is to say, speech parameter signals are developed which specify linearly predictable characteristics of an applied speech signal. To derive parameter control signals for the production of realistic synthesized speech, a suitable functional model of speech production is established and it is assumed that a close approximation to a speech wave can be produced at its output. Typically, the model includes a discrete, linear, time-varying filter which is excited by a suitable combination of a quasi-periodic pulse train (voiced excitation) and white noise (unvoiced excitation). The output of the linear filter at any sampling instant is a linear combination of past output samples and the input. In this analysis, the n-th speech sample, s_n, may be expressed as:

s_n = a_1 s_(n-1) + a_2 s_(n-2) + ... + a_p s_(n-p) + b_0 x_n + b_1 x_(n-1) + ... + b_q x_(n-q)   (1)
where a_1, a_2, ..., a_p and b_0, b_1, ..., b_q are parameters which specify the filter at any time, and x_n is the n-th input sample. For completely voiced sounds, the samples x_n represent a train of quasi-periodic pulses, whereas for completely unvoiced sounds, x_n represents the output of a white noise generator. For this model of speech production, it can be shown that in any pitch period the speech samples after the first q samples may be expressed as a linear combination of the preceding p samples. The optimum linear combination, a_1, a_2, ..., a_p, is obtained by minimizing the mean-squared error between the actual values of the speech samples and their predicted values based on the past p samples. The values of p and q are determined by the bandwidth of the input speech signal and the length of the vocal tract. A 10th- to 12th-order linear predictor represents a speech signal band-limited to 5 kHz. with sufficient accuracy. A higher order predictor may be necessary in certain cases (some male speakers saying nasalized consonants). The parameter q is assumed to be equal to 10 in the analysis and zero in the synthesis.
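The prediction underlying this analysis can be sketched in modern terms. The fragment below is an illustration only, not part of the patent apparatus: it estimates predictor coefficients by minimizing the mean-squared prediction error over a block of samples, and checks them against a known second-order recursion.

```python
import numpy as np

# Illustration of the analysis step: find coefficients a_1..a_p that
# minimize the mean-squared error between each speech sample s_n and
# its prediction from the preceding p samples.

def predictor_coefficients(s, p):
    # Regression rows: (s[n-1], ..., s[n-p]) for n = p .. len(s)-1
    rows = np.column_stack([s[p - k: len(s) - k] for k in range(1, p + 1)])
    a, *_ = np.linalg.lstsq(rows, s[p:], rcond=None)
    return a

# Test signal: a known 2nd-order recursion driven by white noise, so the
# estimate should recover the true coefficients (1.3, -0.64) closely.
rng = np.random.default_rng(0)
s = np.zeros(2000)
for n in range(2, 2000):
    s[n] = 1.3 * s[n-1] - 0.64 * s[n-2] + rng.standard_normal()

a = predictor_coefficients(s, 2)
print(np.round(a, 2))
```

With a clean autoregressive test signal the least-squares estimate closely recovers the generating coefficients; on real speech the same computation is performed afresh for each pitch period.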
Durations of individual pitch periods are determined by calculating the pitch-synchronous autocorrelation function of the third power of the input speech wave and selecting the delay for which the autocorrelation function is maximum.
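That pitch computation can be sketched as follows; the code below is a toy illustration under assumed values (10-kHz sampling, a 125-Hz pulse train, a lag search range of 25 to 200 samples), not the circuitry of the pitch pulse position computer.

```python
import numpy as np

# Sketch of the pitch detector described above: cube the speech wave
# (emphasizing the strong samples near each glottal pulse), compute the
# autocorrelation of the cubed wave, and take the lag of the largest
# peak as the pitch period.  The lag search range is an assumption.

def pitch_period(s, min_lag=25, max_lag=200):
    c = s ** 3
    c = c - c.mean()
    r = np.array([np.dot(c[:-lag], c[lag:]) for lag in range(min_lag, max_lag)])
    return min_lag + int(np.argmax(r))

# A toy "voiced" signal at 10 kHz: a pulse train with period 80 samples
# (125 Hz) shaped by a short decaying oscillation.
fs, period = 10000, 80
excitation = np.zeros(1600)
excitation[::period] = 1.0
t = np.arange(60)
glottal = np.exp(-t / 20.0) * np.cos(2 * np.pi * 500 * t / fs)
s = np.convolve(excitation, glottal)[:1600]

print(pitch_period(s))   # a lag near 80 samples is expected
```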
A speech signal may be synthesized by a network continually adjusted by parameter signals derived in this fashion.
This invention will be more fully understood from the following detailed description taken together with the drawings.
BRIEF DESCRIPTION OF THE DRAWINGS FIG. 1 is a block schematic diagram of a speech transmission system, including an analyzer and a synthesizer which illustrates the principles of the invention;
FIG. 2 is a block schematic diagram of a prediction parameter computer suitable for use in the analyzer of the speech transmission system illustrated in FIG. 1;
FIG. 3 is a block schematic diagram of a network for developing parameters, representing the relative amplitudes of voiced and unvoiced signal components, suitable for use in the analyzer of the system illustrated in FIG. I; and
FIG. 4 is a block schematic diagram of a time-varying filter which may be used at the synthesizer of a transmission system embodying the principles of the invention.
DETAILED DESCRIPTION A complete limited channel capacity speech transmission system which illustrates the principles of the invention is shown in FIG. 1. Speech signals, which may originate, for example, in transducer 10, are passed through low pass filter 11, which has a cutoff frequency in the neighborhood of 5 kHz. and which exhibits a 3-db. cutoff frequency in the neighborhood of 4 kHz. The resultant signal is then sampled at a frequency of approximately 10 kHz. in sampler 12. Clock 13 is employed to energize the sampler and other units in the system. Speech samples, s_n, thus derived are supplied to prediction parameter computer 14, to pitch pulse position computer 15, and to parameter computer 16.
Prediction parameter computer 14 operates on applied speech samples s_n to develop a series of parameter signals a = a_1, a_2, ..., a_p for each pitch period (as indicated by signal N from computer 15). Parameters a uniquely specify the frequencies and bandwidths of speech formants in the input signal below about 5 kHz. Parameter signals a are developed from linearly predictable characteristics of the applied speech signals delivered by sampler 12. An extensive discussion of the relation of parameter signals a to the input signal, and their development, is contained in my copending patent application, Ser. No. 753,408, filed Aug. 19, 1968. Details of the construction of a prediction parameter computer specially adapted for the practice of this invention are given hereinafter with reference to FIG. 2.
Pitch pulse position computer 15 determines the location of the glottal pulses in the applied speech wave; the difference between the positions of successive glottal pulses specifies the duration of the pitch period. Any suitable pulse position analyzer may be employed to derive pitch period signals, N. For example, a suitable arrangement is described in Automatic Speaker Recognition Based on Pitch Contours by B. S. Atal, Polytechnic Institute of Brooklyn, June, 1968, pages 33-43.
Speech samples from unit 12 are also supplied to computer 16, which determines parameters g_1 and g_2. These parameters characterize the amplitudes of voiced and unvoiced signal excitation, i.e., parameter g_1 specifies the amplitude of voiced (or buzz) excitation signals, and parameter g_2 specifies the amplitude of unvoiced (or hiss) excitation signals.
Parameter signals a, N, and g_1, g_2 thus derived uniquely determine formant frequencies and bandwidths of a speech signal, its spectrum, and the relative amplitudes of voiced and unvoiced components necessary for the synthesis of artificial speech. Since these parameters require considerably less channel capacity than the corresponding analog signal representation, they may be economically stored for future use, or transmitted to a distant station. All parameter signals may be combined for transmission, for example, by multiplexing, or the like, in transmission coder 17. At a receiver station, these signals are recovered and delivered individually through the action of transmission decoder 18. Transmission coders and decoders of any desired construction and form may be employed. Obviously, storage of the parameter signals may take place at any point in the indicated transmission arrangement; the transfer of parameter signals from one storage location to another, for example, may be considered to be a form of transmission.
At the synthesizer, voiced excitation is generated, for example, in pulse generator 19 of any desired construction, under control of pitch pulse parameter signal N. The amplitude of the voiced excitation signal is controlled continuously by parameter signal g_1 acting upon modulator 20. Typically, generator 19 produces a pulse of unit amplitude at the beginning of every pitch period. Similarly, unvoiced excitation is produced in noise generator 22. Generator 22 typically produces a sequence of random numbers uniformly distributed between +1 and -1 at sampling instants. Noise signals are controlled in amplitude by parameter signal g_2 acting on modulator 23.
The outputs of pulse generator 19 and noise generator 22, as scaled by controlled amplifiers 20 and 23, are added together with selected past signal values, available at the output of time-varying filter 24, in a combining network 21. The combined signal produced at the output of network 21 is thereupon delivered by way of low pass filter 26 to reproducer 27, for example, a loud speaker. Low pass filter 26 preferably has a cutoff frequency of about 5 kHz., its exact frequency range being commensurate with the range of filter 11 at the analyzer.
The combined signal is also delivered to the input of transversal filter 25, forming a part of filter network 24. Time-varying filter 24 serves to regenerate speech from the applied excitation and parameter signals a. Such a filter arrangement resembles the resonant filter system of the human vocal tract and typically exhibits certain natural resonances which may be tuned in accordance with formant parameter signals a. Resonant vocoder apparatus of this general form is well known in the art; a typical example is described in J. L. Kelly, Jr., US. Pat. No. 3,328,525, issued June 27, 1967. A transversal filter arrangement specifically adapted for use in the apparatus illustrated in FIG. 1 is described below with reference to FIG. 4.
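The synthesizer loop just described (generators 19 and 22, modulators 20 and 23, combiner 21, and the feedback through transversal filter 25) can be sketched as a single recursion. The coefficient values and gains below are illustrative assumptions, not values from the patent, and a 2nd-order filter stands in for the 10th-order one.

```python
import numpy as np

# Sketch of the synthesis recursion: each output sample is the scaled
# excitation (pitch pulse plus noise) added to a weighted sum of the
# past p output samples fed back through the transversal filter.

def synthesize(a, period, g1, g2, n_samples, seed=0):
    rng = np.random.default_rng(seed)
    p = len(a)
    out = np.zeros(n_samples + p)              # p leading zeros: filter memory
    for n in range(n_samples):
        pulse = 1.0 if n % period == 0 else 0.0          # pulse generator 19
        noise = rng.uniform(-1.0, 1.0)                   # noise generator 22
        excitation = g1 * pulse + g2 * noise             # modulators 20 and 23
        past = np.dot(a, out[n + p - 1 - np.arange(p)])  # transversal filter 25
        out[n + p] = excitation + past                   # combiner 21
    return out[p:]

# A stable 2nd-order example standing in for the 10th-order filter:
a = np.array([1.2, -0.6])
y = synthesize(a, period=80, g1=1.0, g2=0.05, n_samples=400)
print(y.shape)
```

Because the filter's poles lie inside the unit circle, the output stays bounded between excitation pulses, decaying like a damped resonance as the text describes.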
FIG. 2 illustrates a prediction parameter computer 14 suitable for developing formant parameter signals a in accordance with the invention. For every pitch period of the applied speech wave, an array of signal values s_n from sampler 12 (FIG. 1) is transferred into storage unit 140 to replace the previous array of signal samples contained in the storage unit. Storage unit 140 thus stores an array of signal values u_-9, u_-8, ..., u_N, where N represents the duration of the current pitch period in samples. Every pitch period the values u_-9, ..., u_0 are replaced by the values u_N-9, ..., u_N; incoming samples are placed in the vacated storage locations u_1, ..., u_N. Thus, signals u_1, ..., u_N are consecutively stored as they are received in storage unit 140. Every pitch period, under the influence of timing signals from pulser 141, synchronized by signals N from pitch pulse position computer 15 to indicate the positions of glottal pulses, an array of signal values is read out of storage unit 140 and transferred to arithmetic unit (AU) 142. This unit comprises a plurality of arithmetic units 143a, ..., 143n, designated individually f_11, f_12, ..., f_(10,10), which operate in parallel. In a typical example of practice, n = 55, i.e., 55 arithmetic units are employed (the number of distinct elements of the symmetric 10-by-10 array f). Each individual unit serves to compute one value of f according to the following equation:
f_ij = Σ_n u_(n-i) u_(n-j),   (2)

where index i varies from 1 to 10 and index j varies from i to 10, the sum extending over the samples of the current pitch period, and

h_j = Σ_n u_n u_(n-j),   (3)

where index j varies from 1 to 10.
Arithmetic unit 145 preferably comprises an array of individual units, 146a, ..., 146m, operating in parallel to evaluate the several values of h. Typically, 10 units are employed, i.e., j = 1, ..., 10. The resultant array, h_1, h_2, ..., h_10, designated H, is delivered every pitch period to computer 144.
Computer 144 is programmed to solve the matrix equation

Fa = H,   (4)
to yield values of a. Although a special purpose computer may be programmed for this evaluation, one suitable arrangement is described in copending patent application Ser. No. 753,408, filed Aug. 19, 1968.
Prediction parameters a_1, a_2, ..., a_10 uniquely determine the frequencies and bandwidths of all speech formants below 5 kHz. If desired, the bandwidths and frequencies of formants may be determined from values of a for use in the control of other synthesis apparatus. In accordance with the invention, this determination is made by supplying parameter values a from computer 144, by way of switch 147, to polynomial root computer 148. This unit determines the complex roots of a polynomial with real coefficients, i.e., the roots of a polynomial f(z), defined as:
f(z) = z^10 + a_1 z^9 + a_2 z^8 + ... + a_10   (5)

A polynomial root locater suitable for making the necessary evaluation is described in Mathematical Methods for Digital Computers, edited by Ralston and Wilf, John Wiley & Sons, Inc., 1967, in the section by E. R. Bareiss, at page 185. The output of polynomial root locator 148 is 10 complex numbers (two sets of 10 real numbers) z_1, z_2, ..., z_10, which are then supplied to arithmetic unit 149, which computes the numbers p_1, p_2, ..., p_10 in accordance with equation (6) below.
p_k = (1/(2πT)) ln(z_k)   (6)

Arithmetic unit 149 is thus a device which takes the complex logarithm of the numbers z_k and multiplies them by the number 1/(2πT), where T = 0.0001 sec., the interval of sampling unit 12 (FIG. 1). The complex numbers p_k can be separated into their real and imaginary parts, b_k and f_k, respectively, as follows:
p_k = b_k + j f_k   (7)

where index k varies from 1 to 10. Logical unit 150 orders the numbers p_k such that the first number has the lowest positive imaginary part, the second number the second lowest positive imaginary part, and so on. Consequently, the numbers f_k and b_k represent the frequencies and the bandwidths of the various formants of the speech signal for the pitch period under consideration. These representations may be used in any desired fashion, e.g., for controlling a formant synthesizer.
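The root-to-formant mapping of equations (5) through (7) can be sketched as follows. The example resonance (a conjugate root pair placed to give a 1-kHz formant) is an assumption for illustration; it is not taken from the patent.

```python
import numpy as np

# Sketch of equations (5)-(7): the roots z_k of the prediction polynomial
# are mapped through p_k = (1/(2*pi*T)) * ln(z_k); the imaginary parts f_k
# give formant frequencies and the real parts b_k relate to bandwidths.

T = 1e-4                      # 10-kHz sampling interval, as in FIG. 1

def formants(a):
    # f(z) = z^p + a_1 z^(p-1) + ... + a_p, coefficients ordered for np.roots
    roots = np.roots(np.concatenate(([1.0], a)))
    pk = np.log(roots.astype(complex)) / (2 * np.pi * T)
    order = np.argsort(pk.imag)           # logical unit 150: sort by imaginary part
    return pk.imag[order], pk.real[order]

# A conjugate root pair at radius 0.95, angled for a 1-kHz resonance:
theta = 2 * np.pi * 1000 * T
pair = 0.95 * np.exp(1j * theta)
poly = np.poly([pair, np.conj(pair)]).real   # z^2 + a_1 z + a_2
f, b = formants(poly[1:])
print(np.round(f))    # the positive entry lies near 1000 Hz
```

The roots come in conjugate pairs, so each vocal-tract resonance contributes one positive-frequency entry, consistent with 10 roots describing five formants.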
Speech samples from sampling unit 12 (FIG. 1) are also supplied to parameter computer 16, which determines parameter values g_1 and g_2. These parameters denote the relative amplitudes of voiced and unvoiced signal components in the applied speech signal. The operation of computer 16 is illustrated in FIG. 3. Every pitch period, an array of signal values s_n is transferred into storage unit 161 to replace the previous array of signal samples already in storage. Storage unit 161 thus stores an array of signal values u_1, u_2, ..., u_m, where N denotes the duration of the current pitch period in samples and m represents the largest pitch period as measured in samples. A value of m = 200 has been found to be sufficient in most cases. Every pitch period, the oldest values are replaced and incoming samples are placed in the vacated storage locations. Arithmetic units 164 and 165 operate on array u to evaluate the values of parameters E and R in accordance with equations (8) and (9) as follows:
E = Σ_(n=0)^(N-1) u_n^2   (8)
Storage unit 162 contains an array of signal values w_0, w_1, ..., w_(N-1). Every pitch period, the previous values are replaced: new signal values are computed in arithmetic unit 163 according to equation (10) and stored consecutively in storage locations w_0, ..., w_(N-1), where n varies from 0 to N.
Arithmetic unit 166 computes an array of signal values y_0, ..., y_N, designated y, and stores them in storage unit 167 in locations designated y_0, ..., y_N. Storage unit 167 is equipped with 10 additional storage locations, designated y_-10, ..., y_-1, which have the number 0 stored in them permanently. The array y is computed in arithmetic unit 166 according to the following relation:
y_n = Σ_(k=1)^(10) a_k y_(n-k) + p_n,   (11)

where n varies from 0 to N and p_n denotes the pitch pulse excitation, a pulse of unit amplitude at the beginning of the pitch period.
The array of numbers r_n designates the output of a white noise generator 170; the array v is computed in the same fashion as y, with r_n in place of the pitch pulse. Similar to storage unit 167, storage unit 169 also has 10 additional storage locations, designated v_-10, ..., v_-1, which have the number 0 stored in them permanently.
The arrays w, y, and v, and the numbers E and R, are transferred periodically, under the influence of pitch synchronized clock pulses from pulser 171, to arithmetic unit 172, which comprises six arithmetic units, designated d_1, d_2, ..., d_6, which operate in parallel. These units of system 172 compute the numbers d_1, ..., d_6 in accordance with equations (13) through (18). The index n is summed from 0 to N in each of the equations.
The array of numbers d_1, ..., d_6 computed in the manner indicated above is delivered to arithmetic unit 173, which computes parameters g_1 and g_2 in accordance with relations (19) through (21). Each of the operations indicated above is carried out sequentially every pitch period under the influence of clock signals (developed by pulser 171) synchronized with the positions of the glottal pulses from pitch pulse position computer 15 (FIG. 1), as defined by signal N.
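Relations (13) through (21) are not fully reproduced above. One plausible reading, offered here purely as an assumption, is that the six products d_1 ... d_6 are the correlations needed to fit the voiced response y and unvoiced response v to the target array w in the least-squares sense, which yields g_1 and g_2. A sketch of that reading:

```python
import numpy as np

# ASSUMED reading of the g_1, g_2 computation: fit the voiced response y
# and unvoiced response v to the target array w by least squares.  The
# normal equations involve exactly six distinct products
# (y.y, v.v, y.v, w.y, w.v, w.w), matching the six parallel units d_1..d_6.

def excitation_gains(w, y, v):
    # Solve  [y.y  y.v] [g1]   [w.y]
    #        [y.v  v.v] [g2] = [w.v]
    M = np.array([[np.dot(y, y), np.dot(y, v)],
                  [np.dot(y, v), np.dot(v, v)]])
    rhs = np.array([np.dot(w, y), np.dot(w, v)])
    return np.linalg.solve(M, rhs)

# Check: build w as a known mixture of two dissimilar arrays, so the
# fitted gains should recover the mixing amplitudes.
rng = np.random.default_rng(2)
y = np.sin(np.arange(200) * 0.1)
v = rng.uniform(-1, 1, 200)
w = 0.8 * y + 0.3 * v
g1, g2 = excitation_gains(w, y, v)
print(round(g1, 3), round(g2, 3))
```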
Arithmetic and storage units operative in a fashion similar to that described above are described in greater detail in the aforementioned copending application, Ser. No. 753,408.
Ordinarily, there are five resonances below the frequency of 5 kHz. in the human vocal tract. As discussed above, these resonances may be simulated by a transversal filter arrangement employing n discrete delay elements. When n = 10, the system can simulate n/2 resonances, i.e., the five resonances of the vocal tract. The synthesizer of this invention thus employs a discrete linear time-varying filter excited by a suitable combination of quasi-periodic pulses and white noise. A transversal filter arrangement is satisfactory for developing a linear combination of past output samples and the current input sample. Actual locations of resonances are determined in the transversal filter arrangement by the parameters a. Details of this form of resonance simulation are described in the abovementioned Kelly U.S. Pat. No. 3,328,525. Transversal filter arrangements for use in speech synthesizers also have been described abundantly in the art. One suitable form is shown by way of a rudimentary block diagram in FIG. 4.
In the arrangement of FIG. 4, time-varying filter 24 (FIG. 1) includes a transversal filter network 25 composed of 10 unit delay elements 240, supplying applied signals to 10 adjustable gain amplifiers 241. Signals developed at the junctions of the several delay units thus represent past sample values of signals supplied from combiner 21 to filter 26 in the synthesizer of FIG. 1. The gains of the individual amplifiers 241 are adjusted by parameter values a to form a collection of weighted past sample values. The resultant signals are additively combined in adder network 242 and supplied to one input of combiner unit 21. As discussed above, the combined output of combiner network 21, which includes voiced and unvoiced excitation, and the combination of weighted past sample values constitutes a replica of the applied speech signal. It is supplied by way of filter 26 to loudspeaker 27.
Thus, in accordance with the invention an analog speech signal may be efficiently transmitted in the form of an array of numbers, viz., N; g_1, g_2; and a_1, ..., a_10. These parameters represent the necessary information concerning the speech wave in any given pitch period and are sufficient for reconstructing the speech wave. A saving of approximately 10 to 1 in transmission capacity may be achieved when using these parameters rather than the analog signal itself.
For example, a 10-kilobit signal used for representing the parameters has been found to yield excellent quality synthesized speech. A 5-kilobit signal still permits very acceptable speech to be produced; this in contrast to the considerably higher bit rate ordinarily required for direct coding of a speech wave.
Various other arrangements and modifications of the described arrangements will occur to those skilled in the art.
What is claimed is: 1. Speech analysis apparatus, which comprises:
means for developing a first set of signals which specify linearly predictable characteristics of an applied speech signal,
means for developing a second set of signals representative of the duration of individual pitch periods of said applied speech signal,
means for developing a third set of signals representative of the energy of a speech signal and of the voicing character of speech signals within each of said pitch periods, and
means for utilizing all of said developed signals together as a representation of said applied speech signal.
2. Speech signal analysis apparatus as defined in claim 1, wherein,
said first set of signals which specify linearly predictable characteristics comprises a plurality of limited channel capacity parameter signals derived from past and current values of said applied speech signal for adjusting a resonant filter system, arranged to produce a replica of said applied speech signal when excited by voiced and unvoiced excitation signals.
3. Speech signal analysis apparatus, as defined in claim 1, wherein,
said first set of signals comprises a sequence of signals a = a_1, ..., a_p, for each pitch period of said applied signals, which uniquely determine the frequencies and bandwidths of formants of said applied signal below approximately 5 kHz.
4. Speech signal analysis apparatus as defined in claim 3, in combination with,
means supplied with said sequence of signals a for developing signals representative of the frequencies and bandwidths of formants of said applied speech signal during selected pitch periods.
5. Speech signal analysis apparatus as defined in claim 1, wherein,
said first set of signals is developed by minimizing the mean-squared error between the actual values of samples of said applied speech signal and predicted values thereof based on a selected number of past sample values.
6. Speech signal apparatus, which comprises:
at a transmitter station;
means for developing a first set of signals which specify linearly predictable characteristics of an applied speech signal,
means for developing a second set of signals representative of the duration of individual pitch periods of said applied speech signal, means for developing a third set of signals representative of the energy of a speech signal in each of said pitch periods and of the voicing character of speech signals within said pitch periods, and
means for combining all of said developed signals for transmission to a receiver station; and
at said receiver station;
means responsive to received signals of said first set for developing signals representative of predicted values of a speech signal,
means responsive to received signals of said second set for developing a sequence of pitch period pulses,
means for generating white noise signals,
means responsive to received signals of said third set for individually adjusting the levels of said pitch period pulses and said white noise signals, and
means for combining said adjusted pitch period pulses, said adjusted white noise signals, and said predicted value signals to form a speech signal which is a replica of said applied speech signal.
7. Speech signal apparatus as defined in claim 6, wherein,
said means at said receiver station for developing signals representative of predicted values of said speech signal comprises,
a transversal filter supplied with a combination of adjusted pitch period pulses, adjusted noise signals, and signals selectively representative of past values of said applied signal.
8. Synthesis apparatus for developing artificial speech from signals representative of the pitch period, voicing character, and selected predictable characteristics of an applied speech signal, which comprises:
means responsive to received signals representative of selected predictable characteristics of an applied speech signal for developing signals representative of selected predicted values of said speech signal,
means responsive to received signals representative of the pitch period of said applied speech signal for developing a sequence of pitch period pulses,
means for generating white noise signals, means responsive to received signals representative of the voicing character of said applied speech signal for individually adjusting the levels of said pitch period pulses and said white noise signals, and
means for combining said adjusted pitch period pulses, said adjusted white noise signals, and said predicted value signals to form a speech signal which is a replica of said applied speech signal.
9. Synthesis apparatus as defined in claim 8, wherein said means for developing signals representative of predicted values of said speech signal comprises a transversal filter supplied with said combined replica signal and adjusted by said predictable characteristic signals.
10. Synthesis apparatus as defined in claim 8, wherein,
said predicted value signals are selected to represent a linear combination of preceding values of said replica of said applied speech signal.
|Cited Patent||Filing date||Publication date||Applicant||Title|
|US2817707 *||May 7, 1954||Dec 24, 1957||Bell Telephone Labor Inc||Synthesis of complex waves|
|US3020344 *||Dec 27, 1960||Feb 6, 1962||Bell Telephone Labor Inc||Apparatus for deriving pitch information from a speech wave|
|US3158685 *||May 4, 1961||Nov 24, 1964||Bell Telephone Labor Inc||Synthesis of speech from code signals|
|US3328525 *||Dec 30, 1963||Jun 27, 1967||Bell Telephone Labor Inc||Speech synthesizer|
|Citing Patent||Filing date||Publication date||Applicant||Title|
|US3715512 *||Dec 20, 1971||Feb 6, 1973||Bell Telephone Labor Inc||Adaptive predictive speech signal coding system|
|US3825685 *||May 5, 1972||Jul 23, 1974||Int Standard Corp||Helium environment vocoder|
|US3836717 *||Jul 21, 1972||Sep 17, 1974||Scitronix Corp||Speech synthesizer responsive to a digital command input|
|US3909533 *||Oct 8, 1974||Sep 30, 1975||Gretag Ag||Method and apparatus for the analysis and synthesis of speech signals|
|US3916105 *||Feb 28, 1974||Oct 28, 1975||Ibm||Pitch peak detection using linear prediction|
|US3975587 *||Sep 13, 1974||Aug 17, 1976||International Telephone And Telegraph Corporation||Digital vocoder|
|US3979557 *||Jul 3, 1975||Sep 7, 1976||International Telephone And Telegraph Corporation||Speech processor system for pitch period extraction using prediction filters|
|US4022974 *||Jun 3, 1976||May 10, 1977||Bell Telephone Laboratories, Incorporated||Adaptive linear prediction speech synthesizer|
|US4038495 *||Nov 14, 1975||Jul 26, 1977||Rockwell International Corporation||Speech analyzer/synthesizer using recursive filters|
|US4045616 *||May 23, 1975||Aug 30, 1977||Time Data Corporation||Vocoder system|
|US4052563 *||Oct 7, 1975||Oct 4, 1977||Nippon Telegraph And Telephone Public Corporation||Multiplex speech transmission system with speech analysis-synthesis|
|US4058676 *||Jul 7, 1975||Nov 15, 1977||International Communication Sciences||Speech analysis and synthesis system|
|US4087632 *||Nov 26, 1976||May 2, 1978||Bell Telephone Laboratories, Incorporated||Speech recognition system|
|US4335275 *||Feb 4, 1980||Jun 15, 1982||Texas Instruments Incorporated||Synchronous method and apparatus for speech synthesis circuit|
|US4472832 *||Dec 1, 1981||Sep 18, 1984||At&T Bell Laboratories||Digital speech coder|
|US4633499 *||Oct 8, 1982||Dec 30, 1986||Sharp Kabushiki Kaisha||Speech recognition system|
|US4667340 *||Apr 13, 1983||May 19, 1987||Texas Instruments Incorporated||Voice messaging system with pitch-congruent baseband coding|
|US4701954 *||Mar 16, 1984||Oct 20, 1987||American Telephone And Telegraph Company, At&T Bell Laboratories||Multipulse LPC speech processing arrangement|
|US4709390 *||May 4, 1984||Nov 24, 1987||American Telephone And Telegraph Company, At&T Bell Laboratories||Speech message code modifying arrangement|
|US4710959 *||Apr 29, 1982||Dec 1, 1987||Massachusetts Institute Of Technology||Voice encoder and synthesizer|
|US4764963 *||Jan 12, 1987||Aug 16, 1988||American Telephone And Telegraph Company, At&T Bell Laboratories||Speech pattern compression arrangement utilizing speech event identification|
|US4827517 *||Dec 26, 1985||May 2, 1989||American Telephone And Telegraph Company, At&T Bell Laboratories||Digital speech processor using arbitrary excitation coding|
|US4847906 *||Mar 28, 1986||Jul 11, 1989||American Telephone And Telegraph Company, At&T Bell Laboratories||Linear predictive speech coding arrangement|
|US4866415 *||Jan 19, 1988||Sep 12, 1989||Kabushiki Kaisha Toshiba||Tone signal generating system for use in communication apparatus|
|US4890328 *||Aug 28, 1985||Dec 26, 1989||American Telephone And Telegraph Company||Voice synthesis utilizing multi-level filter excitation|
|US4913539 *||Apr 4, 1988||Apr 3, 1990||New York Institute Of Technology||Apparatus and method for lip-synching animation|
|US4975955 *||Oct 13, 1989||Dec 4, 1990||Nec Corporation||Pattern matching vocoder using LSP parameters|
|US5048088 *||Mar 28, 1989||Sep 10, 1991||Nec Corporation||Linear predictive speech analysis-synthesis apparatus|
|US5233659 *||Jan 3, 1992||Aug 3, 1993||Telefonaktiebolaget L M Ericsson||Method of quantizing line spectral frequencies when calculating filter parameters in a speech coder|
|US5377301 *||Jan 21, 1994||Dec 27, 1994||At&T Corp.||Technique for modifying reference vector quantized speech feature signals|
|US5450449 *||Mar 14, 1994||Sep 12, 1995||At&T Ipm Corp.||Linear prediction coefficient generation during frame erasure or packet loss|
|US5471527 *||Dec 2, 1993||Nov 28, 1995||Dsc Communications Corporation||Voice enhancement system and method|
|US5704003 *||Sep 19, 1995||Dec 30, 1997||Lucent Technologies Inc.||RCELP coder|
|US5822724 *||Jun 14, 1995||Oct 13, 1998||Nahumi; Dror||Optimized pulse location in codebook searching techniques for speech processing|
|US5839098 *||Dec 19, 1996||Nov 17, 1998||Lucent Technologies Inc.||Speech coder methods and systems|
|US5884253 *||Oct 3, 1997||Mar 16, 1999||Lucent Technologies, Inc.||Prototype waveform speech coding with interpolation of pitch, pitch-period waveforms, and synthesis filter|
|US5937376 *||Apr 10, 1996||Aug 10, 1999||Telefonaktiebolaget Lm Ericsson||Method of coding an excitation pulse parameter sequence|
|US6003000 *||Apr 29, 1997||Dec 14, 1999||Meta-C Corporation||Method and system for speech processing with greatly reduced harmonic and intermodulation distortion|
|US6064956 *||Apr 10, 1996||May 16, 2000||Telefonaktiebolaget Lm Ericsson||Method to determine the excitation pulse positions within a speech frame|
|US6081777 *||Sep 21, 1998||Jun 27, 2000||Lockheed Martin Corporation||Enhancement of speech signals transmitted over a vocoder channel|
|US6091773 *||Nov 12, 1997||Jul 18, 2000||Sydorenko; Mark R.||Data compression method and apparatus|
|US6233550||Aug 28, 1998||May 15, 2001||The Regents Of The University Of California||Method and apparatus for hybrid coding of speech at 4kbps|
|US6475245||Feb 5, 2001||Nov 5, 2002||The Regents Of The University Of California||Method and apparatus for hybrid coding of speech at 4KBPS having phase alignment between mode-switched frames|
|US6708154 *||Nov 14, 2002||Mar 16, 2004||Microsoft Corporation||Method and apparatus for using formant models in resonance control for speech systems|
|US7206739||May 23, 2002||Apr 17, 2007||Samsung Electronics Co., Ltd.||Excitation codebook search method in a speech coding system|
|US20030033136 *||May 23, 2002||Feb 13, 2003||Samsung Electronics Co., Ltd.||Excitation codebook search method in a speech coding system|
|US20070043560 *||Oct 30, 2006||Feb 22, 2007||Samsung Electronics Co., Ltd.||Excitation codebook search method in a speech coding system|
|USRE32580 *||Sep 18, 1986||Jan 19, 1988||American Telephone And Telegraph Company, At&T Bell Laboratories||Digital speech coder|
|USRE34247 *||May 2, 1991||May 11, 1993||At&T Bell Laboratories||Digital speech processor using arbitrary excitation coding|
|USRE43099||Nov 17, 2008||Jan 10, 2012||Alcatel Lucent||Speech coder methods and systems|
|DE2435654A1 *||Jul 24, 1974||Feb 5, 1976||Gretag Ag||Apparatus for speech analysis and synthesis - applies predictor method with reduced requirement of computer storage|
|DE3037276A1 *||Oct 2, 1980||Apr 9, 1981||Nippon Telegraph & Telephone||Tone synthesizer|
|DE3244476A1 *||Dec 1, 1982||Jul 14, 1983||Western Electric Co||Digital speech processor|
|EP0749111A2||Jun 4, 1996||Dec 18, 1996||AT&T IPM Corp.||Codebook searching techniques for speech processing|
|WO1999010719A1 *||Aug 28, 1998||Mar 4, 1999||The Regents Of The University Of California||Method and apparatus for hybrid coding of speech at 4kbps|
|U.S. Classification||704/206, 704/262, 704/207, 704/E19.24, 704/208, 704/264, 704/209|
|International Classification||G10L19/00, G10L19/06|