US 7110942 B2 Abstract A method of performing an excitation Vector Quantization (VQ) in a Noise Feedback Coding environment involves reorganizing a calculation of an energy of an error vector for each of a plurality of candidate excitation vectors of a codebook. The energy of the error vector is a cost function that is minimized during a search of the codebook for a best candidate excitation VQ vector. The reorganization includes expanding a Mean Squared Error (MSE) term of the error vector, excluding an energy term that is invariant to the candidate excitation vector, and pre-computing energy terms of ZERO-STATE responses of the candidate excitation vectors that are invariant to sub-vectors of a subframe. Another method searches a signed codebook. Both methods use correlation techniques.
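The reorganized search summarized above can be illustrated with a short sketch. The following Python fragment is only a minimal illustration of the idea, not the patented implementation: it assumes the cost for a candidate is the energy of (target − sign·ZERO-STATE response), drops the candidate-invariant energy of the target, pre-computes the ZERO-STATE energies once per subframe, and, for a signed codebook, evaluates only the shape codevectors and chooses the sign from the correlation. All function and variable names are illustrative.

import numpy as np

def fast_signed_vq_search(target, zero_state_resp):
    # target          : (K,) target vector for the current sub-vector (assumed here
    #                   to be derived from the ZERO-INPUT response of the filters)
    # zero_state_resp : (N/2, K) ZERO-STATE responses of the shape codevectors;
    #                   these do not change over the sub-vectors of a subframe,
    #                   so their energies are computed once and reused
    energies = np.sum(zero_state_resp ** 2, axis=1)   # pre-computed ZERO-STATE energies
    corrs = zero_state_resp @ target                  # one correlation per shape codevector
    # Expanded MSE: ||t - s*r_j||^2 = ||t||^2 - 2*s*<r_j, t> + ||r_j||^2.
    # ||t||^2 is the same for every candidate and is excluded; the best sign is
    # the sign of the correlation, so only N/2 shape codevectors are searched.
    costs = energies - 2.0 * np.abs(corrs)
    j = int(np.argmin(costs))
    return j, (1 if corrs[j] >= 0 else -1)

With this arrangement the candidate-dependent work reduces to one correlation and a few additions per shape codevector, which is the complexity reduction the two methods aim at.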
Claims(44) 1. A method of performing an efficient excitation quantization corresponding to a residual signal using a codebook in a speech or audio noise feedback coding (NFC) system, the NFC system including at least one noise feedback loop, the codebook including N vector quantization (VQ) codevectors, where N is an integer greater than one, the method comprising:
(a) deriving N correlation values using the NFC system, each of the N correlation values corresponding to a respective one of the N VQ codevectors;
(b) combining each of the N correlation values with a corresponding one of N ZERO-STATE energies of the NFC system, thereby producing N minimization values each corresponding to a respective one of the N VQ codevectors; and
(c) selecting a preferred one of the N VQ codevectors based on the N minimization values, whereby the preferred VQ codevector is usable as an excitation quantization corresponding to a residual signal derived from a speech or audio signal.
2. The method of
3. The method of
4. The method of
for each of the residual vectors in the series of successive residual vectors, searching the codebook using the N invariant ZERO-STATE responses.
5. The method of
for each of the residual vectors in the series of successive residual vectors, searching the codebook using the N invariant ZERO-STATE energies.
6. A method of searching a codebook in a speech or audio coding system, the codebook including a plurality of shape codevectors each associated with a positive codevector and a negative codevector, comprising:
(a) deriving a correlation term corresponding to one shape codevector by correlating
a ZERO-STATE response of the coding system corresponding to the shape codevector, with
a ZERO-INPUT response of the coding system;
(b) deriving a first minimization value corresponding to the positive codevector associated with the one shape codevector when a sign of the correlation term is a first value; and
(c) deriving a second minimization value corresponding to the negative codevector associated with the one shape codevector when the sign of the correlation term is a second value.
7. The method of
a ZERO-STATE response of the NFC system corresponding to the one shape codevector, and
a ZERO-INPUT response of the NFC system.
8. The method of
(d) performing steps (a), (b) and (c) for each of the shape codevectors, thereby deriving for each shape codevector either a first minimization value corresponding to the positive codevector or a second minimization value corresponding to the negative codevector; and
(e) selecting a preferred codevector from among the positive and negative codevectors corresponding to minimization values derived in steps (b) and (c) based on the minimization values, whereby the preferred codevector is usable as an excitation quantization corresponding to a residual signal derived from a speech or audio signal.
9. The method of
10. The method of
11. The method of
step (b) comprises deriving the first minimization value corresponding to the positive codevector when the sign of the correlation term is negative; and
step (c) comprises deriving the second minimization value corresponding to the negative codevector when the sign of the correlation term is positive.
12. The method of
deriving the ZERO-INPUT response of the coding system; and
deriving the ZERO-STATE response of the coding system.
13. The method of
deriving, from the ZERO-STATE response, a ZERO-STATE energy corresponding to the one shape codevector,
wherein step (b) and step (c) each comprise combining the ZERO-STATE energy with the correlation term to produce the respective minimization value.
14. The method of
step (b) further comprises deriving the minimization value by adding the correlation term to the ZERO-STATE energy; and
step (c) further comprises deriving the minimization value by subtracting the correlation term from the ZERO-STATE energy.
15. The method of
the positive codevector associated with each shape codevector is the shape codevector; and
the negative codevector associated with each shape codevector is derived by negating the shape codevector.
16. The method of
a shape code, C_{shape}={c_{1}, c_{2}, c_{3}, . . . c_{N/2}}, including N/2 shape codevectors c_{n}, and
a sign code, C_{sign}={+1, −1}, including a pair of oppositely-signed sign values +1 and −1, such that the positive codevector and the negative codevector associated with each shape codevector c_{n} each represent a product of the shape codevector and a corresponding one of the sign values, and
wherein step (e) comprises selecting a shape codevector and a corresponding sign value corresponding to the preferred codevector, based on the minimization values.
17. A method of searching a codebook in a speech or audio noise feedback coding (NFC) system, the NFC system including at least one noise feedback loop, the codebook including a plurality of shape codevectors each associated with a positive codevector and a negative codevector, comprising:
for each shape codevector
(a) deriving a correlation term corresponding to the shape codevector using at least one filter structure of the NFC system;
(b) deriving a first minimization value corresponding to the positive codevector associated with the shape codevector when a sign of the correlation term is a first value; and
(c) deriving a second minimization value corresponding to the negative codevector associated with the shape codevector when a sign of the correlation term is a second value; and
(d) selecting a preferred codevector from among the positive and negative codevectors corresponding to minimization values derived in steps (b) and (c) based on the minimization values.
18. The method of
a ZERO-STATE response of the NFC system corresponding to the shape codevector, and
a ZERO-INPUT response of the NFC system.
19. The method of
deriving, from the ZERO-STATE response, a ZERO-STATE energy corresponding to the shape codevector of step (a),
wherein step (b) and step (c) each comprise combining the ZERO-STATE energy with the correlation term to produce the respective minimization value.
20. The method of
performing steps (a) through (d) to produce a preferred codevector usable as an excitation quantization corresponding to each of the residual vectors.
21. The method of
searching the codebook using the plurality of invariant ZERO-STATE responses for each of the residual vectors in the series of successive residual vectors.
22. The method of
searching the codebook using the plurality of invariant ZERO-STATE energies for each of the residual vectors in the series of successive residual vectors.
23. A computer program product comprising a computer usable medium having computer readable program code means embodied in the medium for causing an application program to execute on a computer processor to perform an efficient excitation quantization corresponding to a residual signal using a codebook in a speech or audio noise feedback codec (NFC), the NFC including at least one noise feedback loop, the codebook including N vector quantization (VQ) codevectors, where N is an integer greater than one, the computer readable program code means comprising:
a first computer readable program code means for causing the processor to derive N correlation values using the NFC, each of the N correlation values corresponding to a respective one of the N VQ codevectors;
a second computer readable program code means for causing the processor to combine each of the N correlation values with a corresponding one of N ZERO-STATE energies of the NFC, thereby producing N minimization values each corresponding to a respective one of the N VQ codevectors; and
a third computer readable program code means for causing the processor to select a preferred one of the N VQ codevectors based on the N minimization values, whereby the preferred VQ codevector is usable as an excitation quantization corresponding to a residual signal derived from a speech or audio signal.
24. The computer program product of
25. The computer program product of
wherein the first, second and third program code means perform their respective functions for each of the residual vectors, thereby producing an excitation quantization corresponding to each of the residual vectors.
26. The computer program product of
a fourth computer readable program code means for causing the processor to search the codebook using the N invariant ZERO-STATE responses for each of the residual vectors in the series of successive residual vectors.
27. The computer program product of
a fourth computer readable program code means for causing the processor to search the codebook using the N invariant ZERO-STATE energies for each of the residual vectors in the series of successive residual vectors.
28. A computer program product comprising a computer usable medium having computer readable program code means embodied in the medium for causing an application program to execute on a computer processor to search a codebook in a speech or audio codec, the codebook including a plurality of shape codevectors each associated with a positive codevector and a negative codevector, the computer readable program code means comprising:
a first computer readable program code means for causing the processor to derive a correlation term corresponding to one shape codevector by correlating
a ZERO-STATE response of the codec corresponding to the shape codevector, with
a ZERO-INPUT response of the codec;
a second computer readable program code means for causing the processor to derive a first minimization value corresponding to the positive codevector associated with the one shape codevector when a sign of the correlation term is a first value; and
a third computer readable program code means for causing the processor to derive a second minimization value corresponding to the negative codevector associated with the one shape codevector when the sign of the correlation term is a second value.
29. The computer program product of
a ZERO-STATE response of the NFC corresponding to the one shape codevector, and
a ZERO-INPUT response of the NFC.
30. The computer program product of
the first, second and third program code means perform their respective functions for each of the shape codevectors, thereby deriving for each shape codevector either a first minimization value corresponding to the positive codevector or a second minimization value corresponding to the negative codevector; and
the computer readable program means further comprises
a fourth computer readable program code means for causing the processor to select a preferred codevector from among the positive and negative codevectors corresponding to minimization values derived by the second and third program code means based on the minimization values, whereby the preferred codevector is usable as an excitation quantization corresponding to a residual signal derived from a speech or audio signal.
31. The computer program product of
32. The computer program product of
33. The computer program product of
the second program code means includes computer readable code means for causing the processor to derive the first minimization value corresponding to the positive codevector when the sign of the correlation term is negative; and
the third program code means includes computer readable code means for causing the processor to derive the second minimization value corresponding to the negative codevector when the sign of the correlation term is positive.
34. The computer program product of
a fifth computer readable program code means for causing the processor to derive the ZERO-INPUT response of the codec before the correlation term is derived; and
a sixth computer readable program code means for causing the processor to derive the ZERO-STATE response of the codec before the correlation term is derived.
35. The computer program product of
a fifth computer readable program code means for causing the processor to derive, from the ZERO-STATE response and before the first minimization value is derived, a ZERO-STATE energy corresponding to the one shape codevector,
wherein
the second program code means includes computer readable program code means for causing the computer to combine the ZERO-STATE energy with the correlation term to produce the first minimization value, and
the third program code means includes computer readable program code means for causing the computer to combine the ZERO-STATE energy with the correlation term to produce the second minimization value.
36. The computer program product of
the second program code means further includes computer readable program code means for causing the computer to derive the first minimization value by adding the correlation term to the ZERO-STATE energy; and
the third program code means further includes computer readable program code means for causing the computer to derive the second minimization value by subtracting the correlation term from the ZERO-STATE energy.
37. The computer program product of
the positive codevector associated with each shape codevector is the shape codevector; and
the negative codevector associated with each shape codevector is derived by negating the shape codevector.
38. The computer program product of
a shape code, C_{shape}={c_{1}, c_{2}, c_{3}, . . . c_{N/2}}, including N/2 shape codevectors c_{n}, and
a sign code, C_{sign}={+1, −1}, including a pair of oppositely-signed sign values +1 and −1, such that the positive codevector and the negative codevector associated with each shape codevector c_{n} each represent a product of the shape codevector and a corresponding one of the sign values, and
wherein the fourth program code means includes computer readable program code means for causing the processor to select a shape codevector and a corresponding sign value corresponding to the preferred codevector, based on the minimization values.
39. A computer program product comprising a computer usable medium having computer readable program code means embodied in the medium for causing an application program to execute on a computer processor to search a codebook in a speech or audio noise feedback codec (NFC), the NFC including at least one noise feedback loop, the codebook including a plurality of shape codevectors each associated with a positive codevector and a negative codevector, the computer readable program code means comprising:
a first computer readable program code means for causing the processor to derive, for each shape codevector, a correlation term corresponding to the given shape codevector using at least one filter structure of the NFC;
a second computer readable program code means for causing the processor to derive, for each shape codevector, a first minimization value corresponding to the positive codevector associated with the given shape codevector when a sign of the correlation term is a first value; and
a third computer readable program code means for causing the processor to derive, for each shape codevector, a second minimization value corresponding to the negative codevector associated with the given shape codevector when a sign of the correlation term is a second value; and
a fourth computer readable program code means for causing the processor to select a preferred codevector from among the positive and negative codevectors corresponding to minimization values derived by the second and third program code means based on the minimization values.
40. The computer program product of
a ZERO-STATE response of the NFC corresponding to the given shape codevector, and
a ZERO-INPUT response of the NFC.
41. The computer program product of
a fifth computer readable program code means for causing the processor to derive, from the ZERO-STATE response and before the second program code means derives a first minimization value, a ZERO-STATE energy corresponding to the given shape codevector,
wherein
the second program code means includes computer readable program code means for causing the processor to combine the ZERO-STATE energy with the correlation term to produce the first minimization value, and
the third program code means includes computer readable program code means for causing the processor to combine the ZERO-STATE energy with the correlation term to produce the second minimization value.
42. The computer program product of
the preferred codevector is usable as an excitation quantization corresponding to a residual signal derived from a speech or audio signal, the residual signal including a series of residual vectors, and
the first, second, third and fourth program code means perform their respective functions to produce a preferred codevector usable as an excitation quantization corresponding to each of the residual vectors.
43. The computer program product of
a plurality of ZERO-STATE responses corresponding to the plurality of shape codevectors are invariant over a series of successive residual vectors, and
the computer readable program code means further comprises a sixth computer readable program code means for causing the processor to search the codebook using the plurality of invariant ZERO-STATE responses for each of the residual vectors in the series of successive residual vectors.
44. The computer program product of
a plurality of ZERO-STATE energies corresponding to the plurality of shape codevectors are invariant over a series of successive residual vectors, and
the computer readable program code means further comprises a sixth computer readable program code means for causing the processor to search the codebook using the plurality of invariant ZERO-STATE energies for each of the residual vectors in the series of successive residual vectors.
Description 1. Field of the Invention This invention relates generally to digital communications, and more particularly, to digital coding (or compression) of speech and/or audio signals. 2. Related Art In speech or audio coding, the coder encodes the input speech or audio signal into a digital bit stream for transmission or storage, and the decoder decodes the bit stream into an output speech or audio signal. The combination of the coder and the decoder is called a codec. In the field of speech coding, predictive coding is a very popular technique. Prediction of the input waveform is used to remove redundancy from the waveform, and instead of quantizing an input speech waveform directly, a residual signal waveform is quantized. The predictor(s) used in predictive coding can be either backward adaptive or forward adaptive predictors. Backward adaptive predictors do not require any side information as they are derived from a previously quantized waveform, and therefore can be derived at a decoder. On the other hand, forward adaptive predictor(s) require side information to be transmitted to the decoder as they are derived from the input waveform, which is not available at the decoder. In the field of speech coding, two types of predictors are commonly used. A first type of predictor is called a short-term predictor. It is aimed at removing redundancy between nearby samples in the input waveform. This is equivalent to removing a spectral envelope of the input waveform. A second type of predictor is often referred as a long-term predictor. It removes redundancy between samples further apart, typically spaced by a time difference that is constant for a suitable duration. For speech, this time difference is typically equivalent to a local pitch period of the speech signal, and consequently the long-term predictor is often referred as a pitch predictor. The long-term predictor removes a harmonic structure of the input waveform. A residual signal remaining after the removal of redundancy by the predictor(s) is quantized along with any information needed to reconstruct the predictor(s) at the decoder. This quantization of the residual signal provides a series of bits representing a compressed version of the residual signal. This compressed version of the residual signal is often denoted the excitation signal and is used to reconstruct an approximation of the input waveform at the decoder in combination with the predictor(s). Generating the series of bits representing the excitation signal is commonly denoted excitation quantization and generally requires the search for, and selection of, a best or preferred candidate excitation among a set of candidate excitations with respect to some cost function. The search and selection require a number of mathematical operations to be performed, which translates into a certain computational complexity when the operations are implemented on a signal processing device. It is advantageous to minimize the number of mathematical operations in order to minimize a power consumption, and maximize a processing bandwidth, of the signal processing device. Excitation quantization in predictive coding can be based on a sample-by-sample quantization of the excitation. This is referred to as Scalar Quantization (SQ). Techniques for performing Scalar Quantization of the excitation are relatively simple, and thus, the computational complexity associated with SQ is relatively manageable. Alternatively, the excitation can be quantized based on groups of samples. 
Quantizing groups of samples is often referred to as Vector Quantization (VQ), and when applied to the excitation, simply as excitation VQ. The use of VQ can provide superior performance to SQ, and may be necessary when the number of coding bits per residual signal sample becomes small (typically less than two bits per sample). Also, VQ can provide a greater flexibility in bit-allocation as compared to SQ, since a fractional number of bits per sample can be used. However, excitation VQ can be relatively complex when compared to excitation SQ. Therefore, there is need to reduce the complexity of excitation VQ as used in a predictive coding environment. One type of predictive coding is Noise Feedback Coding (NFC), wherein noise feedback filtering is used to shape coding noise, in order to improve a perceptual quality of quantized speech. Therefore, it would be advantageous to use excitation VQ with noise feedback coding, and further, to do so in a computationally efficient manner. Summary The present invention is directed to first and second efficient excitation VQ search methods using correlation techniques, for use in predictive, noise feedback coding of a speech or audio signal. The first and second methods of the present invention are described below in Section IX.C. in connection with The first method reduces the complexity of the excitation VQ in NFC by reorganizing a calculation of an energy of an error vector for each of a plurality of candidate excitation vectors, also referred to as a codebook vector. The energy of the error vector is the cost function that is minimized during the search of the excitation codebook. The reorganization is obtained by: 1. Expanding a Mean Squared Error (MSE) term of the error vector; 2. Excluding an energy term that is invariant to the candidate excitation vector; and 3. Pre-computing energy terms of ZERO-STATE responses of the candidate excitation vectors that are invariant to sub-vectors of a subframe. The second method presents an efficient way of searching the excitation codebook in the case where a signed codebook is used. The second method reorganizes the calculation of the energy of the error vector in such a way that only half of the total number of codevectors is searched. The combination of the first and second methods also provides an efficient search. However, there may be circumstances where the first and second methods are used separately. For example, if a signed codebook is not used, then only the first method applies. As mentioned above, the first and second excitation VQ search methods of the present invention (described in connection with Terminology Predictor A predictor P as referred to herein predicts a current signal value (e.g., a current sample) based on previous or past signal values (e.g., past samples). A predictor can be a short-term predictor or a long-term predictor. A short-term signal predictor (e.g., a short term speech predictor) can predict a current signal sample (e.g., speech sample) based on adjacent signal samples from the immediate past. With respect to speech signals, such “short-term” predicting removes redundancies between, for example, adjacent or close-in signal samples. A long-term signal predictor can predict a current signal sample based on signal samples from the relatively distant past. With respect to a speech signal, such “long-term” predicting removes redundancies between relatively distant signal samples. 
For example, a long-term speech predictor can remove redundancies between distant speech samples due to a pitch periodicity of the speech signal. The phrases “a predictor P predicts a signal s(n) to produce a signal ps(n)” means the same as the phrase “a predictor P makes a prediction ps(n) of a signal s(n).” Also, a predictor can be considered equivalent to a predictive filter that predictively filters an input signal to produce a predictively filtered output signal. Coding Noise and Filtering Thereof: Often, a speech signal can be characterized in part by spectral characteristics (i.e., the frequency spectrum) of the speech signal. Two known spectral characteristics include 1) what is referred to as a harmonic fine structure or line frequencies of the speech signal, and 2) a spectral envelope of the speech signal. The harmonic fine structure includes, for example, pitch harmonics, and is considered a long-term (spectral) characteristic of the speech signal. On the other hand, the spectral envelope of the speech signal is considered a short-term (spectral) characteristic of the speech signal. Coding a speech signal can cause audible noise when the encoded speech is decoded by a decoder. The audible noise arises because the coded speech signal includes coding noise introduced by the speech coding process, for example, by quantizing signals in the encoding process. The coding noise can have spectral characteristics (i.e., a spectrum) different from the spectral characteristics (i.e., spectrum) of natural speech (as characterized above). Such audible coding noise can be reduced by spectrally shaping the coding noise (i.e., shaping the coding noise spectrum) such that it corresponds to or follows to some extent the spectral characteristics (i.e., spectrum) of the speech signal. This is referred to as “spectral noise shaping” of the coding noise, or “shaping the coding noise spectrum.” The coding noise is shaped to follow the speech signal spectrum only “to some extent” because it is not necessary for the coding noise spectrum to exactly follow the speech signal spectrum. Rather, the coding noise spectrum is shaped sufficiently to reduce audible noise, thereby improving the perceptual quality of the decoded speech. Accordingly, shaping the coding noise spectrum (i.e. spectrally shaping the coding noise) to follow the harmonic fine structure (i.e., long-term spectral characteristic) of the speech signal is referred to as “harmonic noise (spectral) shaping” or “long-term noise (spectral) shaping.” Also, shaping the coding noise spectrum to follow the spectral envelope (i.e., short-term spectral characteristic) of the speech signal is referred to a “short-term noise (spectral) shaping” or “envelope noise (spectral) shaping.” Noise feedback filters can be used to spectrally shape the coding noise to follow the spectral characteristics of the speech signal, so as to reduce the above mentioned audible noise. For example, a short-term noise feedback filter can short-term filter coding noise to spectrally shape the coding noise to follow the short-term spectral characteristic (i.e., the envelope) of the speech signal. On the other hand, a long-term noise feedback filter can long-term filter coding noise to spectrally shape the coding noise to follow the long-term spectral characteristic (i.e., the harmonic fine structure or pitch harmonics) of the speech signal. 
Therefore, short-term noise feedback filters can effect short-term or envelope noise spectral shaping of the coding noise, while long-term noise feedback filters can effect long-term or harmonic noise spectral shaping of the coding noise, in the present invention. The present invention is described with reference to the accompanying drawings. In the drawings, like reference numbers indicate identical or functionally similar elements. I. Conventional Noise Feedback Coding A. First Conventional Codec B. Second Conventional Codec II. Two-Stage Noise Feedback Coding A. Composite Codec Embodiments
B. Codec Embodiments Using Separate Short-Term and Long-Term Predictors (Two-Stage Prediction) and Noise Feedback Coding
A. General VQ Search
B. Fast VQ Search
C. Further Fast VQ Search Embodiments
Before describing the present invention, it is helpful to first describe the conventional noise feedback coding schemes. A. First Conventional Coder Codec 1000 encodes a sampled input speech or audio signal s(n) to produce a coded speech signal, and then decodes the coded speech signal to produce a reconstructed speech signal sq(n), representative of the input speech signal s(n). Reconstructed output speech signal sq(n) is associated with an overall coding noise r(n)=s(n)−sq(n). An encoder portion of codec 1000 operates as follows. Sampled input speech or audio signal s(n) is provided to a first input of combiner 1004, and to an input of predictor 1002. Predictor 1002 makes a prediction of current speech signal s(n) values (e.g., samples) based on past values of the speech signal to produce a predicted signal ps(n). This process is referred to as predicting signal s(n) to produce predicted signal ps(n). Predictor 1002 provides predicted speech signal ps(n) to a second input of combiner 1004. Combiner 1004 combines signals s(n) and ps(n) to produce a prediction residual signal d(n). Combiner 1006 combines residual signal d(n) with a noise feedback signal fq(n) to produce a quantizer input signal u(n). Quantizer 1008 quantizes input signal u(n) to produce a quantized signal uq(n). Combiner 1014 combines (that is, differences) signals u(n) and uq(n) to produce a quantization error or noise signal q(n) associated with the quantized signal uq(n). Filter 1016 filters noise signal q(n) to produce feedback noise signal fq(n). A decoder portion of codec 1000 operates as follows. Exiting quantizer 1008, combiner 1010 combines quantizer output signal uq(n) with a prediction ps(n)′ of input speech signal s(n) to produce reconstructed output speech signal sq(n). Predictor 1012 predicts input speech signal s(n) to produce predicted speech signal ps(n)′, based on past samples of output speech signal sq(n). The following is an analysis of codec 1000 described above. The predictor P(z) (1002 or 1012) has a transfer function of With the NFC codec structure 1000 in
If the encoding bit rate of the quantizer 1008 in B. Second Conventional Codec Codec 2000 encodes a sampled input speech signal s(n) to produce a coded speech signal, and then decodes the coded speech signal to produce a reconstructed speech signal sq(n), representative of the input speech signal s(n). Reconstructed speech signal sq(n) is associated with an overall coding noise r(n)=s(n)−sq(n). Codec 2000 operates as follows. A sampled input speech or audio signal s(n) is provided to a first input of combiner 2004. A feedback signal x(n) is provided to a second input of combiner 2004. Combiner 2004 combines signals s(n) and x(n) to produce a quantizer input signal u(n). Quantizer 2008 quantizes input signal u(n) to produce a quantized signal uq(n) (also referred to as a quantizer output signal uq(n)). Combiner 2014 combines (that is, differences) signals u(n) and uq(n) to produce a quantization error or noise signal q(n) associated with the quantized signal uq(n). Filter 2016 filters noise signal q(n) to produce feedback noise signal fq(n). Combiner 2006 combines feedback noise signal fq(n) with a predicted signal ps(n) (i.e., a prediction of input speech signal s(n)) to produce feedback signal x(n). Exiting quantizer 2008, combiner 2010 combines quantizer output signal uq(n) with prediction or predicted signal ps(n) to produce reconstructed output speech signal sq(n). Predictor 2012 predicts input speech signal s(n) (to produce predicted speech signal ps(n)) based on past samples of output speech signal sq(n). Thus, predictor 2012 is included in the encoder and decoder portions of codec 2000. Codec structure 2000 was proposed by J. D. Makhoul and M. Berouti in “Adaptive Noise Spectral Shaping and Entropy Coding in Predictive Coding of Speech,” IEEE Transactions on Acoustics, Speech, and Signal Processing, pp. 63–73, February 1979. This equivalent, known NFC codec structure 2000 has at least two advantages over codec 1000. First, only one predictor P(z) (2012) is used in the structure. Second, if N(z) is the filter whose frequency response corresponds to the desired noise spectral shape, this codec structure 2000 allows us to use [N(z)−1] directly as the noise feedback filter 2016. Makhoul and Berouti showed in their 1979 paper that very good perceptual speech quality can be obtained by choosing N(z) to be a simple second-order finite-impulse-response (FIR) filter. The codec structures in II. Two-Stage Noise Feedback Coding The conventional noise feedback coding principles described above are well-known prior art. Now we will address our stated problem of two-stage noise feedback coding with both short-term and long-term prediction, and both short-term and long-term noise spectral shaping. A. Composite Codec Embodiments A first approach is to combine a short-term predictor and a long-term predictor into a single composite short-term and long-term predictor, and then re-use the general structure of codec 1000 in Similarly, in Therefore, one can replace the predictor P(z) (1002 or 1012) in
Thus, both short-term noise spectral shaping and long-term noise spectral shaping are achieved, and they can be individually controlled by the parameters α and β, respectively.
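The signal flow of such a single-stage noise feedback loop can be sketched in a few lines. The fragment below is a minimal illustration only, assuming a caller-supplied predictor, a uniform scalar quantizer as a stand-in for quantizer 1008/2008, and a simple FIR noise feedback filter F(z) = N(z) − 1; it is not the codec's actual implementation, and all names are illustrative.

import numpy as np

def nfc_encode(s, predict, f_coeffs, step):
    # s        : input samples s(n)
    # predict  : callable returning the prediction ps(n) from past reconstructed samples
    # f_coeffs : FIR coefficients of the noise feedback filter F(z) = N(z) - 1
    #            (coefficients of z^-1, z^-2, ...); an assumed simple choice
    # step     : step size of a uniform scalar quantizer (stand-in for the real quantizer)
    q_hist = np.zeros(len(f_coeffs))   # past quantization errors q(n-1), q(n-2), ...
    sq_hist = []                       # past reconstructed samples sq(n-1), sq(n-2), ...
    uq_out, sq_out = [], []
    for x in s:
        ps = predict(sq_hist)                        # prediction of the current sample
        d = x - ps                                   # prediction residual d(n)
        fq = float(np.dot(f_coeffs, q_hist))         # noise feedback signal fq(n)
        u = d - fq                                   # quantizer input u(n)
        uq = step * round(u / step)                  # quantized signal uq(n)
        q = u - uq                                   # quantization error q(n)
        q_hist = np.concatenate(([q], q_hist[:-1]))  # shift error history
        sq = uq + ps                                 # reconstructed sample sq(n)
        sq_hist.append(sq)
        uq_out.append(uq)
        sq_out.append(sq)
    return np.array(uq_out), np.array(sq_out)

With this sign convention the overall coding noise s(n) − sq(n) equals the quantization error q(n) filtered by N(z) = 1 + F(z), which is the noise spectral shaping described above; for example, predict can be as simple as lambda h: 0.5 * h[-1] if h else 0.0.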
1050 includes the following functional elements: a first composite short-term and long-term predictor 1052 (also referred to as a composite predictor P′(z)); a first combiner or adder 1054; a second combiner or adder 1056; a quantizer 1058; a third combiner or adder 1060; a second composite short-term and long-term predictor 1062 (also referred to as a composite predictor P′(z)); a fourth combiner 1064; and a composite short-term and long-term noise feedback filter 1066 (also referred to as a filter F′(z)). The functional elements or blocks of codec 1050 listed above are arranged similarly to the corresponding blocks of codec 1000 (described above in connection with Codec 1050 encodes a sampled input speech signal s(n) to produce a coded speech signal, and then decodes the coded speech signal to produce a reconstructed speech signal sq(n), representative of the input speech signal s(n). Reconstructed speech signal sq(n) is associated with an overall coding noise r(n)=s(n)−sq(n). An encoder portion of codec 1050 operates in the following exemplary manner. Composite predictor 1052 short-term and long-term predicts input speech signal s(n) to produce a short-term and long-term predicted speech signal ps(n). Combiner 1054 combines short-term and long-term predicted signal ps(n) with speech signal s(n) to produce a prediction residual signal d(n). Combiner 1056 combines residual signal d(n) with a short-term and long-term filtered, noise feedback signal fq(n) to produce a quantizer input signal u(n). Quantizer 1058 quantizes input signal u(n) to produce a quantized signal uq(n) (also referred to as a quantizer output signal) associated with a quantization noise or error signal q(n). Combiner 1064 combines (that is, differences) signals u(n) and uq(n) to produce the quantization error or noise signal q(n). Composite filter 1066 short-term and long-term filters noise signal q(n) to produce short-term and long-term filtered, feedback noise signal fq(n). In codec 1050, combiner 1064, composite short-term and long-term filter 1066, and combiner 1056 together form a noise feedback loop around quantizer 1058. This noise feedback loop spectrally shapes the coding noise associated with codec 1050, in accordance with the composite filter, to follow, for example, the short-term and long-term spectral characteristics of input speech signal s(n). A decoder portion of coder 1050 operates in the following exemplary manner. Exiting quantizer 1058, combiner 1060 combines quantizer output signal uq(n) with a short-term and long-term prediction ps(n)′ of input speech signal s(n) to produce a quantized output speech signal sq(n). Composite predictor 1062 short-term and long-term predicts input speech signal s(n) (to produce short-term and long-term predicted signal ps(n)′) based on output signal sq(n).
As an alternative to the above described first embodiment, a second embodiment of the present invention can be constructed based on the general coding structure of codec 2000 in The functional elements or blocks of codec 2050 listed above are arranged similarly to the corresponding blocks of codec 2000 (described above in connection with Codec 2050 operates in the following exemplary manner. Combiner 2054 combines a sampled input speech or audio signal s(n) with a feedback signal x(n) to produce a quantizer input signal u(n). Quantizer 2058 quantizes input signal u(n) to produce a quantized signal uq(n) associated with a quantization noise or error signal q(n). Combiner 2064 combines (that is, differences) signals u(n) and uq(n) to produce quantization error or noise signal q(n). Composite filter 2066 concurrently long-term and short-term filters noise signal q(n) to produce short-term and long-term filtered, feedback noise signal fq(n). Combiner 2056 combines short-term and long-term filtered, feedback noise signal fq(n) with a short-term and long-term prediction s(n) of input signal s(n) to produce feedback signal x(n). In codec 2050, combiner 2064, composite short-term and long-term filter 2066, and combiner 2056 together form a noise feedback loop around quantizer 2058. This noise feedback loop spectrally shapes the coding noise associated with codec 2050 in accordance with the composite filter, to follow, for example, the short-term and long-term spectral characteristics of input speech signal s(n). Exiting quantizer 2058, combiner 2060 combines quantizer output signal uq(n) with the short-term and long-term predicted signal ps(n)′ to produce a reconstructed output speech signal sq(n). Composite predictor 2062 short-term an long-term predicts input speech signal s(n) (to produce short-term and long-term predicted signal ps(n)) based on reconstructed output speech signal sq(n). In this invention, the first approach for two-stage NFC described above achieves the goal by re-using the general codec structure of conventional single-stage noise feedback coding (for example, by re-using the structures of codecs 1000 and 2000) but combining what are conventionally separate short-term and long-term predictors into a single composite short-term and long-term predictor. A second preferred approach, described below, allows separate short-term and long-term predictors to be used, but requires a modification of the conventional codec structures 1000 and 2000 of B. Codec Embodiments Using Separate Short-Term and Long-Term Predictors (Two-Stage Prediction) and Noise Feedback Coding It is not obvious how the codec structures in To achieve two-stage prediction and two-stage noise spectral shaping at the same time without combining the two predictors into one, the key lies in recognizing that the quantizer block in
As an illustration of this concept, Codec 3000 includes the following functional elements: a first short-term predictor 3002 (also referred to as a short-term predictor Ps(z)); a first combiner or adder 3004; a second combiner or adder 3006; predictive quantizer 3008 (also referred to as predictive quantizer Q′); a third combiner or adder 3010; a second short-term predictor 3012 (also referred to as a short-term predictor Ps(z)); a fourth combiner 3014; and a short-term noise feedback filter 3016 (also referred to as a short-term noise feedback filter Fs(z)). Predictive quantizer Q′ (3008) includes a first combiner 3024, either a scalar or a vector quantizer 3028, a second combiner 3030, and a long-term predictor 3034 (also referred to as a long-term predictor (Pl(z)). Codec 3000 encodes a sampled input speech signal s(n) to produce a coded speech signal, and then decodes the coded speech signal to produce a reconstructed output speech signal sq(n), representative of the input speech signal s(n). Reconstructed speech signal sq(n) is associated with an overall coding noise r(n)=s(n)−sq(n). Codec 3000 operates in the following exemplary manner. First, a sampled input speech or audio signal s(n) is provided to a first input of combiner 3004, and to an input of predictor 3002. Predictor 3002 makes a short-term prediction of input speech signal s(n) based on past samples thereof to produce a predicted input speech signal ps(n). This process is referred to as short-term predicting input speech signal s(n) to produce predicted signal ps(n). Predictor 3002 provides predicted input speech signal ps(n) to a second input of combiner 3004. Combiner 3004 combines signals s(n) and ps(n) to produce a prediction residual signal d(n). Combiner 3006 combines residual signal d(n) with a first noise feedback signal fqs(n) to produce a predictive quantizer input signal v(n). Predictive quantizer 3008 predictively quantizes input signal v(n) to produce a predictively quantized output signal vq(n) (also referred to as a predictive quantizer output signal vq(n)) associated with a predictive noise or error signal qs(n). Combiner 3014 combines (that is, differences) signals v(n) and vq(n) to produce the predictive quantization error or noise signal qs(n). Short-term filter 3016 short-term filters predictive quantization noise signal q(n) to produce the feedback noise signal fqs(n). Therefore, Noise Feedback (NF) codec 3000 includes an outer NF loop around predictive quantizer 3008, comprising combiner 3014, short-term noise filter 3016, and combiner 3006. This outer NF loop spectrally shapes the coding noise associated with codec 3000 in accordance with filter 3016, to follow, for example, the short-term spectral characteristics of input speech signal s(n). Predictive quantizer 3008 operates within the outer NF loop mentioned above to predictively quantize predictive quantizer input signal v(n) in the following exemplary manner. Predictor 3034 long-term predicts (i.e., makes a long-term prediction of) predictive quantizer input signal v(n) to produce a predicted, predictive quantizer input signal pv(n). Combiner 3024 combines signal pv(n) with predictive quantizer input signal v(n) to produce a quantizer input signal u(n). Quantizer 3028 quantizes quantizer input signal u(n) using a scalar or vector quantizing technique, to produce a quantizer output signal uq(n). Combiner 3030 combines quantizer output signal uq(n) with signal pv(n) to produce predictively quantized output signal vq(n). 
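The nesting just described for codec 3000, an outer short-term prediction and noise feedback loop wrapped around the long-term-prediction DPCM quantizer Q′, can be summarized with the following sketch. It is an illustration of the signal flow only: the short-term and long-term predictors and the inner quantizer are passed in as caller-supplied stand-ins for blocks 3002/3012, 3034, and 3028, and the outer noise feedback filter Fs(z) is reduced to a short FIR filter; all names are illustrative.

def two_stage_nfc_sample(x, state, st_predict, lt_predict, fs_coeffs, quantize):
    # One sample of the codec-3000-style signal flow (illustrative only).
    # state holds past reconstructed samples sq, past vq values, and past
    # outer-loop noise samples qs (most recent first); fs_coeffs are the
    # coefficients of z^-1, z^-2, ... of the short-term noise feedback filter Fs(z).
    ps = st_predict(state["sq"])                                # short-term prediction ps(n)
    d = x - ps                                                  # short-term residual d(n)
    fqs = sum(c * q for c, q in zip(fs_coeffs, state["qs"]))    # feedback fqs(n) = Fs(z) qs(n)
    v = d - fqs                                                 # predictive quantizer input v(n)

    # Inner predictive quantizer Q': long-term DPCM around the scalar/vector quantizer.
    pv = lt_predict(state["vq"])                                # long-term prediction pv(n)
    u = v - pv                                                  # quantizer input u(n)
    uq = quantize(u)                                            # quantized signal uq(n)
    vq = uq + pv                                                # predictively quantized output vq(n)

    qs = v - vq                                                 # outer-loop quantization noise qs(n)
    sq = vq + ps                                                # reconstructed sample sq(n)

    state["qs"] = [qs] + state["qs"][:len(fs_coeffs) - 1]       # shift noise history
    state["vq"].append(vq)
    state["sq"].append(sq)
    return uq, sq

For example, state can be initialized as {"sq": [], "vq": [], "qs": [0.0] * len(fs_coeffs)}; the outer loop then shapes the coding noise with the short-term filter Fs(z), while the inner DPCM loop adds long-term prediction.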
Exiting predictive quantizer 3008, combiner 3010 combines predictive quantizer output signal vq(n) with a prediction ps(n)′ of input speech signal s(n) to produce output speech signal sq(n). Predictor 3012 short-term predicts (i.e., makes a short-term prediction of) input speech signal s(n) to produce signal ps(n)′, based on output speech signal sq(n). In the first exemplary arrangement of NF codec 3000 depicted in In the first arrangement described above, the DPCM structure inside the Q′ dashed box (3008) does not perform long-term noise spectral shaping. If everything inside the Q′ dashed box (3008) is treated as a black box, then for an observer outside of the box, the replacement of a direct quantizer (for example, quantizer 1008) by a long-term-prediction-based DPCM structure (that is, predictive quantizer Q′ (3008)) is an advantageous way to improve the quantizer performance. Thus, compared with
Taking the above concept one step further, predictive quantizer Q′ (3008) of codec 3000 in Predictive quantizer Q″ (4008) includes a first long-term predictor 4022 (also referred to as a long-term predictor Pl(z)), a first combiner 4024, either a scalar or a vector quantizer 4028, a second combiner 4030, a second long-term predictor 4034 (also referred to as a long-term predictor (Pl(z)), a second combiner or adder 4036, and a long-term filter 4038 (also referred to as a long-term filter Fl(z)). Codec 4000 encodes a sampled input speech signal s(n) to produce a coded speech signal, and then decodes the coded speech signal to produce a reconstructed output speech signal sq(n), representative of the input speech signal s(n). Reconstructed speech signal sq(n) is associated with an overall coding noise r(n)=s(n)−sq(n). In coding input speech signal s(n), predictors 4002 and 4012, combiners 4004, 4006, and 4010, and noise filter 4016 operate similarly to corresponding elements described above in connection with Predictive quantizer Q″ (4008) operates within the outer NF loop mentioned above to predictively quantize predictive quantizer input signal v(n) to produce a predictively quantized output signal vq(n) (also referred to as a predictive quantizer output signal vq(n)) in the following exemplary manner. As mentioned above, predictive quantizer Q″ has a structure corresponding to the basic NFC structure of codec 1000 depicted in Exiting quantizer 4028, combiner 4030 combines quantizer output signal uq(n) with a prediction pv(n)′ of predictive quantizer input signal v(n). Long-term predictor 4034 long-term predicts signal v(n) (to produce predicted signal pv(n)′) based on signal vq(n). Exiting predictive quantizer Q″ (4008), predictively quantized signal vq(n) is combined with a prediction ps(n)′ of input speech signal s(n) to produce reconstructed speech signal sq(n). Predictor 4012 short term predicts input speech signal s(n) (to produce predicted signal ps(n)′) based on reconstructed speech signal sq(n). In the first exemplary arrangement of NF codec 4000 depicted in In the first arrangement of codec 4000 depicted in One advantage of nested two-stage NFC structure 4000 as shown in
Due to the above mentioned “decoupling” between the long-term and short-term noise feedback coding, predictive quantizer Q″ (4008) of codec 4000 in Predictive quantizer Q′″ (5008) includes a first combiner 5024, a second combiner 5026, either a scalar or a vector quantizer 5028, a third combiner 5030, a long-term predictor 5034 (also referred to as a long-term predictor (Pl(z)), a fourth combiner 5036, and a long-term filter 5038 (also referred to as a long-term filter Nl(z)−1). Codec 5000 encodes a sampled input speech signal s(n) to produce a coded speech signal, and then decodes the coded speech signal to produce a reconstructed output speech signal sq(n), representative of the input speech signal s(n). Reconstructed speech signal sq(n) is associated with an overall coding noise r(n)=s(n)−sq(n). In coding input speech signal s(n), predictors 5002 and 5012, combiners 5004, 5006, and 5010, and noise filter 5016 operate similarly to corresponding elements described above in connection with Predictive quantizer 5008 has a structure similar to the structure of NF codec 2000 described above in connection with In a second exemplary arrangement of NF codec 5000, predictors 5002, 5012 are long-term predictors and NF filter 5016 is a long-term noise filter (to spectrally shape the coding noise to follow, for example, the long-term characteristic of the input speech signal s(n)), while predictor 5034 is a short-term predictor and noise filter 5038 is a short-term noise filter (to spectrally shape the coding noise to follow, for example, the short-term characteristic of the input speech signal s(n)).
In a further example, the outer layer NFC structure in Codec 6000 encodes a sampled input speech signal s(n) to produce a coded speech signal, and then decodes the coded speech signal to produce a reconstructed output speech signal sq(n), representative of the input speech signal s(n). Reconstructed speech signal sq(n) is associated with an overall coding noise r(n)=s(n)−sq(n). In coding input speech signal s(n), an outer coding structure depicted in Unlike codec 2000, codec 6000 includes a predictive quantizer equivalent to predictive quantizer 5008 (described above in connection with In a second exemplary arrangement of NF codec 6000, predictor 6012 is a long-term predictor and NF filter 6016 is a long-term noise filter, while predictor 5034 is a short-term predictor and noise filter 5038 is a short-term noise filter. There is an advantage for such a flexibility to mix and match different single-stage NFC structures in different parts of the nested two-stage NFC structure. For example, although the codec 5000 in To see the codec 5000 in Now consider the short-term NFC structure in the outer layer of codec 5000 in 5. Coding Method In a next step 6060, a combiner (e.g., 3004, 4004, 5004, 6004/6006 or equivalents thereof) combines the predicted speech signal (e.g., ps(n)) with the speech signal (e.g., s(n)) to produce a first residual signal (e.g., d(n)). In a next step 6062, a combiner (e.g., 3006, 4006, 5006, 6004/6006 or equivalents thereof) combines a first noise feedback signal (e.g., fqs(n)) with the first residual signal (e.g., d(n)) to produce a predictive quantizer input signal (e.g., v(n)). In a next step 6064, a predictive quantizer (e.g., Q′, Q″, or Q′″) predictively quantizes the predictive quantizer input signal (e.g., v(n)) to produce a predictive quantizer output signal (e.g., vq(n)) associated with a predictive quantization noise (e.g., qs(n)). In a next step 6066, a filter (e.g., 3016, 4016, or 5016) filters the predictive quantization noise (e.g., qs(n)) to produce the first noise feedback signal (e.g., fqs(n)). In a next step 6072 used in all of the codecs 3000–6000, a combiner (e.g., 3024, 4024, 5024/5026 or an equivalent thereof, such as 5024′) combines at least the predictive quantizer input signal (e.g., v(n)) with at least the first predicted predictive quantizer input signal (e.g., pv(n)) to produce a quantizer input signal (e.g., u(n)). Additionally, the codec embodiments including an inner noise feedback loop (that is, exemplary codecs 4000, 5000, and 6000) use further combining logic (e.g., combiners 5026/5026′ or 4026 or equivalents thereof)) to further combine a second noise feedback signal (e.g., fq(n)) with the predictive quantizer input signal (e.g., v(n)) and the first predicted predictive quantizer input signal (e.g., pv(n)), to produce the quantizer input signal (e.g., u(n)). In a next step 6076, a scalar or vector quantizer (e.g., 3028, 4028, or 5028) quantizes the input signal (e.g., u(n)) to produce a quantizer output signal (e.g., uq(n)). In a next step 6078 applying only to those embodiments including the inner noise feedback loop, a filter (e.g., 4038 or 5038) filters a quantization noise (e.g., q(n)) associated with the quantizer output signal (e.g., q(n)) to produce the second noise feedback signal (fq(n)). In a next step 6080, deriving logic (e.g., 3034 and 3030 in III. Overview of Preferred Embodiment (Based on the Fifth Embodiment above) We now describe our preferred embodiment of the present invention. Coder 7000 and coder 5000 of IV. 
Short-Term Linear Predictive Analysis and Quantization We now give a detailed description of the encoder operations. Refer to Refer to
Let RWINSZ be the number of samples in the right window. Then, RWINSZ=20 for 8 kHz sampling and 40 for 16 kHz sampling. The right window is given by
The concatenation of wl(n) and wr(n) gives the 20 ms asymmetric analysis window. When applying this analysis window, the last sample of the window is lined up with the last sample of the current frame, so there is no look ahead. After the 5 ms current frame of input signal and the preceding 15 ms of input signal in the previous three frames are multiplied by the 20 ms window, the resulting signal is used to calculate the autocorrelation coefficients r(i), for lags i=0, 1, 2, . . . , M, where M is the short-term predictor order, and is chosen to be 8 for both 8 kHz and 16 kHz sampled signals. The calculated autocorrelation coefficients are passed to block 12, which applies a Gaussian window to the autocorrelation coefficients to perform the well-known prior-art method of spectral smoothing. The Gaussian window function is given by After multiplying r(i) by such a Gaussian window, block 12 then multiplies r(0) by a white noise correction factor of WNCF=1+ε, where ε=0.0001. In summary, the output of block 12 is given by
The spectral smoothing technique smoothes out (widens) sharp resonance peaks in the frequency response of the short-term synthesis filter. The white noise correction adds a white noise floor to limit the spectral dynamic range. Both techniques help to reduce ill conditioning in the Levinson-Durbin recursion of block 13. Block 13 takes the autocorrelation coefficients modified by block 12, and performs the well-known prior-art method of Levinson-Durbin recursion to convert the autocorrelation coefficients to the short-term predictor coefficients â_{i}, i=0, 1, . . . , M. Block 14 performs bandwidth expansion of the resonance spectral peaks by modifying â_{i} as
Block 15 converts the {a_{i}} coefficients to Line Spectrum Pair (LSP) coefficients {l_{i}}, which are sometimes also referred to as Line Spectrum Frequencies (LSFs). Again, the operation of block 15 is a well-known prior-art procedure. Block 16 quantizes and encodes the M LSP coefficients to a pre-determined number of bits. The output LSP quantizer index array LSPI is passed to the bit multiplexer (block 95), while the quantized LSP coefficients are passed to block 17. Many different kinds of LSP quantizers can be used in block 16. In our preferred embodiment, the quantization of LSP is based on inter-frame moving-average (MA) prediction and multi-stage vector quantization, similar to (but not the same as) the LSP quantizer used in the ITU-T Recommendation G.729. Block 16 is further expanded in
Basically, the i-th weight is the inverse of the distance between the i-th LSP coefficient and its nearest neighbor LSP coefficient. These weights are different from those used in G.729. Block 162 stores the long-term mean value of each of the M LSP coefficients, calculated off-line during codec design phase using a large training data file. Adder 163 subtracts the LSP mean vector from the unquantized LSP coefficient vector to get the mean-removed version of it. Block 164 is the inter-frame MA predictor for the LSP vector. In our preferred embodiment, the order of this MA predictor is 8. The 8 predictor coefficients are fixed and pre-designed off-line using a large training data file. With a frame size of 5 ms, this 8^{th}-order predictor covers a time span of 40 ms, the same as the time span covered by the 4^{th}-order MA predictor of LSP used in G.729, which has a frame size of 10 ms. Block 164 multiplies the 8 output vectors of the vector quantizer block 166 in the previous 8 frames by the 8 sets of 8 fixed MA predictor coefficients and sum up the result. The resulting weighted sum is the predicted vector, which is subtracted from the mean-removed unquantized LSP vector by adder 165. The two-stage vector quantizer block 166 then quantizes the resulting prediction error vector. The first-stage VQ inside block 166 uses a 7-bit codebook (128 codevectors). For the narrowband (8 kHz sampling) codec at 16 kb/s, the second-stage VQ also uses a 7-bit codebook. This gives a total encoding rate of 14 bits/frame for the 8 LSP coefficients of the 16 kb/s narrowband codec. For the wideband (16 kHz sampling) codec at 32 kb/s, on the other hand, the second-stage VQ is a split VQ with a 3-5 split. The first three elements of the error vector of first-stage VQ are vector quantized using a 5-bit codebook, and the remaining 5 elements are vector quantized using another 5-bit codebook. This gives a total of (7+5+5)=17 bits/frame encoding rate for the 8 LSP coefficients of the 32 kb/s wideband codec. The selected codevectors from the two VQ stages are added together to give the final output quantized vector of block 166. During codebook searches, both stages of VQ within block 166 use the WMSE distortion measure with the weights {w_{l}} calculated by block 161. The codebook indices for the best matches in the two VQ stages (two indices for 16 kb/s narrowband codec and three indices for 32 kb/s wideband codec) form the output LSP index array LSPI, which is passed to the bit multiplexer block 95 in The output vector of block 166 is used to update the memory of the inter-frame LSP predictor block 164. The predicted vector generated by block 164 and the LSP mean vector held by block 162 are added to the output vector of block 166, by adders 167 and 168, respectively. The output of adder 168 is the quantized and mean-restored LSP vector. It is well known in the art that the LSP coefficients need to be in a monotonically ascending order for the resulting synthesis filter to be stable. The quantization performed in Now refer back to Block 18 takes the set of interpolated LSP coefficients {l′_{l}} and converts it to the corresponding set of direct-form linear predictor coefficients {ã_{l}} for each sub-frame. Again, such a conversion from LSP coefficients to predictor coefficients is well known in the art. 
The resulting set of predictor coefficients {ã_{l}} is used to update the coefficients of the short-term predictor block 40. Block 19 performs further bandwidth expansion on the set of predictor coefficients {ã_{l}} using a bandwidth expansion factor of γ_{l}=0.75. The resulting bandwidth-expanded set of filter coefficients is given by
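The bandwidth expansion referred to here is presumably the standard exponential weighting of the predictor coefficients; with the 0.75 factor quoted above, it would take the form

a_{i}' = (0.75)^{i}\, \tilde{a}_{i}, \qquad i = 1, 2, \ldots, M,

which pulls the poles of the corresponding filter toward the origin by the factor 0.75.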
This bandwidth-expanded set of filter coefficients {a_{l}′} is used to update the coefficients of the short-term noise feedback filter block 50.
V. Short-Term Linear Prediction of Input Signal
The long-term predictive analysis and quantization block 20 uses the short-term prediction residual signal {d(n)} of the current sub-frame and its quantized version {dq(n)} in the previous sub-frames to determine the quantized values of the pitch period and the pitch predictor taps. This block 20 is further expanded upon below.
The signal dw(n) is basically a perceptually weighted version of the input signal s(n), just like what is done in CELP codecs. This dw(n) signal is passed through a low-pass filter block 22, which has a −3 dB cutoff frequency at about 800 Hz. In the preferred embodiment, a 4^{th}-order elliptic filter is used for this purpose. Block 23 down-samples the low-pass filtered signal to a sampling rate of 2 kHz. This represents a 4:1 decimation for the 16 kb/s narrowband codec or 8:1 decimation for the 32 kb/s wideband codec. The first-stage pitch search block 24 then uses the decimated 2 kHz sampled signal dwd(n) to find a “coarse pitch period”, denoted as cpp. The search is confined to time lags between MINPPD and MAXPPD, the minimum and maximum pitch periods in the decimated signal domain. For the narrowband codec, MINPPD=4 samples and MAXPPD=36 samples. For the wideband codec, MINPPD=2 samples and MAXPPD=34 samples. Block 24 then searches through the calculated {c(k)} array and identifies all positive local peaks in the {c(k)} sequence. Let K_{p} denote the resulting set of indices k_{p} where c(k_{p}) is a positive local peak, and let the elements in K_{p} be arranged in ascending order. If there is no positive local peak at all in the {c(k)} sequence, the processing of block 24 is terminated and the output coarse pitch period is set to cpp=MINPPD. If there is at least one positive local peak, then block 24 searches through the indices in the set K_{p} and identifies the index k_{p} that maximizes c(k_{p})^{2}/E(k_{p}). Let the resulting index be k*_{p}. To avoid picking a coarse pitch period that is around an integer multiple of the true coarse pitch period, the following simple decision logic is used. 1. If k*_{p} corresponds to the first positive local peak (i.e. it is the first element of K_{p}), use k*_{p} as the final output cpp of block 24 and skip the rest of the steps. 2. Otherwise, go from the first element of K_{p} to the element of K_{p} that is just before the element k*_{p}, and find the first k_{p} in K_{p} that satisfies c(k_{p})^{2}/E(k_{p})>T_{1}[c(k*_{p})^{2}/E(k*_{p})], where T_{1}=0.7. The first k_{p} that satisfies this condition is the final output cpp of block 24. 3. If none of the elements of K_{p} before k*_{p} satisfies the inequality in 2. above, find the first k_{p} in K_{p} that satisfies the following two conditions:
4. If none of the elements of K_{p} before k*_{p} satisfies the inequalities in 3. above, then use k*_{p} as the final output cpp of block 24. Block 25 takes cpp as its input and performs a second-stage pitch period search in the undecimated signal domain to get a refined pitch period pp. Block 25 first converts the coarse pitch period cpp to the undecimated signal domain by multiplying it by the decimation factor DECF. (This decimation factor DECF=4 and 8 for narrowband and wideband codecs, respectively). Then, it determines a search range for the refined pitch period around the value cpp*DECF. The lower bound of the search range is lb=max(MINPP, cpp*DECF−DECF+1), where MINPP=17 samples is the minimum pitch period. The upper bound of the search range is ub=min(MAXPP, cpp*DECF+DECF−1), where MAXPP is the maximum pitch period, which is 144 and 272 samples for narrowband and wideband codecs, respectively. Block 25 maintains a signal buffer with a total of MAXPP+1+SFRSZ samples, where SFRSZ is the sub-frame size, which is 40 and 80 samples for narrowband and wideband codecs, respectively. The last SFRSZ samples of this buffer are populated with the open-loop short-term prediction residual signal d(n) in the current sub-frame. The first MAXPP+1 samples are populated with the MAXPP+1 samples of the quantized version of d(n), denoted as dq(n), immediately preceding the current sub-frame. For convenience of equation writing later, we will use dq(n) to denote the entire buffer of MAXPP+1+SFRSZ samples, even though the last SFRSZ samples are really d(n) samples. Again, without loss of generality, let the index range from n=1 to n=SFRSZ denote the samples in the current sub-frame. After the lower bound lb and upper bound ub of the pitch period search range are determined, block 25 calculates the following correlation and energy terms in the undecimated dq(n) signal domain for time lags k within the search range [lb, ub].
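A typical form for these correlation and energy terms (the exact summation limits are an assumption) is

\tilde{c}(k) = \sum_{n=1}^{SFRSZ} dq(n)\, dq(n-k), \qquad \tilde{E}(k) = \sum_{n=1}^{SFRSZ} dq^{2}(n-k),

and block 24 computes analogous quantities c(k) and E(k) on the decimated signal dwd(n).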
The time lag k ∈ [lb, ub] that maximizes the ratio \tilde{c}^{2}(k)/\tilde{E}(k) is chosen as the final refined pitch period. That is,
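Restating the selection rule as a formula,

pp = \arg\max_{k \in [lb,\, ub]} \; \frac{\tilde{c}^{2}(k)}{\tilde{E}(k)}.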
Once the refined pitch period pp is determined, it is encoded into the corresponding output pitch period index PPI, calculated as
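The index formula is presumably the simple offset implied by the stated ranges (pp between 17 and 144 samples for the narrowband codec, and between 17 and 272 samples for the wideband codec):

PPI = pp - MINPP = pp - 17.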
Possible values of PPI are 0 to 127 for the narrowband codec and 0 to 255 for the wideband codec. Therefore, the refined pitch period pp is encoded into 7 bits or 8 bits, without any distortion. Block 25 also calculates ppt1, the optimal tap weight for a single-tap pitch predictor, as follows
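A standard closed-form solution for the optimal single-tap weight, and presumably what is computed here, is

ppt1 = \frac{\tilde{c}(pp)}{\tilde{E}(pp)}.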
Block 27 calculates the long-term noise feedback filter coefficient λ as follows.
Pitch predictor taps quantizer block 26 quantizes the three pitch predictor taps to 5 bits using vector quantization. Rather than minimizing the mean-square error of the three taps as in conventional VQ codebook search, block 26 finds from the VQ codebook the set of candidate pitch predictor taps that minimizes the pitch prediction residual energy in the current sub-frame. Using the same dq(n) buffer and time index convention as in block 25, and denoting the set of three taps corresponding to the j-th codevector as {b_{j1},b_{j2},b_{j3 }}, we can express such pitch prediction residual energy as
This equation can be re-written as
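Writing the three taps as acting on the lags pp−1, pp, and pp+1 (the exact lag convention is an assumption), the pitch prediction residual energy of the j-th candidate tap set is

E_{j} = \sum_{n=1}^{SFRSZ} \Big[ dq(n) - \sum_{i=1}^{3} b_{ji}\, dq(n - pp + 2 - i) \Big]^{2},

and expanding the square allows it to be re-arranged into the form

E_{j} = E_{0} - \mathbf{p}^{T}\mathbf{x}_{j},

where E_{0} = \sum_{n} dq^{2}(n) does not depend on j, x_{j} is a 9-dimensional vector built from the three taps together with their squares and pairwise products, and p is the corresponding 9-dimensional vector of correlation terms computed from the dq(n) buffer. This is consistent with the observation below that maximizing the inner product p^{T}x_{j} also minimizes E_{j}.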
In the codec design stage, the optimal three-tap codebooks {b_{j1},b_{j2},b_{j3}}, j=0, 1, 2, . . . , 31 are designed off-line. The corresponding 9-dimensional codevectors x_{j}, j=0, 1, 2, . . . , 31 are calculated and stored in a codebook. In actual encoding, block 26 first calculates the vector p^{T}, then it calculates the 32 inner products p^{T}x_{j }for j=0, 1, 2, . . . , 31. The codebook index j* that maximizes such an inner product also minimizes the pitch prediction residual energy E_{j}. Thus, the output pitch predictor taps index PPTI is chosen as
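In other words,

PPTI = j^{*} = \arg\max_{j \in \{0, 1, \ldots, 31\}} \; \mathbf{p}^{T}\mathbf{x}_{j}.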
The corresponding vector of three quantized pitch predictor taps is denoted as ppt. Once the quantized pitch predictor taps have been determined, block 28 calculates the open-loop pitch prediction residual signal e(n) as follows.
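Under the same three-tap lag convention assumed above, the open-loop pitch prediction residual would be

e(n) = dq(n) - \sum_{i=1}^{3} b_{i}\, dq(n - pp + 2 - i), \qquad n = 1, 2, \ldots, SFRSZ,

where {b_{1}, b_{2}, b_{3}} are the quantized pitch predictor taps in ppt.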
Again, the same dq(n) buffer and time index convention of block 25 is used here. That is, the current sub-frame of dq(n) for n=1, 2, . . . , SFRSZ is actually the unquantized open-loop short-term prediction residual signal d(n). This completes the description of block 20, long-term predictive analysis and quantization.
VII. Quantization of Residual Gain
The open-loop pitch prediction residual signal e(n) is used to calculate the residual gain. This is done inside the prediction residual quantizer block 30.
For the wideband codec, on the other hand, two log-gains are calculated for each sub-frame. The first log-gain is calculated as
Lacking a better name, we will use the term “gain frame” to refer to the time interval over which a residual gain is calculated. Thus, the gain frame size is SFRSZ for the narrowband codec and SFRSZ/2 for the wideband codec. The long-term mean value of the log-gain is calculated off-line and stored in block 302. The adder 303 subtracts this long-term mean value from the output log-gain of block 301 to get the mean-removed version of the log-gain. The MA log-gain predictor block 304 is an FIR filter, with order 8 for the narrowband codec and order 16 for the wideband codec. In either case, the time span covered by the log-gain predictor is 40 ms. The coefficients of this log-gain predictor are pre-determined off-line and held fixed. The adder 305 subtracts the output of block 304, which is the predicted log-gain, from the mean-removed log-gain. The scalar quantizer block 306 quantizes the resulting log-gain prediction residual. The narrowband codec uses a 4-bit quantizer, while the wideband codec uses a 5-bit quantizer here. The gain quantizer codebook index GI is passed to the bit multiplexer block 95. Block 309 then converts the quantized log-gain to the quantized residual gain in the linear domain as follows:
Block 310 scales the residual quantizer codebook. That is, it multiplies all entries in the residual quantizer codebook by g. The resulting scaled codebook is then used by block 311 to perform the residual quantizer codebook search. The prediction residual quantizer in the current invention of TSNFC can be either a scalar quantizer or a vector quantizer. At a given bit-rate, using a scalar quantizer gives a lower codec complexity at the expense of lower output quality. Conversely, using a vector quantizer improves the output quality but gives a higher codec complexity. A scalar quantizer is a suitable choice for applications that demand very low codec complexity but can tolerate higher bit rates. For other applications that do not require very low codec complexity, a vector quantizer is more suitable, since it gives better coding efficiency than a scalar quantizer. In the next two sections, we describe the prediction residual quantizer codebook search procedures in the current invention, first for the case of scalar quantization in SQ-TSNFC, and then for the case of vector quantization in VQ-TSNFC. The codebook search procedures are very different for the two cases, so they need to be described separately.
VIII. Scalar Quantization of Linear Prediction Residual Signal
If the residual quantizer is a scalar quantizer, the encoding proceeds sample-by-sample, as described below.
The adder 55 adds stnf(n) to the short-term prediction residual d(n) to get v(n).
Next, using its filter memory, the long-term predictor block 60 calculates the pitch-predicted value ppv(n). The adders 70 and 75 together calculate the quantizer input signal u(n).
Next, block 311 quantizes u(n) using the scaled codebook to produce the quantized value uq(n). The adder 80 calculates the quantization error q(n) of the quantizer block 30, i.e. the difference between the quantizer output uq(n) and the quantizer input u(n).
This q(n) sample is passed to block 65 to update the filter memory of the long-term noise feedback filter. The adder 85 adds ppv(n) to uq(n) to get dq(n), the quantized version of the current sample of the short-term prediction residual.
This dq(n) sample is passed to block 60 to update the filter memory of the long-term predictor. The adder 90 calculates the current sample of qs(n) as
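The qs(n) computed by adder 90 is presumably the overall quantization error dq(n) − v(n) that is fed back through the short-term noise feedback filter. Putting the per-sample operations of this section together, the following is a minimal sketch of the scalar-quantization encoding loop. Signal names follow the text; the noise feedback filters are reduced to single taps (coefficients alpha and lambda) purely to keep the sketch self-contained, the signs of the feedback terms and the form of qs(n) are assumptions, and quantize() is a hypothetical stand-in for the codebook search of block 311.

#include <math.h>
#include <stddef.h>

/* Nearest-level search in the gain-scaled scalar codebook (stand-in for block 311). */
static double quantize(double u, const double *cb, size_t ncb) {
    double best = cb[0];
    for (size_t i = 1; i < ncb; i++)
        if (fabs(cb[i] - u) < fabs(best - u)) best = cb[i];
    return best;
}

/* One sub-frame of the sample-by-sample SQ-TSNFC loop.  The caller passes dq[]
 * and q_hist[] with at least pp samples of history before index 0, so that
 * dq[n - pp] and q_hist[n - pp] are valid for n = 0..sfrsz-1. */
void sq_tsnfc_subframe(int sfrsz, const double *d, double *dq, double *q_hist,
                       const double *cb, size_t ncb, int pp, double ppt1,
                       double alpha, double lambda, double *qs_mem)
{
    for (int n = 0; n < sfrsz; n++) {
        double stnf = alpha * (*qs_mem);        /* block 50: short-term noise feedback (single-tap stand-in) */
        double v    = d[n] + stnf;              /* adder 55 */
        double ppv  = ppt1 * dq[n - pp];        /* block 60: pitch-predicted value (single-tap form) */
        double ltnf = lambda * q_hist[n - pp];  /* block 65: long-term noise feedback */
        double u    = v - ppv + ltnf;           /* adders 70 and 75 (signs assumed) */
        double uq   = quantize(u, cb, ncb);     /* block 30 / block 311 */
        double q    = uq - u;                   /* adder 80: quantization error */
        dq[n]       = ppv + uq;                 /* adder 85 */
        double qs   = dq[n] - v;                /* adder 90 (form assumed) */
        q_hist[n]   = q;                        /* update long-term NF filter memory */
        *qs_mem     = qs;                       /* update short-term NF filter memory */
    }
}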
We found that for speech signals at least, if the prediction residual scalar quantizer operates at a bit rate of 2 bits/sample or higher, the corresponding SQ-TSNFC codec output has essentially transparent quality.
IX. Vector Quantization of Linear Prediction Residual Signal
If the residual quantizer is a vector quantizer, the sample-by-sample procedure above cannot be applied directly: the quantizer input vector depends on the noise feedback signals, which in turn depend on the quantization error of the very vector being quantized, and that error is not known until the quantization has been performed. The present invention avoids this chicken-and-egg problem by modifying the VQ codebook search procedure, as described below.
A. General VQ Search
VQ codebook 1302 includes N VQ codevectors. VQ codebook 1302 provides each of the N VQ codevectors stored in the codebook to gain scaling unit 1304. Gain scaling unit 1304 scales the codevectors, and provides scaled codevectors to an output of scaled VQ codebook 5028 a. Symbol g(n) represents the quantized residual gain in the linear domain, as calculated in previous sections. The combination of VQ codebook 1302 and gain scaling unit 1304 (also labeled g(n)) is equivalent to a scaled VQ codebook. System 1300 further includes predictor logic unit 1306 (also referred to as a predictor 1306), an input vector deriver 1308, an error energy calculator 1310, a preferred codevector selector 1312, and a predictor/filter restorer 1314. Predictor 1306 includes combining and predicting logic. Input vector deriver 1308 includes combining, filtering, and predicting logic, corresponding to such logic used in codecs 3000, 4000, 5000, 6000, and 7000, for example, as will be further described below. The logic used in predictor 1306, input vector deriver 1308, and quantizer 1508 a operates sample-by-sample in the same manner as described above in connection with codecs 3000–7000. Nevertheless, the VQ systems and methods are described below in terms of performing operations on “vectors” instead of individual samples. A “vector” as used herein refers to a group of samples. It is to be understood that the VQ systems and methods described below process each of the samples in a vector (that is, in a group of samples) one sample at a time. For example, a filter filters an input vector in the following manner: a first sample of the input vector is applied to an input of the filter; the filter processes the first sample of the vector to produce a first sample of an output vector corresponding to the first sample of the input vector; and the process repeats for each of the next sequential samples of the input vector until there are no input vector samples left, whereby the filter sequentially produces each of the next samples of the output vector. The last sample of the output vector to be produced or output by the filter can remain at the filter output such that it is available for processing immediately or at some later sample time (for example, to be combined, or otherwise processed, with a sample associated with another vector). A predictor predicts an input vector in much the same way as the filter processes (that is, filters) the input vector. Therefore, the term “vector” is used herein as a convenience to describe a group of samples to be sequentially processed in accordance with the present invention.
A brief overview of a method of operation of system 1300 is now provided. In the modified VQ codebook search procedure of the current invention implemented using system 1300, we provide one VQ codevector at a time from scaled VQ codebook 5028 a, perform all predicting, combining, and filtering functions of predictor 1306 and input vector deriving logic 1308 to calculate the corresponding VQ input vector of the signal u(n), and then calculate the energy of the quantization error vector of the signal q(n) using error energy calculator 1310. This process is repeated N times for the N codevectors in scaled VQ codebook 5028 a, with the filter memories in input vector deriving logic 1308 reset to their initial values before we repeat the process for each new codevector. After all the N codevectors have been tried, we have calculated N corresponding quantization error energy values of q(n). The VQ codevector that minimizes the energy of the quantization error vector is the winning codevector and is used as the VQ output vector. The address of this winning codevector is the output VQ codebook index CI that is passed to the bit multiplexer block 95. At a next step 1354, input vector deriver 1308 derives N VQ input vectors u(n) each based on the residual signal d(n) and a corresponding one of the N VQ codevectors stored in codebook 1302. Each of the VQ input vectors u(n) corresponds to one of N VQ error vectors q(n). Input vector deriver 1308 and step 1354 are described in further detail below. At a next step 1358, error energy calculator 1310 derives N VQ error energy values e(n) each corresponding to one of the N VQ error vectors q(n) associated with the N VQ input vectors u(n) of step 1354. Error energy calculator 1310 performs a squaring operation, for example, on each of the error vectors q(n) to derive the energy values corresponding to the error vectors. At a next step 1360, preferred codevector selector 1312 selects a preferred one of the N VQ codevectors as a VQ output vector uq(n) corresponding to the residual signal d(n), based on the N VQ error energy values e(n) derived by error energy calculator 1310. Predictor/filter restorer 1314 initializes and restores (that is, resets) the filter states and predictor states of various filters and predictors included in system 1300, during method 1350, as will be further described below.
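As a structural sketch only (not the patent's API), the exhaustive search just described reduces to the following skeleton, in which a caller-supplied callback runs one codevector through the noise feedback filter chain starting from a saved initial state and returns the energy of the resulting quantization error vector q(n); the callback, the opaque state type, and all names here are hypothetical.

/* Exhaustive search over the N scaled VQ codevectors.  The filter and
 * predictor memories must be restored to the same saved initial state before
 * every trial, as described above; here that is the callback's responsibility. */
typedef struct nfc_state nfc_state;   /* opaque filter/predictor memories */

int vq_search_exhaustive(int N,
                         double (*trial_error_energy)(const nfc_state *initial_state,
                                                      int codevector_index),
                         const nfc_state *initial_state)
{
    int best_index = 0;
    double best_energy = trial_error_energy(initial_state, 0);
    for (int n = 1; n < N; n++) {
        double e = trial_error_energy(initial_state, n);
        if (e < best_energy) { best_energy = e; best_index = n; }
    }
    return best_index;   /* output VQ codebook index CI */
}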
The method of operation of codec structure 1362 can be considered to encompass a single method. Alternatively, the method of operation of codec structure 1362 can be considered to include a first method associated with the inner NF loop of codec structure 1362 (mentioned above in connection with At a next step 1366, filter 5038 separately filters at least a portion of each of the N VQ error vectors q(n) to produce N noise feedback vectors fq(n) each corresponding to one of the N VQ codevectors. Filter 5038 can perform either long-term or short-term filtering. Filter 5038 filters each of the error vectors q(n) on a sample-by-sample basis (that is, the samples of each error vector q(n) are filtered sequentially, sample-by-sample). Filter 5038 filters each of the N VQ error vectors q(n) based on an initial filter state of the filter corresponding to a previous preferred codevector (the previous preferred codevector corresponds to a previous residual signal). Therefore, restorer 1314 restores filter 5038 to the initial filter state before the filter filters each of the N VQ codevectors. As would be apparent to one of ordinary skill in the speech coding art, the initial filter state mentioned above is typically established as a result of processing many, that is, one or more, previous preferred codevectors. At a next step 1368, combining logic (5006, 5024, and 5026), separately combines each of the N noise feedback vectors fq(n) with the residual signal d(n) to produce the N VQ input vectors u(n). At a next step 1374, predictor 5034 predicts each of the N predictive quantizer input vectors v(n) to produce N predictive, predictive quantizer input vectors pv(n). Predictor 5034 predicts input vectors v(n) based on an initial predictor state of the predictor corresponding to (that is, established by) the previous preferred codevector. Therefore, restorer 1314 restores predictor 5034 to the initial predictor state before predictor 5034 predicts each of the N predictive quantizer input vectors v(n) in step 1374. At a next step 1376, combining logic (e.g., combiners 5024, and 5026) separately combines each of the N predictive quantizer input vectors v(n) with a corresponding one of the N predicted, predictive quantizer input vectors pv(n) to produce the N VQ input vectors u(n). At a next step 1378, a combiner (e.g. combiner 5030) combines each of the N predicted, predictive quantizer input vectors pv(n) with corresponding ones of the N VQ codevectors, to produce N predictive quantizer output vectors vq(n) corresponding to N VQ error vectors qs(n). At a next step 1380, filter 5016 separately filters each of the N VQ error vectors qs(n) to produce the N noise feedback vectors fqs(n). Filter 5016 can perform either long-term or short-term filtering. Filter 5016 filters each of the N VQ error vectors qs(n) on a sample-by-sample basis, and based on an initial filter state of the filter corresponding to at least the previous preferred codevector (see predicting step 1374 above). Therefore, restorer 1314 restores filter 5016 to the initial filter state before filter 5016 filters each of the N VQ codevectors in step 1380. Alternative embodiments of VQ search systems and corresponding methods, including embodiments based on codecs 3000, 4000, and 6000, for example, would be apparent to one of ordinary skill in designing speech codecs, based on the exemplary VQ search system and methods described above. 
The fundamental ideas behind the modified VQ codebook search methods described above are somewhat similar to the ideas in the VQ codebook search method of CELP codecs. However, the feedback filter structures of input vector deriver 1308 (for example, input vector deriver 1308 a, and so on) are completely different from the structure of a CELP codec, and it is not readily obvious to those skilled in the art that such a VQ codebook search method can be used to improve the performance of a conventional NFC codec or a two-stage NFC codec. Our simulation results show that this vector quantizer approach indeed works, gives better codec performance than a scalar quantizer at the same bit rate, and also achieves desirable short-term and long-term noise spectral shaping. However, according to another novel feature of the current invention described below, this VQ codebook search method can be further improved to achieve significantly lower complexity while maintaining mathematical equivalence.
B. Fast VQ Search
A computationally more efficient codebook search method according to the present invention is based on the observation that the response of the feedback filter structure can be decomposed into a ZERO-INPUT response and a ZERO-STATE response.
At a next step 1434, ZERO-INPUT response filter structure 1402 derives ZERO-INPUT response error vector qzi(n) common to each of the N VQ codevectors stored in VQ codebook 1302. At a next step 1436, ZERO-STATE response filter structure 1404 derives N ZERO-STATE response error vectors qzs(n) each based on a corresponding one of the N VQ codevectors stored in VQ codebook 1302. At a next step 1438, error energy calculator 1410 derives N VQ error energy values each based on the ZERO-INPUT response error vector qzi(n) and a corresponding one of the N ZERO-STATE response error vectors qzs(n). Preferred codevector selector 1412 selects the preferred one of the N VQ codevectors based on the N VQ error energy values derived by error energy calculator 1410. The qzi(n) vector derived at step 1434 captures the effects due to (1) initial filter memories in ZERO-INPUT response filter structure 1402, and (2) the signal vector of d(n). Since the initial filter memories and the signal d(n) are both independent of the particular VQ codevector tried, there is only one ZERO-INPUT response vector, and it only needs to be calculated once for each input speech vector. During the calculation of the ZERO-STATE response vector qzs(n) at step 1436, the initial filter memories and d(n) are set to zero. For each VQ codebook vector tried, there is a corresponding ZERO-STATE response vector qzs(n). Therefore, for a codebook of N codevectors, we need to calculate N ZERO-STATE response vectors qzs(n) for each input speech vector, in one embodiment of the present invention. In a more computationally efficient embodiment, we calculate a set of N ZERO-STATE response vectors qzs(n) for a group of input speech vectors, instead of for each of the input speech vectors, as is further described below.
The method of operation of codec structure 1402 a can be considered to encompass a single method. Alternatively, the method of operation of codec structure 1402 a can be considered to include a first method associated with the inner NF loop of codec structure 1402 a, and a second method associated with the outer NF loop of the codec structure. The first and second methods associated respectively with the inner and outer NF loops of codec structure 1402 a operate concurrently, and together, with one another to form the single method. The aforementioned first and second methods (that is, the inner and outer NF loop methods, respectively) are now described in sequence below. In a first step 1452, an intermediate vector vzi(n) is derived based on the residual signal d(n). In a next step 1454, the intermediate vector vzi(n) is predicted (using predictor 5034, for example) to produce a predicted intermediate vector vqzi(n). Intermediate vector vzi(n) is predicted based on an initial predictor state (of predictor 5034, for example) corresponding to a previous preferred codevector. As would be apparent to one of ordinary skill in the speech coding art, the initial filter state mentioned above is typically established as a result of a history of many, that is, one or more, previous preferred codevectors. In a next step 1456, the intermediate vector vzi(n) and the predicted intermediate vector vqzi(n) are combined with a noise feedback vector fqzi(n) (using combiners 5026 and 5024, for example) to produce the ZERO-INPUT response error vector qzi(n). In a next step 1458, the ZERO-INPUT response error vector qzi(n) is filtered (using filter 5038, for example) to produce the noise feedback vector fqzi(n). Error vector qzi(n) can be either long-term or short-term filtered. Also, error vector qzi(n) is filtered based on an initial filter state (of filter 5038, for example) corresponding to the previous preferred codevector (see predicting step 1454 above). In a first step 1472, the residual signal d(n) is combined with a noise feedback signal fqszi(n) (using combiner 5006, for example) to produce an intermediate vector vzi(n). At a next step 1474, the intermediate vector vzi(n) is predicted to produce a predicted intermediate vector vqzi(n). At a next step 1476, the intermediate vector vzi(n) is combined with the predicted intermediate vector vqzi(n) (using combiner 5014, for example) to produce an error vector qszi(n). At a next step 1478, the error vector qszi(n) is filtered (using filter 5016, for example) to produce the noise feedback vector fqszi(n). Error vector qszi(n) can be either long-term or short-term filtered. Also, error vector qszi(n) is filtered based on an initial filter state (of filter 5038, for example) corresponding to the previous preferred codevector (see predicting step 1454 above).
If we choose the vector dimension to be smaller than the minimum pitch period minus one, or K<MINPP−1, which is true in our preferred embodiment, then with zero initial memory the two long-term filters 5038 and 5034 contribute nothing to the ZERO-STATE response, because every sample they would draw upon precedes the current vector and is therefore zero. In a next step 1524, each ZERO-STATE input vector vzs(n) produced in filtering step 1522 is separately combined with the corresponding one of the N VQ codevectors (using combiner 5036, for example), to produce the N ZERO-STATE response error vectors qzs(n).
If we start with a scaled codebook (use g(n) to scale the codebook) as mentioned in the description of block 30 in an earlier section, and pass each scaled codevector through the filter H(z) with zero initial memory, then, subtracting the corresponding output vector from the ZERO-INPUT response vector of qzi(n) gives us the quantization error vector of q(n) for that particular VQ codevector. At a next step 1624, each of the N ZERO-STATE response error vectors qzs(n) is separately filtered to produce the N filtered, ZERO-STATE response error vectors vzs(n). Each of the error vectors qzs(n) is filtered based on an initially zeroed filter state. Therefore, the filter state is zeroed to produce the initially zeroed filter state before each error vector qzs(n) is filtered. The following enumerated steps represent an example of processing one VQ codevector CV(n), including four samples CV(n)_{0–3}, sample-by-sample according to steps 1622 and 1624 using filter structure 1404 b, to produce a corresponding ZERO-STATE error vector qzs(n) including four samples qzs(n)_{0–3}: 1. combiner 5030 combines first codevector sample CV(n)_{0} of codevector CV(n) with an initial zero state feedback sample vzs(n)_{i} from filter 5034, to produce first error sample qzs(n)_{0} of error vector qzs(n) (which corresponds to first codevector sample CV(n)_{0}) (part of step 1622); 2. filter 5034 filters first error sample qzs(n)_{0} to produce a first feedback sample vzs(n)_{0} of a feedback vector vzs(n) (part of step 1624); 3. combiner 5030 combines feedback sample vzs(n)_{0} with second codevector sample CV(n)_{1}, to produce second error sample qzs(n)_{1} (part of step 1622); 4. filter 5034 filters second error sample qzs(n)_{1} to produce a second feedback sample vzs(n)_{1} of feedback vector vzs(n) (part of step 1624); 5. combiner 5030 combines feedback sample vzs(n)_{1} with third codevector sample CV(n)_{2}, to produce third error sample qzs(n)_{2} (part of step 1622); 6. filter 5034 filters third error sample qzs(n)_{2} to produce a third feedback sample vzs(n)_{2} (part of step 1624); and 7. combiner 5030 combines feedback sample vzs(n)_{2} with fourth (and last) codevector sample CV(n)_{3}, to produce fourth error sample qzs(n)_{3}, whereby the four samples of vector qzs(n) are produced based on the four samples of VQ codevector CV(n) (part of step 1622). Steps 1–7 described above are repeated for each of the N VQ codevectors in accordance with method 1620, to produce the N error vectors qzs(n). Again, the ideas behind this second codebook search approach are somewhat similar to the ideas in the codebook search of CELP codecs. However, the actual computational procedures and the codec structure used are quite different, and it is not readily obvious to those skilled in the art how the ideas can be used correctly in the framework of two-stage noise feedback coding. Using a sign-shape structured VQ codebook can further reduce the codebook search complexity. Rather than using a B-bit codebook with 2^{B} independent codevectors, we can use a sign bit plus a (B−1)-bit shape codebook with 2^{B−1} independent codevectors. For each codevector in the (B−1)-bit shape codebook, the negated version of it, or its mirror image with respect to the origin, is also a legitimate codevector in the equivalent B-bit sign-shape structured codebook.
Compared with the B-bit codebook with 2^{B} independent codevectors, the overall bit rate is the same, and the codec performance should be similar. Yet, with half the number of codevectors, this arrangement cuts the number of filtering operations through the filter H(z)=1/[1−Fs(z)] by half, since we can simply negate a computed ZERO-STATE response vector corresponding to a shape codevector in order to get the ZERO-STATE response vector corresponding to the mirror image of that shape codevector. Thus, further complexity reduction is achieved. In the preferred embodiment of the 16 kb/s narrowband codec, we use 1 sign bit with a 4-bit shape codebook. With a vector dimension of 4, this gives a residual encoding bit rate of (1+4)/4 = 1.25 bits/sample, or 50 bits/frame (1 frame = 40 samples = 5 ms). The side information encoding rates are 14 bits/frame for LSPI, 7 bits/frame for PPI, 5 bits/frame for PPTI, and 4 bits/frame for GI. That gives a total of 30 bits/frame for all side information. Thus, for the entire codec, the encoding rate is 80 bits/frame, or 16 kb/s. Such a 16 kb/s codec with a 5 ms frame size and no look-ahead gives output speech quality comparable to that of G.728 and G.729E. For the 32 kb/s wideband codec, we use 1 sign bit with a 5-bit shape codebook, again with a vector dimension of 4. This gives a residual encoding rate of (1+5)/4 = 1.5 bits/sample = 120 bits/frame (1 frame = 80 samples = 5 ms). The side information bit rates are 17 bits/frame for LSPI, 8 bits/frame for PPI, 5 bits/frame for PPTI, and 10 bits/frame for GI, giving a total of 40 bits/frame for all side information. Thus, the overall bit rate is 160 bits/frame, or 32 kb/s. Such a 32 kb/s codec with a 5 ms frame size and no look-ahead gives essentially transparent quality for speech signals.
The speech signal used in the vector quantization embodiments described above can comprise a sequence of speech vectors each including a plurality of speech samples. As described in detail above, parameters such as the residual gain and the filter coefficients are derived from the speech signal and updated only periodically, once per group of speech vectors. The present invention takes advantage of such periodic updating of the aforementioned parameters to further reduce the computational complexity associated with calculating the N ZERO-STATE response error vectors qzs(n), described above. At a next step 1704, a gain value is derived based on the speech signal once every M speech vectors, where M is an integer greater than 1. At a next step 1706, filter parameters are derived/updated based on the speech signal once every T speech vectors, where T is an integer greater than one, and where T may, but does not necessarily, equal M. At a next step 1708, the N ZERO-STATE response error vectors qzs(n) are derived once every T and/or M speech vectors (i.e., when the filter parameters and/or gain values are updated, respectively), whereby a same set of N ZERO-STATE response error vectors qzs(n) is used in selecting a plurality of preferred codevectors corresponding to a plurality of speech vectors. Alternative embodiments of VQ search systems and corresponding methods, including embodiments based on codecs 3000, 4000, and 6000, for example, would be apparent to one of ordinary skill in designing speech codecs, based on the exemplary VQ search system and methods described above.
C. Further Fast VQ Search Embodiments
The present invention provides first and second additional efficient VQ search methods, which can be used independently or jointly. The first method (described below in Section IX.C.1.) provides an efficient VQ search method for a general VQ codebook, that is, no particular structure of the VQ codebook is assumed. The second method (described below in Section IX.C.2.) provides an efficient method for the excitation quantization in the case where a signed VQ codebook is used for the excitation. The first method reduces the complexity of the excitation VQ in NFC by reorganizing the calculation of the energy of the error vector for each candidate excitation vector, also referred to as a codebook vector. The energy of the error vector is the cost function that is minimized during the search of the excitation codebook. The reorganization is obtained by: 1. Expanding the Mean Squared Error (MSE) term of the error vector; 2. Excluding the energy term that is invariant to the candidate excitation vector; and 3. Pre-computing the energy terms of the ZERO-STATE response of the candidate excitation vectors that are invariant to the sub-vectors of the subframe. The second method represents an efficient way of searching the excitation codebook in the case where a signed codebook is used. The second method is obtained by reorganizing the calculation of the energy of the error vector in such a way that only half of the total number of codevectors is searched. The combination of the first and second methods also provides an efficient search. However, there may be circumstances where the first and second methods are used separately. For example, if a signed codebook is not used, then the second invention does not apply, but the first invention may be applicable. For mathematical convenience, the nomenclature used in Sections IX.C.1. and 2. below to refer to certain quantities differs from the nomenclature used in Section IX.B. above to refer to the same or similar quantities.
The following key serves as a guide to map the nomenclature used in Section IX.B. above to that used in the following sections. In Section IX.B. above, quantization energy e(n) refers to a quantization energy derivable from an error vector q(n), where n is a time/sample position descriptor. Quantization energy e(n) and error vector q(n) are both associated with a VQ codevector in a VQ codebook. Similarly, in Sections IX.C.1. and 2. below, quantization energy E_{n} refers to a quantization energy derivable from an error vector q_{n}(k), where k refers to the k^{th} sample of the error vector, and where k = 1, . . . , K (that is, K is the total number of samples in the error vector). K is referred to as the error vector dimension. Quantization energy E_{n} and error vector q_{n}(k) are each associated with an n^{th} VQ codevector of N VQ codevectors (where n = 1, . . . , N). In Section IX.B. above, the ZERO-INPUT response error vector is denoted qzi(n), where n is the time index. In Sections IX.C.1. and 2. below, the ZERO-INPUT response error vector is denoted q_{zi}(k), where k refers to the k^{th} sample of the ZERO-INPUT response error vector. In Section IX.B. above, the ZERO-STATE response error vector is denoted qzs(n), where n is the time index. In Sections IX.C.1. and 2. below, the ZERO-STATE response error vector is denoted q_{zs,n}(k), where n denotes the n^{th} VQ codevector of the N VQ codevectors, and k refers to the k^{th} sample of the ZERO-STATE response error vector. Also, Section IX.B. above refers to “frames,” for example 5 ms frames, each corresponding to a plurality of speech vectors. Also, multiple bits of side information and VQ codevector indices are transmitted by the coder in each of the frames. In the Sections below, the term “subframe” is taken to be synonymous with “frame” as used in the Sections above. Correspondingly, the term “sub-vectors” refers to vectors within a subframe.
The energy, E_{n}, of the error vector, q_{n}(k), of the n^{th} codevector is given by the sum of its squared samples. As discussed above in Section IX.B., the error vector, q_{n}(k), of the n^{th} codevector can be calculated as the superposition of the ZERO-INPUT response, q_{zi}(k), and the ZERO-STATE response, q_{zs,n}(k), of the n^{th} codevector, i.e.
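Consistent with these definitions, the two relations presumably read

E_{n} = \sum_{k=1}^{K} q_{n}^{2}(k), \qquad q_{n}(k) = q_{zi}(k) + q_{zs,n}(k), \quad k = 1, \ldots, K.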
Utilizing this expression, the energy of the error vector, E_{n}, is expressed as
For an NFC system where the dimension of the excitation VQ, K, is less than the master vector size, K_{M }(where K_{M }can be thought of as a frame size or dimension) there will be multiple excitation vectors to quantize per master vector (or frame). The master vector size, K_{M}, is typically the maximum number of samples for which other parameters of the NFC system remain constant. If the relation between the dimension of the VQ, K, and master vector size, K_{M}, is defined as
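The relation is presumably simply

L = \frac{K_{M}}{K},

so that there are L excitation VQs to perform per master vector (frame).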
In the present first invention the energy of the error vector of a given codevector is expanded into
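Consistent with the description in the following sentence, the expansion presumably reads

E_{n} = \sum_{k=1}^{K} q_{zi}^{2}(k) \;+\; \sum_{k=1}^{K} q_{zs,n}^{2}(k) \;+\; 2 \sum_{k=1}^{K} q_{zi}(k)\, q_{zs,n}(k),

where the first term is the energy of the ZERO-INPUT response, E_{q_{zi}}, the second is the energy of the ZERO-STATE response of the n-th codevector, E_{q_{zs},n}, and the third is twice their cross-correlation.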
In Eq. 7 the energy of the error vector is expanded into the energy of the ZERO-INPUT response, Eq. 8, the energy of the ZERO-STATE response, Eq. 9, and two times the cross-correlation between the ZERO-INPUT response and the ZERO-STATE response, Eq. 10. The minimization of the energy of the error vector as a function of the codevector is independent of the energy of the ZERO-INPUT response since the ZERO-INPUT response is independent of the codevector. Consequently, the energy of the ZERO-INPUT response can be omitted when searching the excitation codebook. Furthermore, since the N energies of the ZERO-STATE responses of the codevectors are unchanged for the L VQs, the N energies need only be calculated once. Consequently, the VQ operation can be expressed as:
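The resulting selection rule (Eq. 11) is then presumably of the form

n^{*} = \arg\min_{n \in \{1, \ldots, N\}} \Big\{ E_{q_{zs},n} + 2 \sum_{k=1}^{K} q_{zi}(k)\, q_{zs,n}(k) \Big\},

with the sign in front of the cross-correlation term depending on the sign conventions of the particular NFC system, as noted at the end of this section.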
In Eq. 11 only the cross-correlation term would be calculated inside the search loop. The N ZERO-STATE response energies, E_{q_{zs},n}, n=1, . . . , N, would be pre-computed prior to the L VQs as explained above. Using Eq. 9 through Eq. 11 to perform the L VQs would require
For narrowband and wideband NFC systems, generally, a significant reduction in the number of floating point operations is obtained with the invention. However, it should be noted that the actual reduction depends on the parameters of the NFC system. In particular, it is obvious that if the VQ dimension is equal to the dimension of the master vector, i.e. K=K_{M} and hence L=1, there is only one VQ per master vector, and effectively the reuse of the energies of the ZERO-STATE responses is not an issue.
A second invention devises a way to reduce complexity in the case where a signed codebook is used for the excitation VQ. In a signed codebook the code vectors are related in pairs, where the two code vectors in a pair differ only by the sign of the vector elements, i.e. a first and a second code vector in a pair, c_{1} and c_{2}, respectively, are related by
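That is, the two members of a pair are presumably related by simple negation:

c_{2} = -c_{1}.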
It is only necessary to store the N/2 linearly independent codevectors, as the remaining N/2 codevectors are easily generated by simple negation. Furthermore, the ZERO-STATE responses of the remaining N/2 codevectors are given by a simple negation of the ZERO-STATE responses of the N/2 linearly independent codevectors. Consequently, the complexity of generating the N ZERO-STATE responses is reduced with the use of a signed codebook. The present second invention further reduces the complexity of searching a signed codebook by manipulating the minimization operation.
By calculating the energy of the error vectors according to the straightforward method, see Eq. 2 and Eq. 4, the search is given by
Similar to the first invention, the energy of the error vector is expanded, except that the property of the signed codebook is further incorporated into the expansion.
From Eq. 17 it is evident that if a pair of codevectors, i.e. s=±1, are considered jointly, the two minimization terms, E_{n,s=+1} and E_{n,s=−1}, are given by the two expressions sketched below. This method would also apply to a signed sub-codebook within a codebook, i.e. a subset of the code vectors of the codebook makes up a signed codebook. It is then possible to apply the invention to the signed sub-codebook.
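Under the expansion sketched above, the two minimization terms for a codevector pair would be

E_{n,s=+1} = E_{q_{zs},n} + 2 R_{n}, \qquad E_{n,s=-1} = E_{q_{zs},n} - 2 R_{n}, \qquad R_{n} = \sum_{k=1}^{K} q_{zi}(k)\, q_{zs,n}(k),

so that the better of the two is always E_{q_{zs},n} - 2\,|R_{n}|, obtained by choosing the sign opposite to that of the cross-correlation R_{n}. The joint search over all N signed codevectors (Eq. 20) therefore only needs to visit the N/2 shape codevectors. The factor of 2 and the sign convention are assumptions and, as noted at the end of this section, may be defined oppositely in a given NFC system.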
If the number of VQs per master vector, L, is greater than one, and a signed codebook (or sub-codebook) is used, it is advantageous to combine the two methods above. In this case the energies of the ZERO-STATE responses, E_{q_{zs},n}, n=1, . . . , N/2, in Eq. 20 remain unchanged for the L VQs and are pre-calculated according to the first method. The number of floating point operations needed to calculate the energy of the error vector for all of the combined N codevectors for all of the L VQs is
The methods of the present invention, described in Sections IX.C.1. and 2., are used in an NFC system to quantize a prediction residual signal. More generally, the methods are used in an NFC system to quantize a residual signal. That is, the residual signal is not limited to a prediction residual signal, and thus, the residual signal may include a signal other than a prediction residual signal. The prediction residual signal (and more generally, the residual signal) includes a series of successive residual signal vectors. Each residual signal vector needs to be quantized. Therefore, the methods of the present invention search for and select a preferred one of a plurality of candidate codevectors corresponding to each residual vector. Each preferred codevector represents the excitation VQ of the corresponding residual signal vector. In one arrangement, method 1800 uses an unsigned or general VQ codebook including N unsigned candidate codevectors (see Section IX.C.1.b. above). In another arrangement, method 1800 uses a signed VQ codebook including N signed candidate codevectors (see Section IX.C.2.b above). For example, the signed VQ codebook represents a product of: a shape code, C_{shape}={c_{1}, c_{2}, c_{3}, . . . c_{N/2}}, including N/2 shape codevectors c_{n}, and a sign code, C_{sign}={+1, −1}, including a pair of oppositely-signed sign values +1 and −1, such that a positive codevector and a negative codevector (referred to as the signed codevectors) associated with each shape codevector c_{n} each represent a product of the shape codevector and a corresponding one of the sign values. Thus, the N/2 shape codevectors, when combined with the sign code, correspond to N signed codevectors. That is, first and second oppositely signed codevectors are associated with each of the shape codevectors. Method 1800 assumes there are L vectors in the master vector (or frame) and that the ZERO-STATE responses of the N codevectors (which may be signed or unsigned, as mentioned above) are invariant over the L vectors, because gain and/or filter parameters in the NFC system are updated only once every L vectors. At a first step 1805, N ZERO-STATE responses, each corresponding to a respective one of the N codebook vectors, are calculated. The N ZERO-STATE responses may be calculated using the NFC filter structures described above. At a next step 1810, N ZERO-STATE energies, corresponding to the N ZERO-STATE responses of step 1805, are calculated. At a next step 1815, an initial one of the L vectors in the frame to be quantized is identified. Next, a loop including steps 1820, 1825, 1830, 1835 and 1840 is repeated for each of the vectors to be quantized in the frame. Each iteration of the loop produces an excitation VQ corresponding to a successive one of the vectors in the frame, beginning with the initial vector. At first step 1820 of the loop, a ZERO-INPUT response corresponding to the given (that is, identified) vector is calculated. For example, in the first iteration of the loop, a ZERO-INPUT response corresponding to the first vector in the frame is calculated. The ZERO-INPUT response may be calculated using the NFC filter structure described above. At a next step 1825, a best or preferred codevector is selected from among the N codevectors based on minimization terms. The minimization terms are derived based on the N ZERO-STATE energies from step 1810, and cross-correlations between the ZERO-INPUT response from step 1820 and ZERO-STATE responses from step 1805.
In the arrangement of method 1800 using unsigned codevectors, step 1825 is governed by Eq. 11 of Section IX.C.1.b. above. In the arrangement of method 1800 using signed codevectors, step 1825 is governed by Eq. 20 of Section IX.C.2.b. above. Step 1825 is described further below in connection with At a next step 1830, filter memories in the NFC system used to implement method 1800 are updated using the best or preferred codevector selected in step 1825. At a decision step 1835, it is determined whether a last one of the vectors in the frame has been quantized. If yes, then the method is done. On the other hand, if further vectors in the frame remain to be quantized, flow proceeds to a step 1840, and a next one of the vectors to be quantized in the frame is identified. The quantization loop repeats for the next vector, and so on, for each of the L vectors in the frame. At initial step 1910 of the loop, one of the ZERO-STATE responses calculated in step 1805 is retrieved. The retrieved ZERO-STATE response corresponds to the codevector being tested during the current iteration of the search loop. For example, the first time through the loop, the ZERO-STATE response corresponding to the first codevector is retrieved. At a next step 1915, a cross-correlation between the ZERO-STATE response and the ZERO-INPUT response (from step 1820) is calculated. The cross-correlation produces a correlation term (also referred to as a “correlation result”). At a next step 1920, the ZERO-STATE energy, corresponding to the ZERO-STATE response of step 1910, is retrieved. At a next step 1925, a minimization term, corresponding to the codevector being tested in the current iteration of the search loop, is calculated. The minimization term is based on the retrieved ZERO-STATE energy, and a cross-correlation between the ZERO-STATE response of the codevector being tested and the ZERO-INPUT response. The ZERO-STATE energy and the cross-correlation term are combined (for example, the ZERO-STATE energy and cross-correlation term are added as in Eq. 11, and as in Eq. 20 when the cross-correlation term is negative). At next steps 1930 and 1935, the current minimization term (just calculated in step 1925) is compared to the minimization terms resulting from previous iterations through the search loop, to identify a current best minimization term from among all of the minimization terms calculated thus far. The codevector corresponding to this current best minimization term is also identified. At a next step 1940, it is determined whether a last one of the N codevectors has been tested. If yes, then the method is done because the codebook has been searched, and a preferred codevector has been determined. However, if no, at step 1945, then a next one of the N codevectors to be tested is identified, and the search loop is repeated. 
Assuming N iterations of the loop in method 1900 for each vector to be quantized, then method 1900 performs the following steps: deriving N correlation values using the NFC system (step 1915), each of the N correlation values corresponding to a respective one of the N VQ codevectors; combining each of the N correlation values with a corresponding one of N ZERO-STATE energies of the NFC system (step 1925), thereby producing N minimization values each corresponding to a respective one of the N VQ codevectors; and selecting a preferred one of the N VQ codevectors based on the N minimization values (steps 1930 and 1935), whereby the preferred VQ codevector is usable as an excitation quantization corresponding to a prediction residual signal (and more generally, to a residual signal) derived from a speech or audio signal. Since the prediction residual signal (more generally, the residual signal) includes a series of prediction residual vectors (more generally, a series of residual vectors), and method 1900 is repeated for each of the residual vectors in accordance with method 1800, overall the method produces an excitation quantization corresponding to each of the prediction residual vectors (and more generally, to each of the residual vectors). In a first step 2005, a first shape codevector to be tested (for example, codevector c_{1}) in the shape codebook is identified. At a next step 2010, the ZERO-STATE response of the shape codevector is retrieved. At a next step 2015, the energy of the ZERO-STATE response of step 2010 is retrieved. At a next step 2020, a cross-correlation term between the ZERO-STATE response of the shape codevector and the ZERO-INPUT response is calculated. The sign of the cross-correlation term may be a first value (for example, negative) or a second value (for example, positive). At a next step 2025, the sign value of the cross-correlation term is determined. For example, it is determined whether the cross-correlation term is positive. If yes (the cross-correlation term is positive), then at step 2030, a minimization term is calculated as the energy of the ZERO-STATE response minus the cross-correlation term. In block 2030, the phrase “sign is negative” indicates block 2030 corresponds to the negative codevector. Thus, arriving at block 2030 indicates the negative codevector is the preferred one of the negative and positive codevectors corresponding to the current shape codevector (see Eq. 20 of Section IX.C.2.b. above). On the other hand, if the cross-correlation term is negative, then at step 2035, the minimization term is calculated as the energy of the ZERO-STATE response plus the cross-correlation term. In block 2035, the phrase “sign is positive” indicates block 2035 corresponds to the positive codevector. Thus, arriving at block 2035 indicates the positive codevector is the preferred one of the negative and positive codevectors corresponding to the current shape codevector. Next, steps 2040 and 2045 determine the best current minimization term among all of the minimization terms calculated so far, and also, identify the signed codevector associated with the best current minimization term. At a next step 2050, it is determined whether the last codevector in the shape codebook has been tested. If yes, then the search is completed and the preferred shape codevector and its sign have been determined. If no, then at step 2055, the next shape codevector to be tested in the shape codebook is identified. 
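A minimal sketch of this signed-codebook search loop is given below. It assumes the ZERO-STATE responses of the N/2 shape codevectors and their energies have been pre-computed once per frame, as in method 1800; the function and argument names are hypothetical, and the factor of 2 on the correlation and the sign convention follow the expansion sketched earlier and may be defined oppositely in a particular NFC system.

#include <math.h>

/* Search a signed codebook given the ZERO-INPUT response qzi[0..K-1] of the
 * current vector, the pre-computed ZERO-STATE responses zs[n][0..K-1] and
 * their energies e_zs[n] for the N/2 shape codevectors. */
void signed_vq_search(int half_N, int K,
                      const double *const *zs, const double *e_zs,
                      const double *qzi,
                      int *best_shape, int *best_sign)
{
    double best_term = 0.0;
    *best_shape = -1;
    *best_sign  = +1;
    for (int n = 0; n < half_N; n++) {
        double r = 0.0;                         /* cross-correlation <qzi, zs[n]> */
        for (int k = 0; k < K; k++)
            r += qzi[k] * zs[n][k];
        int sign = (r > 0.0) ? -1 : +1;         /* pick the better member of the +/- pair */
        double term = e_zs[n] - 2.0 * fabs(r);  /* joint minimization term */
        if (*best_shape < 0 || term < best_term) {
            best_term = term;
            *best_shape = n;
            *best_sign = sign;
        }
    }
}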
In an alternative arrangement of method 2000, it is not assumed that the ZERO-STATE responses and their corresponding energies have been precalculated. In this alternative arrangement, the ZERO-STATE response and ZERO-STATE energy corresponding to each shape codevector are calculated within each iteration of the search loop, using additional method steps. Assuming N iterations of the loop in method 2000, method 2000 performs the following steps for each vector to be quantized, for each shape codevector:
Example methods 1900 and 2000 each derive a minimization term corresponding to a codevector in each iteration of their respective search loops. In alternative arrangements of Methods 1900 and 2000, all of the minimization terms may be calculated in a single step, followed by a single step search through all of these minimization terms to select the preferred minimization term, and corresponding codevector.
This section provides a summary and comparison of the number of floating point operations required to perform the L VQs in a master vector for the different methods. The comparison assumes that the same techniques are used to obtain the ZERO-INPUT response and ZERO-STATE responses for the different methods, and thus that the complexity associated with these computations is identical for the different methods. Consequently, this complexity is omitted from the estimated number of floating point operations. The different methods are mathematically equivalent, i.e., all are equivalent to an exhaustive search of the codevectors. The comparison is provided in Table 1, which lists the expression for the number of floating point operations as well as the number of floating point operations for the example narrowband and wideband NFC systems. In the table the first and second inventions are labeled “Pre-computation of energies of ZERO-STATE responses” and “signed codebook search”, respectively.
It should be noted that the sign of the cross-correlation term in Eq. 7, 11, 16, 17, 18, 19, and 20 is opposite in some NFC systems due to alternate sign definitions of the signals. It is to be understood that this does not affect the present invention fundamentally, but will simply result in proper sign changes in the equations and methods of the invention.
X. Decoder Operations
The short-term predictive parameter decoder block 120 decodes LSPI to get the quantized version of the vector of LSP inter-frame MA prediction residuals. Then, it performs the same operations as in the right half of the encoder's LSP quantizer structure to reconstruct the quantized LSP coefficients. The prediction residual quantizer decoder block 130 decodes the gain index GI to get the quantized version of the log-gain prediction residual. Then, it performs the same operations as in blocks 304, 307, 308, and 309 of the encoder's gain quantizer to obtain the quantized residual gain. The long-term predictor block 140 and the adder 150 together perform the long-term synthesis filtering to get the quantized version of the short-term prediction residual dq(n) as follows.
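Assuming the same three-tap pitch predictor lag convention as used in the encoder sections above, the long-term synthesis presumably takes the form

dq(n) = uq(n) + \sum_{i=1}^{3} b_{i}\, dq(n - pp + 2 - i),

and the short-term synthesis that follows it is of the analogous form sq(n) = dq(n) + \sum_{i=1}^{M} \tilde{a}_{i}\, sq(n-i), with the exact sign convention depending on how the predictor coefficients are defined.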
The short-term predictor block 160 and the adder 170 then perform the short-term synthesis filtering to get the decoded output speech signal sq(n) as The following description of a general purpose computer system is provided for completeness. The present invention can be implemented in hardware, or as a combination of software and hardware. Consequently, the invention may be implemented in the environment of a computer system or other processing system. An example of such a computer system 2100 is shown in Computer system 2100 also includes a main memory 2108, preferably random access memory (RAM), and may also include a secondary memory 2110. The secondary memory 2110 may include, for example, a hard disk drive 2112 and/or a removable storage drive 2114, representing a floppy disk drive, a magnetic tape drive, an optical disk drive, etc. The removable storage drive 2114 reads from and/or writes to a removable storage unit 2118 in a well known manner. Removable storage unit 2118, represents a floppy disk, magnetic tape, optical disk, etc. which is read by and written to by removable storage drive 2114. As will be appreciated, the removable storage unit 2118 includes a computer usable storage medium having stored therein computer software and/or data. In alternative implementations, secondary memory 2110 may include other similar means for allowing computer programs or other instructions to be loaded into computer system 2100. Such means may include, for example, a removable storage unit 2122 and an interface 2120. Examples of such means may include a program cartridge and cartridge interface (such as that found in video game devices), a removable memory chip (such as an EPROM, or PROM) and associated socket, and other removable storage units 2122 and interfaces 2120 which allow software and data to be transferred from the removable storage unit 2122 to computer system 2100. Computer system 2100 may also include a communications interface 2124. Communications interface 2124 allows software and data to be transferred between computer system 2100 and external devices. Examples of communications interface 2124 may include a modem, a network interface (such as an Ethernet card), a communications port, a PCMCIA slot and card, etc. Software and data transferred via communications interface 2124 are in the form of signals 2128 which may be electronic, electromagnetic, optical or other signals capable of being received by communications interface 2124. These signals 2128 are provided to communications interface 2124 via a communications path 2126. Communications path 2126 carries signals 2128 and may be implemented using wire or cable, fiber optics, a phone line, a cellular phone link, an RF link and other communications channels. In this document, the terms “computer program medium” and “computer usable medium” are used to generally refer to media such as removable storage drive 2114, a hard disk installed in hard disk drive 2112, and signals 2128. These computer program products are means for providing software to computer system 2100. Computer programs (also called computer control logic) are stored in main memory 2108 and/or secondary memory 2110. Computer programs may also be received via communications interface 2124. Such computer programs, when executed, enable the computer system 2100 to implement the present invention as discussed herein. 
In particular, the computer programs, when executed, enable the processor 2104 to implement the processes of the present invention, such as the methods implemented using the various codec structures described above, such as methods 6050, 1350, 1364, 1430, 1450, 1470, 1520, 1620, 1700, 1800, 1900, and 2000, for example. Accordingly, such computer programs represent controllers of the computer system 2100. By way of example, in the embodiments of the invention, the processes performed by the signal processing blocks of codecs/structures 1050, 2050, 3000-7000, 1300, 1362, 1400, 1402 a, 1404 a, and 1404 b can be performed by computer control logic. Where the invention is implemented using software, the software may be stored in a computer program product and loaded into computer system 2100 using removable storage drive 2114, hard drive 2112 or communications interface 2124. In another embodiment, features of the invention are implemented primarily in hardware using, for example, hardware components such as Application Specific Integrated Circuits (ASICs) and gate arrays. Implementation of a hardware state machine so as to perform the functions described herein will also be apparent to persons skilled in the relevant art(s). XII. Conclusion While various embodiments of the present invention have been described above, it should be understood that they have been presented by way of example, and not limitation. It will be apparent to persons skilled in the relevant art that various changes in form and detail can be made therein without departing from the spirit and scope of the invention. The present invention has been described above with the aid of functional building blocks and method steps illustrating the performance of specified functions and relationships thereof. The boundaries of these functional building blocks and method steps have been arbitrarily defined herein for the convenience of the description. Alternate boundaries can be defined so long as the specified functions and relationships thereof are appropriately performed. Any such alternate boundaries are thus within the scope and spirit of the claimed invention. One skilled in the art will recognize that these functional building blocks can be implemented by discrete components, application specific integrated circuits, processors executing appropriate software and the like or any combination thereof. Thus, the breadth and scope of the present invention should not be limited by any of the above-described exemplary embodiments, but should be defined only in accordance with the following claims and their equivalents. Patent Citations