US 20030055630 A1 Abstract A tunable high resolution spectral estimator is disclosed as a method and apparatus for encoding and decoding signals, signal analysis and synthesis, and for performing high resolution spectral estimation. The invention is comprised of an encoder coupled with either or both of a signal synthesizer and a spectral analyzer. The encoder processes a frame of a time-based input signal by passing it through a bank of lower order filters and estimating a plurality of lower order covariances from which a plurality of filter parameters may be determined. Coupled to the encoder, through any appropriate data link or interface including telecommunication links, is one or both of a signal synthesizer and a spectral analyzer. The signal synthesizer includes a decoder for processing the covariances and a parameter transformer for determining filter parameters for an ARMA filter. An excitation signal is processed through the ARMA filter to reproduce, or synthesize, a representation of the input signal. The spectral analyzer also includes a decoder which processes the covariances for input to a spectral plotter to determine the power frequency spectrum of the input signal. The invention may be used in a myriad of applications including voice identification, Doppler-based radar speed estimation, time delay estimation, and others.
Claims(28) 1. A signal encoder for determining a plurality of filter parameters from an input signal for later reproduction of said signal, said encoder comprising a bank of first order filters, each of said filters being tuned to a preselected frequency, and a covariance estimator connected to the output of said filter bank for estimating covariances from which filter parameters may be calculated for a filter to reproduce said signal. 2. The signal encoder of 3. The signal encoder of 4. The signal encoder of 5. The signal encoder of 6. The signal encoder of 7. The signal encoder of 8. The signal encoder of 9. The signal encoder of 10. The signal encoder/signal synthesizer of 11. The signal encoder/signal synthesizer of 12. The signal encoder/signal synthesizer of 13. The signal encoder/signal synthesizer of 14. The signal encoder/signal synthesizer of 15. The signal encoder/signal synthesizer of 16. The signal encoder of 17. The signal encoder/spectral analyzer of 18. A device for verifying the identity of a speaker based on his spoken speech, said device comprising a voice input device for receiving a speaker's voice and processing it for further comparison, a bank of first order filters coupled to said voice input device, each of said filters being tuned to a preselected frequency, a covariance estimator coupled to said filter bank for estimating filter covariances, a decoder coupled to said covariance estimator for producing a plurality of filter parameters, and a comparator for comparing said produced filter parameters with prerecorded speaker input filter parameters and thereby verifying the speaker's identity or not. 19. The device of 20. The device of 21. 
A Doppler-based speed estimator comprising a pulse-Doppler radar for producing an output of Doppler frequencies, a HREE filter coupled to said radar, and a spectral plotter coupled to said HREE filter for determining the power frequency spectrum of said radar output, said power frequency spectrum thereby specifying the speed of any objects sensed by said radar. 22. A device for estimating the delay between any two signals, said device including a sensing device for producing a time based output reflective of any delay desired to be estimated, a Fourier transformer for converting said time based output to a frequency based output, a HREE filter coupled to said transformer, and a spectral plotter coupled to said HREE filter for determining the power frequency spectrum of said time based signal, said power frequency spectrum thereby specifying said delay. 23. A method for analyzing a signal comprising the steps of passing said signal through a bank of lower order filters, each of said filters being tuned to a preselected frequency, and estimating a plurality of covariances from the output of said filter bank, said covariances being sufficient for calculating a plurality of filter parameters for a HREE filter, said HREE filter thereby being capable of reproducing said signal. 24. The method of 25. The method of 26. A method of verifying the identity of a speaker based on his spoken speech, said method comprising the steps of receiving a speaker's voice, processing said voice input for further comparison by passing it through a bank of lower order filters, each of said filters being tuned to a preselected frequency, estimating a plurality of filter covariances from said filter outputs, producing a plurality of filter parameters from said filter covariances, and comparing said filter parameters with prerecorded speaker input filter parameters and thereby verifying the speaker's identity or not. 27. 
A method of estimating a speed of an object with a Doppler-based radar comprising the steps of producing an output of Doppler frequencies with said Doppler-based radar, passing said frequencies through a HREE filter, and determining the power frequency spectrum of said frequencies to thereby specify the speed of said object. 28. A method for estimating the delay between any two signals, said method comprising the steps of producing a time based output reflective of any delay desired to be estimated, converting said time based output to a frequency based output by taking its Fourier transform, and determining the power frequency spectrum of said frequency based signal to thereby specify said delay.

Description

[0001] We disclose a new method and apparatus for encoding and decoding signals and for performing high resolution spectral estimation. Many devices used in communications employ such methods for data compression, data transmission and for the analysis and processing of signals. The basic capabilities of the invention pertain to all areas of signal processing, especially for spectral analysis based on short data records or when increased resolution over desired frequency bands is required. One such filter frequently used in the art is the Linear Predictive Code (LPC) filter. Indeed, the use of LPC filters in devices for digital signal processing (see, e.g., U.S. Pat. Nos. 4,209,836 and 5,048,088 and D. Quarmby, [0002] We now describe this available art, the difference between the disclosed invention and this prior art, and the principal advantages of the disclosed invention. FIG. 1 depicts the power spectrum of a sample signal, plotted in logarithmic scale. [0003] We have used standard methods known to those of ordinary skill in the art to develop a 4th order LPC filter from a finite window of this signal. The power spectrum of this LPC filter is depicted in FIG. 2.
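For orientation, the prior-art all-pole fit compared in FIGS. 1 and 2 can be reproduced with the standard autocorrelation method and the Levinson-Durbin recursion. The following sketch is illustrative background only, not the disclosed invention; it also evaluates the fitted power spectrum on a frequency grid with a zero-padded FFT, in the manner later described for the Spectral Plotter:

```python
import numpy as np

def lpc(frame, order):
    """All-pole (LPC) fit of one signal frame by the autocorrelation
    method and the Levinson-Durbin recursion.  Returns the polynomial
    coefficients a = (1, a1, ..., a_order) and the prediction error."""
    frame = np.asarray(frame, dtype=float)
    n = len(frame)
    # biased sample autocorrelations r[0..order]
    r = np.array([frame[:n - k] @ frame[k:] for k in range(order + 1)])
    a = np.zeros(order + 1)
    a[0] = 1.0
    err = r[0]
    for k in range(1, order + 1):
        acc = r[k] + a[1:k] @ r[k - 1:0:-1]
        ref = -acc / err                       # reflection coefficient
        a[1:k] = a[1:k] + ref * a[k - 1:0:-1]  # Levinson update
        a[k] = ref
        err *= 1.0 - ref * ref
    return a, err

def lpc_spectrum(a, err, n_grid=512):
    """Power spectrum err / |A(e^{i*theta})|^2 on an n_grid-point
    frequency grid, evaluated via a zero-padded FFT of the coefficients."""
    A = np.fft.fft(a, n_grid)
    return err / np.abs(A) ** 2
```

A 4th order fit as in FIG. 2 corresponds to `lpc(frame, 4)`; because the model is all-pole, the resulting spectrum cannot reproduce notches, which is the limitation discussed next.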
[0004] One disadvantage of the prior art LPC filter is that its power spectral density cannot match the “valleys,” or “notches,” in a power spectrum, or in a periodogram. For this reason encoding and decoding devices for signal transmission and processing which utilize LPC filter design result in a synthesized signal which is rather “flat,” reflecting the fact that the LPC filter is an “all-pole model.” Indeed, in the signal and speech processing literature it is widely appreciated that regeneration of human speech requires the design of filters having zeros, without which the speech will sound flat or artificial; see, e.g., [C. G. Bell, H. Fujisaki, J. M. Heinz, K. N. Stevens and A. S. House, [0005] Another feature of linear predictive coding is that the LPC filter reproduces a random signal with the same statistical parameters (covariance sequence) estimated from the finite window of observed data. For longer windows of data this is an advantage of the LPC filter, but for short data records relatively few of the terms of the covariance sequence can be computed robustly. This is a limiting factor of any filter which is designed to match a window of covariance data. The method and apparatus we disclose here incorporate two features which are improvements over these prior art limitations: the ability to include “notches” in the power spectrum of the filter, and the design of a filter based instead on the more robust sequence of first covariance coefficients obtained by passing the observed signal through a bank of first order filters. The desired notches and the sequence of (first-order) covariance data uniquely determine the filter parameters. We refer to such a filter as a tunable high resolution estimator, or THREE filter, since the desired notches and the natural frequencies of the bank of first order filters are tunable.
A choice of the natural frequencies of the bank of filters corresponds to the choice of a band of frequencies within which one is most interested in the power spectrum, and can also be automatically tuned. FIG. 3 depicts the power spectrum estimated from a particular choice of 4th order THREE filter for the same data used to generate the LPC estimate depicted in FIG. 2, together with the true power spectrum, depicted in FIG. 1, which is marked with a dotted line. [0006] We expect that this invention will have application as an alternative for the use of LPC filter design in other areas of signal processing and statistical prediction. In particular, many devices used in communications, radar, sonar and geophysical seismology contain a signal processing apparatus which embodies a method for estimating how the total power of a signal, or (stationary) data sequence, is distributed over frequency, given a finite record of the sequence. One common type of apparatus embodies spectral analysis methods which estimate or describe the signal as a sum of harmonics in additive noise [P. Stoica and R. Moses, [0007]FIG. 6 depicts the five corresponding power spectra obtained through LPC filter design, while FIG. 7 depicts the corresponding power spectra obtained through the THREE filter design. FIGS. 8, 9 and 10 [0008] The broader technology of the estimation of sinusoids in colored noise has been regarded as difficult [B. Porat, Digital [0009] We therefore disclose that the THREE filter design leads to a method and apparatus, which can be readily implemented in hardware or hardware/software with ordinary skill in the art of electronics, for spectral estimation of sinusoids in colored noise. This type of problem also includes time delay estimation [M. A. Hasan and M. R. Azimi-Sadjadi, [0010] We also disclose that the basic invention could be used as a part of any system for speech compression and speech processing.
In particular, in certain applications of speech analysis, such as speaker verification and speech recognition, high quality spectral analysis is needed [Joseph P. Campbell, Jr., [0011]FIG. 1 is a graphical representation of the power spectrum of a sample signal; [0012]FIG. 2 is a graphical representation of the spectral estimate of the sample signal depicted in FIG. 1 as best matched with an LPC filter; [0013]FIG. 3 is a graphical representation of the spectral estimate of the sample signal with true spectrum shown in FIG. 1 (and marked with dotted line here for comparison), as produced with the invention; [0014]FIG. 4 is a graphical representation of five sample signals comprised of the superposition of two sinusoids with colored noise; [0015]FIG. 5 is a graphical representation of the five periodograms corresponding to the sample signals of FIG. 4; [0016]FIG. 6 is a graphical representation of the five corresponding power spectra obtained through LPC filter design for the five sample signals of FIG. 4; [0017]FIG. 7 is a graphical representation of the five corresponding power spectra obtained through the invention filter design; [0018]FIG. 8 is a graphical representation of a power spectrum estimated from a time signal with two closely spaced sinusoids (marked by vertical lines), using periodogram; [0019]FIG. 9 is a graphical representation of a power spectrum estimated from a time signal with two closely spaced sinusoids (marked by vertical lines), using LPC design; [0020]FIG. 10 is a graphical representation of a power spectrum estimated from a time signal with two closely spaced sinusoids (marked by vertical lines), using the invention; [0021]FIG. 11 is a schematic representation of a lattice-ladder filter in accordance with the present invention; [0022]FIG. 12 is a block diagram of a signal encoder portion of the present invention; [0023]FIG. 13 is a block diagram of a signal synthesizer portion of the present invention; [0024]FIG. 
14 is a block diagram of a spectral analyzer portion of the present invention; [0025]FIG. 15 is a block diagram of a bank of filters, preferably first order filters, as utilized in the encoder portion of the present invention; [0026]FIG. 16 is a graphical representation of a unit circle indicating the relative location of poles for one embodiment of the present invention; [0027]FIG. 17 is a block diagram depicting a speaker verification enrollment embodiment of the present invention; [0028]FIG. 18 is a block diagram depicting a speaker verification embodiment of the present invention; [0029]FIG. 19 is a block diagram of a speaker identification embodiment of the present invention; [0030]FIG. 20 is a block diagram of a Doppler-based speed estimator embodiment of the present invention; and [0031]FIG. 21 is a block diagram for a time delay estimator embodiment of the present invention. [0032] The present invention of a THREE filter design retains two important advantages of linear predictive coding. The specified parameters (specs) which appear as coefficients (linear prediction coefficients) in the mathematical description (transfer function) of the LPC filter can be computed by optimizing a (convex) entropy functional. Moreover, the circuit, or integrated circuit device, which implements the LPC filter is designed and fabricated using ordinary skill in the art of electronics (see, e.g., U.S. Pat. Nos. 4,209,836 and 5,048,088) on the basis of the specified parameters (specs). For example, the expression of the specified parameters (specs) is often conveniently displayed in a lattice filter representation of the circuit, containing unit delays z^{-1} [0033] In order to incorporate zeros as well as poles into digital filter models, it is customary in the prior art to use alternative architectures, for example the lattice-ladder architecture [K. J.
Åström, [0034] As for the lattice representation of the LPC filter, the lattice-ladder filter consists of gains, which are the parameter specs, unit delays z^{-1} [0035] As part of this disclosure, we disclose a method and apparatus for determining the gains in a lattice-ladder embodiment of the THREE filter from a choice of notches in the power spectrum and of natural frequencies for the bank of filters, as well as a method of automatically tuning these notches and the natural frequencies of the filter bank from the observed data. Similar to the case of LPC filter design, the specs, or coefficients, of the THREE filter are also computed by optimizing a (convex) generalized entropy functional. One might consider an alternative design using adaptive linear filters to tune the parameters in the lattice-ladder filter embodiment of an autoregressive moving-average (ARMA) model of a measured input-output history, as has been done in [M. G. Bellanger, [0036] We now disclose a new method and apparatus for encoding and reproducing time signals, as well as for spectral analysis of signals. The method and apparatus, which we refer to as the Tunable High Resolution Estimator (THREE), is especially suitable for processing and analyzing short observation records. [0037] The basic parts of the THREE are: the Encoder, the Signal Synthesizer, and the Spectral Analyzer. The Encoder samples and processes a time signal (e.g., speech, radar, recordings, etc.) and produces a set of parameters which are made available to the Signal Synthesizer and the Spectral Analyzer. The Signal Synthesizer reproduces the time signal from these parameters. From the same parameters, the Spectral Analyzer generates the power spectrum of the time signal. [0038] The design of each of these components is disclosed with both fixed-mode and tunable features.
Therefore, an essential property of the apparatus is that the performance of the different components can be enhanced for specific applications by tuning two sets of tunable parameters, referred to as the filter-bank poles p=(p [0039] As noted herein, the THREE filter is tunable. However, in its simplest embodiment, the tunable feature of the filter may be eliminated so that the invention incorporates in essence a high resolution estimator (HREE) filter. In this embodiment the default settings, or a priori information, are used to preselect the frequencies of interest. As can be appreciated by those of ordinary skill in the art, in many applications this a priori information is available and does not detract from the effective operation of the invention. Indeed, the tunable feature is not needed for these applications. Another advantage of not utilizing the tunable aspect of the invention is that faster operation is achieved. This increased operational speed may be more important for some applications, such as those which operate in real time, than the increased accuracy of signal reproduction expected with tuning. This speed advantage is expected to become less important as the electronics available for implementation are further improved. [0040] The intended use of the apparatus is to achieve one or both of the following objectives: (1) a time signal is analyzed by the Encoder and the set of parameters is encoded, and transmitted or stored. Then the Signal Synthesizer is used to reproduce the time signal; and/or (2) a time signal is analyzed by the Encoder and the set of parameters is encoded, and transmitted or stored. Then the Spectral Analyzer is used to identify the power spectrum of the time signal over selected frequency bands.
[0041] These two objectives could be achieved in parallel, and in fact, data produced in conjunction with (2) may be used to obtain more accurate estimates of the MA parameters, and thereby improve the performance of the time synthesizer in objective (1). Therefore, a method for updating the MA parameters on-line is also disclosed. [0042] The Encoder. Long samples of data, as in speech processing, are divided into windows or frames (in speech, typically a few tens of milliseconds), on which the process can be regarded as being stationary. The procedure of doing this is well-known in the art [T. P. Barnwell III, K. Nayebi and C. H. Richardson, y(0),y(1), . . . , y(N). (2.1) [0043] This is done in the box designated as A/D in FIG. 12. This is standard in the art [T. P. Barnwell III, K. Nayebi and C. H. Richardson, [0044] The choice of starting point t [0045] As will be explained in the description of Component w=(w [0046] which are coded and passed on via a suitable interface to the Signal Synthesizer and the Spectral Analyzer. It should be noted that both sets p and w are self-conjugate. Hence, for each of them, the information of their actual values is carried by n+1 real numbers. [0047] Two additional features, which are optional, are indicated in FIG. 12 by dashed lines. First, Component r=(r [0048] the so-called MA parameters, to be defined below. [0049] The Signal Synthesizer. The core component of the Signal Synthesizer is the Decoder, given as Component a=(a [0050] called the AR parameters. This set along with parameters r are fed into Component [0051] The Spectral Analyzer. The core component of the Spectral Analyzer is again the Decoder, given as Component [0052] Components. Now described in detail are the key components of the parts and their function. They are discussed in the same order as they have been enumerated in FIGS. [0053] Bank of Filters. The core component of the Encoder is a bank of n+1 filters with transfer functions
[0054] where the filter-bank poles p [0055] Clearly, u [0056] can be obtained via the second order filter
[0057] where p [0058] the same second order filter (2.7) replaces two complex one-order filters. We also disclose that for tunability of the apparatus to specific applications there may also be switches at the input buffer so that one or more filters in the bank can be turned off. The hardware implementation of such a filter bank is standard in the art. [0059] The key theoretical idea on which our design relies, described in C. I. Byrnes, T. T. Georgiou, and A. Lindquist, Φ( [0060] is the power spectrum of y, it can be shown that
[0061] where E{•} is mathematical expectation, provided t c [0062] from output data, as explained under point 2 below, to yield interpolation conditions ƒ(Z [0063] from which the function f(z), and hence the power spectrum Φ can be determined. The theory described in C. I. Byrnes, T. T. Georgiou, and A. Lindquist, [0064] Covariance Estimator. Estimation of the variance c [0065] of a stationary stochastic process v(t) from an observation record ν [0066] can be done in a variety of ways. The preferred procedure is to evaluate
[0067] over the available frame. [0068] In the present application, the variances ĉ [0069] Complex arithmetic is preferred, but, if real filter parameters are desired, the output of the second-order filter (2.7) can be processed by noting that [0070] where cov(ξ [0071] Before delivering w=(w [0072] is positive definite. If not, exchange w [0073] Initializer/Resetter. The purpose of this component is to identify and truncate portions of an incoming time series to produce windows of data (2.1), over which windows the series is stationary. This is standard in the art [T. P. Barnwell III, K. Nayebi and C. H. Richardson, [0074] Filter Bank Parameters. The theory described in C. I. Byrnes, T. T. Georgiou, and A. Lindquist, [0075] There are two observations which are useful in addressing the design trade-off. First, the size n of the data bank is dictated by the quality of the desired reproduction of the spectrum and the expected complexity of it. For instance, if the spectrum is expected to have k spectral lines or formants within the targeted frequency band, typically, a filter of order n=2k+2 is required for reasonable reproduction of the characteristics. [0076] Second, if N is the length of the window frame, a useful rule of thumb is to place the poles within
[0077] This guarantees that the output of the filter bank attains stationarity in about 1/10 of the length of the window frame. Accordingly, the Covariance Estimator may be activated to operate on the later 90% stationary portion of the processed window frame. Hence, t [0078] This typically gives a slight improvement as compared to the Covariance Estimator processing the complete processed window frame. [0079] There is a variety of ways to take advantage of the design trade-offs. We now disclose what we believe are the best available rules to automatically determine a default setting of the bank of filter poles, as well as to automatically determine the setting of the bank of filter poles given a priori information on a bandwidth of frequencies on which higher resolution is desired. [0080] Default Values. [0081] (a) One pole is chosen at the origin, [0082] (b) choose one or two real poles at
[0083] (c) choose an even number of equally spaced poles on the circumference of a circle with radius
[0084] in a Butterworth-like pattern with angles spanning the range of frequencies where increased resolution is desired. [0085] The total number of elements in the filter bank should be at least equal to the number suggested earlier, e.g., two times the number of formants expected in the signal plus two. [0086] In the tunable case, it may be necessary to switch off one or more of the filters in the bank. [0087] As an illustration, take the signal of two sinusoidal components in colored noise depicted in FIG. 4. More specifically, in this example, [0088] with ω [0089] A THREE filter is determined by the choice of filter-bank poles and a choice of MA parameters. The comparison of the original line spectra with the power spectrum of the THREE filter determined by these filter-bank poles and the default value of the MA parameters, discussed below, is depicted in FIG. 7. [0090] Excitation Signal Selection. An excitation signal is needed in conjunction with the time synthesizer and is marked as Component 5′. For some applications the generic choice of white noise may be satisfactory, but in general, and especially in speech it is a standard practice in vocoder design to include a special excitation signal selection. This is standard in the art [T. P. Barnwell III, K. Nayebi and C. H. Richardson, [0091] Component [0092] Excitation signal selection is not needed if only the frequency synthesizer is used. [0093] MA Parameter Selection. As for the filter-bank poles, the MA parameters can either be directly tuned using special knowledge of spectral zeros present in the particular application or set to a default value. However, based on available data (2.1), the MA parameter selection can also be done on-line, as described in Appendix A. [0094] There are several possible approaches to determining a default value. For example, the choice r [0095] We now disclose what we believe is the best available method for determining the default values of the MA parameters. 
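The structure of the default pole-placement rules (a)-(c) can be sketched as follows. Because the radius formulas accompany equations omitted from this copy of the text, the numerical radii below are placeholder assumptions; only the structure (one pole at the origin, a real pole, and an even number of conjugate poles equally spaced over the band of interest, Butterworth-like) follows the description above:

```python
import numpy as np

def default_filter_bank_poles(n_pairs, band, radius=0.9, real_radius=0.9):
    """Sketch of the default filter-bank pole placement:
    (a) one pole at the origin, (b) one real pole, and (c) an even
    number (2*n_pairs) of poles equally spaced on a circle, with
    angles spanning the band (w1, w2) where resolution is desired.
    The radius values are illustrative placeholders; the patent ties
    them to the window-frame length N via formulas omitted here."""
    w1, w2 = band
    angles = np.linspace(w1, w2, n_pairs)
    ring = radius * np.exp(1j * angles)       # poles in the band
    # assemble a self-conjugate set, as required of p
    poles = np.concatenate(([0.0], [real_radius], ring, ring.conj()))
    return poles
```

Consistent with the text, the returned set is self-conjugate and all poles lie strictly inside the unit circle, so the filter bank is stable.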
Choose r [0096] which corresponds to the central solution, described in Section [0097] Decoder. Given p, w, and r, the Decoder determines n+1 real numbers a [0098] with the property that the polynomial α( [0099] has all its roots less than one in absolute value. This is done by solving a convex optimization problem via an algorithm presented in papers C. I. Byrnes, T. T. Georgiou, and A. Lindquist, [0100] For the default choice (2.12) of MA-parameters, a much simpler algorithm is available, and it is also presented in the section on the Decoder algorithms. The MATLAB code for this algorithm is also enclosed in the Appendix B. [0101] Parameter Transformer. The purpose of Component [0102] where r [0103] A filter design which is especially suitable for an apparatus with variable dimension is the lattice-ladder architecture depicted in FIG. 11. In this case, the gain parameters α [0104] are chosen in the following way. For k=n, n−1, . . . , 1, solve the recursions
[0105] for j=0, 1, . . . , k, and set
[0106] This is a well-known procedure; see, e.g., K. J. Åström, Introduction to Stochastic Control Theory, Academic Press, 1970; and K. J. Åström, [0107] ARMA filter. An ARMA modeling filter consists of gains, unit delays z^{-1} [0108] Spectral Plotter. The Spectral Plotter amounts to numerical implementation of the evaluation Φ(e (a [0109] This is the coefficient vector padded with M-n-1 zeros. The discrete Fourier transform can be implemented using the FFT algorithm in standard form. [0110] Decoder Algorithms. We now disclose the algorithms used for the Decoder. The input data consists of [0111] (i) the filter-bank poles p=(p [0112] (ii) the MA parameters r=(r ρ( [0113] has all its roots less than one in absolute value, and [0114] (iii) the complex numbers w=(w [0115] determined as (2.11) in the Covariance Estimator. [0116] The problem is to find AR parameters a=(a α( [0117] has all its roots less than one in absolute value, such that
[0118] is a good approximation of the power spectrum Φ(e β( [0119] satisfying α( [0120] such that the rational function
[0121] satisfies the interpolation condition ƒ(z [0122] For this purpose the parameters p and r are available for tuning. If the choice of r corresponds to the default value, r [0123] The central solution algorithm for the default filter. In the special case in which the MA parameters r=(r [0124] and the coefficients σ σ( [0125] We need a rational function
[0126] such that p(s [0127] and a realization p(z)=c(sI−A) [0128] and the n-vector b remains to be determined. To this end, choose a (reindexed) subset s [0129] Then, remove all zero rows from u [0130] for the n-vector x with components x [0131] the required b is obtained by removing the last component of the (n+1)-vector
[0132] where R is the triangular (n+1)×(n+1)-matrix
[0133] where empty matrix entries denote zeros. [0134] Next, with prime (′) denoting transposition, solve the Lyapunov equations P (A−P [0135] which is a standard routine, form the matrix N=(I−P [0136] and compute the (n+1)-vectors h [0137] h [0138] h [0139] h [0140] h [0141] Finally, compute the (n+1)-vectors [0142] y [0143] with components y (s+1) [0144] starting with the coefficient of S [0145] The (central) interpolant (3.7) is then given by
[0146] where α̂(z) and β̂(z) are the polynomials α̂( β̂( [0147] However, to obtain the α(z) which matches the MA parameters r=τ, α̂(z) needs to be normalized by setting
[0148] This is the output of the central solver. [0149] Convex optimization algorithm for the tunable filter. To initiate the algorithm, one needs to choose an initial value for a, or, equivalently, for α(z), to be recursively updated. We disclose two methods of initialization, which can be used if no other guidelines, specific to the application, are available. [0150] Initialization method 1. Given the solution of the Lyapunov equation
[0151] where r is the column vector having the coefficients 1, r [0152] as initial value. [0153] Initialization method 2. Take
[0154] where α [0155] Algorithm. Given the initial (3.4) and (3.1), solve the linear system of equations
[0156] for the column vector S with components s L [0157] for the vector
[0158] The components of h are the Markov parameters defined via the expansion
[0159] The vector (3.13) is the quantity on which iterations are made in order to update α(z). More precisely, a convex function J(q) presented in C. I. Byrnes, T. T. Georgiou, and A. Lindquist, [0160] This is done by upholding condition (3.6) while successively trying to satisfy the interpolation condition (3.8) by reducing the errors [0161] Each iteration of the algorithm consists of two steps. Before turning to these, some quantities, common to each iteration and thus computed off-line, need to be evaluated. [0162] Given the MA parameter polynomial (3.2), let the real numbers π ρ( [0163] Moreover, given a subset p [0164] together with its real part V γ( [0165] define the (n+1)×(m+1) matrix
[0166] We compute off-line M(ρ), M(τ*ρ) and M(τρ), where ρ and τ are the polynomials (3.2) and (3.1) and τ*(z) is the reversed polynomial τ*( [0167] Finally, we compute off-line L [0168] Step 1. In this step the search direction of the optimization algorithm is determined. Given α(z), first find the unique polynomial (3.5) satisfying (3.6). Identifying coefficients of z [0169] where π [0170] This is a candidate for an approximation of the positive real part of the power spectrum Φ as in (2.8). [0171] Next, we describe how to compute the gradient ∇J. Evaluate the interpolation errors (3.16), noting that e [0172] into its real part ν [0173] for the column vector x and form the gradient as
[0174] where S is the solution to the Lyapunov equation (3.9) and L [0175] To obtain the search direction, using Newton's method, we need the Hessian. Next, we describe how it is computed. Let the 2n×2n-matrix P̂ be the solution to the Lyapunov equation P̂=Â′P̂Â+ĉ′ĉ, [0176] where Â is the companion matrix (formed analogously to A in (3.10)) of the polynomial α(z) P̃=Ã′P̃Ã+c̃′c̃, [0177] where Ã is the companion matrix (formed analogously to A in (3.10)) of the polynomial α(z) [0178] where the precomputed matrices L [0179] For an arbitrary polynomial (3.19), determine λ γ( [0180] where π(z) is a polynomial of at most degree m−1. This yields m+1 linear equations for the m+1 unknowns λ [0181] Finally, the new search direction becomes d=H [0182] Let d [0183] Step 2. In this step a line search in the search direction d is performed. The basic elements are as follows. Five constants c [0184] If ||d||<c [0185] Then, an updated value for a is obtained by determining the polynomial (3.4) with all roots less than one in absolute value, satisfying α( [0186] with σ(z) being the updated polynomial (3.14) given by σ(z)=τ(z)q(z), [0187] where the updated q(z) is given by
[0188] with h

[0189] This factorization can be performed if and only if q(z) satisfies condition (3.15). If this condition fails, this is determined in the factorization procedure, and then the value of λ is scaled down by a factor of c

[0190] The algorithm is terminated when the approximation error given in (3.16) becomes less than a tolerance level specified by c

[0191] Otherwise, set h equal to h

[0192] Description of technical steps in the procedure. The MATLAB code for this algorithm is given in Appendix B. As an alternative, a state-space implementation presented in C. I. Byrnes, T. T. Georgiou, and A. Lindquist,

[0193] (1) Routine pm, which computes the Pick matrix from the given data p=(p

[0194] (2) Routine q2a, which is used to perform the technical step of factorization described in Step

[0195] for the minimum-phase solution a(z), in terms of which α(z)=τ(z)a(z). This is standard and is done by solving the algebraic Riccati equation

[0196] for the stabilizing solution. This yields

[0197] This is a standard MATLAB routine [W. F. Arnold, III and A. J. Laub,

[0198] (3) Routine central, which computes the central solution as described above.

[0199] (4) Routine decoder, which integrates the above and provides the complete function for the decoder of the invention.

[0200] An application to speaker recognition. In automatic speaker recognition a person's identity is determined from a voice sample. This class of problems comes in two types, namely speaker verification and speaker identification. In speaker verification, the person to be identified claims an identity, by, for example, presenting a personal smart card, and then speaks into an apparatus that will confirm or deny this claim. In speaker identification, on the other hand, the person makes no claim about his identity, and the system must decide the identity of the speaker, individually or as part of a group of enrolled people, or decide whether to classify the person as unknown.
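The factorization step carried out by routine q2a, obtaining a minimum-phase a(z) from a pseudo-polynomial q(z) that is positive on the unit circle, can be illustrated numerically. The patent performs this step by solving an algebraic Riccati equation; the sketch below (Python with NumPy, function name hypothetical) instead illustrates the same factorization by root splitting, under the assumption that q is given by real coefficients q0, …, qn with q(e^{iθ}) = q0 + 2Σ qk cos kθ > 0.

```python
import numpy as np

def minimum_phase_factor(q):
    """Factor the pseudo-polynomial
        Q(z) = q[0] + sum_k q[k] * (z^k + z^-k),  with Q > 0 on |z| = 1,
    as Q(z) = a(z) a(1/z), where a(z) has all roots inside the unit
    circle (minimum phase).  q: real coefficients [q0, q1, ..., qn].
    Returns the coefficients of a in descending powers of z."""
    # z^n Q(z) is an ordinary degree-2n polynomial whose roots come in
    # reciprocal pairs (r, 1/conj(r)); keep the stable half.
    p = np.concatenate((q[:0:-1], q))        # [qn ... q1 q0 q1 ... qn]
    roots = np.roots(p)
    inside = roots[np.abs(roots) < 1]
    a = np.real(np.poly(inside))             # monic stable polynomial
    # Fix the gain so that |a(1)|^2 = Q(1) = sum of the p coefficients.
    g = np.sqrt(np.sum(p)) / abs(np.sum(a))
    return g * a

# Example: Q built from a(z) = z + 0.5, so that factor is recovered.
q = np.array([1.25, 0.5])                    # Q(z) = 1.25 + 0.5(z + 1/z)
a = minimum_phase_factor(q)
print(a)                                     # recovers [1.0, 0.5]
```

The convolution of a with its own reversal reproduces the symmetric coefficient sequence of Q, which is how the factorization can be checked; when Q fails to be positive on the unit circle (condition (3.15)), roots land on the circle itself and no stable/antistable split exists, matching the failure case the line search tests for.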
[0201] Common to both applications is that each person to be identified must first enroll into the system. The enrollment (or training) is a procedure in which the person's voice is recorded and the characteristic features are extracted and stored. A feature set which is commonly used is the LPC coefficients for each frame of the speech signal, or some (nonlinear) transformation of these [Jayant M. Naik,

[0202] Speaker recognition can further be divided into text-dependent and text-independent methods. The distinction between these is that for text-dependent methods the same text or code words are spoken for enrollment and for recognition, whereas for text-independent methods the words spoken are not specified.

[0203] Depending on whether a text-dependent or text-independent method is used, the pattern matching, the procedure of comparing the sequence of feature vectors with the corresponding one from the enrollment, is performed in different ways. The procedures for performing the pattern matching for text-dependent methods can be classified into template models and stochastic models. In a template model such as Dynamic Time Warping (DTW) [Hiroaki Sakoe and Seibi Chiba,

[0204] For text-independent speaker recognition the procedure can be used in a similar manner for speech-recognition-based methods and text-prompted recognition [Sadaoki Furui, Recent Advances in Speaker Recognition, Lecture Notes in Computer Science 1206, 1997, page 241f], where the phonemes can be identified.

[0205] Speaker verification. FIG. 17 depicts an apparatus for enrollment. An enrollment session in which certain code words are spoken by a person later to be identified produces via this apparatus a list of speech frames and their corresponding MA parameters r and AR parameters a, and these triplets are stored, for example, on a smart card, together with the filter-bank parameters p used to produce them. Hence, the information encoded on the smart card (or equivalent) is speaker specific.
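The per-frame comparison of stored and measured feature triplets can be sketched as follows. The patent does not specify a particular distance measure here; the Python sketch below uses a plain Euclidean distance and an arbitrary acceptance threshold (both illustrative assumptions, as are all names) purely to show the weighted-sum-of-frame-scores structure of the matching.

```python
import numpy as np

def frame_score(stored, measured):
    """Distance between one stored and one measured feature triplet
    (r, a, p).  The Euclidean distance of the concatenated parameter
    vectors is an illustrative choice, not the patent's measure."""
    s = np.concatenate([np.asarray(v, float).ravel() for v in stored])
    m = np.concatenate([np.asarray(v, float).ravel() for v in measured])
    return float(np.linalg.norm(s - m))

def verify(enrolled, spoken, weights, threshold):
    """Weighted sum of per-frame matching scores; accept the claimed
    identity when the total score falls below the threshold."""
    total = sum(w * frame_score(s, m)
                for w, s, m in zip(weights, enrolled, spoken))
    return total < threshold

# Toy example: two frames, each a (r, a, p) triplet.
enrolled = [([0.90], [1.0, -0.50], [0.2]), ([0.80], [1.0, -0.40], [0.2])]
spoken   = [([0.88], [1.0, -0.52], [0.2]), ([0.81], [1.0, -0.41], [0.2])]
accepted = verify(enrolled, spoken, weights=[0.5, 0.5], threshold=0.1)
print(accepted)
```

For speaker identification the same per-frame scores would be accumulated against every enrolled entry in the database, and the identity with the best weighted total selected.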
When the identity of the person in question needs to be verified, the person inserts his smart card in a card reader and speaks the code words into an apparatus as depicted in FIG. 18. Here, in Box

[0206] Speaker identification. In speaker identification the enrollment is carried out in a similar fashion as for speaker verification, except that the feature triplets are stored in a database. FIG. 19 depicts an apparatus for speaker identification. It works like the apparatus in FIG. 17 except that there is a frame identification box (Box 12) as in FIG. 18, the output of which, together with the MA parameters r and AR parameters a, is fed into a database. The feature triplets are compared to the corresponding triplets for the population of the database and a matching score is given to each. On the basis of the (weighted) sum of the matching scores of each frame, the identity of the speaker is decided.

[0207] Doppler-Based Applications and Measurement of Time-Delays. In communications, radar, sonar and geophysical seismology a signal to be estimated or reconstructed can often be described as a sum of harmonics in additive noise [P. Stoica and R. Moses,

[0208] Tunable high-resolution speed estimation by Doppler radar. We disclose an apparatus based on THREE filter design for determining the velocities of several moving objects. If we track m targets moving with constant radial velocities v

[0209] where θ

[0210] where Δ is the pulse repetition interval, assuming once-per-pulse coherent in-phase/quadrature sampling.

[0211] FIG. 20 illustrates a Doppler radar environment for our method, which is based on the Encoder and Spectral Analyzer components of the THREE filter. To estimate the velocities amounts to estimating the Doppler frequencies, which appear as spikes in the estimated spectrum, as illustrated in FIG. 7. The device is tuned to give high resolution in the particular frequency band where the Doppler frequencies are expected.
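Once the Spectral Analyzer has located the spectral spikes, converting the estimated Doppler frequencies back to radial velocities is elementary. The sketch below assumes the standard pulse-Doppler relation θ = 4πvΔ/λ (normalized angular frequency per pulse, for wavelength λ and pulse repetition interval Δ); the exact constants of the patent's truncated equations are not reproduced here, and the numeric values are illustrative.

```python
import math

def doppler_to_velocity(theta, wavelength, delta):
    """Radial velocity (m/s) from a normalized Doppler angular
    frequency theta (radians per pulse), under the assumed relation
    theta = 4*pi*v*delta / wavelength."""
    return theta * wavelength / (4.0 * math.pi * delta)

# Assumed X-band example: 3 cm wavelength, 1 ms pulse repetition interval.
wavelength, delta = 0.03, 1e-3
for theta in (0.42, 1.26):   # spike locations reported by the analyzer
    v = doppler_to_velocity(theta, wavelength, delta)
    print(f"theta = {theta:5.2f} rad/pulse -> v = {v:6.2f} m/s")
```

Because the resolution of the velocity estimate is set by how finely the spikes can be separated in θ, tuning the filter bank to the band where the Doppler frequencies are expected, as the text describes, directly sharpens the velocity estimates.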
[0212] The only variation in combining the previously disclosed Encoder and Spectral Estimator lies in the use of dashed rather than solid communication links in FIG. 20. The dashed communication links are optional. When no sequence r of MA parameters is transmitted from Box

[0213] The same device can also be used for certain spatial Doppler-based applications [P. Stoica and R. Moses,

[0214] Tunable high-resolution time-delay estimator. The use of THREE filter design in line spectra estimation also applies to time-delay estimation [M. A. Hasan and M. R. Azimi-Sadjadi,

[0215] FIG. 21 illustrates a possible time-delay estimator environment for our method, which has precisely the same THREE-filter structure as in FIG. 20 except for the preprocessing of the signal. In fact, this adaptation of THREE filter design is a consequence of Fourier analysis, which gives a method of interchanging frequency and time. In more detail, if x(t) is the emitted signal, the backscattered signal is of the form
[0216] where the first term is a sum of convolutions of delayed copies of the emitted signal and v(t) represents ambient noise and measurement noise. The convolution kernels h

[0217] where the Fourier transform X(ω) of the original signal is known and can be divided off.

[0218] It is standard in the art to obtain a frequency-dependent signal from the time-dependent signal by fast Fourier methods, e.g., the FFT. Sampling the signal Z(ω) at frequencies ω=τω

[0219] y

[0220] of a time series
[0221] where θ

[0222] Other Areas of Application. The THREE filter method and apparatus can be used in the encoding and decoding of signals more broadly in applications of digital signal processing. In addition to speaker identification and verification, THREE filter design could be used as a part of any system for speech compression and speech processing. The use of THREE filter design in line spectra estimation also applies to detection of harmonic sets [M. Zeytino{haeck over (g)}lu and K. M. Wong,

[0223] Various changes may be made to the invention as would be apparent to those skilled in the art. However, the invention is limited only by the scope of the claims appended hereto, and their equivalents.