US 6788785 B1 Abstract A Fast Affine Projection (FAP) adaptive filter and method of adaptive filtering are disclosed, which reduce the instability associated with FAP filters caused by error accumulation in the process of inverting an autocorrelation matrix. The method updates the adaptive filter coefficients by solving at least one system of linear equations, whose coefficients are the autocorrelation matrix coefficients, using a descending iterative method with intrinsic feedback. The results of the solution are used to update the adaptive filter coefficients. The approach is applicable for a normalized step size ranging from zero to unity, and allows either direct determination of updated filter coefficients without determining an inverse autocorrelation matrix, or determination of the inverse autocorrelation matrix by a descending iterative method. In some embodiments, the normalized step size is set close to unity, and the system of linear equations is solved by the steepest descent or conjugate gradient method. In other embodiments, the normalized step size is substantially less than unity, e.g. less than about 0.7. Accumulation of inevitable numerical errors is avoided, and the stable adaptive filter and method are suitable for various DSP platforms, e.g. 16- and 24-bit, fixed-point and floating-point platforms.
Claims (80) 1. A method of adaptive filtering using a Fast Affine Projection (FAP) adaptive filter, comprising the steps of:
(a) determining adaptive filter coefficients;
(b) defining a normalized step size;
(c) updating the adaptive filter coefficients, comprising:
determining autocorrelation matrix coefficients from a reference input signal, and
solving at least one system of linear equations whose coefficients are the auto-correlation matrix coefficients, the system being solved using a descending iterative method having an inherent stability of its operation, the results of the solution being used for updating the adaptive filter coefficients, and the number of systems of linear equations to be solved being dependent on the normalized step size;
(d) repeating the steps (b) and (c) the required number of times.
2. A method as defined in
3. A method as defined in
4. A method as defined in
5. A method as defined in
6. A method as defined in
7. A method as defined in
8. A method as defined in
9. A method as defined in
10. A method as defined in
11. A method as defined in
12. A method as defined in
13. A method as defined in
14. A method as defined in
15. A method as defined in
16. A method as defined in
17. A method as defined in
18. A method as defined in
19. A method as defined in
20. A method as defined in
21. A method as defined in
22. A method as defined in
23. A method as defined in
24. A method as defined in
25. A method as defined in
26. A method as defined in
27. A method as defined in
28. A method as defined in
29. A method as defined in
30. A method as defined in
31. A method as defined in
32. A method as defined in
33. A method as defined in
34. A method as defined in
35. A method as defined in
36. A method as defined in
37. A method as defined in
38. A method as defined in
39. A method as defined in
40. A method as defined in
41. An adaptive filter comprising:
a Fast Affine Projection (FAP) adaptive filter characterized by adaptive filter coefficients;
a means for updating the adaptive filter coefficients, including means for setting a normalized step size, the updating means comprising:
a correlator for determining auto-correlation matrix coefficients from a reference input signal, and
a calculator for solving at least one system of linear equations whose coefficients are the auto-correlation matrix coefficients, the system being solved by using a descending iterative method having an inherent stability of its operation, the results of the solution being used for updating the adaptive filter coefficients and the number of systems of linear equations to be solved being dependent on the normalized step size.
42. The adaptive filter as defined in
43. The adaptive filter as defined in
44. The adaptive filter as defined in
45. The adaptive filter as defined in
46. The adaptive filter as defined in
47. The adaptive filter as defined in
48. The adaptive filter as defined in
49. The adaptive filter as defined in
50. The adaptive filter as defined in
51. The adaptive filter as defined in
52. The adaptive filter as defined in
53. The adaptive filter as defined in
54. The adaptive filter as defined in
55. The adaptive filter as defined in
56. The adaptive filter as defined in
57. The adaptive filter as defined in
58. The adaptive filter as defined in
59. The adaptive filter as defined in
60. The adaptive filter as defined in
61. The adaptive filter as defined in
62. The adaptive filter as defined in
63. The adaptive filter as defined in
64. The adaptive filter as defined in
65. The adaptive filter as defined in
66. The adaptive filter as defined in
67. The adaptive filter as defined in
68. The adaptive filter as defined in
69. The adaptive filter as defined in
70. The adaptive filter as defined in
71. The adaptive filter as defined in
72. The adaptive filter as defined in
73. The adaptive filter as defined in
74. The adaptive filter as defined in
75. The adaptive filter as defined in
76. The adaptive filter as defined in
77. The adaptive filter as defined in
78. The adaptive filter as defined in
79. The adaptive filter as defined in
80. The adaptive filter as defined in
Description This application is a continuation-in-part of U.S. patent application Ser. No. 09/218,428 to Heping Ding, filed Dec. 22, 1998 and incorporated herein by reference. The present invention relates to adaptive filters and, in particular, to fast affine projection (FAP) adaptive filters providing stability of operation, and to methods of stable FAP adaptive filtering. Adaptive filtering is a digital signal processing technique that has been widely used in technical areas such as, e.g., echo cancellation, noise cancellation, channel equalization, and system identification, and in products like, e.g., network echo cancellers, acoustic echo cancellers for full-duplex handsfree telephones and audio conference systems, active noise control, and data communications systems. The characteristics of an adaptive filter are determined by its adaptation algorithm. The choice of the adaptation algorithm in a specific adaptive filtering system directly affects the performance of the system. Being simple and easily stable, the normalized least mean square (NLMS) adaptation algorithm, a practical implementation of the least mean square (LMS) algorithm, is now the most widely used in the industry, with a certain degree of success. However, because of its intrinsic weakness, the NLMS algorithm converges slowly with colored training signals like speech, an important class of signals most frequently encountered in many applications such as telecommunications. The performance of systems incorporating NLMS adaptive filters very often suffers from the slow convergence nature of the algorithm. Other known algorithms proposed so far are either too complicated to implement on a commercially available low-cost digital signal processor (DSP) or suffer from numerical problems. Recently, a fast affine projection (FAP) method was proposed, as described in a publication by Steven L.
Gay and Sanjeev Tavathia (Acoustic Research Department, AT&T Bell Laboratories), “The Fast Affine Projection Algorithm,” pp. 3023-3026, Proceedings of the International Conference on Acoustics, Speech, and Signal Processing, May 1995, Detroit, Mich., U.S.A. The FAP is a simplified version of the more complicated, and therefore less practical, affine projection (AP) algorithm. With colored training signals such as speech, the FAP usually converges several times faster than the NLMS, with only a marginal increase in implementation complexity. However, a stability issue has been preventing the FAP from being used in the industry. A prior art FAP implementation oscillates within a short period of time, even with floating-point calculations. This results from the accumulation of finite precision numerical errors in the matrix inversion process associated with the FAP. Researchers have been trying to solve this problem, but no satisfactory answer has been found so far. A remedy proposed in the publication listed above, and reinforced in a publication by Q. G. Liu, B. Champagne, and K. C. Ho (Bell-Northern Research and INRS-Télécommunications, Université du Québec), “On the Use of a Modified Fast Affine Projection Algorithm in Subbands for Acoustic Echo Cancellation,” pp. 354-357, Proceedings of 1996 IEEE Digital Signal Processing Workshop, Loen, Norway, September 1996, is to periodically re-start a new inversion process in parallel with the old one, and to use it to replace the latter so as to get rid of the accumulated numerical errors therein. While this can be a feasible solution for high-precision DSPs such as a floating-point processor, it is still not suitable for fixed-point DSP implementations, because there the finite precision numerical errors would accumulate so fast that the re-starting period would have to be made impractically small, not to mention the extra complexity associated with this part of the algorithm.
Therefore there is a need in the industry for alternative adaptive filtering methods which would ensure stability of operation while providing fast convergence and reliable results. It is an object of the present invention to provide an adaptive filter and a method of adaptive filtering which avoid the afore-mentioned problems. According to one aspect of the present invention there is provided a method of adaptive filtering, comprising the steps of: (a) determining adaptive filter coefficients; (b) defining a normalized step size; (c) updating the filter coefficients, comprising: determining auto-correlation matrix coefficients from a reference input signal, and solving at least one system of linear equations whose coefficients are the auto-correlation matrix coefficients, the system being solved by using a descending iterative method having an inherent stability of its operation, the results of the solution being used for updating the filter coefficients, and the number of systems of linear equations to be solved being dependent on the normalized step size; (d) repeating the steps (b) and (c) the required number of times. Advantageously, the determining of the auto-correlation matrix is performed recursively. The normalized step size may be chosen to be equal to any value from 0 to 1, depending on the application. In the majority of applications, it is often set close to unity or equal to unity. Conveniently, the normalized step size is within a range from about 0.9 to 1.0. Another convenient possibility is to set the normalized step size within a range from about 0.7 to 1.0. For a normalized step size close to unity, the step of solving at least one system of linear equations comprises solving one system of linear equations only. Alternatively, in some applications, e.g., when one needs to keep misadjustment low after convergence, it is required to set the normalized step size substantially less than unity, e.g. less than about 0.7.
In this situation the step of solving at least one system of linear equations comprises solving N systems of linear equations, with N being the projection order. In the embodiments of the invention, the problem of finding the inverse of an auto-correlation matrix, which is inherent in other known methods, is reduced to the problem of solving a system of linear equations based on the auto-correlation matrix. The system is solved by one of the descending iterative methods, which provide inherent stability of operation due to an intrinsic feedback adjustment. As a result, inevitable numerical errors are not accumulated. In the first and second embodiments of the invention, the steepest descent and conjugate gradient methods, respectively, are used to determine the first column of the inverse auto-correlation matrix, taking into account that the normalized step size is close to unity. In a third embodiment of the invention, a steepest descent or conjugate gradient method is used to determine the coefficients of the inverse auto-correlation matrix by recursively solving N systems of linear equations having decrementing orders. This corresponds to the case of the normalized step size being not close to unity. The fourth embodiment of the invention avoids determining the inverse of the auto-correlation matrix. Instead, a system of linear equations is solved by using a conjugate gradient method, resulting in a solution that can be used directly to determine an updating part of the filter coefficients. Alternatively, other known descending methods, e.g. steepest descent, Newton's method, PARTAN, quasi-Newton's method, or other known iterative descending methods may also be used. Conveniently, the steps of the method may be performed by operating with real-value or complex-value numbers. The method described above is suitable for a variety of applications, e.g.
echo cancellation, noise cancellation, channel equalization, system identification which are widely used in products such as network echo cancellers, acoustic echo cancellers for full-duplex handsfree telephones and audio conference systems, active noise control systems, data communication systems. According to another aspect of the invention there is provided an adaptive filter, comprising: a filter characterized by adaptive filter coefficients; means for updating the filter coefficients, including means for setting a normalized step size, the updating means comprising: a correlator for determining auto-correlation matrix coefficients from a reference input signal, and a calculator for solving at least one system of linear equations whose coefficients are the auto-correlation matrix coefficients, the system being solved by using a descending iterative method having an inherent stability of its operation, the results of the solution being used for updating the filter coefficients and the number of systems of linear equations to be solved being dependent on the normalized step size. Advantageously, the calculator is an iterative calculator. Preferably, the calculator is a steepest descent or a conjugate gradient calculator. Alternatively, it may be a calculator performing a Newton's or quasi-Newton's method, a PARTAN calculator, or another known iterative descending calculator providing an inherent stability of operation. Conveniently, the filter and the updating means are capable of operating with real numbers. Alternatively, they may be capable of operating with complex numbers. The normalized step size may be chosen to be equal to any value from 0 to 1 depending on the application. In the majority of applications, the adaptive filter is often set with the normalized step size close to unity or equal to unity. Conveniently, the normalized step size is within a range from about 0.9 to 1.0. 
Another convenient possibility is to set the normalized step size within a range from about 0.7 to 1.0. For a normalized step size close to unity, the calculator provides an iterative solution of one system of linear equations only at each time interval. Alternatively, in some applications, e.g., when one needs to keep misadjustment after convergence low, it is required to set the normalized step size substantially less than unity, e.g. less than about 0.7. In this situation the calculator provides solutions of N systems of linear equations, with N being the projection order. Conveniently, due to the symmetry of the auto-correlation matrix, the determining of the inverse auto-correlation matrix may be performed by solving N systems of linear equations having decrementing orders. The adaptive filter as described above may be used for echo cancellation, noise cancellation, channel equalization, system identification, or other applications where adaptive filtering is required. The adaptive filter and method described above have an advantage over known FAP adaptive filters by providing stability of operation. The problem caused by error accumulation in the matrix inversion process, existing in known FAP filters, is solved in the present invention by using iterative descending methods. First, the matrix inversion operation is reduced to the solution of a corresponding system of linear equations based on the auto-correlation matrix. Second, the iterative descending methods used for the solution of the above system provide an inherent stability of operation due to an intrinsic feedback adjustment. As a result, inevitable numerical errors are not accumulated, thus providing stability of adaptive filtering. The invention will now be described in greater detail with regard to the attached drawings, in which: FIG. 1 is a block diagram of an adaptive echo cancellation system; FIG. 2 is a block diagram of an adaptive filter according to the first embodiment of the invention; FIG.
3 is a block diagram of a steepest descent calculator embedded in the filter of FIG. 2; FIG. 4 is a block diagram of a conjugate gradient calculator embedded in an adaptive filter according to a second embodiment of the invention; FIG. 5 is a block diagram of an adaptive filter according to a third embodiment of the invention; FIG. 6 is a flow-chart illustrating an operation of a steepest descent calculator embedded in the adaptive filter of FIG. 5; FIG. 7 is a flow-chart illustrating an operation of a conjugate gradient calculator embedded in the adaptive filter of FIG. 5; FIG. 8 is a block diagram of an adaptive filter according to a fourth embodiment of the invention; and FIG. 9 is a block diagram of a conjugate gradient calculator embedded in the adaptive filter of FIG. 8. A. Conventions in Linear Algebra Representation In this document, underscored letters, such as B. Introduction FIG. 1 presents a block diagram of an adaptive echo cancellation system. Note that, depending on a particular application, the terms “far-end” and “near-end” may need to be interchanged. For example, with a network echo canceller in a telephone terminal, x(n) in FIG. 1 is actually the near-end signal to be transmitted to the far-end, and d(n) in FIG. 1 is the signal received from the telephone loop connected to the far-end. Although the terminology used above is based on the assumption that x(n) is the far-end signal and d(n) is the signal perceived at the near-end, this is done solely for convenience and does not prevent the invention from being applied to other adaptive filter applications with alternate terminology. 1.
The Normalized Least Mean Square (NLMS) Filter The following L-dimensional column vectors are defined as the reference input vector and the adaptive filter coefficient vector, respectively, where L is the length of the adaptive filter: The part for convolution and subtraction, which derives the output of the adaptive echo cancellation system, can then be expressed as where the superscript “T” stands for the transpose of a vector or matrix. The adaptation part of the method, which updates the coefficient vector based on the knowledge of the system behavior, is In Equation (3), μ(n) is called the adaptation step size, which controls the rate of change to the coefficients, α is a normalized step size, and δ, being a small positive number, prevents μ(n) from growing too large when there is little or no reference signal x(n). The computations required in the NLMS filter include 2L+2 multiply-and-accumulate (MAC) operations and 1 division per sampling interval. Details about the least mean square (LMS) method can be found, e.g., in the classical papers by B. Widrow, et al., “Adaptive Noise Cancelling: Principles and Applications,” Proceedings of the IEEE, Vol. 63, pp. 1692-1716, Dec. 1975, and B. Widrow, et al., “Stationary and Nonstationary Learning Characteristics of the LMS Adaptive Filter,” Proceedings of the IEEE, Vol. 64, pp. 1151-1162, Aug. 1976. 2. The Affine Projection (AP) Filter The affine projection method is a generalization of the NLMS method. With N being a so-called projection order, we define where The convolution and subtraction part of the method is
e(n) = d(n) − X^T(n) W(n)   (Equation 5). In the coefficient update (Equation (6)), I is the N×N identity matrix, and α and δ play similar roles as described with regard to Equation (3). α is the normalized step size, which may have a value from 0 to 1, and very often is assigned a unity value. δ is a regularization factor that prevents R(n), the auto-correlation matrix, from becoming ill-conditioned or rank-deficient, in which case P(n) would have excessively large eigenvalues, causing instability of the method. It can be seen that an N×N matrix inversion operation is needed at each sampling interval in the AP method. The AP method offers a good convergence property but is computationally very expensive. It needs 2LN+O(N 3. The Fast Affine Projection (FAP) Filter Since the AP method is impractically expensive computationally, certain simplifications have been made to arrive at the so-called FAP method; see, e.g., U.S. Pat. No. 5,428,562 to Gay. Note that here the “F”, for “fast”, means that it saves computations, not that it converges faster. In fact, by adopting these simplifications, the performance indices, including the convergence speed, slightly degrade. Briefly, the FAP method consists of two parts: (a) An approximation, which is shown in Equation (7) below, and certain simplifications to reduce the computational load. The approximation in Equation (7) uses the scaled a posteriori errors to replace the a priori ones in Equation (4): (b) The matrix inversion operation. The matrix inversion may be performed by using different approaches. One of them is the so-called “sliding windowed fast recursive least square (FRLS)” approach, outlined in U.S. Pat. No. 5,428,562 to Gay, to recursively calculate P(n) in Eq. (6). This results in a total computational requirement of 2L+14N MACs and 5 divisions. In another approach, the matrix inversion lemma is used twice to derive P(n) at sampling interval n; see, e.g., Q. G. Liu, B. Champagne, and K. C.
Ho (Bell-Northern Research and INRS-Télécommunications, Université du Québec), “On the Use of a Modified Fast Affine Projection Algorithm in Subbands for Acoustic Echo Cancellation,” pp. 354-357, Proceedings of 1996 IEEE Digital Signal Processing Workshop, Loen, Norway, September 1996. It assumes an accurate estimate P(n−1) to start with, then derives P(n) by modifying P(n−1) based on P(n−1) and knowledge of the new data. Note that solving the matrix inversion problem directly by using classical methods always arrives at the most accurate and stable solution. However, these methods are too expensive computationally to implement on a real-time platform. Therefore, various alternative approaches with much less complexity, such as the ones described above, are used. The above matrix inversion methods have no feedback adjustment. An accurate estimate of P(n) relies heavily on an accurate starting point P(n−1). If P(n−1) deviates from the accurate solution, the algorithm has no way of knowing that, and will still keep updating it based on P(n−1) and the new data. 4. Stable Fast Affine Projection Filter with a Normalized Step Size Close or Equal to Unity Usually, for maximum convergence speed, the normalized step size α, as indicated in Equation (6), is set to a value of unity, or less than but quite close to it. This is the case described in the publications and the U.S. Pat. No. 5,428,562 cited above. It indicates that in this case
e(n) = e(n) b   (Equation 8), where the scalar e(n) is the first element of the error vector and b is the N-dimensional vector [1 0 … 0]^T. In light of the above, the problem of finding P(n), the inverse of the auto-correlation matrix,
reduces to solving a set of N linear equations, R(n) P(n) = b   (Equation 10), where R(n) is symmetric and positive definite according to its definition in Equation (9), and b is the N-dimensional vector [1 0 … 0]^T. Although Eq. (10) is much simpler to solve than the original matrix inversion problem, it is still quite expensive, and especially division-intensive, to do so with classical methods like Gaussian elimination. Therefore the obtained system of linear equations is solved by one of the iterative descending methods, which provide an inherent stability of operation and avoid accumulation of numerical errors, as will be described in detail below. 5. Stable Fast Affine Projection Filter with General Step Size As mentioned above, the concept described in section 4 is only suitable for applications where a relatively large α (equal to unity, or less than but very close to unity) is needed. Although a large α is needed in most applications, the method of adaptive filtering would not be regarded as complete without addressing cases with smaller normalized step sizes. For example, one way of reducing the misadjustment (steady-state output error) after the FAP system has converged is to use a small α. According to Equation (6), determining an updating part of the filter coefficients may be performed either by direct solving for C. Preferred Embodiments of the Invention A method of adaptive filtering implemented in an adaptive filter In general, steepest descent is a technique that seeks the minimum point of a certain quadratic function iteratively. At each iteration (the same as a sampling interval in our application), it takes three steps consecutively:
1. to find the direction in which the parameter vector should go; this is just the negative gradient of the quadratic function at the current point;
2. to find the optimum step size for the parameter vector updating, so that it will land at the minimum point along the direction dictated by the above step; and
3. to update the parameter vector as determined above.
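For illustration, the three steps above can be sketched as a small routine that iteratively solves R p = b for a symmetric positive-definite system. This is a sketch of the generic steepest descent technique, not the patent's fixed-point SFAP implementation; the function name, test matrix, and iteration count are hypothetical.

```python
def steepest_descent_step(R, p, b):
    """One steepest descent iteration toward the solution of R p = b.

    R: symmetric positive-definite matrix (list of rows),
    p: current estimate of the solution, b: right-hand side.
    """
    n = len(b)
    # Step 1: the gradient of 0.5*p^T R p - b^T p is g = R p - b;
    # the update direction is its negative.
    g = [sum(R[i][j] * p[j] for j in range(n)) - b[i] for i in range(n)]
    # Step 2: optimum step size along -g is beta = (g^T g) / (g^T R g).
    Rg = [sum(R[i][j] * g[j] for j in range(n)) for i in range(n)]
    gg = sum(gi * gi for gi in g)
    gRg = sum(g[i] * Rg[i] for i in range(n))
    if gRg == 0.0:  # gradient is zero: already at the minimum
        return p
    beta = gg / gRg
    # Step 3: update the parameter vector against the gradient.
    return [p[i] - beta * g[i] for i in range(n)]

# Hypothetical 2x2 example; the exact solution of R p = b is [1/11, 7/11].
R = [[4.0, 1.0], [1.0, 3.0]]
b = [1.0, 2.0]
p = [0.0, 0.0]
for _ in range(50):
    p = steepest_descent_step(R, p, b)
```

Because each iteration re-derives the gradient from the current estimate, an error introduced at any point is corrected by subsequent iterations; this is the intrinsic feedback the text contrasts with open-loop recursive inversion.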
By iteratively doing the above, the steepest descent reaches the unique minimum of the quadratic function, where the gradient is zero, and continuously tracks the minimum if it moves. Details about the steepest descent method can be found, for example, in a book by David G. Luenberger (Stanford University), Linear and Nonlinear Programming, Addison-Wesley Publishing Company, 1984. For an adaptive filtering application, the implied quadratic function is f(P(n)) = (1/2) P^T(n) R(n) P(n) − b^T P(n)   (Equation 11), whose gradient with respect to
P(n) is g(n) = R(n) P(n) − b   (Equation 12). Based on the above discussion, the stable FAP (SFAP) method which uses the steepest descent technique includes the following steps. Initialization. Updating the adaptive filter coefficients in sampling interval n, including: recursive determining of an auto-correlation matrix; determining the projection coefficients by solving the system of linear Equations (10) using the steepest descent technique, the projection coefficients being the coefficients of the inverse of the auto-correlation matrix: P(n) = P(n−1) − β(n) g(n)   (Equation 17)
and performing adaptive filtering for updating the filter coefficients: e(n) = d(n) − y(n)   (Equation 20)
n) (Equation 21)
It is important to note that the feedback adjustment provided by Equations (15), (16) and (17) does not exist in known prior art approaches. The prior art FAP approaches determine P(n) based on P(n−1) and the new incoming data X(n) only, without examining how well a P(n) actually approximates the inverse of R(n). The three expressions shown in Equations (15), (16) and (17) correspond to the three steps of the steepest descent technique discussed above. An adaptive filter
reference input signal x(n) and an auxiliary signal f(n) (see Equation (33) below), used for updating the coefficients, and generates a provisional echo estimate signal PR(n) (see Equation (34) below). The updating means A convention in FIG. 2 is the use of a thick line to represent the propagation of a matrix or vector signal, i.e., one with more than one component, and the use of a thin line to stand for a scalar signal propagation. In FIG. 2 a correlator A steepest descent calculator Two C language prototypes implementing the steepest descent technique according to the first embodiment of the invention have been built. The first one is a floating-point module, and the second one is a 16-bit fixed-point DSP implementation. A floating-point module simulating the NLMS acoustic echo canceller design in Venture, a successful full-duplex handsfree telephone terminal product by Nortel Networks Corporation, and a benchmark floating-point module that repeats a prior art FAP scheme by Q. G. Liu, B. Champagne, and K. C. Ho (Bell-Northern Research and INRS-Télécommunications, Université du Québec), “On the Use of a Modified Fast Affine Projection Algorithm in Subbands for Acoustic Echo Cancellation,” pp. 354-357, Proceedings of 1996 IEEE Digital Signal Processing Workshop, Loen, Norway, September 1996, have also been implemented for comparison purposes. The following data files have been prepared for processing. The source ones are speech files with Harvard sentences (Intermediate Reference System filtered or not) sampled at 8 kHz, and a white noise file. Out of the source files, certain echo files have been produced by filtering the source ones with certain measured, 1200-tap room impulse responses. These two sets of files act as x(n) and d(n), respectively. The major simulation results are as follows.
The bench mark prior art floating-point FAP scheme, with L=1024 and N=5, goes unstable at 2′57″ (2 minutes and 57 seconds, real time, with 8 kHz sampling rate) with speech training, but with certain unhealthy signs showing up after only about 25 seconds. These signs are in the form of improper excursions of the elements of the vector For comparison, within the time period of our longest test case (7′40″), the portions that estimate Filters of another length L=512 have also been built for SFAP, the prior art FAP and NLMS. As expected, they converge approximately twice as fast as they do for L=1024. Thus, the adaptive filter and method using a steepest descent calculator for determining the inverse matrix coefficients, providing stability of adaptive filtering, are provided. A method of adaptive filtering according to a second embodiment of the present invention uses an iterative “conjugate gradient” technique to iteratively solve Equation (10), the corresponding calculator being shown in FIG. 4. Conjugate gradient is a technique that also seeks the minimum point of a certain quadratic function iteratively. Conjugate gradient is closely related to the steepest descent scheme discussed above. It differs from the steepest descent in that it is guaranteed to reach the minimum in no more than N steps, with N being the order of the system. That is, conjugate gradient usually converges faster than the steepest descent. At each iteration (the same as a sampling interval in our application), the conjugate gradient takes five steps consecutively:
1. to find the gradient of the quadratic function at the current point;
2. to find the optimum factor for adjusting the direction vector, along which adjustment to the parameter vector will be made;
3. to update the direction vector as determined above;
4. to find the optimum step size for the parameter vector updating; and
5. to update the parameter vector as determined above.
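The five steps can be sketched with the textbook conjugate gradient iteration for R p = b (the function name and test values are hypothetical; this follows the generic technique, not the patent's Equations (26)–(32) verbatim). For an N×N symmetric positive-definite system it reaches the exact solution in at most N iterations.

```python
def conjugate_gradient(R, b, n_steps=None):
    """Solve R p = b for symmetric positive-definite R by conjugate gradients."""
    n = len(b)
    if n_steps is None:
        n_steps = n  # exact in at most N steps
    mat_vec = lambda v: [sum(R[i][j] * v[j] for j in range(n)) for i in range(n)]
    p = [0.0] * n
    g = [-bi for bi in b]   # Step 1: gradient g = R p - b, with p = 0 initially
    s = [-gi for gi in g]   # first direction vector: the negative gradient
    for _ in range(n_steps):
        Rs = mat_vec(s)
        sRs = sum(s[i] * Rs[i] for i in range(n))
        if sRs == 0.0:
            break
        # Step 4: optimum step size along the direction s.
        beta = -sum(g[i] * s[i] for i in range(n)) / sRs
        # Step 5: update the parameter vector.
        p = [p[i] + beta * s[i] for i in range(n)]
        # Step 1 (next pass): updated gradient.
        g = [g[i] + beta * Rs[i] for i in range(n)]
        # Steps 2-3: direction-adjustment factor, then the new direction.
        gamma = sum(g[i] * Rs[i] for i in range(n)) / sRs
        s = [gamma * s[i] - g[i] for i in range(n)]
    return p

# Hypothetical 2x2 example; the exact solution of R p = b is [1/11, 7/11].
p = conjugate_gradient([[4.0, 1.0], [1.0, 3.0]], [1.0, 2.0])
```

Unlike steepest descent, each new direction is a combination of the negative gradient and the previous direction, which is what guarantees termination in N steps.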
Unlike the steepest descent algorithm, which simply takes the negative gradient of the quadratic function as the parameter vector updating direction, conjugate gradient modifies the negative gradient to determine an optimized direction. By iteratively doing the above, the scheme reaches the unique minimum of the quadratic function, where the gradient is zero, in no more than N steps. The conjugate gradient technique also continuously tracks the minimum if it moves, as is the case with a non-stationary input signal x(n). Details about the conjugate gradient algorithm can be found, for example, in a book by David G. Luenberger (Stanford University), Linear and Nonlinear Programming, Addison-Wesley Publishing Company, 1984. For an adaptive filtering application, the implied quadratic function is still the one shown in Equation (11), whose gradient with respect to P(n) is given by Equation (12). Based on the above discussion, the SFAP method according to the second embodiment, which uses the conjugate gradient technique, includes the following steps. Initialization. Updating the adaptive filter coefficients in sampling interval n, including: recursive determining of an auto-correlation matrix, and then γ(n) = r_srs(n−1) g^T(n) b(n−1)   (Equation 27)
n)=γ(n) (s n−1)− (g n) (Equation 28)
n)=R(n) (s n) (Equation 29) n)=−r_{srs}(n) g ^{T}(n) (s n) (Equation 31)
n)= (P n−1)+β(n) (s n) (Equation 32)and performing an adaptive filtering for updating the filter coefficients
n)= (W n−1)+αη_{N−1}(n−1) (X n−N)= (W n−1)+f(n) (X n−N) (Equation 33)
n) (X n)+α,{overscore (η)} ^{T}(n−1) ({tilde over (R)} n)=PR(n)+EC(n) (Equation 34)
n) (Equation 36)
where The five expressions shown in Equations (26), (27), (28), (31) and (32) respectively correspond to the five steps of the conjugate gradient technique discussed earlier in this section. As shown in Table 2, the total computational requirement of the Stable FAP method according to the second embodiment of the invention is 2L+2N An adaptive filter according to the second embodiment of the invention is similar to that of the first embodiment shown in FIG. 2 except for the calculator The conjugate gradient calculator The rest of the structure of the adaptive filter, employing the conjugate gradient calculator
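As a small illustration, the coefficient update of Equation (33) is a single scalar-gain correction of the L filter weights along the delayed reference vector X(n−N); a minimal C sketch, with a hypothetical filter length (the text uses L = 512 and L = 1024):

```c
#include <assert.h>

#define SF_L 8  /* filter length L, illustrative only */

/* Equation (33): W(n) = W(n-1) + f(n) X(n-N), with the scalar gain
 * f(n) = alpha * eta[N-1] taken from the previous sampling interval. */
static void sfap_update_weights(double w[SF_L], const double x_delayed[SF_L],
                                double alpha, double eta_last)
{
    const double f = alpha * eta_last;  /* scalar gain f(n) */
    int i;
    for (i = 0; i < SF_L; i++)
        w[i] += f * x_delayed[i];       /* one multiply-add per tap */
}
```

This update is one source of the 2L term in the computational totals quoted above: L multiply-adds here, and L more in the filtering convolution itself.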
A C language prototype for a 16-bit fixed-point DSP implementation of the SFAP using the conjugate gradient scheme has been built and studied. It has the same parameters (L=1024 and N=5) and uses the same data files as the steepest descent prototype described above, and it behaves very similarly to its floating-point steepest descent counterpart, with no observable difference in the way it converges. A method of adaptive filtering according to a third embodiment of the present invention provides adaptive filtering when the normalized step size has any value from 0 to 1. It updates the adaptive filter coefficients by iteratively solving a number of systems of linear equations having decrementing orders to determine the inverse auto-correlation matrix, in a manner described below. Let us first prove that, if P is the inverse of a symmetric matrix R, then it is also symmetric. By definition
R P = P R = I (Equation 38)

Transposing Equation (38) we get

(R P)^T = P^T R^T = I and (P R)^T = R^T P^T = I (Equation 39)

respectively. Since R and I are symmetric, Equation (39) can be written as

P^T R = R P^T = I (Equation 40)

This means that P^T is also an inverse of R. Since the inverse of a matrix is unique, we must have

P^T = P (Equation 41)

That is, P is symmetric. Based on the understanding that the inverse of a symmetric matrix is also symmetric, let us consider a sampling interval n where we need to find an N-th order square matrix P(n) so that

R(n) P(n) = I (Equation 42)
Equation (42) can be written in the scalar form of Equation (43), where r_{ik}(n) and p_{kj}(n) denote the elements of R(n) and P(n), respectively. We first solve the set of N linear equations defined by j=0 in Equation (43), for {p_{i0}(n), i=0, 1, …, N−1}, i.e., the first column of P(n). Equation (45) coincides with Equation (10) derived earlier and applied to the first and second embodiments of the invention. The right hand side of Equation (45) or Equation (46) is the first column of the identity matrix in Equation (42), i.e., the first unit vector. Having dealt with the j=0 case, we now start solving the set of N linear equations defined by j=1 in Equation (43), for the second column of P(n). Because P(n) is symmetric, p_{01}(n)=p_{10}(n) is already known from the j=0 case, and we are left with still N equations but only N−1 instead of N unknowns, i.e., {p_{i1}(n), i=1, 2, …, N−1}. Equation (49) has the same format as Equation (45) except that the order is reduced by one. Equation (49) can also be solved by using either of the two approaches presented above, with N−1 replacing N in the corresponding computation counts. By repeating the above recursion steps, with the order of the problem decrementing by one each step, we can completely solve the lower triangle of P(n). Since P(n) is symmetric, this is equivalent to solving the entire P(n). A formula for this entire process, Equation (50), can be derived from Equation (43) and the concept described above. Note that the right hand sides of Equation (50) for all i at each recursion step j do not contain any unknowns, i.e., the p_{ij}(n) appearing there are available before step j. The resulting computational requirements are given by Equation (51) for the steepest descent method, and by N divisions and Equation (52) for the conjugate gradient method. Note that in deriving Equations (51) and (52) the following formulae are used, which can be easily proven by mathematical induction. Based on the above derivations, the SFAP method according to the third embodiment of the invention includes the following steps: initialization, followed by updating of the adaptive filter coefficients in sampling interval n, including the steps shown in Equation (55) below. Please note that the designations used in Equation (55) follow those presented for the foregoing embodiments.
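The column-by-column recursion can be sketched as follows; for brevity a direct Gaussian elimination stands in for the steepest descent or conjugate gradient solvers named in the text, and N = 3 with arbitrary data is an illustrative assumption:

```c
#include <assert.h>
#include <math.h>

#define PN 3  /* order N of the auto-correlation matrix, illustrative */

/* Direct Gaussian elimination (no pivoting) on an m x m symmetric
 * positive definite system; stands in for the patent's iterative
 * solvers for brevity. */
static void ge_solve(double a[PN][PN], double rhs[PN], double out[PN], int m)
{
    int i, j, k;
    for (k = 0; k < m; k++)
        for (i = k + 1; i < m; i++) {
            double f = a[i][k] / a[k][k];
            for (j = k; j < m; j++)
                a[i][j] -= f * a[k][j];
            rhs[i] -= f * rhs[k];
        }
    for (i = m - 1; i >= 0; i--) {
        out[i] = rhs[i];
        for (j = i + 1; j < m; j++)
            out[i] -= a[i][j] * out[j];
        out[i] /= a[i][i];
    }
}

/* Solve R(n)P(n) = I column by column: for column j only the entries
 * p[i][j] with i >= j are unknown; entries with i < j are known from
 * symmetry and move to the right hand side, so each system is one
 * order smaller than the last. */
static void invert_by_columns(const double r[PN][PN], double p[PN][PN])
{
    int i, j, k;
    for (j = 0; j < PN; j++) {
        int m = PN - j;  /* order decrements with each column */
        double a[PN][PN], rhs[PN], sol[PN];
        for (i = j; i < PN; i++) {
            double acc = (i == j) ? 1.0 : 0.0;  /* identity column */
            for (k = 0; k < j; k++)
                acc -= r[i][k] * p[j][k];       /* known, by symmetry */
            rhs[i - j] = acc;
            for (k = j; k < PN; k++)
                a[i - j][k - j] = r[i][k];
        }
        ge_solve(a, rhs, sol, m);
        for (i = j; i < PN; i++) {
            p[i][j] = sol[i - j];
            p[j][i] = sol[i - j];  /* fill upper triangle by symmetry */
        }
    }
}
```

Each column j costs one solve of order N−j only; the already-known entries enter the right hand side, which is the mechanism by which the order decrements.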
An adaptive filter according to the third embodiment of the invention includes a P(n) calculator for determining the inverse auto-correlation matrix as described above. In a modification to this embodiment, the steepest descent calculator is replaced with a conjugate gradient calculator. A method of adaptive filtering according to a fourth embodiment of the present invention also provides adaptive filtering when the normalized step size has any value from 0 to 1. It updates the adaptive filter coefficients by iteratively solving a number of systems of linear equations, which avoids the explicit matrix inversion performed in the third embodiment of the invention. The details are described below. The second equation from the set of Equations (6), which is reproduced for convenience in Equation (56) below, is equivalent to
a system of linear equations in the sought solution vector (Equation 56). It is possible to obtain this solution directly, without explicitly determining the inverse auto-correlation matrix. By way of example, we will use a conjugate gradient method and perform N conjugate gradient iterations so that an exact solution, not merely an iterated approximation, is reached. This is ensured by the fact that the conjugate gradient method is guaranteed to reach the solution in no more than N iterations, with N being the order of the problem; see Equation (55). It is convenient to start with the initialization shown below. Accordingly, the SFAP method of the fourth embodiment of the invention includes the following steps:
The steps of the adaptive filtering method according to the fourth embodiment comprise an initialization stage followed by processing in each sampling interval n, presented in more detail below.
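The heart of such processing — obtaining the solution of the system in Equation (56) directly, with exactly N conjugate gradient iterations and with every quotient guarded as noted in the text — can be sketched under illustrative assumptions (N = 3, hypothetical names):

```c
#include <assert.h>
#include <math.h>

#define EQ_N 3  /* order of the system, illustrative only */

/* Solve R*eps = e by running exactly EQ_N conjugate gradient
 * iterations; no inverse matrix is formed.  Every quotient is guarded:
 * a zero is assigned when the denominator is not greater than zero. */
static void solve_direct(const double R[EQ_N][EQ_N], const double e[EQ_N],
                         double eps[EQ_N])
{
    double g[EQ_N], s[EQ_N], t[EQ_N];
    int i, j, it;

    for (i = 0; i < EQ_N; i++) {
        eps[i] = 0.0;
        s[i] = 0.0;
    }
    for (it = 0; it < EQ_N; it++) {
        double num, den, gamma, beta;

        /* gradient g = R*eps - e */
        for (i = 0; i < EQ_N; i++) {
            g[i] = -e[i];
            for (j = 0; j < EQ_N; j++)
                g[i] += R[i][j] * eps[j];
        }
        /* conjugation factor gamma = (g'*R*s) / (s'*R*s), guarded */
        num = den = 0.0;
        for (i = 0; i < EQ_N; i++) {
            t[i] = 0.0;
            for (j = 0; j < EQ_N; j++)
                t[i] += R[i][j] * s[j];
        }
        for (i = 0; i < EQ_N; i++) {
            num += g[i] * t[i];
            den += s[i] * t[i];
        }
        gamma = (den > 0.0) ? num / den : 0.0;

        /* direction update s = gamma*s - g */
        for (i = 0; i < EQ_N; i++)
            s[i] = gamma * s[i] - g[i];

        /* optimum step beta = -(g'*s) / (s'*R*s), guarded */
        num = den = 0.0;
        for (i = 0; i < EQ_N; i++) {
            t[i] = 0.0;
            for (j = 0; j < EQ_N; j++)
                t[i] += R[i][j] * s[j];
        }
        for (i = 0; i < EQ_N; i++) {
            num += g[i] * s[i];
            den += s[i] * t[i];
        }
        beta = (den > 0.0) ? -num / den : 0.0;

        /* parameter update eps = eps + beta*s */
        for (i = 0; i < EQ_N; i++)
            eps[i] += beta * s[i];
    }
}
```

Because the conjugate gradient method terminates in at most N steps on an N-th order symmetric positive definite system, the result is exact up to rounding, and no inverse matrix is ever formed.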
The designations in these steps are similar to those presented with regard to the first, second and third embodiments described above. Note that a division operation in Equation (56) is not performed if its denominator is not greater than zero; in that case a zero is assigned to the quotient. An adaptive filter 600 according to the fourth embodiment of the invention is shown in detail in FIG. 6. Modifications described with regard to the first two embodiments are equally applicable to the third and fourth embodiments of the invention. Two "C" prototypes according to the third and fourth embodiments of the invention have been implemented on a floating-point PC platform. They have demonstrated results completely consistent with the results of the first and second embodiments of the invention. Thus, an adaptive filter and a method providing stability of adaptive filtering based on feedback adjustment are provided. Although the methods operate with real-valued numbers, this does not prevent the invention from being extended to cases where the introduction of complex numbers is necessary. Although the embodiments are illustrated within the context of echo cancellation, the results are also applicable to other adaptive filtering applications. Thus, it will be appreciated that, while specific embodiments of the invention are described in detail above, numerous variations, modifications and combinations of these embodiments fall within the scope of the invention as defined in the following claims.