
On Ways to Improve Adaptive Filter Performance

Date

1999-12-13

Publisher

Virginia Tech

Abstract

Adaptive filtering techniques are used in a wide range of applications, including echo cancellation, adaptive equalization, adaptive noise cancellation, and adaptive beamforming. The performance of an adaptive filtering algorithm is evaluated based on its convergence rate, misadjustment, computational requirements, and numerical robustness. We attempt to improve the performance by developing new adaptation algorithms and by using "unconventional" structures for adaptive filters.

Part I of this dissertation presents a new adaptation algorithm, which we have termed the Normalized LMS algorithm with Orthogonal Correction Factors (NLMS-OCF). The NLMS-OCF algorithm updates the adaptive filter coefficients (weights) on the basis of multiple input signal vectors, while NLMS updates the weights on the basis of a single input vector. The well-known Affine Projection Algorithm (APA) is a special case of our NLMS-OCF algorithm.
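For reference, the sketch below shows the standard affine-projection update that the abstract cites as a special case of NLMS-OCF. It is a minimal NumPy illustration rather than the dissertation's own formulation; the function name, the step size mu, and the regularization delta are illustrative choices.

import numpy as np

def apa_update(w, X, d, mu=0.5, delta=1e-6):
    """One affine-projection (APA) weight update.

    w     : (N,)    current adaptive filter weights
    X     : (N, M1) columns are the M+1 most recent input vectors
    d     : (M1,)   desired responses paired with those input vectors
    mu    : step size (illustrative value)
    delta : regularization keeping the Gram matrix invertible (illustrative)
    """
    e = d - X.T @ w                                   # a priori errors for all M+1 vectors
    g = np.linalg.solve(X.T @ X + delta * np.eye(X.shape[1]), e)
    return w + mu * X @ g                             # update within the span of the input vectors

With a single input vector (M+1 = 1) this reduces to the NLMS update; per the abstract, NLMS-OCF builds the same kind of multi-vector update out of orthogonal correction factors and contains APA as a special case.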

We derive convergence and tracking properties of NLMS-OCF using a simple model for the input vector. Our analysis shows that the convergence rate of NLMS-OCF (and also APA) is exponential and that it improves with an increase in the number of input signal vectors used for adaptation. While we show that, in theory, the misadjustment of the APA class is independent of the number of vectors used for adaptation, simulation results show a weak dependence. For white input, the mean-squared error drops by 20 dB in about 5N/(M+1) iterations, where N is the number of taps in the adaptive filter and M+1 is the number of vectors used for adaptation. The dependence of the steady-state error and of the tracking properties on the three user-selectable parameters, namely the step size, the number of vectors used for adaptation (M+1), and the input vector delay D, is discussed. While the lag error depends on all of the above parameters, the fluctuation error depends only on the step size. Increasing D results in a linear increase in the lag error and hence in the total steady-state mean-squared error. The optimum choices for the step size and M are derived. Simulation results are provided to corroborate our analytical results.
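As a purely illustrative reading of the quoted rule of thumb (the filter length and vector counts below are made-up values, not settings taken from the dissertation's simulations):

\[
\text{iterations for a 20 dB MSE drop} \;\approx\; \frac{5N}{M+1},
\qquad
N = 1024:\quad \frac{5\cdot 1024}{1} = 5120 \ \ (\text{NLMS}),
\qquad \frac{5\cdot 1024}{8} = 640 \ \ (M+1 = 8).
\]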

We also derive a fast version of our NLMS-OCF algorithm that has a complexity of O(NM). The fast version of the algorithm performs orthogonalization using a forward-backward prediction lattice. We demonstrate the advantages of using NLMS-OCF in a practical application, namely stereophonic acoustic echo cancellation. We find that NLMS-OCF can provide faster convergence, as well as better echo rejection, than the widely used APA.

While the first part of this dissertation attempts to improve adaptive filter performance by refining the adaptation algorithm, the second part of this work looks at improving the convergence rate by using different structures. From an abstract viewpoint, the parameterization we decide to use has no special significance, other than serving as a vehicle to arrive at a good input-output description of the system. However, from a practical viewpoint, the parameterization determines how easy it is to numerically minimize the adaptive filter's cost function.

A balanced realization is known to minimize parameter sensitivity as well as the condition number of the Gramians. Furthermore, a balanced realization is useful in model order reduction. These properties of the balanced realization make it an attractive candidate as a structure for adaptive filtering. We propose an adaptive filtering algorithm based on balanced realizations.
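The abstract does not spell out how the balancing is embedded in the adaptive algorithm itself; the sketch below only shows the standard square-root computation of a balanced realization of a stable discrete-time state-space model from its Gramians, which is the property this paragraph appeals to. Function and variable names are illustrative.

import numpy as np
from scipy.linalg import solve_discrete_lyapunov, cholesky, svd

def balance_realization(A, B, C):
    """Balanced realization (Ab, Bb, Cb) of a stable, minimal discrete-time
    state-space model x[k+1] = A x[k] + B u[k], y[k] = C x[k]."""
    # Discrete Lyapunov equations for the two Gramians:
    #   Wc = A Wc A^T + B B^T,   Wo = A^T Wo A + C^T C
    Wc = solve_discrete_lyapunov(A, B @ B.T)
    Wo = solve_discrete_lyapunov(A.T, C.T @ C)

    # Square-root balancing: Cholesky factors of the Gramians, then an SVD.
    Lc = cholesky(Wc, lower=True)          # Wc = Lc Lc^T
    Lo = cholesky(Wo, lower=True)          # Wo = Lo Lo^T
    U, s, Vt = svd(Lo.T @ Lc)              # Lo^T Lc = U diag(s) V^T

    S_inv_sqrt = np.diag(1.0 / np.sqrt(s))
    T = Lc @ Vt.T @ S_inv_sqrt             # balancing transformation
    T_inv = S_inv_sqrt @ U.T @ Lo.T

    return T_inv @ A @ T, T_inv @ B, C @ T, s   # s: Hankel singular values

In the balanced coordinates both Gramians equal the diagonal matrix of Hankel singular values, which is what gives the low parameter sensitivity and the natural model-order-reduction criterion mentioned above.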

The third part of this dissertation proposes a unit-norm-constrained, equation-error-based adaptive IIR filtering algorithm. Minimizing the equation error subject to the unit-norm constraint yields an unbiased estimate of the parameters of a system if the measurement noise is white. The proposed algorithm uses the hyper-spherical transformation to convert this constrained optimization problem into an unconstrained one. It is shown that the hyper-spherical transformation does not introduce any new minima in the equation-error surface. Hence, simple gradient-based algorithms converge to the global minimum. Simulation results indicate that the proposed algorithm provides an unbiased estimate of the system parameters.
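The dissertation's exact parameterization is not given in the abstract; the sketch below only illustrates the generic hyper-spherical map it refers to, which turns unconstrained angles into a coefficient vector that satisfies the unit-norm constraint by construction, so that an ordinary gradient method can be run on the angles. Which coefficients are gathered into the constrained vector, and the finite-difference gradient, are assumptions of the example.

import numpy as np

def hypersphere(theta):
    """Map n-1 unconstrained angles to a unit-norm vector in R^n.

    v[0]   = cos(theta[0])
    v[i]   = sin(theta[0])...sin(theta[i-1]) * cos(theta[i])
    v[n-1] = sin(theta[0])...sin(theta[n-2])
    """
    v = np.empty(len(theta) + 1)
    prod = 1.0
    for i, t in enumerate(theta):
        v[i] = prod * np.cos(t)
        prod *= np.sin(t)
    v[-1] = prod
    return v                                   # ||v|| == 1 for any theta

def grad_step(theta, cost, mu=0.01, h=1e-6):
    """One gradient step on the angles for an equation-error cost evaluated
    on hypersphere(theta); central finite differences, for illustration only."""
    g = np.array([(cost(theta + h * e) - cost(theta - h * e)) / (2 * h)
                  for e in np.eye(len(theta))])
    return theta - mu * g

Because the map reaches every unit-norm vector and, per the abstract, introduces no new minima in the equation-error surface, such a gradient method can converge to the global minimum.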

Keywords

Unbiased Equation Error, Algorithm Analysis, Stereophonic Echo Cancellation, Affine Projection Algorithm, Filter Structures, Adaptive Filtering, Adaptive IIR Filtering
