
Slide 1: Adaptive & Array Signal Processing (AASP)
Prof. Dr.-Ing. João Paulo C. Lustosa da Costa
University of Brasília (UnB), Department of Electrical Engineering (ENE)
Laboratory of Array Signal Processing (Laboratório de Processamento de Sinais em Arranjos)
PO Box 4386, Zip Code 70.919-970, Brasília - DF
Homepage: http://www.pgea.unb.br/~lasp

Slide 2: Method of the Steepest Descent (1)
- Estimation error: $e(n) = d(n) - \mathbf{w}^H(n)\,\mathbf{x}(n)$
- Mean squared-error cost function: $J(n) = E\{|e(n)|^2\}$

Slide 3: Method of the Steepest Descent (2)
- Expanding the mean squared-error cost function:
  $J(n) = \sigma_d^2 - \mathbf{w}^H(n)\,\mathbf{p} - \mathbf{p}^H\,\mathbf{w}(n) + \mathbf{w}^H(n)\,\mathbf{R}\,\mathbf{w}(n)$,
  where $\mathbf{p} = E\{\mathbf{x}(n)\,d^*(n)\}$ is the cross-correlation vector and $\mathbf{R} = E\{\mathbf{x}(n)\,\mathbf{x}^H(n)\}$ is the correlation matrix of the input.

Slide 4: Method of the Steepest Descent (3)
- The steepest-descent algorithm is the oldest method for optimization.
- Using the Wiener-Hopf equations directly brings computational difficulties due to the matrix inversion: a large number of taps and high-rate input data.
- Successive correction of $\mathbf{w}(n)$ should lead to $J_{\min}$.
- The steepest-descent update equation is defined as
  $\mathbf{w}(n+1) = \mathbf{w}(n) + \tfrac{1}{2}\,\mu\,[-\nabla J(n)]$.
- Since $\nabla J(n) = -2\,\mathbf{p} + 2\,\mathbf{R}\,\mathbf{w}(n)$, replacing the gradient in the steepest-descent equation gives
  $\mathbf{w}(n+1) = \mathbf{w}(n) + \mu\,[\mathbf{p} - \mathbf{R}\,\mathbf{w}(n)]$,
  as in the sketch below.
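As an illustration, a minimal NumPy sketch of this update rule. The matrix $\mathbf{R}$, vector $\mathbf{p}$, and step size are made-up values for the example, not taken from the slides:

```python
import numpy as np

def steepest_descent(R, p, mu, n_iter=500):
    """Iterate w(n+1) = w(n) + mu * (p - R @ w(n)) toward the Wiener solution."""
    w = np.zeros_like(p)
    for _ in range(n_iter):
        w = w + mu * (p - R @ w)
    return w

# Hypothetical 2-tap example; R and p would normally come from signal statistics.
R = np.array([[2.0, 0.5],
              [0.5, 1.0]])
p = np.array([1.0, 0.4])

w = steepest_descent(R, p, mu=0.1)
print(w)                      # iterative estimate
print(np.linalg.solve(R, p))  # Wiener solution for comparison
```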

Slide 5: Method of the Steepest Descent (4)
- Comparison: Wiener solution vs. steepest descent (see the sketch above).
- For the Wiener solution: $\mathbf{R}$ and $\mathbf{p}$ should be given, and one matrix inversion is necessary to find the optimal $\mathbf{w}$.
- For the steepest-descent solution: $\mathbf{R}$ and $\mathbf{p}$ should be given, but no matrix inversion is necessary. However, a certain number of iterations is needed for $\mathbf{w}$ to converge.
- The speed of convergence, i.e. the number of iterations until convergence, depends on the step size $\mu$. If $\mu$ exceeds the stability bound, the algorithm never converges; if $\mu$ is much smaller than the optimal value, the algorithm converges slowly.

Slide 6: Method of the Steepest Descent (5)
- Stability of the steepest-descent algorithm.
- The equation to update $\mathbf{w}(n)$ can be written in terms of the weight-error vector $\mathbf{c}(n)$ defined below:
  $\mathbf{c}(n) = \mathbf{w}(n) - \mathbf{w}_o$,
  where $\mathbf{w}_o = \mathbf{R}^{-1}\,\mathbf{p}$ is the Wiener solution.
- Therefore:
  $\mathbf{c}(n+1) = (\mathbf{I} - \mu\,\mathbf{R})\,\mathbf{c}(n)$.

Slide 7: Method of the Steepest Descent (6)
- Stability of the steepest-descent algorithm.
- Computing the eigenvalue decomposition (EVD) of $\mathbf{R}$:
  $\mathbf{R} = \mathbf{Q}\,\boldsymbol{\Lambda}\,\mathbf{Q}^H$.
- Defining $\mathbf{v}(n) = \mathbf{Q}^H\,\mathbf{c}(n)$, we can write the update equation of $\mathbf{c}(n)$ as
  $\mathbf{v}(n+1) = (\mathbf{I} - \mu\,\boldsymbol{\Lambda})\,\mathbf{v}(n)$.

Slide 8: Method of the Steepest Descent (7)
- Stability of the steepest-descent algorithm.
- For the k-th element of each vector, we have that:
  $v_k(n+1) = (1 - \mu\,\lambda_k)\,v_k(n)$.
- Assuming some initial value $v_k(0)$ when $n = 0$, we can rewrite the n-th element as a function of the initial value:
  $v_k(n) = (1 - \mu\,\lambda_k)^n\,v_k(0)$.

Slide 9: Method of the Steepest Descent (8)
- Stability of the steepest-descent algorithm.
- Hence, for convergence we need $|1 - \mu\,\lambda_k| < 1$ for all $k$, which gives
  $0 < \mu < \dfrac{2}{\lambda_{\max}}$.

Slide 10: Method of the Steepest Descent (9)
- Stability of the steepest-descent algorithm.
- With this upper and lower bound for $\mu$, it is possible to find the step size that allows fast convergence; balancing the slowest modes yields $\mu_{\text{opt}} = \dfrac{2}{\lambda_{\max} + \lambda_{\min}}$. A numerical check of the bound follows below.
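A small numerical check of this bound, reusing the same hypothetical $\mathbf{R}$ as in the earlier sketch: step sizes below $2/\lambda_{\max}$ drive the weight error toward zero, while a step size above the bound diverges.

```python
import numpy as np

# Hypothetical correlation matrix and cross-correlation vector (illustration only).
R = np.array([[2.0, 0.5],
              [0.5, 1.0]])
p = np.array([1.0, 0.4])
w_opt = np.linalg.solve(R, p)          # Wiener solution

lam_max = np.max(np.linalg.eigvalsh(R))
print("stability bound 2/lambda_max =", 2.0 / lam_max)

for mu in (0.5 / lam_max, 1.9 / lam_max, 2.1 / lam_max):
    w = np.zeros_like(p)
    for _ in range(200):
        w = w + mu * (p - R @ w)       # steepest-descent iteration
    print(f"mu = {mu:.3f}: ||w - w_opt|| = {np.linalg.norm(w - w_opt):.3e}")
```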

Slide 11: Least Mean Squares (LMS) vs. Steepest Descent (1)
- To find a good approximation of $\mathbf{p}$ and $\mathbf{R}$, we would need a large number of samples.
- However, since in many applications only a few samples are available, we can compute instantaneous estimates:
  $\hat{\mathbf{p}}(n) = \mathbf{x}(n)\,d^*(n)$ and $\hat{\mathbf{R}}(n) = \mathbf{x}(n)\,\mathbf{x}^H(n)$.
- We can replace these estimates of $\mathbf{p}$ and of $\mathbf{R}$ in the mean squared-error cost function.
- By applying the gradient, we obtain:
  $\hat{\nabla} J(n) = -2\,\mathbf{x}(n)\,d^*(n) + 2\,\mathbf{x}(n)\,\mathbf{x}^H(n)\,\hat{\mathbf{w}}(n)$.

Slide 12: Least Mean Squares (LMS) vs. Steepest Descent (2)
- Replacing the gradient in the steepest-descent expression:
  $\hat{\mathbf{w}}(n+1) = \hat{\mathbf{w}}(n) + \mu\,\mathbf{x}(n)\,[d^*(n) - \mathbf{x}^H(n)\,\hat{\mathbf{w}}(n)] = \hat{\mathbf{w}}(n) + \mu\,\mathbf{x}(n)\,e^*(n)$.

Slide 13: Least Mean Squares (LMS) vs. Steepest Descent (3)
- We can summarize the LMS algorithm in three steps (sketched below):
- 1st step, compute the filter output: $y(n) = \hat{\mathbf{w}}^H(n)\,\mathbf{x}(n)$
- 2nd step, estimation error: $e(n) = d(n) - y(n)$
- 3rd step, tap-weight adaptation: $\hat{\mathbf{w}}(n+1) = \hat{\mathbf{w}}(n) + \mu\,\mathbf{x}(n)\,e^*(n)$
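A minimal NumPy sketch of these three steps, assuming a real-valued system-identification scenario (for real signals the conjugates drop out); the unknown filter h, signal length, and step size are made up for the example:

```python
import numpy as np

def lms(x, d, num_taps, mu):
    """LMS adaptive filter: output, error, and tap-weight update per sample."""
    w = np.zeros(num_taps)
    e = np.zeros(len(x))
    for n in range(num_taps, len(x)):
        x_n = x[n - num_taps + 1:n + 1][::-1]  # tap vector [x(n), ..., x(n-M+1)]
        y = w @ x_n                            # 1st step: filter output
        e[n] = d[n] - y                        # 2nd step: estimation error
        w = w + mu * x_n * e[n]                # 3rd step: tap-weight adaptation
    return w, e

# Hypothetical setup: identify an unknown 4-tap filter h from noisy observations.
rng = np.random.default_rng(0)
h = np.array([0.8, -0.4, 0.2, 0.1])
x = rng.standard_normal(5000)
d = np.convolve(x, h, mode="full")[:len(x)] + 0.01 * rng.standard_normal(len(x))

w, e = lms(x, d, num_taps=4, mu=0.01)
print(w)  # should approach h
```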

Slide 14: Recursive Least Squares (RLS) (1)
- We consider that the estimation error is more affected by more recent samples than by older samples, and minimize the weighted sum
  $\mathcal{E}(n) = \sum_{i=1}^{n} \beta(n,i)\,|e(i)|^2$,
  where $n$ is the total number of samples, $\beta(n,i)$ is the weighting factor, and $e(i)$ is the estimation error for the i-th sample.
- Using the definition of $e(i)$:
  $e(i) = d(i) - \mathbf{w}^H(n)\,\mathbf{x}(i)$.
- We define the weighting factor $\beta(n,i)$ in the following way:
  $\beta(n,i) = \lambda^{\,n-i}$, with $0 < \lambda \le 1$ (the forgetting factor).

Slide 15: Recursive Least Squares (RLS) (2)
- Using the definition of the weighting factor $\beta(n,i) = \lambda^{\,n-i}$ and considering $n$ samples, we can extend the Wiener-Hopf equations as
  $\boldsymbol{\Phi}(n)\,\mathbf{w}(n) = \mathbf{z}(n)$.
- The M-by-M correlation matrix $\boldsymbol{\Phi}(n)$ is defined as
  $\boldsymbol{\Phi}(n) = \sum_{i=1}^{n} \lambda^{\,n-i}\,\mathbf{x}(i)\,\mathbf{x}^H(i)$.
- The M-by-1 cross-correlation vector $\mathbf{z}(n)$ is defined as
  $\mathbf{z}(n) = \sum_{i=1}^{n} \lambda^{\,n-i}\,\mathbf{x}(i)\,d^*(i)$.

Slide 16: Recursive Least Squares (RLS) (3)
- Using the definition of the correlation matrix, we can rewrite it in a recursive way:
  $\boldsymbol{\Phi}(n) = \lambda\,\boldsymbol{\Phi}(n-1) + \mathbf{x}(n)\,\mathbf{x}^H(n)$.

Slide 17: Recursive Least Squares (RLS) (4)
- Analogously, we can also compute the recursive form of the cross-correlation vector $\mathbf{z}(n)$:
  $\mathbf{z}(n) = \lambda\,\mathbf{z}(n-1) + \mathbf{x}(n)\,d^*(n)$.
- Matrix inversion lemma (also known as Woodbury's identity): given a certain matrix $\mathbf{A}$ defined as
  $\mathbf{A} = \mathbf{B}^{-1} + \mathbf{C}\,\mathbf{D}^{-1}\,\mathbf{C}^H$,
  the inverse of $\mathbf{A}$ is given by
  $\mathbf{A}^{-1} = \mathbf{B} - \mathbf{B}\,\mathbf{C}\,(\mathbf{D} + \mathbf{C}^H\,\mathbf{B}\,\mathbf{C})^{-1}\,\mathbf{C}^H\,\mathbf{B}$
  (verified numerically in the sketch below).
- Comparing the matrix $\mathbf{A}$ with $\boldsymbol{\Phi}(n)$, we have that: $\mathbf{A} = \boldsymbol{\Phi}(n)$, $\mathbf{B}^{-1} = \lambda\,\boldsymbol{\Phi}(n-1)$, $\mathbf{C} = \mathbf{x}(n)$, and $\mathbf{D} = 1$.
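A quick numerical sanity check of the lemma, using real-valued matrices for simplicity (sizes and values are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)
M = 4

# Random positive-definite B^{-1}, vector C, and scalar D = 1 (as in the RLS mapping).
B_inv = rng.standard_normal((M, M))
B_inv = B_inv @ B_inv.T + M * np.eye(M)
B = np.linalg.inv(B_inv)
C = rng.standard_normal((M, 1))
D = np.array([[1.0]])

A = B_inv + C @ np.linalg.inv(D) @ C.T
A_inv_lemma = B - B @ C @ np.linalg.inv(D + C.T @ B @ C) @ C.T @ B

print(np.allclose(np.linalg.inv(A), A_inv_lemma))  # True
```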

Slide 18: Recursive Least Squares (RLS) (5)
- Applying the matrix inversion lemma:
  $\boldsymbol{\Phi}^{-1}(n) = \lambda^{-1}\,\boldsymbol{\Phi}^{-1}(n-1) - \dfrac{\lambda^{-2}\,\boldsymbol{\Phi}^{-1}(n-1)\,\mathbf{x}(n)\,\mathbf{x}^H(n)\,\boldsymbol{\Phi}^{-1}(n-1)}{1 + \lambda^{-1}\,\mathbf{x}^H(n)\,\boldsymbol{\Phi}^{-1}(n-1)\,\mathbf{x}(n)}$.
- We define $\mathbf{P}(n) = \boldsymbol{\Phi}^{-1}(n)$ and the gain vector
  $\mathbf{K}(n) = \dfrac{\lambda^{-1}\,\mathbf{P}(n-1)\,\mathbf{x}(n)}{1 + \lambda^{-1}\,\mathbf{x}^H(n)\,\mathbf{P}(n-1)\,\mathbf{x}(n)}$.
- Using the definitions of $\mathbf{P}(n)$ and $\mathbf{K}(n)$, we can rewrite the inverse of $\boldsymbol{\Phi}(n)$ as
  $\mathbf{P}(n) = \lambda^{-1}\,\mathbf{P}(n-1) - \lambda^{-1}\,\mathbf{K}(n)\,\mathbf{x}^H(n)\,\mathbf{P}(n-1)$.

Slide 19: Recursive Least Squares (RLS) (6)
- Using the above definition of $\mathbf{K}(n)$ and rearranging, we have that:
  $\mathbf{K}(n) = [\lambda^{-1}\,\mathbf{P}(n-1) - \lambda^{-1}\,\mathbf{K}(n)\,\mathbf{x}^H(n)\,\mathbf{P}(n-1)]\,\mathbf{x}(n)$.
- Comparing with the expression for $\mathbf{P}(n)$, the bracketed term is exactly $\mathbf{P}(n)$.
- Therefore:
  $\mathbf{K}(n) = \mathbf{P}(n)\,\mathbf{x}(n)$.

Slide 20: Recursive Least Squares (RLS) (7)
- Expanding $\mathbf{z}(n)$ using its recursive form and computing the expression to update the tap-weight vector:
  $\mathbf{w}(n) = \mathbf{P}(n)\,\mathbf{z}(n) = \lambda\,\mathbf{P}(n)\,\mathbf{z}(n-1) + \mathbf{P}(n)\,\mathbf{x}(n)\,d^*(n)$.
- Replacing the first $\mathbf{P}(n)$ with its recursive expression:
  $\mathbf{w}(n) = \mathbf{P}(n-1)\,\mathbf{z}(n-1) - \mathbf{K}(n)\,\mathbf{x}^H(n)\,\mathbf{P}(n-1)\,\mathbf{z}(n-1) + \mathbf{P}(n)\,\mathbf{x}(n)\,d^*(n)$.

Slide 21: Recursive Least Squares (RLS) (8)
- Computing the expression to update the tap-weight vector: since $\mathbf{P}(n-1)\,\mathbf{z}(n-1) = \mathbf{w}(n-1)$ and $\mathbf{P}(n)\,\mathbf{x}(n) = \mathbf{K}(n)$, replacing these terms gives
  $\mathbf{w}(n) = \mathbf{w}(n-1) + \mathbf{K}(n)\,[d^*(n) - \mathbf{x}^H(n)\,\mathbf{w}(n-1)] = \mathbf{w}(n-1) + \mathbf{K}(n)\,\xi^*(n)$,
  where $\xi(n) = d(n) - \mathbf{w}^H(n-1)\,\mathbf{x}(n)$ is the a priori estimation error.

Slide 22: Recursive Least Squares (RLS) (9)
- We can summarize the RLS algorithm in the following steps, for each new sample $n$ (see the sketch below):
  1. Compute the gain vector: $\mathbf{K}(n) = \dfrac{\lambda^{-1}\,\mathbf{P}(n-1)\,\mathbf{x}(n)}{1 + \lambda^{-1}\,\mathbf{x}^H(n)\,\mathbf{P}(n-1)\,\mathbf{x}(n)}$
  2. Compute the a priori estimation error: $\xi(n) = d(n) - \mathbf{w}^H(n-1)\,\mathbf{x}(n)$
  3. Update the tap-weight vector: $\mathbf{w}(n) = \mathbf{w}(n-1) + \mathbf{K}(n)\,\xi^*(n)$
  4. Update the inverse correlation matrix: $\mathbf{P}(n) = \lambda^{-1}\,\mathbf{P}(n-1) - \lambda^{-1}\,\mathbf{K}(n)\,\mathbf{x}^H(n)\,\mathbf{P}(n-1)$
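A minimal NumPy sketch of these steps for real-valued signals, using the common initialization $\mathbf{P}(0) = \delta^{-1}\mathbf{I}$; the forgetting factor, the regularization $\delta$, and the test signals are made up for the example:

```python
import numpy as np

def rls(x, d, num_taps, lam=0.99, delta=0.01):
    """RLS adaptive filter following the steps above (real-valued signals)."""
    w = np.zeros(num_taps)
    P = np.eye(num_taps) / delta              # P(0): corresponds to Phi(0) = delta*I
    e = np.zeros(len(x))
    for n in range(num_taps, len(x)):
        x_n = x[n - num_taps + 1:n + 1][::-1]
        Px = P @ x_n
        K = Px / (lam + x_n @ Px)             # step 1: gain vector K(n)
        e[n] = d[n] - w @ x_n                 # step 2: a priori error xi(n)
        w = w + K * e[n]                      # step 3: tap-weight update
        P = (P - np.outer(K, x_n @ P)) / lam  # step 4: inverse correlation update
    return w, e

# Same hypothetical system-identification setup as in the LMS sketch.
rng = np.random.default_rng(0)
h = np.array([0.8, -0.4, 0.2, 0.1])
x = rng.standard_normal(2000)
d = np.convolve(x, h, mode="full")[:len(x)] + 0.01 * rng.standard_normal(len(x))

w, e = rls(x, d, num_taps=4)
print(w)  # converges to h in far fewer samples than LMS
```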

