ON THE TRACKING PERFORMANCE OF COMBINATIONS OF LEAST MEAN SQUARES AND RECURSIVE LEAST SQUARES ADAPTIVE FILTERS
Vítor H. Nascimento†, Magno T. M. Silva†, Luiz A. Azpicueta-Ruiz‡, and Jerónimo Arenas-García‡

†University of São Paulo, Brazil ({vitor,magno}@lps.usp.br)
‡Univ. Carlos III de Madrid, Spain ({lazpicueta, jarenas}@tsc.uc3m.es)
ABSTRACT
Combinations of adaptive filters have attracted attention as a simple solution to improve filter performance, including tracking properties. In this paper, we consider combinations of LMS and RLS filters, and study their performance for tracking time-varying solutions. We show that a combination of two filters from the same family (i.e., two LMS or two RLS filters) cannot improve the performance over that of a single filter of the same type with optimal selection of the step size (or forgetting factor). However, by combining LMS and RLS filters it is possible to simultaneously outperform the optimum LMS and RLS filters. In other words, combination schemes can achieve smaller errors than optimally adjusted individual filters. Experimental work in a plant identification setup corroborates the validity of our results.
Index Terms— Adaptive filters, convex combination, steady-state analysis, tracking performance, RLS algorithm, LMS algorithm.
1. INTRODUCTION
Combinations of adaptive filters have gained considerable attention lately, since they decrease the sensitivity of the filter to choices of parameters such as the step size, forgetting factor or filter length (see, e.g., [1-5]). Using a combination of two filters with different step sizes, for example, one can obtain fast convergence and low steady-state misadjustment, or use the combination to find the optimum step size in a nonstationary environment [1]. In general, this combination approach is more robust than variable step-size schemes [5].

In tracking of time-varying scenarios, combination schemes offer improved tracking capabilities with respect to the component filters [4]. However, it has been noticed in simulations that the excess mean-square error (EMSE) obtained by the combination of two filters of the same family [e.g., two least mean-squares (LMS) filters with different step sizes, or two recursive least-squares (RLS) filters with different forgetting factors] will never be better than the performance of a single filter employing the optimum step size (or optimum forgetting factor) for a given nonstationary condition [1,6].

More recently, the combination of filters from different families (one LMS and one RLS) was proposed as a way to take advantage of the different tracking properties of LMS and RLS [5]. In fact, despite the fast initial convergence provided by RLS, it was shown in [7] that LMS may outperform RLS depending on how the optimum solution changes with time. Although a theoretical analysis and several simulations were provided in [5], it was not noticed that the combination
The work of Nascimento and Silva was partly supported by CNPq under Grants 303361/2004-2 and 302633/2008-1, and FAPESP under Grants 2008/04828-5 and 2008/00773-1. The work of Azpicueta-Ruiz and Arenas-García was partly supported by MEC project TEC2008-02473 and CAM project S-0505/TIC/0223.
Table 1. Parameters of the considered algorithms.

  Alg. | ρ_i | M_i^{-1}(n)
  LMS  | µ_i | I
  RLS  | 1   | R̂_i(n) = Σ_{l=1}^{n} λ_i^{n-l} u(l) u^T(l)
may actually obtain a smaller EMSE than its component filters even when the optimum step size is used for LMS and the optimum forgetting factor is used for RLS. In other words, the combination of filters of different families may obtain a performance that would not be possible with a combination of two filters of the same family, or with an algorithm that chooses the optimum step size for LMS (or the optimum forgetting factor for RLS). In this paper we illustrate these facts both through theoretical analysis and simulations.

The paper is organized as follows. In the next section, we present the data model and introduce the notation that will be used throughout the paper, reviewing also some results for the tracking performance of LMS and RLS filters. Then, in Sec. 3 we recall some theoretical results regarding the performance of convex and affine combinations of adaptive filters in nonstationary environments. We also prove that combinations of filters of the same family cannot improve the performance over that of a single filter with optimum selection of the parameters (i.e., step size or forgetting factor), and that this limitation can be overcome when filters of different families are combined. Several examples that validate the analysis are provided in Sec. 4. Finally, Sec. 5 presents the main conclusions of our work.
2. PROBLEM FORMULATION
In this paper, we consider combinations of two LMS, two RLS, or one RLS and one LMS filter. The update laws for LMS and RLS may be written as

  w_i(n) = w_i(n-1) + ρ_i M_i(n) u(n) e_i(n),    (1)

where the index i = 1, 2 refers to each filter in the combination, w_i(n) ∈ R^M is the coefficient vector of each filter at time n, ρ_i is a step-size parameter, u(n) ∈ R^M is the input regressor vector, and e_i(n) is the estimation error. For RLS, M_i(n) ∈ R^{M×M} is an estimate of the inverse of the regressor autocovariance matrix, R = E{u(n) u^T(n)}, and can be computed in an efficient way using the matrix inversion lemma or, if lattice algorithms are used, its explicit evaluation may be avoided [8]. Table 1 lists the values of the different parameters in (1) for each class of the considered filters.

The estimation error is given by

  e_i(n) = d(n) - y_i(n),    (2)
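As an illustration, the generic update (1) with the parameter choices of Table 1 can be sketched in NumPy. This is a minimal sketch under illustrative settings, not the authors' code; the RLS inverse-autocovariance estimate is propagated with the matrix inversion lemma, as the text suggests.

```python
import numpy as np

def lms_step(w, u, d, mu):
    """One LMS update: rho_i = mu, M_i(n) = I (Table 1)."""
    e = d - w @ u                  # estimation error (2)
    return w + mu * u * e, e       # update law (1)

def rls_step(w, P, u, d, lam):
    """One RLS update: rho_i = 1, M_i(n) = inverse of R_hat_i(n).
    P tracks that inverse via the matrix inversion lemma."""
    Pu = P @ u
    k = Pu / (lam + u @ Pu)        # gain vector
    e = d - w @ u                  # a priori estimation error (2)
    w = w + k * e                  # update law (1)
    P = (P - np.outer(k, Pu)) / lam
    return w, P, e
```

For a stationary plant and noiseless desired signal, both recursions drive the coefficient vector to the plant coefficients.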
Table 2. Analytical expressions for the steady-state EMSEs of LMS and RLS.

  Alg. | ζ
  LMS  | [µ_i σ_v² Tr{R} + µ_i^{-1} Tr{Q}] / 2
  RLS  | [ν_i σ_v² M + ν_i^{-1} Tr{QR}] / 2
Table 3. Optimum tracking parameters (µ_o and ν_o) and steady-state EMSEs (ζ_o) for LMS and RLS.

  Alg. | µ_o, ν_o                      | ζ_o
  LMS  | µ_o = √( Tr{Q} / (σ_v² Tr{R}) ) | √( σ_v² Tr{R} Tr{Q} )
  RLS  | ν_o = √( Tr{QR} / (σ_v² M) )    | √( σ_v² M Tr{QR} )
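The closed forms in Table 3 follow from minimizing the Table 2 expressions over the step size; a quick numerical check can be sketched as below (the environment constants Tr{R}, Tr{Q}, Tr{QR}, M, and σ_v² are illustrative values, not those of the paper's experiments).

```python
import numpy as np

# Illustrative environment constants (not from the paper's experiments)
trR, trQ, trQR, M, sv2 = 1.0, 1e-5, 1e-5, 7, 1e-2

def zeta_lms(mu):                      # Table 2, LMS row
    return (mu * sv2 * trR + trQ / mu) / 2

def zeta_rls(nu):                      # Table 2, RLS row
    return (nu * sv2 * M + trQR / nu) / 2

# Table 3 closed forms
mu_o = np.sqrt(trQ / (sv2 * trR))
zo_lms = np.sqrt(sv2 * trR * trQ)
nu_o = np.sqrt(trQR / (sv2 * M))
zo_rls = np.sqrt(sv2 * M * trQR)
```

A brute-force scan of the Table 2 expressions over a grid of step sizes confirms that the closed-form optima are indeed minimizers.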
where y_i(n) = w_i^T(n-1) u(n), i = 1, 2 are the filter outputs, and d(n) is the desired response. As is well known, there exists a linear regression model relating d(n) and u(n), such that

  d(n) = w_o^T u(n) + v(n),    (3)

where v(n) is white, zero-mean measurement noise with variance σ_v², uncorrelated with u(n), and w_o provides the optimum linear least mean-squares estimate of d(n) given u(n) [8]. We assume in this paper, as is common in studies of adaptive filters, that u(n) and v(n) are stationary zero-mean processes, and moreover that v(n) is independent of u(m) for all m, n.

Define also the a priori errors e_{a,i}(n) = [w_o - w_i(n-1)]^T u(n). Under the stated assumptions, the mean-square error E{e_i²(n)} may be shown to be [8]

  E{e_i²(n)} = ζ_i(n) + σ_v²,

where ζ_i(n) = E{e_{a,i}²(n)} is the so-called excess mean-square error (EMSE) of each filter.

The optimum weight vector w_o is usually not constant in practice. A common approach to model its variations is through the Brownian motion model below, which allows a tractable analysis:

  w_o(n) = w_o(n-1) + q(n),    (4)

where {q(n)} is a sequence of i.i.d. vectors with zero mean and autocorrelation matrix Q = E{q(n) q^T(n)}.

Using this model and (1)-(3), and assuming a sufficiently small step size and a forgetting factor sufficiently close to 1, it can be shown that the steady-state EMSEs [i.e., ζ_i = lim_{n→∞} ζ_i(n)] of LMS and RLS are as given in Table 2 [5,7,8]. For convenience, we will often use ν_i = 1 - λ_i. Since ν_i plays in the expressions for RLS a role similar to that of µ_i for LMS, we will also refer to ν_i as a "step size". Differentiating the expressions in Table 2 with respect to either µ_i or ν_i, one can compute the optimum step sizes µ_o and ν_o for a given environment (i.e., values of Q, R and noise variance σ_v²). These optimum values, along with the resulting optimum EMSEs for each filter, are given in Table 3 [7,8].
Through these expressions, it was shown in [7] that, despite its slow initial convergence, LMS may present better tracking performance than RLS, depending on the values of Q and R. In particular, if Q is proportional to R, the optimum steady-state EMSE of LMS will be smaller than that of RLS. On the contrary, if Q is proportional to R^{-1}, RLS will present better performance. When Q is proportional to I (the identity matrix), both algorithms will present similar behavior [7].

In the next section we introduce combinations of adaptive filters, and prove that the tracking performance of the combination of two LMS or two RLS filters is lower bounded by the values of Table 3, but the combination of one RLS with one LMS algorithm may achieve better performance.
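The Q ∝ R versus Q ∝ R^{-1} dichotomy from [7] can be checked directly from the Table 3 expressions; the diagonal covariance below is an illustrative choice, not a matrix from the paper.

```python
import numpy as np

sv2 = 1e-2
R = np.diag([2.0, 1.0, 0.5, 0.25])   # illustrative input autocovariance (not prop. to I)
c = 1e-5                             # illustrative scale of the random-walk covariance

def opt_emse(Q):
    """Optimum EMSEs of LMS and RLS from Table 3 for a given Q."""
    z_lms = np.sqrt(sv2 * np.trace(R) * np.trace(Q))
    z_rls = np.sqrt(sv2 * R.shape[0] * np.trace(Q @ R))
    return z_lms, z_rls
```

By the Cauchy-Schwarz inequality, (Tr{R})² ≤ M Tr{R²} with equality only for R ∝ I, so for this R the ordering of the two optimum EMSEs flips between Q = cR and Q = cR^{-1}.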
3. CONVEX COMBINATIONS AND OPTIMAL TRACKING
One promising way of increasing the performance of adaptive filters is to run two or more filters in parallel, and combine their outputs, constructing an overall output given by

  y(n) = η(n-1) y_1(n) + [1 - η(n-1)] y_2(n).

Up to now, good results have been obtained with both a convex combination model, in which the mixing parameter η(n) is constrained to remain in the interval [0, 1] [1], and an affine combination model, in which η(n) may be any real number [3]. In the first case, the mixing parameter is usually updated using an auxiliary variable a(n), according to

  a(n) = a(n-1) + µ̃_a(n) e(n) ∆e(n) η(n-1) [1 - η(n-1)],
  η(n) = 1 / (1 + exp[-a(n)]),
in which µ̃_a(n) is a (possibly normalized) step size, e(n) = d(n) - y(n) is the overall estimation error, and ∆e(n) = e_2(n) - e_1(n). In general, to avoid slow adaptation close to η = 1 or η = 0, a(n) is constrained (by simple saturation) to the interval [-a⁺, a⁺]. A common choice for a⁺ is 4.

For affine combinations, one possible method for updating the mixing parameter is through the recursion

  η(n) = η(n-1) + µ_a(n) e(n) ∆e(n).

In both cases, it is convenient to choose a normalized step size µ̃_a(n) using an estimate p(n) of E{∆e²(n)}, such that

  p(n) = λ_p p(n-1) + (1 - λ_p) ∆e²(n),

where λ_p is a forgetting factor, and µ̃_a(n) = µ_a / [δ + p(n)], δ being a small regularization term [6,9].
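A minimal sketch of both mixing-parameter recursions, with the normalized step size, is given below. The default values of µ_a, λ_p and δ are illustrative placeholders, not recommendations from the paper.

```python
import numpy as np

A_PLUS = 4.0   # saturation limit for a(n); the text notes 4 is a common choice

def normalized_step(p, de, mu_a=0.5, lam_p=0.9, delta=1e-6):
    """Update the power estimate p(n) of E{delta_e^2} and return mu_a_tilde(n)."""
    p = lam_p * p + (1.0 - lam_p) * de ** 2
    return p, mu_a / (delta + p)

def convex_mix_step(a, p, y1, y2, d):
    """Convex combination: sigmoidal eta(n) driven by auxiliary variable a(n)."""
    eta = 1.0 / (1.0 + np.exp(-a))             # eta(n-1)
    e = d - (eta * y1 + (1.0 - eta) * y2)      # overall error
    de = (d - y2) - (d - y1)                   # delta_e(n) = e2(n) - e1(n)
    p, mu_t = normalized_step(p, de)
    a = np.clip(a + mu_t * e * de * eta * (1.0 - eta), -A_PLUS, A_PLUS)
    return a, p, e

def affine_mix_step(eta, p, y1, y2, d):
    """Affine combination: eta(n) updated directly, unconstrained."""
    e = d - (eta * y1 + (1.0 - eta) * y2)
    de = (d - y2) - (d - y1)
    p, mu_t = normalized_step(p, de)
    return eta + mu_t * e * de, p
```

When one component filter is consistently better, both recursions push the mixing parameter toward that filter, the convex one saturating at a⁺.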
It can be shown that the optimum mixing parameter in steady state is given by [1,3,6]

  η_∗ = (ζ_2 - ζ_12) / (ζ_1 - 2ζ_12 + ζ_2),    (5)

where ζ_12 = lim_{n→∞} E{e_{a,1}(n) e_{a,2}(n)} is the steady-state cross-EMSE between both filters in the combination, given in Table 4 for the different combination possibilities considered in this paper [5]. For convex combinations, η_∗ is given by (5) only if that value falls in the interval [0, 1]; otherwise η_∗ = 0 (resp. 1) if (5) is negative (resp. larger than 1).

The EMSE of the combination, using the optimum η_∗, is given by

  ζ_∗ = (ζ_1 ζ_2 - ζ_12²) / (ζ_1 - 2ζ_12 + ζ_2).    (6)
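Equations (5) and (6) translate directly into code. The quadratic form below is the steady-state EMSE of the combination for a fixed η (the second moment of η e_{a,1} + (1-η) e_{a,2}); its minimizer is (5), and evaluating it there recovers (6). The clipping branch implements the convex-combination rule just stated.

```python
def combo_emse(eta, z1, z2, z12):
    """Steady-state EMSE of the combination for a fixed mixing parameter eta."""
    return eta ** 2 * z1 + 2 * eta * (1 - eta) * z12 + (1 - eta) ** 2 * z2

def optimal_mix(z1, z2, z12, convex=True):
    """Optimum steady-state mixing parameter (5) and the resulting EMSE."""
    eta = (z2 - z12) / (z1 - 2 * z12 + z2)   # (5)
    if convex:                                # clip to [0, 1] for convex combinations
        eta = min(max(eta, 0.0), 1.0)
    return eta, combo_emse(eta, z1, z2, z12)
```

For the unclipped case the returned EMSE matches (6) exactly.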
Table 4. Analytical expressions for the steady-state cross-EMSEs of the considered combinations.

  Combination          | ζ_12
  µ_1-LMS and µ_2-LMS  | [µ_1 µ_2 σ_v² Tr{R} + Tr{Q}] / (µ_1 + µ_2)
  λ_1-RLS and λ_2-RLS  | [ν_1 ν_2 σ_v² M + Tr{QR}] / (ν_1 + ν_2)
  λ_1-RLS and µ_2-LMS  | µ_2 ν_1 σ_v² Tr{Σ} + Tr{QΣ}, where Σ = (ν_1 I + µ_2 R)^{-1} R
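The Table 4 expressions can be coded as below; as a sanity check, setting µ_1 = µ_2 (or ν_1 = ν_2) in the same-family rows recovers the single-filter EMSEs of Table 2. R and Q are illustrative matrices, not those of the paper's experiments.

```python
import numpy as np

sv2 = 1e-2
R = np.diag([2.0, 1.0, 0.5, 0.25])   # illustrative input autocovariance
Q = 1e-5 * np.eye(4)                 # illustrative random-walk covariance

def z12_lms_lms(mu1, mu2):           # Table 4, first row
    return (mu1 * mu2 * sv2 * np.trace(R) + np.trace(Q)) / (mu1 + mu2)

def z12_rls_rls(nu1, nu2):           # Table 4, second row
    return (nu1 * nu2 * sv2 * R.shape[0] + np.trace(Q @ R)) / (nu1 + nu2)

def z12_rls_lms(nu1, mu2):           # Table 4, third row
    Sig = np.linalg.inv(nu1 * np.eye(R.shape[0]) + mu2 * R) @ R
    return mu2 * nu1 * sv2 * np.trace(Sig) + np.trace(Q @ Sig)
```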
We remark that the optimality here is with respect to the choice of η only, which we denote by the subscript ∗.

We will now search for the optimum ζ_∗ with respect to the step sizes for the particular case of a combination of two filters of the same family. Consider first the combination of two LMS filters. Assuming that the optimal η_∗ is selected in steady state (which is usually a good approximation for the considered recursions [6,9]), the steady-state EMSE of the combination will be given by (6) with ζ_1 and ζ_2 given by the first row in Table 2, and ζ_12 given by the first row in Table 4. Differentiating ζ_∗ with respect to µ_2, and after some manipulations, we obtain

  dζ_∗^LMS/dµ_2 = - (Tr{Q} - µ_1² σ_v² Tr{R})² (Tr{Q} - µ_2² σ_v² Tr{R}) / [ 2 (µ_1 µ_2 σ_v² Tr{R} + Tr{Q})² (µ_1 + µ_2)² ],

where we have used the superscript LMS to emphasize that we are considering a combination of two LMS filters. From this expression one can see that dζ_∗^LMS/dµ_2 = 0 if µ_2 = µ_o (notice the factor depending on µ_2 in the numerator), so the minimum value of ζ_∗^LMS is attained when one of the component filters has the optimum step size µ_o. Note that, due to the symmetry of the combination, we would reach the same conclusion if we had differentiated with respect to µ_1 (this is easily seen if one replaces η by 1 - η in all expressions).

Furthermore, if we substitute ζ_2 = ζ_o^LMS and µ_2 = µ_o in the expressions for η_∗ from (5) and for ζ_12 from Table 4, we conclude that η_∗o = 0. This means that

  ζ_∗o^LMS = ζ_o^LMS,

where ζ_∗o^LMS is the optimal EMSE of the combination, both with respect to the mixing parameter and the step sizes of the constituent filters. In other words, the smallest EMSE obtainable with a combination of two LMS algorithms is exactly equal to that obtained with a single LMS filter with optimum step size µ_o.

The same conclusion is obtained for a combination of two RLS filters. The only difference is that for two RLS filters, the derivative dζ_∗^RLS/dν_2 reads

  dζ_∗^RLS/dν_2 = - (Tr{QR} - ν_1² σ_v² M)² (Tr{QR} - ν_2² σ_v² M) / [ 2 (ν_1 ν_2 σ_v² M + Tr{QR})² (ν_1 + ν_2)² ],

and again we see that ν_2 = ν_o minimizes the overall steady-state error of the combination, and ζ_∗o^RLS = ζ_o^RLS.

The above results allow us to conclude that, although a combination of two LMS (or two RLS) filters can improve the tracking performance when the degree of nonstationarity is not known a priori or is time-varying, the steady-state EMSE of the combination is lower bounded by the optimal EMSE of an individual filter from the same
family.

Fig. 1. Steady-state EMSE of a combination of two LMS filters for varying α and µ_2, with µ_1 = 0.3µ_o (left) and µ_1 = 3µ_o (right).

In the next section, we will illustrate that this is not the case for a heterogeneous combination of one LMS and one RLS filter. Since for the combination of one RLS and one LMS filter the expressions become too complex for an analytical approach, we will proceed to show via examples that for this case it is possible to obtain an overall steady-state EMSE strictly smaller than the minimum of ζ_o^LMS and ζ_o^RLS.
4. EXAMPLES
In this section we include several experiments for the identification of a time-varying system. Three sets of experiments have been considered: the first one consists of a combination of two LMS filters with different step sizes; two RLS filters with different forgetting factors are combined in the second group of simulations; and the last experiment implements a combination of one LMS and one RLS filter, both of them with optimal step sizes (i.e., µ_o and ν_o).

In all cases, the unknown plant w_o, of length M = 7, was initialized with random values from the interval [-1, 1], being w_o(0) = [.9003, -.5377, .2137, -.028, .7826, .5242, -.0871]^T. Then, the solution is changed at each iteration according to the random-walk model (4), with a covariance matrix of q(n) given by

  Q = γ [ α R / Tr(R) + (1 - α) R^{-1} / Tr(R^{-1}) ],    (7)

where the constant γ has been selected to be γ = 10^{-5}, so that Tr(Q) = γ, and α ∈ [0, 1] is a control parameter that allows a tradeoff between a situation with Q ∝ R (for α = 1), for which ζ_o^LMS < ζ_o^RLS, and Q ∝ R^{-1} (α = 0), in which the reverse situation occurs.

The input signal is the output of a first-order AR model with transfer function √(1 - a²)/(1 - a z^{-1}) using a = 0.8, fed with i.i.d. Gaussian noise with variance σ_u² = 1/7, so that Tr(R) = 1. The output additive noise is i.i.d. Gaussian with zero mean and variance σ_v² = 10^{-2}. Regarding the adjustment of the combinations, we have used convex combinations with fixed step size µ_a = 100, while the step sizes of the constituent filters are selected as explained below.

All estimated steady-state EMSEs have been obtained by averaging 25000 iterations of the algorithms once the filters have completely converged, over 100 independent runs.

To start with, we will consider the combination of two LMS filters. Note that in this case µ_o = √(10^{-3}) independently of the value of α. Fig. 1 depicts the steady-state EMSE of the combination for different α and µ_2, and for two different values of the step size for the first component, µ_1 = 0.3µ_o and µ_1 = 3µ_o. In both subfigures, we observe a flat region for which the combination inherits the performance of the filter with step size µ_1. As predicted by our analysis in the previous section,
Fig. 2. Steady-state EMSE of a combination of two RLS filters for varying α and ν_2, with ν_1 = 0.001 (left) and ν_1 = 0.0186 (right).
µ
2
=
µ
o
.Therefore, the combination performs in this situation similarly to anLMS ﬁlter with optimal step size.Similar conclusions can be extracted for the convex combinationof two RLS ﬁlters. Fig. 2 illustrates the behavior of such a combinationscheme for different values of
α
and
ν
2
. In this case,
ν
o
changes with
α
. Therefore, in this situation we have selected
ν
1
= 0
.
001
(left panelof Fig. 2) and
ν
1
= 0
.
0186
(right panel), respectively smaller andlarger than the optimal step sizes for
α
= 0
and
α
= 1
. For each
α
,we explore values for the step size of the second ﬁlter in the range from
ν
o
/
10
to
10
ν
o
. Again, as predicted by the analysis, the best behaviorresults for
ν
2
=
ν
o
, showing that
ζ
RLS
∗
o
=
ζ
RLSo
.We conclude the section considering the more interesting case of acombination including oneLMSandoneRLSﬁlters. We willillustratethat it is possible for this heterogeneous combination to outperform thesmallest EMSEs that could be achieved by any individual LMS or RLSﬁlters. To this end, let us select for each
α
the optimal step sizes for theLMS and RLS components according to Table 3. The upper panel of Fig. 3 displays the theoretical steady-state EMSEs of both constituentﬁlters and of their combination, and shows good agreement with thereal curves obtained through simulation (intermediate panel). We cansee that for all values of
α
other than
α
= 0
and
α
= 1
, the combinedLMS-RLS scheme reduces the individual EMSEs of both components,thus leading to the interesting result that
ζ
LMS-RLS
∗
o
<
min
[
ζ
LMSo
,ζ
RLSo
]
,i.e., this combined scheme is able to improve the tracking capabilitiesof optimal LMS and RLS ﬁlters.It is also interesting to pay attention to the optimal values of themixing parameter (bottom panel of Fig. 3). In ﬁrst place, we see thatfor
α
∈
[0
,
1]
the optimal mixing parameter lies in interval
[0
,
1]
, i.e.,afﬁne combinations can be expected to work equally well –but notbetter– than convex combinations for the considered scenario. It isalso important to remark that no gains over the tracking performanceof optimal LMS or RLS can occur when
Q
∝
R
(
α
= 1
) or
Q
∝
R
−
1
(
α
= 0
), since in these cases optimal selections of the mixingparameters are
η
= 0
and
η
= 1
, respectively.
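The α-sweep behind Fig. 3 can be reproduced at the level of the theoretical expressions (Tables 2-4, (6) and (7)). The AR(1) covariance below is an assumed reconstruction of the experimental setup described above (M = 7, a = 0.8, Tr(R) = 1); it is a sketch of the theory, not the authors' simulation code.

```python
import numpy as np

M, sv2, gamma, a = 7, 1e-2, 1e-5, 0.8
# Unit-power AR(1) input covariance, scaled so that Tr(R) = 1 (assumed setup)
R = np.array([[a ** abs(i - j) for j in range(M)] for i in range(M)]) / M
Ri = np.linalg.inv(R)

def emse_opt(alpha):
    """Theoretical optimum EMSEs of LMS, RLS, and their combination vs alpha."""
    Q = gamma * (alpha * R / np.trace(R)
                 + (1 - alpha) * Ri / np.trace(Ri))           # model (7)
    z_lms = np.sqrt(sv2 * np.trace(R) * np.trace(Q))           # Table 3
    z_rls = np.sqrt(sv2 * M * np.trace(Q @ R))
    mu_o = np.sqrt(np.trace(Q) / (sv2 * np.trace(R)))
    nu_o = np.sqrt(np.trace(Q @ R) / (sv2 * M))
    Sig = np.linalg.inv(nu_o * np.eye(M) + mu_o * R) @ R       # Table 4, third row
    z12 = mu_o * nu_o * sv2 * np.trace(Sig) + np.trace(Q @ Sig)
    z_comb = (z_lms * z_rls - z12 ** 2) / (z_lms - 2 * z12 + z_rls)  # (6)
    return z_lms, z_rls, z_comb
```

For intermediate α, the combined EMSE falls strictly below the best single-filter EMSE, while at the endpoints the component orderings from Sec. 2 are recovered.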
5. CONCLUSIONS
In this paper we have studied the tracking performance of combinations of LMS and RLS filters. We have provided theoretical and empirical evidence that the steady-state EMSE of a combination of two filters of the same family is lower bounded by the optimal EMSE of a filter of the same family. However, heterogeneous combinations of one LMS and one RLS filter have been shown to simultaneously reduce the EMSEs of both optimal LMS and RLS filters, thus providing a way to obtain filters with superior tracking capabilities.
Fig. 3. Steady-state performance of an adaptive combination of one LMS and one RLS filter with optimal step sizes (µ_o and ν_o, respectively). The figure displays, from top to bottom: the theoretical EMSEs of both constituent filters and of the combination for different values of α; the observed EMSEs obtained through simulation; and the theoretical and simulated steady-state values of η(n).
6. REFERENCES

[1] J. Arenas-García, A. R. Figueiras-Vidal, and A. H. Sayed, "Mean-square performance of a convex combination of two adaptive filters," IEEE Trans. Signal Process., vol. 54, no. 3, pp. 1078–1090, Mar. 2006.
[2] Y. Zhang and J. Chambers, "Convex combination of adaptive filters for a variable tap-length LMS algorithm," IEEE Signal Process. Lett., vol. 13, no. 10, pp. 628–631, Oct. 2006.
[3] N. J. Bershad, J. C. M. Bermudez, and J.-Y. Tourneret, "An affine combination of two LMS adaptive filters—transient mean-square analysis," IEEE Trans. Signal Process., vol. 56, no. 5, pp. 1853–1864, May 2008.
[4] J. Arenas-García, A. R. Figueiras-Vidal, and A. H. Sayed, "Tracking properties of a convex combination of two adaptive filters," in Proc. 2005 IEEE Workshop on Stat. Signal Process., Bordeaux, France.
[5] M. T. M. Silva and V. H. Nascimento, "Improving the tracking capability of adaptive filters via convex combination," IEEE Trans. Signal Process., vol. 56, no. 7, pp. 3137–3149, July 2008.
[6] R. Candido, M. T. M. Silva, and V. H. Nascimento, "Affine combinations of adaptive filters," in Conf. Rec. of the 42nd Asilomar Conf. on Sign., Syst. & Comp., 2008.
[7] E. Eweda, "Comparison of RLS, LMS, and sign algorithms for tracking randomly time-varying channels," IEEE Trans. Signal Process., vol. 42, no. 11, pp. 2937–2944, Nov. 1994.
[8] A. H. Sayed, Fundamentals of Adaptive Filtering, Wiley-Interscience, 2003.
[9] L. A. Azpicueta-Ruiz, A. R. Figueiras-Vidal, and J. Arenas-García, "A normalized adaptation scheme for the convex combination of two adaptive filters," in Proc. ICASSP 2008, Las Vegas, NV, USA, pp. 3301–3304.
