'''MUltiple SIgnal Classification''' ('''MUSIC''') is an algorithm used for [[frequency estimation]]<ref>Hayes, Monson H., ''Statistical Digital Signal Processing and Modeling'', John Wiley & Sons, Inc., 1996. ISBN 0-471-59431-8.</ref> and emitter location.<ref>Schmidt, R.O, "Multiple Emitter Location and Signal Parameter Estimation," IEEE Trans. Antennas Propagation, Vol. AP-34 (March 1986), pp.276-280.</ref>
 
== MUSIC algorithm ==
 
In many practical signal processing problems, the objective is to estimate from measurements a set of constant parameters upon which the received signals depend. There have been several approaches to such problems, including the so-called maximum likelihood (ML) method of Capon (1969) and Burg's maximum entropy (ME) method. Although often successful and widely used, these methods have certain fundamental limitations (especially bias and sensitivity in parameter estimates), largely because they use an incorrect model (e.g., [[Autoregressive model|AR]] rather than special [[Autoregressive–moving-average model|ARMA]]) of the measurements. Pisarenko (1973) was one of the first to exploit the structure of the data model, doing so in the context of estimating the parameters of cisoids in additive noise using a covariance approach.

Schmidt (1977), while working at ESL (now part of Northrop Grumman), and independently Bienvenu (1979), were the first to correctly exploit the measurement model in the case of sensor arrays of arbitrary form. Schmidt, in particular, accomplished this by first deriving a complete geometric solution in the absence of noise, then extending the geometric concepts to obtain a reasonable approximate solution in the presence of noise. The resulting algorithm was called MUSIC (MUltiple SIgnal Classification) and has been widely studied.

In a detailed evaluation based on thousands of simulations, M.I.T.'s Lincoln Laboratory concluded that, among currently accepted high-resolution algorithms, MUSIC was the most promising and a leading candidate for further study and actual hardware implementation. However, although the performance advantages of MUSIC are substantial, they are achieved at a cost in computation (searching over parameter space) and storage (of array calibration data).
 
== Application to frequency estimation ==
MUSIC estimates the frequency content of a [[Signal (electronics)|signal]] or [[autocorrelation|autocorrelation matrix]] using an [[eigenspace]] method. This method assumes that a signal, <math>x(n)</math>, consists of <math>p</math> complex exponentials in the presence of Gaussian white noise. Given an <math>M \times M</math> autocorrelation matrix, <math>\mathbf{R}_x</math>, if the eigenvalues are sorted in decreasing order, the eigenvectors corresponding to the <math>p</math> largest eigenvalues (i.e. directions of largest variability) span the signal subspace. The remaining <math>M-p</math> eigenvectors span the orthogonal space, where there is only noise. Note that for <math>M = p + 1</math>, MUSIC is identical to [[Pisarenko harmonic decomposition]]. The general idea is to use averaging to improve the performance of the Pisarenko estimator.
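The subspace split described above can be sketched in a few lines of NumPy (an illustrative implementation, not from the original article; the snapshot length <code>M</code> and model order <code>p</code> are assumed to be given):

```python
import numpy as np

def noise_subspace(x, M, p):
    """Return the M - p noise eigenvectors (as columns) of the sample
    M x M autocorrelation matrix of the 1-D complex signal x."""
    # Stack length-M snapshots [x(n), ..., x(n+M-1)] as rows.
    X = np.array([x[n:n + M] for n in range(len(x) - M + 1)])
    R = (X.T @ X.conj()) / X.shape[0]   # sample autocorrelation matrix R_x
    _, V = np.linalg.eigh(R)            # eigenvalues in ascending order
    # The eigenvectors of the M - p smallest eigenvalues span the noise subspace.
    return V[:, :M - p]
```

In the noiseless case the steering vector of each true component is exactly orthogonal to the returned columns, which is the property the estimation function below exploits.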
 
The frequency estimation function for MUSIC is
 
:<math>\hat P_{MU}(e^{j \omega}) = \frac{1}{\sum_{i=p+1}^{M} |\mathbf{e}^{H} \mathbf{v}_i|^2},</math>
 
where <math>\mathbf{v}_i</math> are the noise eigenvectors and
 
:<math>\mathbf{e} = \begin{bmatrix}1 & e^{j \omega} & e^{j 2 \omega} & \cdots & e^{j (M-1) \omega}\end{bmatrix}^T.</math>
The locations of the <math>p</math> largest peaks of the estimation function give the frequency estimates for the <math>p</math> signal components.
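As a concrete sketch (illustrative, not from the original article; the noise eigenvectors are assumed to be passed in as the columns of <code>Vn</code>), the estimation function can be evaluated on a frequency grid and its peaks read off:

```python
import numpy as np

def music_spectrum(Vn, omegas):
    """Evaluate P(e^{jw}) = 1 / sum_i |e^H v_i|^2 on a grid of angular
    frequencies, where the columns of Vn are the noise eigenvectors."""
    M = Vn.shape[0]
    # One steering vector e(w) = [1, e^{jw}, ..., e^{j(M-1)w}]^T per column.
    E = np.exp(1j * np.outer(np.arange(M), omegas))
    return 1.0 / np.sum(np.abs(Vn.conj().T @ E) ** 2, axis=0)
```

The p tallest peaks of the returned array give the frequency estimates.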
 
MUSIC is a generalization and computationalization of [[Pisarenko harmonic decomposition|Pisarenko's method]].  In Pisarenko's method, only a single eigenvector is used and is taken to be a set of [[Autoregressive model|autoregressive]] coefficients, whose zeros can be found analytically or with polynomial root finding algorithms.  In contrast, MUSIC sums the contributions of several such eigenvectors, so exact zeros may no longer exist; the denominator instead has local minima, which are located by numerically searching the estimation function for peaks.
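The root-finding step of Pisarenko's method can be sketched as follows (an illustrative implementation, not from the original article: with <math>M = p + 1</math> there is a single noise eigenvector, and the angles of the zeros of its polynomial give the frequencies):

```python
import numpy as np

def pisarenko_freqs(x, p):
    """Pisarenko's method: take M = p + 1, so there is exactly one noise
    eigenvector v, and read the p frequencies from the roots of the
    polynomial sum_k conj(v_k) z^k, which lie on the unit circle."""
    M = p + 1
    X = np.array([x[i:i + M] for i in range(len(x) - M + 1)])
    R = (X.T @ X.conj()) / X.shape[0]
    _, V = np.linalg.eigh(R)
    v = V[:, 0]                          # eigenvector of the smallest eigenvalue
    roots = np.roots(np.conj(v)[::-1])   # np.roots wants highest-degree coefficient first
    return np.sort(np.angle(roots))
```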
 
==Comparison to other methods==
MUSIC outperforms simple methods such as picking peaks of DFT spectra in the presence of noise, when the number of components is known in advance, because it uses knowledge of this number to discard the noise subspace when forming its estimate.
 
Unlike the DFT, it can estimate frequencies to a resolution finer than one DFT bin, because its estimation function can be evaluated at any frequency, not just at bin centres.  This is a form of [[superresolution]].
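A small numerical illustration of this (not from the original article; the frequencies, record length, and snapshot size are arbitrary choices): two noiseless tones 0.008 apart in normalized frequency fall within a single bin of a 64-point DFT (bin spacing 1/64 ≈ 0.0156), yet the MUSIC estimation function evaluated on a fine grid still separates them.

```python
import numpy as np

# Two tones closer together than one DFT bin of a 64-sample record.
n = np.arange(64)
x = np.exp(2j * np.pi * 0.100 * n) + np.exp(2j * np.pi * 0.108 * n)

M, p = 16, 2
X = np.array([x[i:i + M] for i in range(len(x) - M + 1)])
R = (X.T @ X.conj()) / X.shape[0]
Vn = np.linalg.eigh(R)[1][:, :M - p]        # noise eigenvectors

omegas = np.linspace(0.55, 0.75, 2000)      # fine grid around the two tones
E = np.exp(1j * np.outer(np.arange(M), omegas))
P = 1.0 / np.sum(np.abs(Vn.conj().T @ E) ** 2, axis=0)
# P has two distinct peaks, near 2*pi*0.100 and 2*pi*0.108, while the
# 64-point DFT magnitude of x shows a single merged lobe.
```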
 
Its chief disadvantage is that it requires the number of components to be known in advance, so it cannot be used in more general cases where this number is unavailable.
 
==History==
 
MUSIC was originated by R. O. Schmidt in 1979 as an improvement to Pisarenko's method.
 
==References==
Quinn, B. G., and Hannan, E. J., ''The Estimation and Tracking of Frequency'', Cambridge University Press, 2001.
<references/>
 
[[Category:Digital signal processing]]
