{{merge from|Frequency spectrum|date=September 2013}}
{{refimprove|date=May 2008}}

In [[statistical signal processing]], [[statistics]], and [[physics]], the spectrum of a time series or signal is a positive real function of a frequency variable, associated with a [[Stationary process|stationary]] [[stochastic process]] or a deterministic function of time, which has dimensions of power per [[hertz]] (Hz) or energy per hertz. Intuitively, the spectrum decomposes the content of a [[stochastic process]] into the different frequencies present in that process, and helps identify periodicities. More specific terms in use are the '''power spectrum''', '''spectral density''', '''power spectral density''', and '''energy spectral density'''.

== Explanation ==

In [[physics]], the signal is usually a wave, such as an [[electromagnetic wave]], [[random vibration]], or an [[sound wave|acoustic wave]]. The spectral density of the wave, when multiplied by an appropriate factor, gives the [[power (physics)|power]] carried by the wave per unit frequency; this is known as the ''power spectral density'' (PSD) of the signal. Power spectral density is commonly expressed in [[watt]]s per [[hertz]] (W/Hz).<ref>{{cite book
 | title = VSAT Networks
 | author = Gérard Maral
 | publisher = John Wiley and Sons
 | year = 2003
 | isbn = 0-470-86684-5
 | url = http://books.google.com/books?id=CMx5HQ1Mr_UC&pg=PR20&dq=%22power+spectral+density%22+W/Hz&lr=&as_brr=0&ei=VYwvSImyA4L4sQPxxJXzAg&sig=-bko0DhmJwzISN6PcHszF9E3qUE#PPR20,M1
}}</ref>

For [[voltage]] signals, it is customary to use units of V<sup>2</sup> Hz<sup>−1</sup> for the PSD and V<sup>2</sup> s Hz<sup>−1</sup> for the ESD (''energy spectral density'').<ref>{{cite book
 | title = Fundamentals of Noise and Vibration Analysis for Engineers
 | author = Michael Peter Norton and Denis G. Karczub
 | publisher = Cambridge University Press
 | year = 2003
 | isbn = 0-521-49913-5
 | url = http://books.google.com/books?id=jDeRCSqtev4C&pg=PA352&dq=%22power+spectral+density%22+%22energy+spectral+density%22&lr=&as_brr=3&ei=i3IvSLL6H4-KsgPfze13&sig=RJgA8uGocYf5d6mC6rKKS-X_2bc
}}</ref> Often it is convenient to work with an ''amplitude spectral density'' (ASD), which is the square root of the PSD; the ASD of a voltage signal has units of V Hz<sup>−1/2</sup>.<ref>{{cite web
 | url = http://www.lumerink.com/courses/ece697/docs/Papers/The%20Fundamentals%20of%20FFT-Based%20Signal%20Analysis%20and%20Measurements.pdf
 | title = The Fundamentals of FFT-Based Signal Analysis and Measurement
 | author = Michael Cerna and Audrey F. Harvey
 | year = 2000
}}</ref>

For random vibration analysis, units of ''g''<sup>2</sup> Hz<sup>−1</sup> are sometimes used for the PSD of [[acceleration]], where ''g'' denotes the [[g-force]].<ref>{{cite book
 | title = Reliability Engineering
 | author = Alessandro Birolini
 | publisher = Springer
 | year = 2007
 | isbn = 978-3-540-49388-4
 | page = 83
 | url = http://books.google.com/books?id=xPIW3AI9tdAC&pg=PA83&dq=acceleration-spectral-density+g+hz&as_brr=3&ei=q24xSpKOBZXkzASPrs39BQ
}}</ref>

Although it is not necessary to assign physical dimensions to the signal or its argument, the following discussion assumes that the signal varies in time.

== Preliminary conventions on notation for time series ==

The phrase ''time series'' has been defined as "... a collection of observations made sequentially in time."<ref>{{cite book | title = The Analysis of Time Series—An Introduction | author = C. Chatfield | edition = fourth | publisher = Chapman and Hall, London | year = 1989 | isbn = 0-412-31820-2 | page = 1}}</ref> But it is also used to refer to a [[stochastic process]] that serves as the underlying theoretical model for the process that generated the data (and thus includes consideration of all the other possible sequences of data that could have been observed, but were not). Furthermore, the time can be either continuous or discrete. There are, therefore, four different but closely related definitions and formulas for the power spectrum of a time series.

If <math>X_n</math> (discrete time) or <math>X_t</math> (continuous time) is a stochastic process, we will refer to a possible time series of data coming from it as a ''sample'', ''path'', or ''signal'' of the stochastic process. To avoid confusion, we reserve the word ''process'' for a stochastic process, and use ''signal'' or ''sample'' to refer to a time series of data.

For any random variable <math>X</math>, standard notation of angle brackets or <math>\operatorname{E}</math> will be used for the [[ensemble average]], also known as the [[expected value|statistical expectation]], and <math>\operatorname{Var}</math> for the theoretical [[variance]].

== Motivating example ==

Suppose <math>x_n</math>, from <math>n=0</math> to <math>N-1</math>, is a time series (discrete time) with zero mean. Suppose that it is a sum of a finite number of periodic components (all frequencies are positive):

:<math>
x_n = \sum_k [a_k \cos (2\pi \nu_k n) + b_k \sin (2\pi \nu_k n)].
</math>

The variance of <math>x_n</math> is, for a zero-mean function as above, given by <math>\frac 1N \sum_{n=0}^{N-1} x_n^2</math>. If these data were samples taken from an electrical signal, this would be its average power (power is energy per unit time, so it is analogous to variance if energy is analogous to the amplitude squared).

Now, for simplicity, suppose the signal extends infinitely in time, so we pass to the limit as <math>N\rightarrow \infty</math>. If the average power is bounded, which is almost always the case in reality, then the following limit exists and is the variance of the data:

:<math>
\lim_{N\rightarrow \infty} \frac 1N \sum_{n=0}^{N-1} x_n^2.
</math>

Again, for simplicity, we pass to continuous time and assume that the signal extends infinitely in time in both directions. Then these two formulas become

:<math>x(t) = \sum_k [a_k \cos (2\pi \nu_k t) + b_k \sin (2\pi \nu_k t)]</math>

and

:<math>
\lim_{T\rightarrow\infty} \frac 1{2T} \int_{-T}^T x(t)^2 \, dt.
</math>

The root mean square of either <math>\cos</math> or <math>\sin</math> is <math>1/\sqrt{2}</math>, so the variance of <math>a_k \cos (2\pi \nu_k t)</math> is <math>a_k^2 / 2</math> and that of <math>b_k \sin (2\pi \nu_k t)</math> is <math>b_k^2 / 2</math>. Hence, the power of <math>x(t)</math> contributed by the component with frequency <math>\nu_k</math> is <math>(a_k^2 + b_k^2) / 2</math>. All these contributions add up to the power of <math>x(t)</math>.

The power as a function of frequency is then <math>(a_k^2 + b_k^2) / 2</math>, and its statistical [[cumulative distribution function]] <math>F(\nu)</math> is

:<math>F(\nu) = \sum_{k \,:\, \nu_k < \nu} \frac 12 (a_k^2 + b_k^2).</math>

<math>F</math> is a [[step function]], monotonically non-decreasing. Its jumps occur at the frequencies of the [[periodic]] components of <math>x</math>, and the value of each jump is the power or variance of that component.

The variance is the covariance of the data with itself. If we now consider the same data but with a lag of <math>\tau</math>, we can take the [[covariance]] of <math>x(t)</math> with <math>x(t+\tau)</math>, and define this to be the [[autocorrelation function]] <math>c</math> of the signal (or data) <math>x</math>:

:<math>
c(\tau) = \lim_{T\rightarrow\infty} \frac 1{2T} \int_{-T}^T x(t)\, x(t+\tau) \, dt.
</math>

When it exists, it is an even function of <math>\tau</math>. If the average power is bounded, then <math>c</math> exists everywhere, is finite, and is bounded by <math>c(0)</math>, which is the power or variance of the data.

It is elementary to show that <math>c</math> can be decomposed into periodic components with the same periods as <math>x</math>:

:<math>
c(\tau) = \sum_k \frac 12 (a_k^2 + b_k^2) \cos (2\pi \nu_k \tau).
</math>

This is in fact the spectral decomposition of <math>c</math> over the different frequencies, and it is related to the distribution of power of <math>x</math> over the frequencies: the amplitude of a frequency component of <math>c</math> is its contribution to the power of the signal.

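The decomposition above is easy to check numerically. The following Python sketch (an illustration with arbitrarily chosen amplitudes and frequencies, not drawn from any source) builds a two-component signal and confirms that the power contributed at each <math>\nu_k</math> is <math>(a_k^2 + b_k^2)/2</math>:

<syntaxhighlight lang="python">
import numpy as np

N = 4000                       # samples; chosen so each nu_k * N is an integer (no leakage)
n = np.arange(N)
a1, b1, nu1 = 1.0, 0.5, 0.10   # component 1: power (a1**2 + b1**2) / 2 = 0.625
a2, b2, nu2 = 0.3, 0.4, 0.25   # component 2: power (a2**2 + b2**2) / 2 = 0.125

x = (a1 * np.cos(2 * np.pi * nu1 * n) + b1 * np.sin(2 * np.pi * nu1 * n)
     + a2 * np.cos(2 * np.pi * nu2 * n) + b2 * np.sin(2 * np.pi * nu2 * n))

# Total power = variance of the zero-mean series = 0.625 + 0.125
print(np.mean(x**2))           # ~0.75

# |X_k|^2 / N^2 is the power of each complex exponential; summing the
# contributions at +nu_k and -nu_k gives (a_k^2 + b_k^2) / 2.
X = np.fft.fft(x)
power = np.abs(X)**2 / N**2
freqs = np.fft.fftfreq(N)
print(2 * power[np.argmin(np.abs(freqs - nu1))])   # ~0.625
print(2 * power[np.argmin(np.abs(freqs - nu2))])   # ~0.125
</syntaxhighlight>
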
== Definition ==

=== Energy spectral density{{Anchor|energy spectral density}} ===

Energy spectral density describes how the [[Energy (signal processing)|energy]] of a signal or a [[time series]] is distributed with frequency. Here, the term [[Energy (signal processing)|energy]] is used in the generalized sense of signal processing; that is, the energy of a signal <math>x(t)</math> is<ref name="oppenheim">{{cite book | last1 = Oppenheim | last2 = Verghese | title = Signals, Systems, and Inference | pages = 32–4}}</ref>

:<math> \int\limits_{-\infty}^\infty |x(t)|^2\, dt.</math>

The energy spectral density is most suitable for transients (that is, pulse-like signals) having a finite total energy. In this case, [[Parseval's theorem]] gives an alternative expression for the energy of the signal in terms of its [[Fourier transform]] <math>\hat{x}(\omega)</math>:<ref name="oppenheim" />

:<math>\int\limits_{-\infty}^\infty |x(t)|^2\, dt = \frac{1}{2\pi}\int\limits_{-\infty}^\infty |\hat{x}(\omega)|^2\, d\omega.</math>

Here <math>\omega</math> is the [[angular frequency]]. Since the integral on the right-hand side is the energy of the signal, the integrand <math>|\hat{x}(\omega)|^2</math> can be interpreted as a [[density function]] describing the energy per unit frequency contained in the signal at frequency <math>\omega</math>. In light of this, the energy spectral density of a signal <math>x(t)</math> is defined as<ref name="oppenheim" /><ref group="N">Here the Fourier transform is defined so that the forward transform (from <math>x(t)</math> to <math>\hat{x}(\omega)</math>) carries no factor of 1/2π; this factor instead appears on the inverse transform.</ref>

:<math> S_{xx}(\omega) = |\hat{x}(\omega)|^2 = \left|\int\limits_{-\infty}^\infty x(t)e^{-i\omega t}\,dt\right|^2.</math>

As a physical example of how one might measure the energy spectral density of a signal, suppose <math>V(t)</math> represents the [[electric potential|potential]] (in [[volt]]s) of an electrical pulse propagating along a [[transmission line]] of [[impedance]] <math>Z</math>, and suppose the line is terminated with a [[impedance matching|matched]] resistor, so that all of the pulse energy is delivered to the resistor and none is reflected back. By [[Ohm's law]], the power delivered to the resistor at time <math>t</math> is <math>V(t)^2/Z</math>, so the total energy is found by integrating <math>V(t)^2/Z</math> with respect to time over the duration of the pulse. To find the value of the energy spectral density <math>S_{xx}(\omega)</math> at frequency <math>\omega</math>, one could insert between the transmission line and the resistor a [[bandpass filter]] that passes only a narrow range of frequencies (<math>\Delta \omega</math>, say) near the frequency of interest, and then measure the total energy <math>E(\omega)</math> dissipated in the resistor. The value of the energy spectral density at <math>\omega</math> is then estimated to be <math>E(\omega)/\Delta\omega</math>. In this example, since the power <math>V(t)^2/Z</math> has units of V<sup>2</sup> Ω<sup>−1</sup>, the energy <math>E(\omega)</math> has units of V<sup>2</sup> s Ω<sup>−1</sup> = J, and hence the estimate <math>E(\omega)/\Delta\omega</math> of the energy spectral density has units of J Hz<sup>−1</sup>, as required. In many situations, it is common to forgo the step of dividing by <math>Z</math>, so that the energy spectral density instead has units of V<sup>2</sup> s Hz<sup>−1</sup>.

This definition generalizes in a straightforward manner to a discrete signal with an infinite number of values <math>x_n</math>, such as a signal sampled at discrete times <math>x_n = x(n\,\Delta t)</math>:

:<math>S_{xx}(\omega) = (\Delta t)^2\left|\sum_{n=-\infty}^\infty x_n e^{-i\omega n}\right|^2 = (\Delta t)^2\, \hat x_d(\omega)\, \hat x_d^*(\omega),</math>

where <math>\hat x_d(\omega)</math> is the [[discrete Fourier transform]] of <math>x_n</math>. The sampling interval <math>\Delta t</math> is needed to keep the correct physical units and to ensure that we recover the continuous case in the limit <math>\Delta t \rightarrow 0</math>; however, in the mathematical sciences the interval is often set to 1.

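As a check on this discrete definition, the following Python sketch (the pulse shape and sampling interval are assumptions made for illustration) computes the ESD of a sampled rectangular pulse and verifies that integrating it over frequency recovers the total signal energy, as Parseval's theorem requires:

<syntaxhighlight lang="python">
import numpy as np

dt = 1e-3                          # sampling interval (Delta t), seconds
t = np.arange(0, 1.0, dt)
x = np.where(t < 0.1, 1.0, 0.0)    # 100 ms rectangular pulse of height 1 V

# Total energy in the time domain: integral of |x(t)|^2 dt
energy_time = np.sum(np.abs(x)**2) * dt

# Discrete ESD, S_xx = (dt)^2 |x_hat_d|^2, evaluated with the FFT
X = np.fft.fft(x)
esd = dt**2 * np.abs(X)**2         # units: V^2 s Hz^-1

# Integrate the ESD over the frequency grid (spacing df = 1/(N dt))
df = 1.0 / (len(x) * dt)
energy_freq = np.sum(esd) * df

print(energy_time, energy_freq)    # both ~0.1 V^2 s
</syntaxhighlight>
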
=== Power spectral density ===
{{distinguish|Spectral power distribution}}

The above definition of energy spectral density is most suitable for transients, i.e., pulse-like signals, for which the Fourier transforms of the signals exist. For continuing signals that describe, for example, [[stationary process|stationary]] physical processes, it makes more sense to define a ''power spectral density'' (PSD), which describes how the [[power (physics)|power]] of a signal or time series is distributed over the different frequencies, as in the simple example given previously. Here, power can be the actual physical power or, more often and for convenience with abstract signals, simply the squared value of the signal. The total power ''P'' of a signal <math>x(t)</math> is the following time average:

: <math> P = \lim_{T\rightarrow \infty} \frac 1 {2T} \int_{-T}^T x(t)^2\,dt.</math>

The power of a signal may be finite even if its energy is infinite. For example, a 10-volt power supply connected to a 1 kΩ resistor delivers {{nowrap|(10 V)<sup>2</sup> / (1 kΩ)}} = 0.1 W of power at any given time; however, if the supply operates for an infinite amount of time, it delivers an infinite amount of energy (0.1 J each second for an infinite number of seconds).

In analyzing the frequency content of the signal <math>x(t)</math>, one might like to compute the ordinary Fourier transform <math>\hat{x}(\omega)</math>; however, for many signals of interest this Fourier transform does not exist.{{#tag:ref|Some authors (e.g. Risken<ref>{{cite book
 | title = The Fokker–Planck Equation: Methods of Solution and Applications
 | edition = 2nd
 | author = Hannes Risken
 | publisher = Springer
 | year = 1996
 | isbn = 9783540615309
 | page = 30
 | url = http://books.google.com/books?id=MG2V9vTgSgEC&pg=PA30
}}</ref>) still use the non-normalized Fourier transform in a formal way to formulate a definition of the power spectral density

:<math> \langle \hat x(\omega) \hat x^\ast(\omega') \rangle = 2\pi\,f(\omega)\,\delta(\omega-\omega'),</math>

where <math>\delta(\omega-\omega')</math> is the [[Dirac delta function]]. Such formal statements may sometimes be useful to guide the intuition, but should always be used with utmost care.|group="N"}} Because of this, it is advantageous to work with a truncated Fourier transform <math>\hat{x}_T(\omega)</math>, where the signal is integrated only over a finite interval [0, ''T'']:

:<math> \hat{x}_T(\omega) = \frac{1}{\sqrt{T}} \int_0^T x(t) e^{-i\omega t}\, dt.</math>

Then the power spectral density can be defined as<ref>{{cite book
 | title = Spikes: Exploring the Neural Code (Computational Neuroscience)
 | author = Fred Rieke, William Bialek, and David Warland
 | publisher = MIT Press
 | year = 1999
 | isbn = 978-0262681087
}}</ref><ref name="millers370">{{cite book
 | title = Probability and Random Processes
 | author = Scott Millers and Donald Childers
 | publisher = Academic Press
 | year = 2012
 | pages = 370–5
}}</ref>

:<math> S_{xx}(\omega) = \lim_{T \rightarrow \infty} \mathbf{E} \left[ | \hat{x}_T(\omega) | ^ 2 \right]. </math>

Here <math>\mathbf{E}</math> denotes the [[expected value]]; explicitly, we have<ref name="millers370" />

:<math> \mathbf{E} \left[ | \hat{x}_T(\omega) |^2 \right] = \mathbf{E} \left[ \frac{1}{T} \int\limits_0^T x^*(t) e^{i\omega t}\, dt \int\limits_0^T x(t') e^{-i\omega t'}\, dt' \right] = \frac{1}{T} \int\limits_0^T \int\limits_0^T \mathbf{E}\left[x^*(t) x(t')\right] e^{i\omega (t-t')}\, dt\, dt'.</math>

Using such formal reasoning, one may already guess that for a [[stationary process|stationary random process]], the power spectral density <math>f(\omega)</math> and the [[autocorrelation function]] of the signal, <math>\gamma(\tau) = \langle X(t) X(t+\tau)\rangle</math>, should be a Fourier transform pair. Provided that <math>\gamma(\tau)</math> is absolutely integrable, which is not always true,

:<math>S_{xx}(\omega) = \int_{-\infty}^\infty \gamma(\tau)\,e^{-i\omega\tau}\,d\tau = \hat\gamma(\omega).</math>

The [[Wiener–Khinchin theorem]] makes sense of this formula for any [[wide-sense stationary process]] under weaker hypotheses: <math>\gamma</math> does not need to be absolutely integrable, it only needs to exist. But then the integral can no longer be interpreted as usual. The formula also makes sense if interpreted as involving [[Distribution (mathematics)|distributions]] (in the sense of [[Laurent Schwartz]], not in the sense of a statistical [[cumulative distribution function]]) instead of functions. If <math>\gamma</math> is continuous, [[Bochner's theorem]] can be used to prove that its Fourier transform exists as a positive [[measure]], whose distribution function is <math>F</math> (but not necessarily as a function and not necessarily possessing a probability density).

Many authors use this equality to actually ''define'' the power spectral density.<ref>{{cite book
 | title = Echo Signal Processing
 | author = Dennis Ward Ricker
 | publisher = Springer
 | year = 2003
 | isbn = 1-4020-7395-X
 | url = http://books.google.com/books?id=NF2Tmty9nugC&pg=PA23&dq=%22power+spectral+density%22+%22energy+spectral+density%22&lr=&as_brr=3&ei=HZMvSPSWFZyStwPWsfyBAw&sig=1ZZcHwxXkErvNXtAHv21ijTXoP8#PPA23,M1
}}</ref>

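In the discrete setting, this Fourier-transform pairing can be observed exactly: for a finite sample, the periodogram is the discrete Fourier transform of the circular autocorrelation of the data. A minimal Python sketch (using arbitrary random data) verifies the identity:

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(0)
N = 1024
x = rng.standard_normal(N)

# Circular autocorrelation r[m] = (1/N) sum_n x[n] x[(n+m) mod N]
r = np.array([np.mean(x * np.roll(x, -m)) for m in range(N)])

psd_from_autocorr = np.fft.fft(r).real         # DFT of the autocorrelation
periodogram = np.abs(np.fft.fft(x))**2 / N     # |x_hat(omega)|^2 / N

print(np.allclose(psd_from_autocorr, periodogram))   # True
</syntaxhighlight>
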
The power of the signal in a given frequency band <math>[\omega_1, \omega_2]</math> can be calculated by integrating over positive and negative frequencies,

: <math>
\int_{\omega_1}^{\omega_2} \left[ S_{xx}(\omega) + S_{xx}(-\omega) \right] d\omega = \bigl[F(\omega_2) - F(\omega_1)\bigr] + \bigl[F(-\omega_1) - F(-\omega_2)\bigr],
</math>

where <math>F</math> is the integrated spectrum whose derivative is <math>S_{xx}</math>.

More generally, similar techniques may be used to estimate a time-varying spectral density.{{citation needed|date=January 2013}}

The definition of the power spectral density generalizes in a straightforward manner to a finite time series <math>x_n</math> with <math>1 \le n \le N</math>, such as a signal sampled at discrete times <math>x_n = x(n\Delta t)</math> for a total measurement period <math>T = N \Delta t</math>:

:<math>S_{xx}(\omega) = \frac{(\Delta t)^2}{T}\left|\sum_{n=1}^N x_n e^{-i\omega n}\right|^2.</math>

In a real-world application, one would typically average this single-measurement PSD over several repetitions of the measurement to obtain a more accurate estimate of the theoretical PSD of the physical process underlying the individual measurements. This computed PSD is sometimes called a [[periodogram]]. One can prove that the periodogram converges to the true PSD as the averaging time interval ''T'' goes to infinity (Brown & Hwang<ref>{{cite book
 | title = Introduction to Random Signals and Applied Kalman Filtering
 | author = Robert Grover Brown and Patrick Y. C. Hwang
 | publisher = John Wiley & Sons
 | year = 1997
 | isbn = 0-471-12839-2
}}</ref>).

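The single-measurement PSD and the averaging step can be sketched in Python as follows (the white-noise process, sampling interval, and repetition count are assumptions chosen for illustration, because the flat theoretical PSD of white noise makes the estimate easy to check):

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(1)
dt = 0.01                  # sampling interval Delta t, seconds
N = 1000                   # samples per measurement, so T = N * dt
n_rep = 200                # number of repeated measurements to average

def single_psd(x, dt):
    """S_xx = ((dt)^2 / T) |sum_n x_n e^{-i omega n}|^2, evaluated by FFT."""
    T = len(x) * dt
    return (dt**2 / T) * np.abs(np.fft.fft(x))**2

# White Gaussian noise with unit variance: its two-sided PSD is flat
# at sigma^2 * dt = 0.01 in these units.
psds = [single_psd(rng.standard_normal(N), dt) for _ in range(n_rep)]
avg_psd = np.mean(psds, axis=0)

print(avg_psd.mean())      # ~0.01; fluctuations shrink as n_rep grows
</syntaxhighlight>
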
If two signals both possess power spectral densities, then a cross-spectral density can be calculated by using their [[cross-correlation]] function.

==== Properties of the power spectral density ====

Some properties of the PSD include:<ref>{{cite book
 | title = Statistical Analysis in Climate Research
 | last = Storch
 | first = H. von
 | coauthors = F. W. Zwiers
 | publisher = Cambridge University Press
 | year = 2001
 | isbn = 0-521-01230-9
}}</ref>

* The spectrum of a real-valued process is an [[even function]] of frequency: <math>S_{xx}(-\omega) = S_{xx}(\omega)</math>.
* If the process is continuous and purely indeterministic, the autocovariance function can be reconstructed by using the [[inverse Fourier transform]].
* The PSD describes the distribution of the [[variance]] over frequency. In particular,
*: <math>\operatorname{Var}(X_n) = \gamma_0 = 2 \int_0^{\infty} S_{xx}(\omega)\, d\omega.</math>
* The PSD is a linear function of the autocovariance function: if <math>\gamma</math> is decomposed into two functions <math>\gamma(\tau) = \alpha_1 \gamma_1(\tau) + \alpha_2 \gamma_2(\tau)</math>, then
*: <math>S_{xx} = \alpha_1 S_{xx,1} + \alpha_2 S_{xx,2}.</math>

The ''integrated spectrum'' or ''power spectral distribution'' <math>F(\omega)</math> is defined as<ref>{{cite book | title = An Introduction to the Theory of Random Signals and Noise | author = Wilbur B. Davenport and William L. Root | publisher = IEEE Press, New York | year = 1987 | isbn = 0-87942-235-1}}</ref>

: <math>F(\omega) = \int_{-\infty}^\omega S_{xx}(\omega')\, d\omega'. </math>

=== Cross-spectral density ===

Given two signals <math>x(t)</math> and <math>y(t)</math>, each of which possesses a power spectral density <math>S_{xx}(\omega)</math> and <math>S_{yy}(\omega)</math>, it is possible to define a ''cross-spectral density'' (CSD) given by

:<math>S_{xy}(\omega) = \lim_{T\rightarrow\infty} \mathbf{E}\left[\hat{x}_T^*(\omega)\, \hat{y}_T(\omega)\right],</math>

where <math>\hat{x}_T</math> and <math>\hat{y}_T</math> are truncated Fourier transforms as defined above. The cross-spectral density (or "cross power spectrum") is thus the Fourier transform of the [[cross-correlation]] function:

:<math>S_{xy}(\omega) = \int_{-\infty}^{\infty} R_{xy}(t) e^{-j \omega t}\, dt = \int_{-\infty}^{\infty} \left[ \int_{-\infty}^{\infty} x(\tau) \cdot y(\tau+t)\, d\tau \right] e^{-j \omega t}\, dt,</math>

where <math>R_{xy}(t)</math> is the [[cross-correlation]] of <math>x(t)</math> and <math>y(t)</math>.

By an extension of the Wiener–Khinchin theorem, the inverse Fourier transform of the cross-spectral density <math>S_{xy}(\omega)</math> is the [[Cross-covariance#Signal_processing|cross-covariance]] function.<ref>{{cite web
 | url = http://www.fil.ion.ucl.ac.uk/~wpenny/course/course.html
 | title = Signal Processing Course, chapter 7
 | author = William D. Penny
 | year = 2009
}}</ref> In light of this, the PSD is seen to be a special case of the CSD for <math>x(t) = y(t)</math>.

For discrete signals <math>x_n</math> and <math>y_n</math>, the relationship between the cross-spectral density and the cross-covariance is

: <math>
S_{xy}(\omega) = \frac{1}{2\pi}\sum_{n=-\infty}^\infty R_{xy}(n)\, e^{-j\omega n}.
</math>

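As an illustration, a CSD estimate can be computed with SciPy's Welch-averaged <code>signal.csd</code> (the tone frequency, delay, and noise level below are arbitrary choices): the magnitude of <math>S_{xy}</math> peaks at the frequency the two signals share, and its phase there reflects the delay between them.

<syntaxhighlight lang="python">
import numpy as np
from scipy import signal

rng = np.random.default_rng(2)
fs = 1000.0                         # sampling frequency, Hz
t = np.arange(0, 10, 1 / fs)
x = np.sin(2 * np.pi * 50 * t) + 0.5 * rng.standard_normal(t.size)
y = np.roll(x, 5) + 0.5 * rng.standard_normal(t.size)   # ~5-sample delay

# Welch-averaged cross-spectral density S_xy(f)
f, Pxy = signal.csd(x, y, fs=fs, nperseg=1024)

peak = np.argmax(np.abs(Pxy))
print(f[peak])                      # ~50 Hz, the shared component
print(np.angle(Pxy[peak]))          # magnitude ~2*pi*50*(5/fs) = pi/2;
                                    # the sign depends on the transform convention
</syntaxhighlight>
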
== Estimation ==
{{main|Spectral density estimation}}

The goal of spectral density estimation is to [[estimation theory|estimate]] the spectral density of a [[random signal]] from a sequence of time samples. Depending on what is known about the signal, estimation techniques can involve [[parametric statistics|parametric]] or [[non-parametric statistics|non-parametric]] approaches, and may be based on time-domain or frequency-domain analysis. For example, a common parametric technique involves fitting the observations to an [[autoregressive model]]. A common non-parametric technique is the [[periodogram]].

The spectral density is usually estimated using [[Fourier transform]] methods (such as the [[Welch method]]), but other techniques such as the [[Maximum entropy spectral estimation|maximum entropy]] method can also be used.

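For instance, a non-parametric Welch estimate takes a few lines of Python with SciPy (the test signal here, a sinusoid in white noise, is an arbitrary choice for illustration); averaging windowed periodogram segments trades frequency resolution for a much lower variance than the raw periodogram:

<syntaxhighlight lang="python">
import numpy as np
from scipy import signal

rng = np.random.default_rng(3)
fs = 1000.0                                  # sampling frequency, Hz
t = np.arange(0, 10, 1 / fs)
x = np.sin(2 * np.pi * 120 * t) + rng.standard_normal(t.size)

# Welch's method: average the periodograms of overlapping windowed segments
f, Pxx = signal.welch(x, fs=fs, nperseg=1024)
print(f[np.argmax(Pxx)])                     # ~120 Hz

# Raw periodogram for comparison: finer frequency grid, noisier estimate
f2, Pxx2 = signal.periodogram(x, fs=fs)
</syntaxhighlight>
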
== Properties ==

* The spectral density of <math>f(t)</math> and the [[autocorrelation]] of <math>f(t)</math> form a Fourier transform pair (for the PSD versus the ESD, different definitions of the autocorrelation function are used).

* One of the results of Fourier analysis is [[Parseval's theorem]], which states that the area under the energy spectral density curve is equal to the area under the square of the magnitude of the signal, the total energy:

::<math>\int_{-\infty}^\infty \left| f(t) \right|^2\, dt = \int_{-\infty}^\infty \mathrm{ESD}(\omega)\, d\omega.</math>

:The above theorem holds true in the discrete cases as well. A similar result holds for power: the area under the power spectral density curve is equal to the total signal power, which is <math>R(0)</math>, the autocorrelation function at zero lag. This is also (up to a constant that depends on the normalization factors chosen in the definitions employed) the variance of the data comprising the signal. A numerical check of the discrete case is sketched below.

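For a sampled signal these statements reduce to Parseval's identity for the discrete Fourier transform, which a short Python check (arbitrary data; normalization noted in the comments) confirms:

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(4)
x = rng.standard_normal(10_000)
x -= x.mean()                     # zero mean, so R(0) is the variance

# Discrete power spectrum normalized so that its sum is the average power
power_spectrum = np.abs(np.fft.fft(x))**2 / len(x)**2

print(power_spectrum.sum())       # area under the power spectrum
print(np.mean(x**2))              # R(0); identical up to rounding
</syntaxhighlight>
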
== Related concepts ==

* Most spectrum graphs really display only the power spectral density. Sometimes{{when|date=September 2013}} the complete frequency spectrum is graphed in two parts: amplitude versus frequency and [[phase (waves)|phase]] versus frequency (which contains the rest of the information from the frequency spectrum). The original function <math>f(t)</math> cannot be recovered from the amplitude spectral density part alone, as the temporal information is lost. See [[spectral phase]] and [[phase noise]].

* The [[spectral centroid]] of a signal is the midpoint of its spectral density function, i.e. the frequency that divides the distribution into two equal parts.

* The [[spectral edge frequency]] of a signal is an extension of the previous concept to any proportion instead of two equal parts.

* Spectral density is a function of frequency, not a function of time. However, the spectral density of small windows of a longer signal may be calculated and plotted versus the time associated with each window. Such a graph is called a ''[[spectrogram]]''. This is the basis of a number of spectral analysis techniques, such as the [[short-time Fourier transform]] and [[wavelets]].

* In [[radiometry]] and [[colorimetry]] (or [[color science]] more generally), the [[spectral power distribution]] (SPD) of a [[light source]] is a measure of the power carried by each frequency or color in a light source. The light spectrum is usually measured at points (often 31) along the [[visible spectrum]], in wavelength space instead of frequency space, which makes it not strictly a spectral density. Some [[spectrophotometry|spectrophotometers]] can measure increments as fine as one to two [[nanometer]]s. The values are used to calculate other specifications and then plotted to demonstrate the spectral attributes of the source. This can be a helpful tool in analyzing the [[color]] characteristics of a particular source.

== Applications ==

=== Electrical engineering ===

The concept and use of the [[power spectrum]] of a signal is fundamental in [[electrical engineering]], especially in [[communication systems|electronic communication systems]], including [[radio communication]]s, [[radar]]s, and related systems, as well as passive [[remote sensing]] technology. Much effort has been expended on developing and producing electronic instruments called [[spectrum analyzer]]s to aid electrical engineers and technicians in observing and measuring the ''power spectra'' of signals. The cost of a spectrum analyzer varies with its frequency range, its [[bandwidth (signal processing)|bandwidth]], and its accuracy. The higher the frequency range ([[S-band]], [[C-band]], [[X-band]], [[Ku band|Ku-band]], [[K-band]], [[Ka band|Ka-band]], etc.), the more difficult the components are to make, assemble, and test, and the more expensive the analyzer; wider bandwidth and more accurate measurement capability likewise increase the cost.

A spectrum analyzer measures the magnitude of the [[short-time Fourier transform]] (STFT) of an input signal. If the signal being analyzed can be considered a stationary process, the STFT is a good smoothed estimate of its power spectral density. These devices operate at low frequencies and with small bandwidths.

=== Coherence ===

See [[Coherence (signal processing)]] for use of the cross-spectral density.

== See also ==

* [[Noise spectral density]]
* [[Spectral density estimation]]
* [[Spectral efficiency]]
* [[Spectral power distribution]]
* [[Brightness temperature]]
* [[Colors of noise]]
* [[Spectral leakage]]
* [[Window function]]
* [[Frequency domain]]
* [[Frequency spectrum]]
* [[Bispectrum]]

== Notes ==

{{reflist|group="N"}}

== References ==

{{Reflist}}

== External links ==
* [http://vibrationdata.wordpress.com/category/power-spectral-density/ Power Spectral Density Matlab scripts]

{{decibel}}

{{DEFAULTSORT:Spectral Density}}
[[Category:Frequency domain analysis]]
[[Category:Signal processing]]
[[Category:Waves]]