Window function


In signal processing, a window function (also known as an apodization function or tapering function[1]) is a mathematical function that is zero-valued outside of some chosen interval. For instance, a function that is constant inside the interval and zero elsewhere is called a rectangular window, which describes the shape of its graphical representation. When another function or waveform/data-sequence is multiplied by a window function, the product is also zero-valued outside the interval: all that is left is the part where they overlap, the "view through the window".

Applications of window functions include spectral analysis, filter design, and beamforming. In typical applications, the window functions used are non-negative smooth "bell-shaped" curves,[2] though rectangle, triangle, and other functions can be used.

A more general definition of window functions does not require them to be identically zero outside an interval, as long as the product of the window multiplied by its argument is square integrable, and, more specifically, that the function goes sufficiently rapidly toward zero.[3]



Spectral analysis

The Fourier transform of the function cos ωt is zero, except at frequency ±ω. However, many other functions and waveforms do not have convenient closed form transforms. Alternatively, one might be interested in their spectral content only during a certain time period.

In either case, the Fourier transform (or something similar) can be applied on one or more finite intervals of the waveform. In general, the transform is applied to the product of the waveform and a window function. Any window (including rectangular) affects the spectral estimate computed by this method.

Figure 1: Zoomed view of spectral leakage


Windowing of a simple waveform like cos ωt causes its Fourier transform to develop non-zero values (commonly called spectral leakage) at frequencies other than ω. The leakage tends to be worst (highest) near ω and least at frequencies farthest from ω.

If the waveform under analysis comprises two sinusoids of different frequencies, leakage can interfere with the ability to distinguish them spectrally. If their frequencies are dissimilar and one component is weaker, then leakage from the larger component can obscure the weaker one’s presence. But if the frequencies are similar, leakage can render them unresolvable even when the sinusoids are of equal strength.

The rectangular window has excellent resolution characteristics for sinusoids of comparable strength, but it is a poor choice for sinusoids of disparate amplitudes. This characteristic is sometimes described as low-dynamic-range.

At the other extreme of dynamic range are the windows with the poorest resolution. These high-dynamic-range, low-resolution windows are also poorest in terms of sensitivity; that is, if the input waveform contains random noise close to the frequency of a sinusoid, the response to the noise, compared to the sinusoid, will be higher than with a higher-resolution window. In other words, the ability to find weak sinusoids amidst the noise is diminished by a high-dynamic-range window. High-dynamic-range windows are probably most often justified in wideband applications, where the spectrum being analyzed is expected to contain many different components of various amplitudes.

In between the extremes are moderate windows, such as Hamming and Hann. They are commonly used in narrowband applications, such as the spectrum of a telephone channel. In summary, spectral analysis involves a tradeoff between resolving comparable strength components with similar frequencies and resolving disparate strength components with dissimilar frequencies. That tradeoff occurs when the window function is chosen.

Discrete-time signals

When the input waveform is time-sampled, instead of continuous, the analysis is usually done by applying a window function and then a discrete Fourier transform (DFT). But the DFT provides only a coarse sampling of the actual DTFT spectrum. Figure 1 shows a portion of the DTFT for a rectangularly windowed sinusoid. The actual frequency of the sinusoid is indicated as "0" on the horizontal axis. Everything else is leakage, exaggerated by the use of a logarithmic presentation. The unit of frequency is "DFT bins"; that is, the integer values on the frequency axis correspond to the frequencies sampled by the DFT. So the figure depicts a case where the actual frequency of the sinusoid happens to coincide with a DFT sample,[note 1] and the maximum value of the spectrum is accurately measured by that sample. When it misses the maximum value by some amount (up to 1/2 bin), the measurement error is referred to as scalloping loss (inspired by the shape of the peak). But the most interesting thing about this case is that all the other samples coincide with nulls in the true spectrum. (The nulls are actually zero-crossings, which cannot be shown on a logarithmic scale such as this.) So in this case, the DFT creates the illusion of no leakage. Despite the unlikely conditions of this example, it is a common misconception that visible leakage is some sort of artifact of the DFT. But since any window function causes leakage, its apparent absence (in this contrived example) is actually the DFT artifact.
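
The scalloping effect described above is easy to reproduce numerically. The sketch below (a hypothetical NumPy example, not from the original article) compares the peak DFT magnitude of a rectangularly windowed sinusoid landing exactly on a bin against one landing halfway between bins; the ratio is the worst-case scalloping loss, close to the well-known figure of roughly 3.9 dB for the rectangular window.

```python
import numpy as np

N = 64
n = np.arange(N)

def peak_bin_mag(f):
    # magnitude of the largest DFT bin for a rectangularly windowed cosine
    x = np.cos(2 * np.pi * f * n / N)
    return np.abs(np.fft.rfft(x)).max()

on_bin  = peak_bin_mag(10.0)   # frequency exactly on a DFT sample
off_bin = peak_bin_mag(10.5)   # worst case: halfway between samples

loss_db = 20 * np.log10(off_bin / on_bin)   # near the ~3.9 dB worst-case
                                            # scalloping loss of the rectangle
```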

This figure compares the processing losses of three window functions for sinusoidal inputs, with both minimum and maximum scalloping loss.

Noise bandwidth

The concepts of resolution and dynamic range tend to be somewhat subjective, depending on what the user is actually trying to do. But they also tend to be highly correlated with the total leakage, which is quantifiable. It is usually expressed as an equivalent bandwidth, B. It can be thought of as redistributing the DTFT into a rectangular shape with height equal to the spectral maximum and width B.[note 2][4] The more the leakage, the greater the bandwidth. It is sometimes called noise equivalent bandwidth or equivalent noise bandwidth, because it is proportional to the average power that will be registered by each DFT bin when the input signal contains a random noise component (or is just random noise). A graph of the power spectrum, averaged over time, typically reveals a flat noise floor, caused by this effect. The height of the noise floor is proportional to B. So two different window functions can produce different noise floors.
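
For a sampled window, the noise equivalent bandwidth can be computed directly from the window samples using the standard discrete form B = N·Σw²/(Σw)², in DFT bins. A minimal NumPy sketch (the helper name is ours):

```python
import numpy as np

def nenbw(w):
    """Noise equivalent bandwidth in DFT bins: N * sum(w^2) / (sum(w))^2."""
    w = np.asarray(w, dtype=float)
    return len(w) * np.sum(w**2) / np.sum(w)**2

N = 1024
rect = np.ones(N)
hann = 0.5 - 0.5 * np.cos(2 * np.pi * np.arange(N) / N)   # periodic Hann

# rectangular: B = 1.0 exactly; periodic Hann: B = 1.5 exactly
```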

Processing gain and losses

In signal processing, operations are chosen to improve some aspect of quality of a signal by exploiting the differences between the signal and the corrupting influences. When the signal is a sinusoid corrupted by additive random noise, spectral analysis distributes the signal and noise components differently, often making it easier to detect the signal's presence or measure certain characteristics, such as amplitude and frequency. Effectively, the signal to noise ratio (SNR) is improved by distributing the noise uniformly, while concentrating most of the sinusoid's energy around one frequency. Processing gain is a term often used to describe an SNR improvement. The processing gain of spectral analysis depends on the window function, both its noise bandwidth (B) and its potential scalloping loss. These effects partially offset, because windows with the least scalloping naturally have the most leakage.

The figure at right depicts the effects of three different window functions on the same data set, comprising two equal-strength sinusoids in additive noise. The frequencies of the sinusoids are chosen such that one encounters no scalloping and the other encounters maximum scalloping. Both sinusoids suffer less SNR loss under the Hann window than under the Blackman–Harris window. In general (as mentioned earlier), this is a deterrent to using high-dynamic-range windows in low-dynamic-range applications.

Sampled window functions are generated differently for filter design and spectral analysis applications, and the asymmetrical ones often used in spectral analysis are themselves generated in a couple of different ways. Using the triangular function, for example, three different outcomes for an 8-point window sequence are illustrated.
Three different ways to create an 8-point triangular window sequence.

Filter design


Windows are sometimes used in the design of digital filters, in particular to convert an "ideal" impulse response of infinite duration, such as a sinc function, to a finite impulse response (FIR) filter design. That is called the window method.[5][6]

Symmetry and asymmetry

Window functions generated for digital filter design are symmetrical sequences, usually an odd length with a single maximum at the center. Windows for DFT/FFT usage, such as in spectral analysis, are often created by deleting the right-most coefficient of an odd-length, symmetrical window. Such truncated sequences are known as periodic.[7] The deleted coefficient is effectively restored (by a virtual copy of the symmetrical left-most coefficient) when the truncated sequence is periodically extended (which is the time-domain equivalent of sampling the DTFT). A different way of saying the same thing is that the DFT "samples" the DTFT of the window at the exact points that are not affected by spectral leakage from the discontinuity. The advantage of this trick is that a 512 length window (for example) enjoys the slightly better performance metrics of a 513 length design. Such a window is generated by the Matlab function hann(512,'periodic'), for instance. To generate it with the formula in this article (below), the window length (N) is 513, and the 513th coefficient of the generated sequence is discarded.
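
The symmetric/periodic distinction is simple to express in code. A sketch using the usual Hann definition (function names are ours): the periodic form is just a symmetric design of length N + 1 with its last coefficient discarded.

```python
import numpy as np

def hann_symmetric(N):
    # filter-design form: zero endpoints, single maximum at the center
    n = np.arange(N)
    return 0.5 - 0.5 * np.cos(2 * np.pi * n / (N - 1))

def hann_periodic(N):
    # DFT/spectral-analysis form: design a length N+1 symmetric window
    # and discard the right-most coefficient
    return hann_symmetric(N + 1)[:N]
```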

Another type of asymmetric window, called DFT-even,[8] is limited to even-length sequences. The generated sequence is offset (cyclically) from its zero-phase counterpart by exactly half the sequence length. In the frequency domain, that corresponds to a multiplication by the trivial sequence (−1)^k, which can have implementation advantages for windows defined by their frequency domain form. Compared to a symmetrical window, the DFT-even sequence has an offset of ½ sample. As illustrated in the figure at right, that means the asymmetry is limited to just one missing coefficient. Therefore, as in the periodic case, it is effectively restored (by a virtual copy of the symmetrical left-most coefficient) when the truncated sequence is periodically extended.

Applications for which windows should not be used

In some applications, it is preferable not to use a window function. For example:

  • In impact modal testing, when analyzing transient signals such as an excitation signal from hammer blow (see Impulse excitation technique), where most of the energy is located at the beginning of the recording. Using a non-rectangular window would attenuate most of the energy and spread the frequency response unnecessarily.[9]
  • A generalization of the above: when measuring a self-windowing signal, such as an impulse, a shock response, a sine burst, a chirp burst, or a noise burst. Such signals are used in modal analysis. Applying a window function in this case would just deteriorate the signal-to-noise ratio.[9]
  • When measuring a pseudo-random noise (PRN) excitation signal with period T, and using the same recording period T. A PRN signal is periodic and therefore all spectral components of the signal will coincide with FFT bin centers with no leakage.[10]
  • When measuring a repetitive signal locked-in to the sampling frequency, for example measuring the vibration spectrum analysis during Shaft alignment, fault diagnosis of bearings, engines, gearboxes etc. Since the signal is repetitive, all spectral energy is confined to multiples of the base repetition frequency.
  • In an OFDM receiver, the input signal is transformed directly by an FFT, without a window function. The frequency sub-carriers (aka symbols) are designed to align exactly to the FFT frequency bins. A cyclic prefix is usually added to the transmitted signal, allowing frequency-selective fading due to multipath to be modeled as circular convolution, thus avoiding intersymbol interference, which in OFDM is equivalent to spectral leakage.

A list of window functions


  • Each figure label includes the corresponding noise equivalent bandwidth metric (B),[note 2] in units of DFT bins.

B-spline windows

B-spline windows can be obtained as k-fold convolutions of the rectangular window. They include the rectangular window itself (k = 1), the triangular window (k = 2) and the Parzen window (k = 4).[12] Alternative definitions sample the appropriate normalized B-spline basis functions instead of convolving discrete-time windows. A kth order B-spline basis function is a piece-wise polynomial function of degree k−1 that is obtained by k-fold self-convolution of the rectangular function.

Rectangular window

Rectangular window; B = 1.0000.[13]

The rectangular window (sometimes known as the boxcar or Dirichlet window) is the simplest window, equivalent to replacing all but N values of a data sequence by zeros, making it appear as though the waveform suddenly turns on and off:

Other windows are designed to moderate these sudden changes because discontinuities have undesirable effects on the discrete-time Fourier transform (DTFT) and/or the algorithms that produce samples of the DTFT.[14][15]

The rectangular window is the 1st order B-spline window as well as the 0th power cosine window.

Triangular window (with L=N-1) or equivalently the Bartlett window; B = 1.3333.[13]

Triangular window

Triangular windows are given by:

where L can be N,[8][16] N+1,[17] or N−1.[18] The last is also known as the Bartlett window. All three definitions converge for large N.
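
The three L conventions can be generated from one parameterized formula. A NumPy sketch (names ours); the L = N − 1 case matches numpy.bartlett:

```python
import numpy as np

def triangular(N, L):
    # w[n] = 1 - |(n - (N-1)/2) / (L/2)|,  n = 0 .. N-1
    n = np.arange(N)
    return 1.0 - np.abs((n - (N - 1) / 2) / (L / 2))

N = 8
bartlett = triangular(N, N - 1)   # L = N-1: endpoints exactly zero (Bartlett)
tri_n    = triangular(N, N)       # L = N:   endpoints small but non-zero
tri_np1  = triangular(N, N + 1)   # L = N+1
```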

The triangular window is the 2nd order B-spline window. It can be seen as the convolution of two half-sized rectangular windows, which gives its Fourier transform a main lobe twice as wide as that of a rectangular window of the same length.

Parzen window

Parzen window; B = 1.92.[8]

The Parzen window, also known as the de la Vallée Poussin window, is the 4th order B-spline window.

Other polynomial windows

Welch window

Welch window; B = 1.20.[8]

The Welch window consists of a single parabolic section:


The defining quadratic polynomial reaches a value of zero at the samples just outside the span of the window.
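
A discrete form consistent with the description above (the parabola reaching zero at the samples just outside the span, i.e. at n = −1 and n = N) can be sketched as follows; this particular normalization is our assumption:

```python
import numpy as np

def welch(N):
    # parabola reaching zero at n = -1 and n = N, just outside the span
    n = np.arange(N)
    return 1.0 - ((n - (N - 1) / 2) / ((N + 1) / 2))**2

w = welch(8)
```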

Generalized Hamming windows

Generalized Hamming windows are of the form:


They have only three non-zero DFT coefficients and share the benefits of a sparse frequency domain representation with higher-order generalized cosine windows.
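
A sketch of the generalized Hamming form in the symmetric convention (helper name is ours): with α = 0.5 it reproduces the Hann window, and with α = 0.54 the classic Hamming window.

```python
import numpy as np

def general_hamming(N, alpha):
    # w[n] = alpha - (1 - alpha) * cos(2*pi*n / (N-1)), symmetric convention
    n = np.arange(N)
    return alpha - (1 - alpha) * np.cos(2 * np.pi * n / (N - 1))

hann    = general_hamming(64, 0.5)    # Hann:    alpha = 1/2
hamming = general_hamming(64, 0.54)   # Hamming: alpha ~ 25/46
```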

Hann (Hanning) window

Hann window; B = 1.5000.[13]


The Hann window, named after Julius von Hann and also known as the Hanning window (for being similar in name and form to the Hamming window), the von Hann window, or the raised cosine window, is defined by:[19][20]

  • zero-phase version:

The ends of the cosine just touch zero, so the side-lobes roll off at about 18 dB per octave.[21]

Hamming window

Hamming window, α = 0.53836 and β = 0.46164; B = 1.37. The original Hamming window would have α = 0.54 and β = 0.46; B = 1.3628.[13]

The window with these particular coefficients was proposed by Richard W. Hamming. The window is optimized to minimize the maximum (nearest) side lobe, giving it a height of about one-fifth that of the Hann window.[22][23]


instead of both constants being equal to 1/2 in the Hann window. The constants are approximations of values α = 25/46 and β = 21/46, which cancel the first sidelobe of the Hann window by placing a zero at frequency 5π/(N − 1).[8] Approximation of the constants to two decimal places substantially lowers the level of sidelobes,[8] to a nearly equiripple condition.[23] In the equiripple sense, the optimal values for the coefficients are α = 0.53836 and β = 0.46164.[23]

  • zero-phase version:

Higher-order generalized cosine windows

Windows of the form:

have only 2K + 1 non-zero DFT coefficients, which makes them good choices for applications that require windowing by convolution in the frequency domain. In those applications, the DFT of the unwindowed data vector is needed for a different purpose than spectral analysis (see Overlap-save method). Generalized cosine windows with just two terms (K = 1) belong to the subfamily of generalized Hamming windows.
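
The sum-of-cosines form is straightforward to implement. A sketch (helper name is ours), using the conventional Blackman coefficients a0 = 0.42, a1 = 0.5, a2 = 0.08 (i.e. α = 0.16, from the Blackman section below) as an example:

```python
import numpy as np

def cosine_sum(N, a):
    # w[n] = sum_k (-1)^k * a[k] * cos(2*pi*k*n / (N-1))
    n = np.arange(N)
    w = np.zeros(N)
    for k, ak in enumerate(a):
        w += (-1)**k * ak * np.cos(2 * np.pi * k * n / (N - 1))
    return w

# conventional Blackman (alpha = 0.16): a0 = 0.42, a1 = 0.5, a2 = 0.08
blackman = cosine_sum(64, [0.42, 0.5, 0.08])
```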

Blackman windows

Blackman window; α = 0.16; B = 1.73.[8]

Blackman windows are defined as:

By common convention, the unqualified term Blackman window refers to α = 0.16, as this most closely approximates the "exact Blackman",[24] with a0 = 7938/18608 ≈ 0.42659, a1 = 9240/18608 ≈ 0.49656, and a2 = 1430/18608 ≈ 0.076849.[25] These exact values place zeros at the third and fourth sidelobes.[8]

Nuttall window, continuous first derivative

Nuttall window, continuous first derivative; B = 2.0212.[13]

Considering n as a real number, the function and its first derivative are continuous everywhere.

Blackman–Nuttall window

Blackman–Nuttall window; B = 1.9761.[13]

Blackman–Harris window

Blackman–Harris window; B = 2.0044.[13]

A generalization of the Hamming family, produced by adding more shifted sinc functions, meant to minimize side-lobe levels[26][27]

Flat top window

SRS flat top window; B = 3.7702.[13]

A flat top window is a partially negative-valued window that has a flat top in the frequency domain.[13] Such windows have been made available in spectrum analyzers for the measurement of amplitudes of sinusoidal frequency components.[13] They have a low amplitude measurement error suitable for this purpose, achieved by the spreading of the energy of a sine wave over multiple bins in the spectrum.[13][28] This ensures that the unattenuated amplitude of the sinusoid can be found on at least one of the neighboring bins.[28] The drawback of the broad bandwidth is poor frequency resolution.[13][28] To compensate, a longer window length may be chosen.[13]

Flat top windows can be designed using low-pass filter design methods,[28] or they may be of the usual sum-of-cosine-terms variety.[13] An example of the latter is the flat top window available in the Stanford Research Systems (SRS) SR785 spectrum analyzer:


Rife–Vincent window

Rife and Vincent define three classes of windows constructed as sums of cosines; the classes are generalizations of the Hanning window.[29] Their order-P windows are of the form (normalized to have unity average as opposed to unity max as the windows above are):


For order 1, this formula can match the Hanning window for a1 = −1; this is the Rife–Vincent class-I window, defined by minimizing the high-order sidelobe amplitude. The class-I order-2 Rife–Vincent window has a1 = −4/3 and a2 = 1/3. Coefficients for orders up to 4 are tabulated.[30] For orders greater than 1, the Rife–Vincent window coefficients can be optimized for class II, meaning minimized main-lobe width for a given maximum side-lobe, or for class III, a compromise for which order 2 resembles Blackman's window.[30][31] Given the wide variety of Rife–Vincent windows, plots are not given here.

Power-of-cosine windows

Window functions in the power-of-cosine family are of the form:

The rectangular window (α = 0), the cosine window (α = 1), and the Hann window (α = 2) are members of this family.

Cosine window

Cosine window; B = 1.23.[8]

The cosine window is also known as the sine window; the name describes the shape of the window, a single half-cycle of a sinusoid:

A cosine window convolved with itself is known as the Bohman window.

Adjustable windows

Gaussian window

Gaussian window, σ = 0.4; B = 1.45.

The Fourier transform of a Gaussian is also a Gaussian (it is an eigenfunction of the Fourier Transform). Since the Gaussian function extends to infinity, it must either be truncated at the ends of the window, or itself windowed with another zero-ended window.[32]

Since the log of a Gaussian produces a parabola, this can be used for nearly exact quadratic interpolation in frequency estimation.[33][34][35]

The standard deviation of the Gaussian function is σ(N−1)/2 sampling periods.
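
A sketch of the Gaussian window as parameterized above, with the standard deviation expressed as σ(N − 1)/2 sampling periods (helper name is ours):

```python
import numpy as np

def gaussian_window(N, sigma):
    # standard deviation is sigma * (N-1)/2 sampling periods, as stated above
    n = np.arange(N)
    return np.exp(-0.5 * ((n - (N - 1) / 2) / (sigma * (N - 1) / 2))**2)

w = gaussian_window(64, 0.4)
```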

Confined Gaussian window, σt = 0.1N; B = 1.9982.

Confined Gaussian window

The confined Gaussian window yields the smallest possible root mean square frequency width σω for a given temporal width σt.[36] These windows optimize the RMS time-frequency bandwidth products. They are computed as the minimum eigenvectors of a parameter-dependent matrix. The confined Gaussian window family contains the cosine window and the Gaussian window in the limiting cases of large and small σt, respectively.

Approximate confined Gaussian window, σt = 0.1N; B = 1.9979.

Approximate confined Gaussian window

A confined Gaussian window of temporal width σt is well approximated by:[36]

with the Gaussian:

The temporal width of the approximate window is asymptotically equal to σt for σt < 0.14 N.[36]

Generalized normal window

A more generalized version of the Gaussian window is the generalized normal window.[37] Retaining the notation from the Gaussian window above, we can represent this window as

for any even p. At p = 2 this is a Gaussian window, and as p approaches infinity it approaches a rectangular window. The Fourier transform of this window does not exist in a closed form for general p. However, it demonstrates the other benefits of being smooth with adjustable bandwidth. Like the Tukey window discussed later, this window naturally offers a "flat top" to control the amplitude attenuation of a time series (over which we have no control with the Gaussian window). In essence, it offers a good (controllable) compromise, in terms of spectral leakage, frequency resolution and amplitude attenuation, between the Gaussian window and the rectangular window. See also [38] for a study of the time-frequency representation of this window (or function).

Tukey window

Tukey window, α = 0.5; B = 1.22.[8]

The Tukey window,[8][39] also known as the tapered cosine window, can be regarded as a cosine lobe of width αN/2 that is convolved with a rectangular window of width (1 − α/2)N.

At α = 0 it becomes rectangular, and at α = 1 it becomes a Hann window.
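
A sketch of the Tukey window (our implementation of the standard tapered-cosine form): flat in the middle, with cosine tapers at each end. As stated above, at α = 1 it reduces to the Hann window and at α = 0 to the rectangular window.

```python
import numpy as np

def tukey(N, alpha):
    # flat (= 1) in the middle, cosine tapers over alpha*(N-1)/2 samples per end
    if alpha <= 0:
        return np.ones(N)              # rectangular
    n = np.arange(N)
    w = np.ones(N)
    edge = alpha * (N - 1) / 2
    left = n < edge
    w[left] = 0.5 * (1 + np.cos(np.pi * (n[left] / edge - 1)))
    right = n > (N - 1) - edge
    w[right] = 0.5 * (1 + np.cos(np.pi * ((n[right] - (N - 1)) / edge + 1)))
    return w
```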

Planck-taper window

Planck-taper window, ε = 0.1; B = 1.10.

The so-called "Planck-taper" window is a bump function that has been widely used[40] in the theory of partitions of unity in manifolds. It is a smooth (infinitely differentiable) function everywhere, but is exactly zero outside of a compact region, exactly one over an interval within that region, and varies smoothly and monotonically between those limits. Its use as a window function in signal processing was first suggested in the context of gravitational-wave astronomy, inspired by the Planck distribution.[41] It is defined as a piecewise function:


The amount of tapering is controlled by the parameter ε, with smaller values giving sharper transitions.

DPSS or Slepian window

DPSS window, α = 2; B = 1.47.
DPSS window, α = 3; B = 1.77.

The DPSS (discrete prolate spheroidal sequence) or Slepian window is used to maximize the energy concentration in the main lobe.[42]

The main lobe ends at a bin given by the parameter α.[43]

Kaiser window

Kaiser window, α = 2; B = 1.4963.[13]
Kaiser window, α = 3; B = 1.7952.[13]

The Kaiser, or Kaiser–Bessel, window is a simple approximation of the DPSS window using Bessel functions, discovered by Jim Kaiser.[43][44]

where I0 is the zeroth-order modified Bessel function of the first kind. The variable parameter α determines the tradeoff between main lobe width and side lobe levels of the spectral leakage pattern. The main lobe width, in between the nulls, is given by 2√(1 + α²) in units of DFT bins,[45] and a typical value of α is 3.

  • zero-phase version:
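
NumPy ships a Kaiser window, parameterized by β = πα. A sketch, together with the null-to-null main-lobe width of 2√(1 + α²) bins (a standard Kaiser result we are assuming here):

```python
import numpy as np

# numpy.kaiser takes beta = pi * alpha
alpha = 3.0
N = 64
w = np.kaiser(N, np.pi * alpha)

# null-to-null main-lobe width in DFT bins (standard Kaiser result):
width_bins = 2 * np.sqrt(1 + alpha**2)   # ~6.32 bins for alpha = 3
```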

Dolph–Chebyshev window

Dolph–Chebyshev window, α = 5; B = 1.94.

Minimizes the Chebyshev norm of the side-lobes for a given main lobe width.[46]

The zero-phase Dolph–Chebyshev window function w0(n) is usually defined in terms of its real-valued discrete Fourier transform, W0(k):

where the parameter α sets the Chebyshev norm of the sidelobes to −20α decibels.[46]

The window function can be calculated from W0(k) by an inverse discrete Fourier transform (DFT):[46]

The lagged version of the window, with 0 ≤ n ≤ N−1, can be obtained by:

which for even values of N must be computed as follows:

which is an inverse DFT of  
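
The construction described above (a real-valued spectrum inverted by an inverse DFT, then lagged) can be sketched for odd N, where the even-N phase correction is unnecessary. This is our implementation, not code from the article; x0 = cosh(arccosh(10^α)/(N − 1)) is the standard Chebyshev parameter for sidelobes at −20α dB.

```python
import numpy as np

def cheb_poly(n, x):
    # Chebyshev polynomial T_n, valid both inside and outside [-1, 1]
    x = np.asarray(x, dtype=float)
    out = np.empty_like(x)
    inside = np.abs(x) <= 1
    out[inside] = np.cos(n * np.arccos(x[inside]))
    out[~inside] = np.cosh(n * np.arccosh(np.abs(x[~inside]))) * np.sign(x[~inside])**n
    return out

def chebwin(N, alpha):
    # odd-N Dolph-Chebyshev window via inverse DFT of its real spectrum;
    # sidelobes are set to -20*alpha decibels
    assert N % 2 == 1, "even N needs an extra phase term (see text)"
    x0 = np.cosh(np.arccosh(10.0**alpha) / (N - 1))
    k = np.arange(N)
    W = cheb_poly(N - 1, x0 * np.cos(np.pi * k / N))
    w = np.real(np.fft.ifft(W))          # zero-phase sequence
    w = np.fft.fftshift(w)               # lagged (causal) version
    return w / w.max()
```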


Ultraspherical window

The Ultraspherical window's µ parameter determines whether its Fourier transform's side-lobe amplitudes decrease, are level, or (shown here) increase with frequency.

The Ultraspherical window was introduced in 1984 by Roy Streit[47] and has application in antenna array design,[48] non-recursive filter design,[47] and spectrum analysis.[49]

Like other adjustable windows, the Ultraspherical window has parameters that can be used to control its Fourier transform main-lobe width and relative side-lobe amplitude. Unlike most other windows, it has an additional parameter which can be used to set the rate at which side-lobes decrease (or increase) in amplitude.[49][50]

The window can be expressed in the time-domain as follows:[49]

where C_N^µ(x) is the Ultraspherical (Gegenbauer) polynomial of degree N, and x0 and µ control the side-lobe patterns.[49]

Certain specific values of µ yield other well-known windows: µ = 0 and µ = 1 give the Dolph–Chebyshev and Saramäki windows respectively.[47] Illustrations of Ultraspherical windows with varied parametrization are given in the literature.

Exponential or Poisson window

Exponential window, τ = N/2, B = 1.08.
Exponential window, τ = (N/2)/(60/8.69), B = 3.46.

The Poisson window, or more generically the exponential window, increases exponentially towards the center of the window and decreases exponentially in the second half. Since the exponential function never reaches zero, the values of the window at its limits are non-zero (it can be seen as the multiplication of an exponential function by a rectangular window[51]). It is defined by

where τ is the time constant of the function. The exponential function decays by a factor of e ≃ 2.71828, or approximately 8.69 dB, per time constant.[52] This means that for a targeted decay of D dB over half of the window length, the time constant τ is given by
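
The time-constant formula implied here is τ = (N/2)·(8.69/D), consistent with the figure caption above (τ = (N/2)/(60/8.69) for D = 60 dB). A sketch (helper name is ours):

```python
import numpy as np

def exponential_window(N, tau):
    # symmetric two-sided exponential decay about the window center
    n = np.arange(N)
    return np.exp(-np.abs(n - (N - 1) / 2) / tau)

# time constant for a targeted decay of D dB over half the window length:
# tau = (N/2) * 8.69 / D   (one time constant ~ 8.69 dB, i.e. 20*log10(e))
N, D = 64, 60.0
tau = (N / 2) * 8.69 / D
w = exponential_window(N, tau)
```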

Hybrid windows

Window functions have also been constructed as multiplicative or additive combinations of other windows.

Bartlett–Hann window

Bartlett–Hann window; B = 1.46.

Planck–Bessel window

Planck–Bessel window, ε = 0.1, α = 4.45; B = 2.16.

A Planck-taper window multiplied by a Kaiser window (which is defined in terms of a modified Bessel function). This hybrid window function was introduced to decrease the peak side-lobe level of the Planck-taper window while still exploiting its good asymptotic decay.[53] It has two tunable parameters, ε from the Planck-taper and α from the Kaiser window, so it can be adjusted to fit the requirements of a given signal.

Hann–Poisson window

Hann–Poisson window, α = 2; B = 2.02[8]

The Hann–Poisson window, the product of a Hann window and a Poisson window, has no side-lobes, in the sense that its Fourier transform drops off forever away from the main lobe. It can thus be used in hill-climbing algorithms such as Newton's method.[54] The Hann–Poisson window is defined by:

where α is a parameter that controls the slope of the exponential.
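
A sketch of the Hann–Poisson window as the product of its two component windows (our implementation):

```python
import numpy as np

def hann_poisson(N, alpha):
    # product of a Hann window and a two-sided exponential (Poisson) window
    n = np.arange(N)
    hann = 0.5 * (1 - np.cos(2 * np.pi * n / (N - 1)))
    poisson = np.exp(-alpha * np.abs(N - 1 - 2 * n) / (N - 1))
    return hann * poisson

w = hann_poisson(64, 2.0)
```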

Other windows

Lanczos window

Sinc or Lanczos window; B = 1.30.[8]
  • used in Lanczos resampling
  • for the Lanczos window, sinc(x) is defined as sin(πx)/(πx)
  • also known as a sinc window, because the window function itself is the main lobe of a normalized sinc function

Comparison of windows

Window functions in the frequency domain ("spectral leakage")

When selecting an appropriate window function for an application, this comparison graph may be useful. The frequency axis has units of FFT "bins" when the window of length N is applied to data and a transform of length N is computed. For instance, the value at frequency ½ "bin" (third tick mark) is the response that would be measured in bins k and k+1 to a sinusoidal signal at frequency k+½. It is relative to the maximum possible response, which occurs when the signal frequency is an integer number of bins. The value at frequency ½ is referred to as the maximum scalloping loss of the window, which is one metric used to compare windows. The rectangular window is noticeably worse than the others in terms of that metric.

Other metrics that can be seen are the width of the main lobe and the peak level of the sidelobes, which respectively determine the ability to resolve comparable strength signals and disparate strength signals. The rectangular window (for instance) is the best choice for the former and the worst choice for the latter. What cannot be seen from the graphs is that the rectangular window has the best noise bandwidth, which makes it a good candidate for detecting low-level sinusoids in an otherwise white noise environment. Interpolation techniques, such as zero-padding and frequency-shifting, are available to mitigate its potential scalloping loss.

Overlapping windows

When the length of a data set to be transformed is larger than necessary to provide the desired frequency resolution, a common practice is to subdivide it into smaller sets and window them individually. To mitigate the "loss" at the edges of the window, the individual sets may overlap in time. See Welch method of power spectral analysis and the modified discrete cosine transform.
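
A useful property in this context: the periodic Hann window with 50% overlap sums to a constant (a constant-overlap-add, "COLA", condition), so the overlapped segments weight every sample equally and no part of the data is permanently attenuated by the tapering. A sketch demonstrating this:

```python
import numpy as np

# Periodic Hann with 50% overlap: adjacent windows sum to exactly 1.
N = 512
hop = N // 2
w = 0.5 - 0.5 * np.cos(2 * np.pi * np.arange(N) / N)   # periodic Hann

total = np.zeros(4 * N)
for start in range(0, len(total) - N + 1, hop):
    total[start:start + N] += w

interior = total[N:-N]   # away from the ends, the window sum is flat
```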

Two-dimensional windows

Two-dimensional windows are used in, e.g., image processing. They can be constructed from one-dimensional windows in either of two forms.[55]

The separable form, W(m, n) = w(m) w(n), is trivial to compute. The radial form, W(m, n) = w(r), which involves the radius r = √((m − M/2)² + (n − N/2)²), is isotropic: independent of the orientation of the coordinate axes. Only the Gaussian function is both separable and isotropic.[56] The separable forms of all other window functions have corners that depend on the choice of the coordinate axes. The isotropy/anisotropy of a two-dimensional window function is shared by its two-dimensional Fourier transform. The difference between the separable and radial forms is akin to the result of diffraction from rectangular vs. circular apertures, which can be visualized in terms of the product of two sinc functions vs. an Airy function, respectively.
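
The two constructions can be sketched with a Hann prototype (helper names are ours); along a centered row, the radial form agrees with the separable one:

```python
import numpy as np

def hann_1d(N):
    n = np.arange(N)
    return 0.5 - 0.5 * np.cos(2 * np.pi * n / (N - 1))

def separable_2d(N):
    # W(m, n) = w(m) * w(n)
    w = hann_1d(N)
    return np.outer(w, w)

def radial_2d(N):
    # W(m, n) = w(r): evaluate the centered 1-D window at the radius r
    c = (N - 1) / 2
    y, x = np.mgrid[0:N, 0:N]
    r = np.sqrt((x - c)**2 + (y - c)**2)
    w = 0.5 * (1 + np.cos(2 * np.pi * r / (N - 1)))
    w[r > c] = 0.0                  # zero outside the circular support
    return w
```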

See also



  1. Another way of stating that condition is that the sinusoid happens to have an exact integer number of cycles within the length of the rectangular window. The periodic repetition of such a segment contains no discontinuities.
  2. 2.0 2.1 Mathematically, the noise equivalent bandwidth of transfer function H is the bandwidth of an ideal rectangular filter with the same peak gain as H that would pass the same power with white noise input. In the units of frequency f (e.g. hertz), it is given by B = (1/max|H(f)|²) ∫ |H(f)|² df.


  1. {{#invoke:citation/CS1|citation |CitationClass=book }}
  2. {{#invoke:citation/CS1|citation |CitationClass=book }}
  3. {{#invoke:citation/CS1|citation |CitationClass=book }}
  4. {{#invoke:citation/CS1|citation |CitationClass=book }}
  6. Mastering Windows: Improving Reconstruction
  8. 8.00 8.01 8.02 8.03 8.04 8.05 8.06 8.07 8.08 8.09 8.10 8.11 8.12 8.13 {{#invoke:Citation/CS1|citation |CitationClass=journal }} The fundamental 1978 paper on FFT windows by Harris, which specified many windows and introduced key metrics used to compare them.
  9. 9.0 9.1 The Fundamentals of Signal Analysis Application Note 243
  10. Technical Review 1987-3 Use of Weighting Functions in DFT/FFT Analysis (Part I); Signals and Units
  11. {{#invoke:citation/CS1|citation |CitationClass=book }}
  12. Template:Cite doi
  13. 13.00 13.01 13.02 13.03 13.04 13.05 13.06 13.07 13.08 13.09 13.10 13.11 13.12 13.13 13.14 13.15 13.16 Template:Cite techreport
  17. 17.0 17.1 Template:Cite doi
  22. {{#invoke:citation/CS1|citation |CitationClass=book }}
  23. 23.0 23.1 23.2
  28. 28.0 28.1 28.2 28.3 {{#invoke:citation/CS1|citation |CitationClass=book }}
  29. {{#invoke:citation/CS1|citation |CitationClass=citation }}
  30. 30.0 30.1 {{#invoke:citation/CS1|citation |CitationClass=citation }}
  31. {{#invoke:citation/CS1|citation |CitationClass=citation }}
  36. 36.0 36.1 36.2 {{#invoke:Citation/CS1|citation |CitationClass=journal }}
  37. Debejyo Chakraborty and Narayan Kovvali Generalized Normal Window for Digital Signal Processing in IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP) 2013 6083 -- 6087 doi: 10.1109/ICASSP.2013.6638833
  38. Diethorn, E.J., "The generalized exponential time-frequency distribution," Signal Processing, IEEE Transactions on , vol.42, no.5, pp.1028,1037, May 1994 doi: 10.1109/78.295214
  39. {{#invoke:Citation/CS1|citation |CitationClass=journal }}
  40. {{#invoke:citation/CS1|citation |CitationClass=book }}
  41. {{#invoke:Citation/CS1|citation |CitationClass=journal }}
  43. 43.0 43.1
  44. 44.0 44.1
  45. Template:Cite doi
  46. 46.0 46.1 46.2
  47. 47.0 47.1 47.2 {{#invoke:Citation/CS1|citation |CitationClass=journal }}
  48. {{#invoke:Citation/CS1|citation |CitationClass=journal }}
  49. 49.0 49.1 49.2 49.3 {{#invoke:Citation/CS1|citation |CitationClass=journal }}
  50. {{#invoke:Citation/CS1|citation |CitationClass=journal }}
  51. {{#invoke:citation/CS1|citation |CitationClass=citation }}
  52. Template:Cite web
  53. {{#invoke:Citation/CS1|citation |CitationClass=journal }}
  55. Matt A. Bernstein, Kevin Franklin King, Xiaohong Joe Zhou (2007), Handbook of MRI Pulse Sequences, Elsevier; p.495-499. [1]
  56. Template:Cite doi

Further reading

  • {{#invoke:Citation/CS1|citation |CitationClass=journal }} Extends Harris' paper, covering all the window functions known at the time, along with key metric comparisons.
  • {{#invoke:citation/CS1|citation |CitationClass=book }}
  • {{#invoke:Citation/CS1|citation |CitationClass=journal }}
  • {{#invoke:Citation/CS1|citation |CitationClass=journal }}
