The '''Cooley–Tukey [[algorithm]]''', named after [[James Cooley|J.W. Cooley]] and [[John Tukey]], is the most common [[fast Fourier transform]] (FFT) algorithm. It re-expresses the [[discrete Fourier transform]] (DFT) of an arbitrary [[composite number|composite]] size ''N'' = ''N''<sub>1</sub>''N''<sub>2</sub> in terms of smaller DFTs of sizes ''N''<sub>1</sub> and ''N''<sub>2</sub>, [[recursion|recursively]], in order to reduce the computation time to O(''N'' log ''N'') for highly composite ''N'' ([[smooth number]]s). Because of the algorithm's importance, specific variants and implementation styles have become known by their own names, as described below.

Because the Cooley–Tukey algorithm breaks the DFT into smaller DFTs, it can be combined arbitrarily with any other algorithm for the DFT. For example, [[Rader's FFT algorithm|Rader's]] or [[Bluestein's FFT algorithm|Bluestein's]] algorithm can be used to handle large prime factors that cannot be decomposed by Cooley–Tukey, or the [[prime-factor FFT algorithm|prime-factor algorithm]] can be exploited for greater efficiency in separating out [[relatively prime]] factors.

See also the [[fast Fourier transform]] for information on other FFT algorithms, specializations for real and/or symmetric data, and accuracy in the face of finite [[floating-point]] precision.
== History ==
This algorithm, including its recursive application, was invented around 1805 by [[Carl Friedrich Gauss]], who used it to interpolate the trajectories of the [[asteroid]]s [[2 Pallas|Pallas]] and [[3 Juno|Juno]], but his work was not widely recognized (being published only posthumously and in [[New Latin|neo-Latin]]).<ref>Gauss, Carl Friedrich, "[http://lseet.univ-tln.fr/~iaroslav/Gauss_Theoria_interpolationis_methodo_nova_tractata.php Theoria interpolationis methodo nova tractata]", Werke, Band 3, 265–327 (Königliche Gesellschaft der Wissenschaften, Göttingen, 1866)</ref><ref name=Heideman84>Heideman, M. T., D. H. Johnson, and [[C. Sidney Burrus|C. S. Burrus]], "[http://ieeexplore.ieee.org/xpls/abs_all.jsp?arnumber=1162257 Gauss and the history of the fast Fourier transform]," IEEE ASSP Magazine, 1, (4), 14–21 (1984)</ref> Gauss did not analyze the asymptotic computational time, however. Various limited forms were also rediscovered several times throughout the 19th and early 20th centuries.<ref name=Heideman84/> FFTs became popular after [[James Cooley]] of [[International Business Machines|IBM]] and [[John Tukey]] of [[Princeton University|Princeton]] published a paper in 1965 reinventing the algorithm and describing how to perform it conveniently on a computer.<ref name=CooleyTukey65>{{cite journal |last=Cooley |first=James W. |first2=John W. |last2=Tukey |title=An algorithm for the machine calculation of complex Fourier series |journal=[[Mathematics of Computation|Math. Comput.]] |volume=19 |issue= |pages=297–301 |year=1965 |doi=10.2307/2003354 }}</ref>

Tukey reportedly came up with the idea during a meeting of a US presidential advisory committee discussing ways to detect [[nuclear testing|nuclear-weapon tests]] in the [[Soviet Union]].<ref>{{cite journal |last=Cooley |first=James W. |first2=Peter A. W. |last2=Lewis |first3=Peter D. |last3=Welch |title=Historical notes on the fast Fourier transform |journal=IEEE Trans. on Audio and Electroacoustics |volume=15 |issue=2 |pages=76–79 |year=1967 }}</ref><ref>Rockmore, Daniel N., ''Comput. Sci. Eng.'' '''2''' (1), 60 (2000). [http://www.cs.dartmouth.edu/~rockmore/cse-fft.pdf The FFT — an algorithm the whole family can use] Special issue on "top ten algorithms of the century" [http://amath.colorado.edu/resources/archive/topten.pdf]</ref> Another participant at that meeting, [[Richard Garwin]] of IBM, recognized the potential of the method and put Tukey in touch with Cooley, who implemented it for a different (and less-classified) problem: analyzing 3d crystallographic data (see also: [[Fast Fourier transform#Algorithms|multidimensional FFTs]]). Cooley and Tukey subsequently published their joint paper, and wide adoption quickly followed.

The fact that Gauss had described the same algorithm (albeit without analyzing its asymptotic cost) was not realized until several years after Cooley and Tukey's 1965 paper.<ref name=Heideman84/> Their paper cited as inspiration only work by I. J. Good on what is now called the [[prime-factor FFT algorithm]] (PFA);<ref name=CooleyTukey65/> although Good's algorithm was initially mistakenly thought to be equivalent to the Cooley–Tukey algorithm, it was quickly realized that PFA is a quite different algorithm (only working for sizes that have [[relatively prime]] factors and relying on the [[Chinese Remainder Theorem]], unlike the support for any composite size in Cooley–Tukey).<ref>James W. Cooley, Peter A. W. Lewis, and Peter W. Welch, "Historical notes on the fast Fourier transform," ''Proc. IEEE'', vol. '''55''' (no. 10), p. 1675–1677 (1967).</ref>
== The radix-2 DIT case ==

A '''radix-2''' decimation-in-time ('''DIT''') FFT is the simplest and most common form of the Cooley–Tukey algorithm, although highly optimized Cooley–Tukey implementations typically use other forms of the algorithm as described below. Radix-2 DIT divides a DFT of size ''N'' into two interleaved DFTs (hence the name "radix-2") of size ''N''/2 with each recursive stage.

The discrete Fourier transform (DFT) is defined by the formula:

:<math> X_k = \sum_{n=0}^{N-1} x_n e^{-\frac{2\pi i}{N} nk},</math>

where <math>k</math> is an integer ranging from <math>0</math> to <math>N-1</math>.
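The definition can be transcribed directly into code. A minimal Python sketch (the helper name <code>dft</code> is illustrative, not a standard library routine) makes the O(''N''<sup>2</sup>) cost of the naive DFT visible: each of the ''N'' outputs is a sum over all ''N'' inputs.

```python
import cmath

def dft(x):
    """Naive DFT computed directly from the definition: O(N^2) operations."""
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * n * k / N) for n in range(N))
            for k in range(N)]
```

Every FFT discussed in this article computes exactly these outputs, only faster.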
Radix-2 DIT first computes the DFTs of the even-indexed inputs <math>x_{2m}</math> <math>(x_0, x_2, \ldots, x_{N-2})</math> and of the odd-indexed inputs <math>x_{2m+1}</math> <math>(x_1, x_3, \ldots, x_{N-1})</math>, and then combines those two results to produce the DFT of the whole sequence. This idea can then be performed [[recursion|recursively]] to reduce the overall runtime to O(''N'' log ''N''). This simplified form assumes that ''N'' is a [[power of two]]; since the number of sample points ''N'' can usually be chosen freely by the application, this is often not an important restriction.
The radix-2 DIT algorithm rearranges the DFT of the function <math>x_n</math> into two parts: a sum over the even-numbered indices <math>n={2m}</math> and a sum over the odd-numbered indices <math>n={2m+1}</math>:

:<math>
\begin{matrix} X_k & =
& \sum \limits_{m=0}^{N/2-1} x_{2m}e^{-\frac{2\pi i}{N} (2m)k} + \sum \limits_{m=0}^{N/2-1} x_{2m+1} e^{-\frac{2\pi i}{N} (2m+1)k}
\end{matrix}
</math>
One can factor a common multiplier <math>e^{-\frac{2\pi i}{N}k}</math> out of the second sum, as shown in the equation below. It is then clear that the two sums are the DFT of the even-indexed part <math>x_{2m}</math> and the DFT of the odd-indexed part <math>x_{2m+1}</math> of the function <math>x_n</math>. Denote the DFT of the '''''E'''''ven-indexed inputs <math>x_{2m}</math> by <math>E_k</math> and the DFT of the '''''O'''''dd-indexed inputs <math>x_{2m + 1}</math> by <math>O_k</math>, and we obtain:

:<math>
\begin{matrix} X_k= \underbrace{\sum \limits_{m=0}^{N/2-1} x_{2m} e^{-\frac{2\pi i}{N/2} mk}}_{\mathrm{DFT\;of\;even-indexed\;part\;of\;} x_m} {} + e^{-\frac{2\pi i}{N}k}
\underbrace{\sum \limits_{m=0}^{N/2-1} x_{2m+1} e^{-\frac{2\pi i}{N/2} mk}}_{\mathrm{DFT\;of\;odd-indexed\;part\;of\;} x_m} = E_k + e^{-\frac{2\pi i}{N}k} O_k.
\end{matrix}
</math>
Thanks to the periodicity of the DFT, we know that

:<math>E_{{k} + \frac{N}{2}} = E_k</math> and <math>O_{{k} + \frac{N}{2}} = O_k</math>.

Therefore, we can rewrite the above equation as

:<math>
\begin{matrix} X_k & = & \left\{
\begin{matrix}
E_k + e^{-\frac{2\pi i}{N}k} O_k & \mbox{for } 0 \leq k < N/2 \\ \\
E_{k-N/2} + e^{-\frac{2\pi i}{N}k} O_{k-N/2} & \mbox{for } N/2 \leq k < N . \\
\end{matrix}
\right. \end{matrix}
</math>
We also know that the [[twiddle factor]] <math>e^{-2\pi i k/ N}</math> obeys the following relation:

:<math>
\begin{matrix} e^{\frac{-2\pi i}{N} (k + N/2)} & = & e^{\frac{-2\pi i k}{N} - {\pi i}} \\
& = & e^{-\pi i} e^{\frac{-2\pi i k}{N}} \\
& = & -e^{\frac{-2\pi i k}{N}}
\end{matrix}
</math>
This allows us to cut the number of "twiddle factor" calculations in half as well. For <math> 0 \leq k < \frac{N}{2}</math>, we have

:<math>
\begin{matrix}
X_k & =
& E_k + e^{-\frac{2\pi i}{N}k} O_k \\
X_{k+\frac{N}{2}} & =
& E_k - e^{-\frac{2\pi i}{N}k} O_k
\end{matrix}
</math>
This result, expressing the DFT of length ''N'' recursively in terms of two DFTs of size ''N''/2, is the core of the radix-2 DIT fast Fourier transform. The algorithm gains its speed by re-using the results of intermediate computations to compute multiple DFT outputs. Note that final outputs are obtained by a +/− combination of <math>E_k</math> and <math>O_k \exp(-2\pi i k/N)</math>, which is simply a size-2 DFT (sometimes called a [[butterfly diagram|butterfly]] in this context); when this is generalized to larger radices below, the size-2 DFT is replaced by a larger DFT (which itself can be evaluated with an FFT).
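The decomposition can be checked numerically. The Python sketch below (illustrative names, with a naive O(''N''<sup>2</sup>) DFT as the reference) performs one radix-2 DIT stage — two half-size DFTs combined by the butterfly relations above — and compares the result against a direct size-''N'' DFT:

```python
import cmath

def dft(x):
    # Naive O(N^2) DFT, used here only as a reference.
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * n * k / N) for n in range(N))
            for k in range(N)]

def radix2_dit_step(x):
    """One radix-2 DIT stage: half-size DFTs E_k, O_k plus butterflies."""
    N = len(x)
    E = dft(x[0::2])                  # DFT of even-indexed inputs
    O = dft(x[1::2])                  # DFT of odd-indexed inputs
    X = [0j] * N
    for k in range(N // 2):
        t = cmath.exp(-2j * cmath.pi * k / N) * O[k]   # twiddle factor times O_k
        X[k] = E[k] + t               # X_k
        X[k + N // 2] = E[k] - t      # X_{k+N/2}
    return X
```

Applying the same split recursively to each half-size DFT yields the full O(''N'' log ''N'') algorithm.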
[[Image:DIT-FFT-butterfly.png|thumb|300px|right|Data flow diagram for ''N''=8: a decimation-in-time radix-2 FFT breaks a length-''N'' DFT into two length-''N''/2 DFTs followed by a combining stage consisting of many size-2 DFTs called "butterfly" operations (so-called because of the shape of the data-flow diagrams).]]

This process is an example of the general technique of [[divide and conquer algorithm]]s; in many traditional implementations, however, the explicit recursion is avoided, and instead one traverses the computational tree in [[breadth-first search|breadth-first]] fashion.

The above re-expression of a size-''N'' DFT as two size-''N''/2 DFTs is sometimes called the '''[[G. C. Danielson|Danielson]]–[[Cornelius Lanczos|Lanczos]]''' [[lemma (mathematics)|lemma]], since the identity was noted by those two authors in 1942<ref>Danielson, G. C., and C. Lanczos, "Some improvements in practical Fourier analysis and their application to X-ray scattering from liquids," ''J. Franklin Inst.'' '''233''', 365–380 and 435–452 (1942).</ref> (influenced by [[Carl David Tolmé Runge|Runge's]] 1903 work<ref name=Heideman84/>). They applied their lemma in a "backwards" recursive fashion, repeatedly ''doubling'' the DFT size until the transform spectrum converged (although they apparently didn't realize the [[linearithmic]] [i.e., order ''N'' log ''N''] asymptotic complexity they had achieved). The Danielson–Lanczos work predated widespread availability of [[computer]]s and required hand calculation (possibly with mechanical aids such as [[adding machine]]s); they reported a computation time of 140 minutes for a size-64 DFT operating on [[Fast Fourier transform#FFT algorithms specialized for real and/or symmetric data|real inputs]] to 3–5 significant digits. Cooley and Tukey's 1965 paper reported a running time of 0.02 minutes for a size-2048 complex DFT on an [[IBM 7094]] (probably in 36-bit [[floating point|single precision]], ~8 digits).<ref name=CooleyTukey65/> Rescaling the time by the number of operations, this corresponds roughly to a speedup factor of around 800,000. (To put the time for the hand calculation in perspective, 140 minutes for size 64 corresponds to an average of at most 16 seconds per floating-point operation, around 20% of which are multiplications.)
===Pseudocode===
In [[pseudocode]], the above procedure could be written:<ref name="Johnson08"/>
 ''X''<sub>0,...,''N''−1</sub> ← '''ditfft2'''(''x'', ''N'', ''s''):             ''DFT of (x''<sub>0</sub>, ''x<sub>s</sub>'', ''x''<sub>2''s''</sub>, ..., ''x''<sub>(''N''−1)''s''</sub>''):''
     if ''N'' = 1 then
         ''X''<sub>0</sub> ← ''x''<sub>0</sub>                                      ''trivial size-1 DFT base case''
     else
         ''X''<sub>0,...,''N''/2−1</sub> ← '''ditfft2'''(''x'', ''N''/2, 2''s'')             ''DFT of (x''<sub>0</sub>, ''x''<sub>2''s''</sub>, ''x''<sub>4''s''</sub>, ...)''
         ''X''<sub>''N''/2,...,''N''−1</sub> ← '''ditfft2'''(''x''+''s'', ''N''/2, 2''s'')           ''DFT of (x<sub>s</sub>'', ''x''<sub>''s''+2''s''</sub>, ''x''<sub>''s''+4''s''</sub>, ...)''
         for ''k'' = 0 to ''N''/2−1                                       ''combine DFTs of two halves into full DFT:''
             ''t'' ← ''X<sub>k</sub>''
             ''X<sub>k</sub>'' ← ''t'' + exp(−2π''i'' ''k''/''N'') ''X''<sub>''k''+''N''/2</sub>
             ''X''<sub>''k''+''N''/2</sub> ← ''t'' − exp(−2π''i'' ''k''/''N'') ''X''<sub>''k''+''N''/2</sub>
         endfor
     endif

Here, <code>'''ditfft2'''</code>(''x'', ''N'', 1) computes ''X'' = DFT(''x'') [[In-place algorithm|out-of-place]] by a radix-2 DIT FFT, where ''N'' is an integer power of 2 and ''s'' = 1 is the [[stride of an array|stride]] of the input ''x'' [[Array data structure|array]]. ''x'' + ''s'' denotes the array starting with ''x<sub>s</sub>''.
(The results are in the correct order in ''X'' and no further [[bit-reversal permutation]] is required; the often-mentioned necessity of a separate bit-reversal stage only arises for certain in-place algorithms, as described below.)
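The pseudocode translates almost line for line into Python. The sketch below follows the pseudocode's conventions (out-of-place, recursive, natural-order output) but handles the stride by slicing rather than an explicit ''s'' parameter; it is illustrative rather than optimized:

```python
import cmath

def ditfft2(x, N):
    """Out-of-place radix-2 DIT FFT of x[0..N-1]; N must be a power of 2."""
    if N == 1:
        return [x[0]]                          # trivial size-1 DFT base case
    E = ditfft2(x[0::2], N // 2)               # DFT of even-indexed elements
    O = ditfft2(x[1::2], N // 2)               # DFT of odd-indexed elements
    X = E + O
    for k in range(N // 2):                    # combine halves into full DFT
        t = X[k]
        w = cmath.exp(-2j * cmath.pi * k / N)  # twiddle factor
        X[k] = t + w * X[k + N // 2]
        X[k + N // 2] = t - w * X[k + N // 2]
    return X
```

The slicing `x[0::2]`/`x[1::2]` plays the role of the doubled stride 2''s'' in the pseudocode.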
High-performance FFT implementations make many modifications to the implementation of such an algorithm compared to this simple pseudocode. For example, one can use a larger base case than ''N''=1 to [[amortize]] the overhead of recursion, the twiddle factors <math>\exp[-2\pi i k/ N]</math> can be precomputed, and larger radices are often used for [[cache (computing)|cache]] reasons; these and other optimizations together can improve the performance by an order of magnitude or more.<ref name="Johnson08">S. G. Johnson and M. Frigo, “[http://cnx.org/content/m16336/ Implementing FFTs in practice],” in ''Fast Fourier Transforms'' (C. S. Burrus, ed.), ch. 11, Rice University, Houston TX: Connexions, September 2008.</ref> (In many textbook implementations the [[depth-first]] recursion is eliminated entirely in favor of a nonrecursive [[breadth-first]] approach, although depth-first recursion has been argued to have better [[memory locality]].<ref name="Johnson08"/><ref name=Singleton67/>) Several of these ideas are described in further detail below.
== General factorizations ==

[[File:Cooley-tukey-general.png|thumb|right|500px|The basic step of the Cooley–Tukey FFT for general factorizations can be viewed as re-interpreting a 1d DFT as something like a 2d DFT. The 1d input array of length ''N'' = ''N''<sub>1</sub>''N''<sub>2</sub> is reinterpreted as a 2d ''N''<sub>1</sub>×''N''<sub>2</sub> matrix stored in [[column-major order]]. One performs smaller 1d DFTs along the ''N''<sub>2</sub> direction (the non-contiguous direction), then multiplies by phase factors (twiddle factors), and finally performs 1d DFTs along the ''N''<sub>1</sub> direction. The transposition step can be performed in the middle, as shown here, or at the beginning or end. This is done recursively for the smaller transforms.]]
More generally, Cooley–Tukey algorithms recursively re-express a DFT of a composite size ''N'' = ''N''<sub>1</sub>''N''<sub>2</sub> as:<ref name=DuhamelVe90>Duhamel, P., and M. Vetterli, "Fast Fourier transforms: a tutorial review and a state of the art," ''Signal Processing'' '''19''', 259–299 (1990)</ref>

# Perform ''N''<sub>1</sub> DFTs of size ''N''<sub>2</sub>.
# Multiply by complex [[roots of unity]] called [[twiddle factor]]s.
# Perform ''N''<sub>2</sub> DFTs of size ''N''<sub>1</sub>.

Typically, either ''N''<sub>1</sub> or ''N''<sub>2</sub> is a small factor (''not'' necessarily prime), called the '''radix''' (which can differ between stages of the recursion). If ''N''<sub>1</sub> is the radix, it is called a '''decimation in time''' (DIT) algorithm, whereas if ''N''<sub>2</sub> is the radix, it is '''decimation in frequency''' (DIF, also called the Sande–Tukey algorithm). The version presented above was a radix-2 DIT algorithm; in the final expression, the phase multiplying the odd transform is the twiddle factor, and the +/− combination (''butterfly'') of the even and odd transforms is a size-2 DFT. (The radix's small DFT is sometimes known as a [[butterfly (FFT algorithm)|butterfly]], so-called because of the shape of the [[dataflow diagram]] for the radix-2 case.)
There are many other variations on the Cooley–Tukey algorithm. '''Mixed-radix''' implementations handle composite sizes with a variety of (typically small) factors in addition to two, usually (but not always) employing the O(''N''<sup>2</sup>) algorithm for the prime base cases of the recursion <nowiki>(</nowiki>it is also possible to employ an ''N'' log ''N'' algorithm for the prime base cases, such as [[Rader's FFT algorithm|Rader]]'s or [[Bluestein's FFT algorithm|Bluestein]]'s algorithm<nowiki>)</nowiki>. [[Split-radix FFT algorithm|Split radix]] merges radices 2 and 4, exploiting the fact that the first transform of radix 2 requires no twiddle factor, in order to achieve what was long the lowest known arithmetic operation count for power-of-two sizes,<ref name=DuhamelVe90/> although recent variations achieve an even lower count.<ref>Lundy, T., and J. Van Buskirk, "A new matrix approach to real FFTs and convolutions of length 2<sup>''k''</sup>," ''Computing'' '''80''', 23–45 (2007).</ref><ref>Johnson, S. G., and M. Frigo, "[http://www.fftw.org/newsplit.pdf A modified split-radix FFT with fewer arithmetic operations]," ''IEEE Trans. Signal Processing'' '''55''' (1), 111–119 (2007).</ref> (On present-day computers, performance is determined more by [[CPU cache|cache]] and [[CPU pipeline]] considerations than by strict operation counts; well-optimized FFT implementations often employ larger radices and/or hard-coded base-case transforms of significant size.<ref name=FrigoJohnson05/>) Another way of looking at the Cooley–Tukey algorithm is that it re-expresses a size ''N'' one-dimensional DFT as an ''N''<sub>1</sub> by ''N''<sub>2</sub> two-dimensional DFT (plus twiddles), where the output matrix is [[transpose]]d. The net result of all of these transpositions, for a radix-2 algorithm, corresponds to a bit reversal of the input (DIF) or output (DIT) indices. If, instead of using a small radix, one employs a radix of roughly √''N'' and explicit input/output matrix transpositions, it is called a '''four-step''' algorithm (or ''six-step'', depending on the number of transpositions), initially proposed to improve memory locality,<ref name=GenSande66>Gentleman W. M., and G. Sande, "Fast Fourier transforms—for fun and profit," ''Proc. AFIPS'' '''29''', 563–578 (1966).</ref><ref name=Bailey90>Bailey, David H., "FFTs in external or hierarchical memory," ''J. Supercomputing'' '''4''' (1), 23–35 (1990)</ref> e.g. for cache optimization or [[out-of-core]] operation, and was later shown to be an optimal [[cache-oblivious algorithm]].<ref name=Frigo99>M. Frigo, C.E. Leiserson, H. Prokop, and S. Ramachandran. Cache-oblivious algorithms. In ''Proceedings of the 40th IEEE Symposium on Foundations of Computer Science'' (FOCS 99), p. 285–297. 1999. [http://ieeexplore.ieee.org/iel5/6604/17631/00814600.pdf?arnumber=814600 Extended abstract at IEEE], [http://citeseer.ist.psu.edu/307799.html at Citeseer].</ref>
The general Cooley–Tukey factorization rewrites the indices ''k'' and ''n'' as <math>k = N_2 k_1 + k_2</math> and <math>n = N_1 n_2 + n_1</math>, respectively, where the indices ''k''<sub>a</sub> and ''n''<sub>a</sub> run from 0..''N''<sub>a</sub>−1 (for ''a'' of 1 or 2). That is, it re-indexes the input (''n'') and output (''k'') as ''N''<sub>1</sub> by ''N''<sub>2</sub> two-dimensional arrays in [[column-major order|column-major]] and [[row-major order]], respectively; the difference between these indexings is a transposition, as mentioned above. When this re-indexing is substituted into the DFT formula for ''nk'', the <math>N_1 n_2 N_2 k_1</math> cross term vanishes (its exponential is unity), and the remaining terms give
:<math>X_{N_2 k_1 + k_2} =
\sum_{n_1=0}^{N_1-1} \sum_{n_2=0}^{N_2-1}
x_{N_1 n_2 + n_1}
e^{-\frac{2\pi i}{N_1 N_2} \cdot (N_1 n_2 + n_1) \cdot (N_2 k_1 + k_2) }</math>
::<math>=
\sum_{n_1=0}^{N_1-1}
\left[ e^{-\frac{2\pi i}{N} n_1 k_2 } \right]
\left( \sum_{n_2=0}^{N_2-1} x_{N_1 n_2 + n_1}
e^{-\frac{2\pi i}{N_2} n_2 k_2 } \right)
e^{-\frac{2\pi i}{N_1} n_1 k_1 }
</math>

where each inner sum is a DFT of size ''N''<sub>2</sub>, each outer sum is a DFT of size ''N''<sub>1</sub>, and the <nowiki>[...]</nowiki> bracketed term is the twiddle factor.
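The three steps can be verified numerically. In the Python sketch below (illustrative names, using naive DFTs as building blocks), the inner DFTs, twiddle multiplication, and outer DFTs reproduce a direct size-''N'' DFT for ''N''<sub>1</sub> = 3, ''N''<sub>2</sub> = 4:

```python
import cmath

def dft(x):
    # Naive O(N^2) DFT, used as a building block and as the reference.
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * n * k / N) for n in range(N))
            for k in range(N)]

N1, N2 = 3, 4
N = N1 * N2
x = [complex(n * n % 7, n % 5) for n in range(N)]   # arbitrary test data

# Step 1: N1 inner DFTs of size N2, over x[n1], x[N1 + n1], x[2*N1 + n1], ...
inner = [dft([x[N1 * n2 + n1] for n2 in range(N2)]) for n1 in range(N1)]

# Step 2: multiply by the twiddle factors e^{-2 pi i n1 k2 / N}.
for n1 in range(N1):
    for k2 in range(N2):
        inner[n1][k2] *= cmath.exp(-2j * cmath.pi * n1 * k2 / N)

# Step 3: N2 outer DFTs of size N1; the output index is k = N2*k1 + k2.
X = [0j] * N
for k2 in range(N2):
    outer = dft([inner[n1][k2] for n1 in range(N1)])
    for k1 in range(N1):
        X[N2 * k1 + k2] = outer[k1]
```

The output indexing <code>N2 * k1 + k2</code> against the input indexing <code>N1 * n2 + n1</code> is exactly the transposition discussed above.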
An arbitrary radix ''r'' (as well as mixed radices) can be employed, as was shown by both Cooley and Tukey<ref name=CooleyTukey65/> as well as Gauss (who gave examples of radix-3 and radix-6 steps).<ref name=Heideman84/> Cooley and Tukey originally assumed that the radix butterfly required O(''r''<sup>2</sup>) work and hence reckoned the complexity for a radix ''r'' to be O(''r''<sup>2</sup> ''N''/''r'' log<sub>''r''</sub>''N'') = O(''N'' log<sub>2</sub>(''N'') ''r''/log<sub>2</sub>''r''); from calculation of values of ''r''/log<sub>2</sub>''r'' for integer values of ''r'' from 2 to 12 the optimal radix is found to be 3 (the closest integer to ''[[e (mathematical constant)|e]]'', which minimizes ''r''/log<sub>2</sub>''r'').<ref name=CooleyTukey65/><ref>Cooley, J. W., P. Lewis and P. Welch, "The Fast Fourier Transform and its Applications", ''IEEE Trans on Education'' '''12''', 1, 28–34 (1969)</ref> This analysis was erroneous, however: the radix-butterfly is also a DFT and can be performed via an FFT algorithm in O(''r'' log ''r'') operations, hence the radix ''r'' actually cancels in the complexity O(''r'' log(''r'') ''N''/''r'' log<sub>''r''</sub>''N''), and the optimal ''r'' is determined by more complicated considerations. In practice, quite large ''r'' (32 or 64) are important in order to effectively exploit e.g. the large number of [[processor register]]s on modern processors,<ref name=FrigoJohnson05/> and even an unbounded radix ''r''=√''N'' also achieves O(''N'' log ''N'') complexity and has theoretical and practical advantages for large ''N'' as mentioned above.<ref name=GenSande66/><ref name=Bailey90/><ref name=Frigo99/>
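Cooley and Tukey's figure of merit ''r''/log<sub>2</sub>''r'' is easy to tabulate; a short Python check confirms that among integer radices from 2 to 12 it is minimized at ''r'' = 3:

```python
import math

# r/log2(r) for integer radices 2..12; over the reals this is minimized
# at r = e ~= 2.718, and the nearest integer, 3, wins among integers.
cost = {r: r / math.log2(r) for r in range(2, 13)}
best = min(cost, key=cost.get)   # integer radix minimizing r/log2(r)
```

As the surrounding text notes, this criterion rests on the (erroneous) O(''r''<sup>2</sup>) butterfly assumption, so the tabulation has historical rather than practical significance.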
== Data reordering, bit reversal, and in-place algorithms ==

Although the abstract Cooley–Tukey factorization of the DFT, above, applies in some form to all implementations of the algorithm, much greater diversity exists in the techniques for ordering and accessing the data at each stage of the FFT. Of special interest is the problem of devising an [[in-place algorithm]] that overwrites its input with its output data using only O(1) auxiliary storage.
The best-known reordering technique involves explicit '''bit reversal''' for in-place radix-2 algorithms. [[Bit-reversal permutation|Bit reversal]] is the [[permutation]] where the data at an index ''n'', written in [[binary numeral system|binary]] with digits ''b''<sub>4</sub>''b''<sub>3</sub>''b''<sub>2</sub>''b''<sub>1</sub>''b''<sub>0</sub> (e.g. 5 digits for ''N''=32 inputs), is transferred to the index with reversed digits ''b''<sub>0</sub>''b''<sub>1</sub>''b''<sub>2</sub>''b''<sub>3</sub>''b''<sub>4</sub>. Consider the last stage of a radix-2 DIT algorithm like the one presented above, where the output is written in-place over the input: when <math>E_k</math> and <math>O_k</math> are combined with a size-2 DFT, those two values are overwritten by the outputs. However, the two output values should go in the first and second ''halves'' of the output array, corresponding to the ''most'' significant bit ''b''<sub>4</sub> (for ''N''=32); whereas the two inputs <math>E_k</math> and <math>O_k</math> are interleaved in the even and odd elements, corresponding to the ''least'' significant bit ''b''<sub>0</sub>. Thus, in order to get the output in the correct place, these two bits must be swapped. If one includes all of the recursive stages of a radix-2 DIT algorithm, ''all'' the bits must be swapped and thus one must pre-process the input (''or'' post-process the output) with a bit reversal to get in-order output. (If each size-''N''/2 subtransform is to operate on contiguous data, the DIT ''input'' is pre-processed by bit-reversal.) Correspondingly, if one performs all of the steps in reverse order, one obtains a radix-2 DIF algorithm with bit reversal in post-processing (or pre-processing, respectively). Alternatively, some applications (such as convolution) work equally well on bit-reversed data, so one can perform forward transforms, processing, and then inverse transforms all without bit reversal to produce final results in the natural order.
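A direct Python sketch of the permutation (a hypothetical helper; practical FFT codes typically fold this reordering into the transform itself rather than calling a separate routine):

```python
def bit_reverse_permute(x):
    """Reorder x by bit-reversing each index; len(x) must be a power of 2."""
    n = len(x)
    bits = n.bit_length() - 1                # number of index bits, log2(n)
    out = [None] * n
    for i in range(n):
        # Write i as a zero-padded bit string, reverse it, reparse as binary.
        rev = int(format(i, "0{}b".format(bits))[::-1], 2)
        out[rev] = x[i]
    return out
```

The permutation is an [[Involution (mathematics)|involution]]: applying it twice restores the original order, which is why the same routine serves for both pre- and post-processing.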
Many FFT users, however, prefer natural-order outputs, and a separate, explicit bit-reversal stage can have a non-negligible impact on the computation time,<ref name=FrigoJohnson05/> even though bit reversal can be done in O(''N'') time and has been the subject of much research.<ref>{{cite journal |last=Karp |first=Alan H. |title=Bit reversal on uniprocessors |journal=SIAM Review |volume=38 |issue=1 |pages=1–26 |year=1996 |jstor=2132972 |doi=10.1137/1038001}}</ref><ref>{{Cite journal|last=Carter |first=Larry |first2=Kang Su |last2=Gatlin |title=Towards an optimal bit-reversal permutation program |work=Proc. 39th Ann. Symp. on Found. of Comp. Sci. (FOCS) |pages=544–553 |year=1998 |doi=10.1109/SFCS.1998.743505 }}</ref><ref>{{cite journal |last=Rubio |first=M. |first2=P. |last2=Gómez |first3=K. |last3=Drouiche |title=A new superfast bit reversal algorithm |journal=Intl. J. Adaptive Control and Signal Processing |volume=16 |issue=10 |pages=703–707 |year=2002 |doi=10.1002/acs.718 }}</ref> Also, while the permutation is a bit reversal in the radix-2 case, it is more generally an arbitrary (mixed-base) digit reversal for the mixed-radix case, and the permutation algorithms become more complicated to implement. Moreover, it is desirable on many hardware architectures to re-order intermediate stages of the FFT algorithm so that they operate on consecutive (or at least more localized) data elements. To these ends, a number of alternative implementation schemes have been devised for the Cooley–Tukey algorithm that do not require separate bit reversal and/or involve additional permutations at intermediate stages.
The problem is greatly simplified if it is '''out-of-place''': the output array is distinct from the input array or, equivalently, an equal-size auxiliary array is available. The '''Stockham auto-sort''' algorithm<ref>Originally attributed to Stockham in W. T. Cochran ''et al.'', [http://dx.doi.org/10.1109/PROC.1967.5957 What is the fast Fourier transform?], ''Proc. IEEE'' vol. 55, 1664–1674 (1967).</ref><ref name=Swarztrauber84/> performs every stage of the FFT out-of-place, typically writing back and forth between two arrays, transposing one "digit" of the indices with each stage, and has been especially popular on [[SIMD]] architectures.<ref name=Swarztrauber84>P. N. Swarztrauber, [http://dx.doi.org/10.1016/S0167-8191(84)90413-7 FFT algorithms for vector computers], ''Parallel Computing'' vol. 1, 45–63 (1984).</ref><ref>{{cite book |last=Swarztrauber |first=P. N. |chapter=Vectorizing the FFTs |editor-first=G. |editor-last=Rodrigue |title=Parallel Computations |publisher=Academic Press |location=New York |year=1982 |pages=51–83 |isbn=0-12-592101-2 }}</ref> Even greater potential SIMD advantages (more consecutive accesses) have been proposed for the '''Pease''' algorithm,<ref>{{cite journal |last=Pease |first=M. C. |title=An adaptation of the fast Fourier transform for parallel processing |journal=J. ACM |volume=15 |issue=2 |pages=252–264 |year=1968 |doi=10.1145/321450.321457 }}</ref> which also reorders out-of-place with each stage, but this method requires separate bit/digit reversal and O(''N'' log ''N'') storage. One can also directly apply the Cooley–Tukey factorization definition with explicit ([[depth-first search|depth-first]]) recursion and small radices, which produces natural-order out-of-place output with no separate permutation step (as in the pseudocode above) and can be argued to have [[cache-oblivious algorithm|cache-oblivious]] locality benefits on systems with [[cache (computing)|hierarchical memory]].<ref name=Singleton67>{{cite journal |last=Singleton |first=Richard C. |title=On computing the fast Fourier transform |journal=Commun. of the ACM |volume=10 |issue=10 |year=1967 |pages=647–654 |doi=10.1145/363717.363771 }}</ref><ref name=FrigoJohnson05>{{cite journal |last=Frigo |first=M. |first2=S. G. |last2=Johnson |url=http://fftw.org/fftw-paper-ieee.pdf |title=The Design and Implementation of FFTW3 |journal=Proceedings of the IEEE |volume=93 |issue=2 |pages=216–231 |year=2005 |doi=10.1109/JPROC.2004.840301}}</ref><ref>{{cite web |last=Frigo |first=Matteo |first2=Steven G. |last2=Johnson |title=FFTW |url=http://www.fftw.org/ }} A free ([[GNU General Public License|GPL]]) C library for computing discrete Fourier transforms in one or more dimensions, of arbitrary size, using the Cooley–Tukey algorithm</ref>
A typical strategy for in-place algorithms without auxiliary storage and without separate digit-reversal passes involves small matrix transpositions (which swap individual pairs of digits) at intermediate stages, which can be combined with the radix butterflies to reduce the number of passes over the data.<ref name=FrigoJohnson05/><ref>{{cite journal |last=Johnson |first=H. W. |first2=C. S. |last2=Burrus |title=An in-place in-order radix-2 FFT |journal=Proc. ICASSP |volume= |issue= |pages=28A.2.1–28A.2.4 |year=1984 |doi= }}</ref><ref>{{cite journal |last=Temperton |first=C. |title=Self-sorting in-place fast Fourier transform |journal=SIAM Journal on Scientific and Statistical Computing |volume=12 |issue=4 |pages=808–823 |year=1991 |doi=10.1137/0912043 }}</ref><ref>{{cite journal |last=Qian |first=Z. |first2=C. |last2=Lu |first3=M. |last3=An |first4=R. |last4=Tolimieri |title=Self-sorting in-place FFT algorithm with minimum working space |journal=IEEE Trans. ASSP |volume=52 |issue=10 |pages=2835–2836 |year=1994 |doi=10.1109/78.324749 }}</ref><ref>{{cite journal |last=Hegland |first=M. |title=A self-sorting in-place fast Fourier transform algorithm suitable for vector and parallel processing |journal=Numerische Mathematik |volume=68 |issue=4 |pages=507–547 |year=1994 |doi=10.1007/s002110050074 }}</ref>
==References==
{{reflist|30em}}
== External links ==
* [http://www.librow.com/articles/article-10 A simple, pedagogical radix-2 Cooley–Tukey FFT algorithm in C++]
* [http://sourceforge.net/projects/kissfft/ KISSFFT]: a simple mixed-radix Cooley–Tukey implementation in C (open source)
{{DEFAULTSORT:Cooley-Tukey Fft Algorithm}}
[[Category:FFT algorithms]]
[[Category:Articles with example pseudocode]]