In [[mathematics]] and [[computer algebra]], '''automatic differentiation''' ('''AD'''), also called '''algorithmic differentiation''' or '''computational differentiation''',<ref>{{cite journal|last=Neidinger|first=Richard D.|title=Introduction to Automatic Differentiation and MATLAB Object-Oriented Programming|journal=SIAM Review|year=2010|volume=52|issue=3|pages=545–563|url=http://www.davidson.edu/math/neidinger/SIAMRev74362.pdf|doi=10.1137/080743627}}</ref><ref>http://www.ec-securehost.com/SIAM/SE24.html</ref> is a set of techniques to numerically evaluate the [[derivative]] of a function specified by a computer program. AD exploits the fact that every computer program, no matter how complicated, executes a sequence of elementary arithmetic operations (addition, subtraction, multiplication, division, etc.) and elementary functions (exp, log, sin, cos, etc.). By applying the [[chain rule]] repeatedly to these operations, derivatives of arbitrary order can be computed automatically, accurately to working precision, and using at most a small constant factor more arithmetic operations than the original program.
[[Image:AutomaticDifferentiationNutshell.png|right|thumb|300px|Figure 1: How automatic differentiation relates to symbolic differentiation]]
Automatic differentiation is not:
* [[Symbolic differentiation]], nor
* [[Numerical differentiation]] (the method of finite differences).
These classical methods run into problems: symbolic differentiation leads to inefficient code (unless carefully done) and faces the difficulty of converting a computer program into a single expression, while numerical differentiation can introduce [[round-off error]]s in the [[discretization]] process and cancellation. Both classical methods have problems with calculating higher derivatives, where the complexity and errors increase. Finally, both classical methods are slow at computing the partial derivatives of a function with respect to ''many'' inputs, as is needed for [[gradient descent|gradient]]-based [[Optimization (mathematics)|optimization]] algorithms. Automatic differentiation solves all of these problems.
== The chain rule, forward and reverse accumulation ==
Fundamental to AD is the decomposition of differentials provided by the [[chain rule]]. For the simple composition <math>f(x) = g(h(x))</math> the chain rule gives

:<math>\frac{df}{dx} = \frac{dg}{dh} \frac{dh}{dx}</math>

Usually, two distinct modes of AD are presented, ''forward accumulation'' (or ''forward mode'') and ''reverse accumulation'' (or ''reverse mode''). Forward accumulation specifies that one traverses the chain rule from right to left (that is, one first computes <math>dh/dx</math> and then <math>dg/dh</math>), while reverse accumulation traverses it from left to right.
[[Image:ForwardAccumulationAutomaticDifferentiation.png|right|thumb|300px|Figure 2: Example of forward accumulation with computational graph]]
===Forward accumulation===
Forward accumulation automatic differentiation is the easiest to understand and to implement. The function <math>f(x_1,x_2) = x_1 x_2 + \sin(x_1)</math> is interpreted (by a computer or human programmer) as the sequence of elementary operations on the work variables <math>w_i</math>, and an AD tool for forward accumulation adds the corresponding operations on the second component of the augmented arithmetic.
{| class="wikitable"
|-
! Original code statements
! Added statements for derivatives
|-
| <math>w_1 = x_1</math>
| <math>w'_1 = 1</math> (seed)
|-
| <math>w_2 = x_2</math>
| <math>w'_2 = 0</math> (seed)
|-
| <math>w_3 = w_1 w_2</math>
| <math>w'_3 = w'_1 w_2 + w_1 w'_2 = 1 \cdot x_2 + x_1 \cdot 0 = x_2</math>
|-
| <math>w_4 = \sin(w_1)</math>
| <math>w'_4 = \cos(w_1) w'_1 = \cos(x_1) \cdot 1</math>
|-
| <math>w_5 = w_3 + w_4</math>
| <math>w'_5 = w'_3 + w'_4 = x_2 + \cos(x_1)</math>
|}
The derivative computation for <math>f(x_1,x_2) = x_1 x_2 + \sin(x_1)</math> needs to be seeded in order to distinguish between the derivative with respect to <math>x_1</math> and the derivative with respect to <math>x_2</math>. The table above seeds the computation with <math>w'_1=1</math> and <math>w'_2=0</math>, which yields <math>x_2 + \cos(x_1)</math>, the derivative with respect to <math>x_1</math>. Note that although the table displays the symbolic derivative, in the computer it is always the evaluated (numeric) value that is stored. Figure 2 represents the above statements in a computational graph.

In order to compute the gradient of this example function, that is <math>\partial f/\partial x_1</math> and <math>\partial f / \partial x_2</math>, two sweeps over the computational graph are needed, first with the seeds <math>w'_1 = 1</math> and <math>w'_2 = 0</math>, then with <math>w'_1 = 0</math> and <math>w'_2 = 1</math>.

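For illustration, the statements in the table can be transcribed directly into code. The following Python sketch (the function name <code>f_and_df</code> and its argument names are illustrative only, not taken from any particular AD tool) performs one forward sweep and is called twice with different seeds to obtain the gradient:
<syntaxhighlight lang="python">
import math

def f_and_df(x1, x2, seed1, seed2):
    """One forward sweep for f(x1, x2) = x1*x2 + sin(x1).

    The seeds select the partial derivative: (1, 0) gives df/dx1,
    (0, 1) gives df/dx2."""
    w1, dw1 = x1, seed1
    w2, dw2 = x2, seed2
    w3, dw3 = w1 * w2, dw1 * w2 + w1 * dw2       # product rule
    w4, dw4 = math.sin(w1), math.cos(w1) * dw1   # chain rule for sin
    w5, dw5 = w3 + w4, dw3 + dw4                 # sum rule
    return w5, dw5

# Two sweeps give the full gradient at (x1, x2) = (2.0, 3.0):
value, df_dx1 = f_and_df(2.0, 3.0, 1.0, 0.0)   # df/dx1 = x2 + cos(x1)
_, df_dx2 = f_and_df(2.0, 3.0, 0.0, 1.0)       # df/dx2 = x1
</syntaxhighlight>
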
The [[Computational complexity theory|computational complexity]] of one sweep of forward accumulation is proportional to the complexity of the original code.

Forward accumulation is superior to reverse accumulation for functions <math>f:\mathbb{R} \rightarrow \mathbb{R}^m</math> with <math>m \gg 1</math> as only one sweep is necessary, compared to <math>m</math> sweeps for reverse accumulation.
[[Image:ReverseaccumulationAD.png|right|thumb|300px|Figure 3: Example of reverse accumulation with computational graph]]
===Reverse accumulation===
Reverse accumulation traverses the chain rule from left to right, or in the case of the computational graph in Figure 3, from top to bottom. The example function is real-valued, and thus there is only one seed for the derivative computation, and only one sweep of the computational graph is needed in order to calculate the (two-component) gradient. This is only half the work when compared to forward accumulation, but reverse accumulation requires the storage of some of the work variables <math>w_i</math>, which may represent a significant memory issue.

The data flow graph of a computation can be manipulated to calculate the gradient of its original calculation. This is done by adding an adjoint node for each primal node, connected by adjoint edges which parallel the primal edges but flow in the opposite direction. The nodes in the adjoint graph represent multiplication by the derivatives of the functions calculated by the nodes in the primal. For instance, addition in the primal causes fanout in the adjoint; fanout in the primal causes addition in the adjoint; a unary function <math>y=f(x)</math> in the primal causes <math>x'=f'(x) y'</math> in the adjoint; etc.

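A minimal Python sketch of one reverse sweep for the same example function <math>f(x_1,x_2) = x_1 x_2 + \sin(x_1)</math> (illustrative code, not taken from any of the tools listed below; the "bar" variables hold the adjoints):
<syntaxhighlight lang="python">
import math

def f_and_gradient(x1, x2):
    """Reverse accumulation for f(x1, x2) = x1*x2 + sin(x1)."""
    # Forward (primal) pass: store the work variables.
    w1, w2 = x1, x2
    w3 = w1 * w2
    w4 = math.sin(w1)
    w5 = w3 + w4
    # Reverse (adjoint) pass, seeded with the adjoint of the output = 1.
    w5_bar = 1.0
    w3_bar = w5_bar                  # w5 = w3 + w4: addition -> fanout
    w4_bar = w5_bar
    w1_bar = w4_bar * math.cos(w1)   # w4 = sin(w1)
    w1_bar += w3_bar * w2            # w3 = w1 * w2: fanout of w1 -> addition
    w2_bar = w3_bar * w1
    return w5, (w1_bar, w2_bar)      # value and both partials in one sweep
</syntaxhighlight>
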
Reverse accumulation is superior to forward accumulation for functions <math>f:\mathbb{R}^n \rightarrow \mathbb{R}</math> with <math>n \gg 1</math>, where forward accumulation requires roughly ''n'' times as much work.

[[Backpropagation]] of errors in multilayer perceptrons, a technique used in machine learning, is a special case of reverse mode AD.
===Jacobian computation===
The [[Jacobian]] <math>J</math> of <math>f:\mathbb{R}^n \rightarrow \mathbb{R}^m</math> is an <math>m \times n</math> matrix. It can be computed using <math>n</math> sweeps of forward accumulation, each sweep of which yields one column of the Jacobian, or using <math>m</math> sweeps of reverse accumulation, each sweep of which yields one row of the Jacobian.
===Beyond forward and reverse accumulation===
Forward and reverse accumulation are just two (extreme) ways of traversing the chain rule. The problem of computing a full Jacobian of <math>F:\mathbb{R}^n \rightarrow \mathbb{R}^m</math> with a minimum number of arithmetic operations is known as the "optimal Jacobian accumulation" (OJA) problem, which is [[NP-complete]].<ref>{{cite journal|first=Uwe|last=Naumann|title=Optimal Jacobian accumulation is NP-complete|journal=Mathematical Programming|volume=112|issue=2|pages=427–441|date=April 2008|doi=10.1007/s10107-006-0042-z}}</ref> Central to this proof is the idea that there may exist algebraic dependences between the local partials that label the edges of the graph. In particular, two or more edge labels may be recognized as equal. The complexity of the problem is still open if it is assumed that all edge labels are unique and algebraically independent.
== Automatic differentiation using dual numbers ==
Forward mode automatic differentiation is accomplished by augmenting the [[algebra]] of [[real numbers]] and obtaining a new [[arithmetic]]. An additional component is added to every number to represent the derivative of a function at the number, and all arithmetic operators are extended for the augmented algebra. The augmented algebra is the algebra of [[dual numbers]].

Replace every number <math>\,x</math> with the number <math>x + x'\varepsilon</math>, where <math>x'</math> is a real number, but <math>\varepsilon</math> is nothing but a symbol with the property <math>\varepsilon^2=0</math>. Using only this, we get for the regular arithmetic

:<math>(x + x'\varepsilon) + (y + y'\varepsilon) = x + y + (x' + y')\varepsilon</math>
:<math>(x + x'\varepsilon) \cdot (y + y'\varepsilon) = xy + xy'\varepsilon + yx'\varepsilon + x'y'\varepsilon^2 = xy + (x y' + yx')\varepsilon</math>

and likewise for subtraction and division.
Now, we may calculate [[polynomials]] in this augmented arithmetic. If <math>P(x) = p_0 + p_1 x + p_2x^2 + \cdots + p_n x^n</math>, then
:<math>
\begin{align}
P(x + x'\varepsilon) &= p_0 + p_1(x + x'\varepsilon) + \cdots + p_n (x + x'\varepsilon)^n \\
&= p_0 + p_1 x + \cdots + p_n x^n + p_1 x'\varepsilon + 2 p_2 x x'\varepsilon + \cdots + n p_n x^{n-1} x'\varepsilon \\
&= P(x) + P^{(1)}(x) x'\varepsilon
\end{align}
</math>
where <math>P^{(1)}</math> denotes the derivative of <math>P</math> with respect to its first argument, and <math>x'</math>, called a ''seed'', can be chosen arbitrarily.

The new arithmetic consists of [[ordered pair]]s, elements written <math>\langle x, x' \rangle</math>, with ordinary arithmetic on the first component and first-order differentiation arithmetic on the second component, as described above. Extending the above results on polynomials to [[analytic functions]], we obtain a list of the basic arithmetic and some standard functions for the new arithmetic:
:<math>\langle u,u'\rangle +\langle v,v'\rangle = \langle u+v, u'+v' \rangle </math>
:<math>\langle u,u'\rangle -\langle v,v'\rangle = \langle u-v, u'-v' \rangle </math>
:<math>\langle u,u'\rangle *\langle v,v'\rangle = \langle u v, u'v+uv' \rangle </math>
:<math>\langle u,u'\rangle /\langle v,v'\rangle = \left\langle \frac{u}{v}, \frac{u'v-uv'}{v^2} \right\rangle \quad ( v\ne 0) </math>
:<math>\sin\langle u,u'\rangle = \langle \sin(u) , u' \cos(u) \rangle </math>
:<math>\cos\langle u,u'\rangle = \langle \cos(u) , -u' \sin(u) \rangle </math>
:<math>\exp\langle u,u'\rangle = \langle \exp u , u' \exp u \rangle </math>
:<math>\log\langle u,u'\rangle = \langle \log(u) , u'/u \rangle \quad (u>0) </math>
:<math>\langle u,u'\rangle^k = \langle u^k , k u^{k-1} u' \rangle \quad (u \ne 0) </math>
:<math>\left| \langle u,u'\rangle \right| = \langle \left| u \right| , u' \operatorname{sign}(u) \rangle \quad (u \ne 0)</math>
and in general for the primitive function <math>g</math>,
:<math>g(\langle u,u' \rangle , \langle v,v' \rangle ) = \langle g(u,v) , g_u(u,v) u' + g_v(u,v) v' \rangle</math>
where <math>g_u</math> and <math>g_v</math> are the derivatives of <math>g</math> with respect to its first and second arguments, respectively.
When a binary basic arithmetic operation is applied to mixed arguments, say the pair <math>\langle u, u' \rangle</math> and the real number <math>c</math>, the real number is first lifted to <math>\langle c, 0 \rangle</math>. The derivative of a function <math>f : \mathbb{R}\rightarrow\mathbb{R}</math> at the point <math>x_0</math> is now found by calculating <math>f(\langle x_0, 1 \rangle)</math> using the above arithmetic, which gives <math>\langle f ( x_0 ) , f' ( x_0 ) \rangle </math> as the result.

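A minimal operator-overloading sketch of this arithmetic in Python (the class name <code>Dual</code> and its interface are illustrative only, not those of any particular library):
<syntaxhighlight lang="python">
import math

class Dual:
    """Dual number <value, derivative> implementing the arithmetic above."""
    def __init__(self, value, deriv=0.0):
        self.value, self.deriv = value, deriv

    @staticmethod
    def lift(c):
        # A plain real constant c is lifted to <c, 0>.
        return c if isinstance(c, Dual) else Dual(c, 0.0)

    def __add__(self, other):
        other = Dual.lift(other)
        return Dual(self.value + other.value, self.deriv + other.deriv)

    def __mul__(self, other):
        other = Dual.lift(other)
        return Dual(self.value * other.value,
                    self.deriv * other.value + self.value * other.deriv)

    __radd__ = __add__
    __rmul__ = __mul__

def sin(u):
    u = Dual.lift(u)
    return Dual(math.sin(u.value), u.deriv * math.cos(u.value))

# Derivative of f(x) = x*x + sin(x) at x0 = 2: evaluate f(<x0, 1>).
x = Dual(2.0, 1.0)            # seed x' = 1
fx = x * x + sin(x)
print(fx.value, fx.deriv)     # f(2) and f'(2) = 2*2 + cos(2)
</syntaxhighlight>
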
===Vector arguments and functions===
Multivariate functions can be handled with the same efficiency and mechanisms as univariate functions by adopting a directional derivative operator. That is, if it is sufficient to compute <math>y' = \nabla f(x)\cdot x'</math>, the directional derivative <math>y' \in \mathbb{R}^m</math> of <math>f:\mathbb{R}^n\rightarrow\mathbb{R}^m</math> at <math>x \in \mathbb{R}^n</math> in the direction <math>x' \in \mathbb{R}^n</math>, then it may be calculated as <math>(\langle y_1,y'_1\rangle, \ldots, \langle y_m,y'_m\rangle) = f(\langle x_1,x'_1\rangle, \ldots, \langle x_n,x'_n\rangle)</math> using the same arithmetic as above. If all the elements of <math>\nabla f</math> are desired, then <math>n</math> function evaluations are required. Note that in many optimization applications, the directional derivative is indeed sufficient.

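For example, with the illustrative <code>Dual</code> class sketched earlier, a directional derivative is obtained simply by seeding each input with the corresponding component of the direction <math>x'</math>:
<syntaxhighlight lang="python">
def f(x1, x2):
    return x1 * x2 + sin(x1)

# Directional derivative of f at x = (2, 3) in the direction x' = (1, -1):
y = f(Dual(2.0, 1.0), Dual(3.0, -1.0))
# y.value is f(2, 3); y.deriv is (df/dx1)*1 + (df/dx2)*(-1)
</syntaxhighlight>
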
===Higher order differentials===
The above arithmetic can be generalized, in the natural way, to calculate parts of the second order and higher derivatives. However, the arithmetic rules quickly grow very complicated: complexity will be quadratic in the highest derivative degree. Instead, truncated [[Taylor series]] arithmetic is used. This is possible because the Taylor summands in a Taylor series of a function are products of known coefficients and derivatives of the function. Currently, there exist efficient [[Hessian automatic differentiation]] methods that calculate the entire Hessian matrix with a single forward and reverse accumulation. There also exist a number of specialized methods for calculating large sparse Hessian matrices.

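As a small illustration of truncated Taylor series arithmetic (a sketch only, assuming each quantity is represented by its Taylor coefficients <math>[c_0, c_1, \ldots, c_d]</math> about the evaluation point), multiplication becomes a truncated Cauchy product:
<syntaxhighlight lang="python">
def taylor_mul(a, b):
    """Truncated Cauchy product: c_k = sum over i of a_i * b_(k-i)."""
    d = len(a) - 1
    return [sum(a[i] * b[k - i] for i in range(k + 1)) for k in range(d + 1)]

def taylor_add(a, b):
    return [ai + bi for ai, bi in zip(a, b)]

# The variable x around x0 = 2, truncated at degree 2: x = 2 + 1*t + 0*t^2
x = [2.0, 1.0, 0.0]
p = taylor_add(taylor_mul(x, x), x)   # p(x) = x^2 + x
# p == [6.0, 5.0, 1.0], i.e. p(2), p'(2) and p''(2)/2!
</syntaxhighlight>
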
== Implementation ==
Forward-mode AD is implemented by a [[nonstandard interpretation]] of the program in which real numbers are replaced by dual numbers, constants are lifted to dual numbers with a zero epsilon coefficient, and the numeric primitives are lifted to operate on dual numbers. This nonstandard interpretation is generally implemented using one of two strategies: ''source code transformation'' or ''operator overloading''.
=== Source code transformation (SCT) ===
[[Image:SourceTransformationAutomaticDifferentiation.png|thumb|right|300px|Figure 4: Example of how source code transformation could work]]
The source code for a function is replaced by automatically generated source code that includes statements for calculating the derivatives interleaved with the original instructions.

Source code transformation can be implemented for all programming languages, and it also makes it easier for the compiler to perform compile-time optimizations. However, the implementation of the AD tool itself is more difficult.

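As a hand-written illustration of the idea in Figure 4 (not the output of any particular tool), a transformation tool might turn the first function below into the second, which returns the derivative with respect to <math>x_1</math> alongside the value:
<syntaxhighlight lang="python">
import math

# Original source
def f(x1, x2):
    w3 = x1 * x2
    w4 = math.sin(x1)
    return w3 + w4

# Automatically generated source: every statement is paired with a
# statement propagating the derivative (here seeded with dx1=1, dx2=0).
def f_d(x1, x2, dx1=1.0, dx2=0.0):
    w3 = x1 * x2
    dw3 = dx1 * x2 + x1 * dx2
    w4 = math.sin(x1)
    dw4 = math.cos(x1) * dx1
    return w3 + w4, dw3 + dw4
</syntaxhighlight>
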
=== Operator overloading (OO) ===
[[Image:OperatorOverloadingAutomaticDifferentiation.png|thumb|right|300px|Figure 5: Example of how operator overloading could work]]
[[Operator overloading]] is a possibility for source code written in a language supporting it. Objects for real numbers and elementary mathematical operations must be overloaded to cater for the augmented arithmetic depicted above. This requires no change in the form or sequence of operations in the original source code for the function to be differentiated, but often requires changes in basic data types for numbers and vectors to support overloading and often also involves the insertion of special flagging operations.

Operator overloading for forward accumulation is easy to implement, and also possible for reverse accumulation. However, current compilers lag behind in optimizing the code when compared to forward accumulation.
== Software ==
* '''C/C++'''
:{| class="wikitable" border=1
|-
! Package
! License
! Approach
! Brief Info
|-
| [http://www.vivlabs.com/subpage_adc_v4.php ADC Version 4.0]
| nonfree
| OO
|
|-
| [http://www.met.rdg.ac.uk/clouds/adept/ Adept]
| [[GPL]] 3
| OO
| First-order forward and reverse modes. Very fast due to its use of expression templates and an efficient tape structure.
|-
| [http://www.mcs.anl.gov/adic/ ADIC]
| free for noncommercial
| SCT
| forward mode
|-
| [[ADMB]]
| {{BSD-lic}}
| SCT+OO
|
|-
| [http://code.google.com/p/adnumber/ ADNumber]
| dual license
| OO
| arbitrary order forward/reverse
|-
| [http://www.coin-or.org/projects/ADOL-C.xml ADOL-C]
| CPL 1.0 or [[GPL]] 2.0
| OO
| arbitrary order forward/reverse, part of [[COIN-OR]]
|-
| [[AMPL]]
| free for students
| SCT
|
|-
| [http://www.fadbad.com/ FADBAD++]
| free for <br>noncommercial
| OO
| uses operator new
|-
| [http://www.casadi.org/ CasADi]
| [[LGPL]]
| SCT
| Forward/reverse modes, matrix-valued atomic operations.
|-
| [http://ceres-solver.googlecode.com ceres-solver]
| [[BSD]]
| OO
| A portable C++ library that allows for modeling and solving large complicated nonlinear least squares problems
|-
| [http://www.coin-or.org/CppAD CppAD]
| [[Eclipse Public License|EPL]] 1.0 or [[GPL]] 3.0
| OO
| arbitrary order forward/reverse, AD<Base> for arbitrary Base including AD<Other_Base>, part of [[COIN-OR]]; can also be used to produce C source code using the [http://github.com/joaoleal/CppADCodeGen CppADCodeGen] library.
|-
| [http://www.mcs.anl.gov/OpenAD/ OpenAD]
| depends on components
| SCT
|
|-
| [http://trilinos.sandia.gov/packages/ Sacado]
| {{GPL-lic}}
| OO
| A part of the [http://trilinos.sandia.gov/ Trilinos] collection, forward/reverse modes.
|-
| [http://mc-stan.org/ Stan]
| {{BSD-lic}}
| OO
| forward- and reverse-mode automatic differentiation with library of special functions, probability functions, matrix operators, and linear algebra solvers
|-
| [http://www-sop.inria.fr/tropics/ TAPENADE]
| Free for noncommercial
| SCT
|
|-
| [https://ctaylor.codeplex.com/ CTaylor]
| free
| OO
| truncated Taylor series, multi-variable, high performance, calculating and storing only potentially nonzero derivatives, calculates higher order derivatives, order of derivatives increases when using matching operations until maximum order (parameter) is reached, example source code and executable available for testing performance
|}
* '''Fortran'''
:{| class="wikitable" border=1
|-
! Package
! License
! Approach
! Brief Info
|-
| [http://www.vivlabs.com/subpage_adf_v4.php ADF Version 4.0]
| nonfree
| OO
|
|-
| [http://www-unix.mcs.anl.gov/autodiff/ADIFOR/ ADIFOR]
| [http://www.mcs.anl.gov/research/projects/adifor/AdiforPublicLicense.html >>>] <br>(free for non-commercial)
| SCT
|
|-
| [http://cpc.cs.qub.ac.uk/summaries/ADLS AUTO_DERIV]
| free for non-commercial
| OO
|
|-
| [http://www.mcs.anl.gov/OpenAD/ OpenAD]
| depends on components
| SCT
|
|-
| [http://tapenade.inria.fr:8080/tapenade/index.jsp TAPENADE]
| Free for noncommercial
| SCT
|
|}
* '''Matlab'''
:{| class="wikitable" border=1
|-
! Package
! License
! Approach
! Brief Info
|-
| [http://www.mathworks.com/matlabcentral/fileexchange/15235 AD for MATLAB]
| {{GPL-lic}}
| OO
| Forward (1st & 2nd derivative, uses MEX files & Windows DLLs)
|-
| [http://www.mathworks.com/matlabcentral/fileexchange/26807-automatic-differentiation-with-matlab-objects Adiff]
| {{BSD-lic}}
| OO
| Forward (1st derivative)
|-
| [http://matlabad.com/ MAD]
| Proprietary
| OO
|
|-
| [http://www.sc.rwth-aachen.de/adimat/ ADiMat]
| ?
| SCT
| Forward (1st & 2nd derivative) & Reverse (1st)
|}
* '''Python'''
:{| class="wikitable" border=1
|-
! Package
! License
! Approach
! Brief Info
|-
| [http://pypi.python.org/pypi/ad/ ad]
| {{BSD-lic}}
| OO
| first and second-order, reverse accumulation, transparent on-the-fly calculations, basic [[NumPy]] support, written in pure python
|-
| [[FuncDesigner]]
| {{BSD-lic}}
| OO
| uses [[NumPy]] arrays and [[SciPy]] sparse matrices,<br>also allows to solve linear/non-linear/ODE systems and <br> to perform numerical optimizations by [[OpenOpt]]
|-
| [http://dirac.cnrs-orleans.fr/ScientificPython/ ScientificPython]
| [[CeCILL]]
| OO
| see modules Scientific.Functions.FirstDerivatives and <br>Scientific.Functions.Derivatives
|-
| [http://www.seanet.com/~bradbell/pycppad/index.htm pycppad]
| {{BSD-lic}}
| OO
| arbitrary order forward/reverse, implemented as wrapper for CppAD including AD<double> and AD< AD<double> >.
|-
| [http://github.com/b45ch1/pyadolc pyadolc]
| {{BSD-lic}}
| OO
| wrapper for ADOL-C, hence arbitrary order derivatives in the (combined) forward/reverse mode of AD, supports sparsity pattern propagation and sparse derivative computations
|-
| [http://packages.python.org/uncertainties/ uncertainties]
| {{BSD-lic}}
| OO
| first-order derivatives, reverse mode, transparent calculations
|-
| [http://pypi.python.org/pypi/algopy algopy]
| {{BSD-lic}}
| OO
| same approach as pyadolc and thus compatible, support to differentiate through numerical linear algebra functions like the matrix-matrix product, solution of linear systems, QR and Cholesky decomposition, etc.
|-
| [https://github.com/forrestv/pyderiv pyderiv]
| {{GPL-lic}}
| OO
| automatic differentiation and (co)variance calculation
|-
| [http://www.casadi.org/ CasADi]
| [[LGPL]]
| SCT
| Python front-end to CasADi. Forward/reverse modes, matrix-valued atomic operations.
|-
| [http://deeplearning.net/software/theano/ Theano]
| {{BSD-lic}}
| OO
| [[Theano (software)|Theano]] is a Python library that allows you to define, optimize, and evaluate mathematical expressions involving multi-dimensional arrays both on CPU and GPU efficiently.
|}
* '''.NET'''
{| class="wikitable" border=1
|-
! Package
! License
! Approach
! Brief Info
|-
| [http://autodiff.codeplex.com/ AutoDiff]
| {{GPL-lic}}
| OO
| Automatic differentiation with C# operators overloading.
|-
| [http://funclib.codeplex.com/ FuncLib]
| [[MIT License|MIT]]
| OO
| Automatic differentiation and numerical optimization, operator overloading, unlimited order of differentiation, compilation to IL code for very fast evaluation.
|}
* '''Haskell'''
{| class="wikitable" border=1
|-
! Package
! License
! Approach
! Brief Info
|-
| [http://hackage.haskell.org/package/ad ad]
| {{BSD-lic}}
| OO
| Forward Mode (1st derivative or arbitrary order derivatives via lazy lists and sparse tries)<br>Reverse Mode<br>Combined forward-on-reverse Hessians.<br>Uses quantification to allow the implementation to automatically choose appropriate modes.<br>Quantification prevents perturbation/sensitivity confusion at compile time.
|-
| [http://hackage.haskell.org/package/fad fad]
| {{BSD-lic}}
| OO
| Forward Mode (lazy list). Quantification prevents perturbation confusion at compile time.
|-
| [http://hackage.haskell.org/package/rad rad]
| {{BSD-lic}}
| OO
| Reverse Mode. (Subsumed by 'ad'.)<br>Quantification prevents sensitivity confusion at compile time.
|}
* '''Octave'''
{| class="wikitable" border=1
|-
! Package
! License
! Approach
! Brief Info
|-
| [http://www.casadi.org/ CasADi]
| [[LGPL]]
| SCT
| Octave front-end to CasADi. Forward/reverse modes, matrix-valued atomic operations.
|}
* '''Java'''
{| class="wikitable" border=1
|-
! Package
! License
! Approach
! Brief Info
|-
| [https://github.com/uniker9/JAutoDiff JAutoDiff]
| -
| OO
| Provides a framework to compute derivatives of functions on arbitrary types of field using generics. Coded in 100% pure Java.
|-
| [http://commons.apache.org/proper/commons-math Apache Commons Math]
| [[Apache License]] v2
| OO
| This class is an implementation of the extension to Rall's numbers described in Dan Kalman's paper<ref>{{cite journal|last=Kalman|first=Dan|title=Doubly Recursive Multivariate Automatic Differentiation|journal=Mathematics Magazine|date=June 2002|volume=75|issue=3|pages=187–202|url=http://www.math.american.edu/People/kalman/pdffiles/mmgautodiff.pdf}}</ref>
|-
| [http://github.com/lambder/Deriva Deriva]
| [[Eclipse Public License]] v1.0
| DSL+Code Generation
| Deriva automates algorithmic differentiation in Java and Clojure projects. It defines a DSL for building extended arithmetic expressions (the extension being support for conditionals, allowing non-analytic functions to be expressed). The DSL is used to generate flat byte-code at runtime, providing an implementation without the overhead of function calls.
|-
| [http://link.springer.com/chapter/10.1007/978-3-642-30023-3_22 Jap]
| Public
| OO/SCT
| Jap is a tool using virtual operator overloading for Java classes. Jap was developed in the [http://hal.archives-ouvertes.fr/docs/00/65/66/25/PDF/pham-quang_these.pdf thesis] of Phuong Pham-Quang, 2008-2011.
|}
* '''Clojure'''
{| class="wikitable" border=1
|-
! Package
! License
! Approach
! Brief Info
|-
| [http://github.com/lambder/Deriva Deriva]
| [[Eclipse Public License]] v1.0
| DSL+Code Generation
| Deriva automates algorithmic differentiation in Java and Clojure projects. It defines a DSL for building extended arithmetic expressions (the extension being support for conditionals, allowing non-analytic functions to be expressed). The DSL is used to generate flat byte-code at runtime, providing an implementation without the overhead of function calls.
|}
==References==
{{reflist}}
== Literature ==
* {{cite book
 | last = Rall
 | first = Louis B.
 | title = Automatic Differentiation: Techniques and Applications
 | publisher = [[Springer Science+Business Media|Springer]]
 | series = Lecture Notes in Computer Science
 | volume = 120
 | year = 1981
 | isbn = 3-540-10861-0
}}
* {{cite book
 | last1 = Griewank
 | first1 = Andreas
 | last2 = Walther
 | first2 = Andrea
 | title = Evaluating Derivatives: Principles and Techniques of Algorithmic Differentiation
 | edition = 2nd
 | publisher = [[Society for Industrial and Applied Mathematics|SIAM]]
 | series = Other Titles in Applied Mathematics
 | volume = 105
 | year = 2008
 | isbn = 978-0-89871-659-7
 | url = http://www.ec-securehost.com/SIAM/OT105.html
}}
* {{cite journal
 | last = Neidinger
 | first = Richard
 | title = Introduction to Automatic Differentiation and MATLAB Object-Oriented Programming
 | journal = SIAM Review
 | year = 2010
 | volume = 52
 | issue = 3
 | pages = 545–563
 | doi = 10.1137/080743627
 | url = http://www.davidson.edu/math/neidinger/SIAMRev74362.pdf
 | accessdate = 2013-03-15
}}
==External links==
* [http://www.autodiff.org/ www.autodiff.org], An "entry site to everything you want to know about automatic differentiation"
* [http://www.autodiff.org/?module=Applications&application=HC1 Automatic Differentiation of Parallel OpenMP Programs]
* [http://homepage.mac.com/sigfpe/paper.pdf Automatic Differentiation, C++ Templates and Photogrammetry]
* [http://www.vivlabs.com/subpage_ad.php Automatic Differentiation, Operator Overloading Approach]
* [http://tapenade.inria.fr:8080/tapenade/index.jsp Compute analytic derivatives of any Fortran77, Fortran95, or C program through a web-based interface] Automatic Differentiation of Fortran programs
* [http://www.win-vector.com/dfiles/AutomaticDifferentiationWithScala.pdf Description and example code for forward Automatic Differentiation in Scala]
* [http://developers.opengamma.com/quantitative-research/Adjoint-Algorithmic-Differentiation-OpenGamma.pdf Adjoint Algorithmic Differentiation: Calibration and Implicit Function Theorem]
{{DEFAULTSORT:Automatic Differentiation}}
[[Category:Differential calculus]]
[[Category:Computer algebra]]