'''Simultaneous perturbation stochastic approximation''' (SPSA) is an [[algorithmic]] method for optimizing systems with multiple unknown [[parameters]]. It is a type of [[stochastic approximation]] algorithm. As an optimization method, it is well suited to large-scale population models, adaptive modeling, simulation [[optimization]], and [[atmospheric model]]ing. Many examples are presented at the SPSA website http://www.jhuapl.edu/SPSA. A comprehensive recent book on the subject is Bhatnagar et al. (2013). An early paper on the subject is Spall (1987), and the foundational paper providing the key theory and justification is Spall (1992).


SPSA is a descent method capable of finding global minima. Its main feature is the gradient approximation, which requires only two measurements of the objective function, regardless of the dimension of the optimization problem. Recall that we want to find the optimal control <math>u^*</math> minimizing the loss function <math>J(u)</math>:


:<math>u^* = \arg  \min_{u \in U} J(u).</math>
 
Both [[Finite Differences Stochastic Approximation]] (FDSA)
and SPSA use the same iterative process:
 
:<math>u_{n+1} = u_n - a_n\hat{g}_n(u_n),</math>
 
where <math>u_n=((u_n)_1,(u_n)_2,\ldots,(u_n)_p)^T</math>
represents the <math>n^{th}</math> iterate, <math>\hat{g}_n(u_n)</math> is the estimate of the gradient of the objective function <math>g(u)= \frac{\partial}{\partial u}J(u)</math> evaluated at <math>{u_n}</math>, and <math>\{a_n\}</math> is a sequence of positive numbers converging to 0. If <math>u_n</math> is a ''p''-dimensional vector, the <math>i^{th}</math> component of the [[symmetric]] finite difference gradient estimator is:
 
:'''FD:''' <math>(\hat{g}_n(u_n))_i = \frac{J(u_n+c_ne_i)-J(u_n-c_ne_i)}{2c_n},</math>
 
''1 ≤ i ≤ p'', where <math>e_i</math> is the unit vector with a 1 in the <math>i^{th}</math> place, and <math>c_n</math> is a small positive number that decreases with ''n''. With this method, ''2p'' evaluations of ''J'' are needed for each <math>\hat{g}_n</math>. Clearly, when ''p'' is large, this estimator loses efficiency.
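
As an illustration (a minimal sketch, not taken from the cited references; the function name <code>fd_gradient</code> and its interface are our own), the FD estimator can be written in Python as:

<syntaxhighlight lang="python">
import numpy as np

def fd_gradient(J, u, c):
    """Symmetric finite-difference gradient estimate (FD).

    Perturbs one coordinate at a time, so it costs 2p evaluations
    of the loss J for a p-dimensional point u.
    """
    p = len(u)
    g = np.zeros(p)
    for i in range(p):
        e = np.zeros(p)
        e[i] = 1.0  # unit vector with a 1 in the i-th place
        g[i] = (J(u + c * e) - J(u - c * e)) / (2.0 * c)
    return g
</syntaxhighlight>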
 
Now let <math>\Delta_n</math> be a random perturbation vector. The <math>i^{th}</math> component of the stochastic perturbation gradient estimator is:
 
:'''SP:''' <math>(\hat{g}_n(u_n))_i = \frac{J(u_n+c_n\Delta_n)-J(u_n-c_n\Delta_n)}{2c_n(\Delta_n)_i}.</math>
 
Note that FD perturbs only one direction at a time, while the SP estimator perturbs all directions simultaneously (the numerator is identical in all ''p'' components). The number of loss function measurements needed in the SPSA method for each <math>\hat{g}_n</math> is always 2, independent of the [[dimension]] ''p''. Thus, SPSA uses ''p'' times fewer function evaluations than FDSA, which makes it far more efficient.
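
By contrast, a sketch of the SP estimator (again an illustrative implementation, with <code>spsa_gradient</code> our own name) needs only the two loss evaluations in the shared numerator:

<syntaxhighlight lang="python">
import numpy as np

def spsa_gradient(J, u, c, rng=None):
    """Simultaneous-perturbation gradient estimate (SP).

    All p coordinates are perturbed at once by a Bernoulli +/-1
    vector, so only 2 evaluations of J are needed regardless of p.
    """
    if rng is None:
        rng = np.random.default_rng()
    delta = rng.choice([-1.0, 1.0], size=len(u))  # random perturbation vector
    diff = J(u + c * delta) - J(u - c * delta)    # numerator shared by all components
    return diff / (2.0 * c * delta)               # componentwise division by c*(Delta_n)_i
</syntaxhighlight>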
 
Simple experiments with ''p=2'' showed that SPSA converges in the same number of iterations as FDSA. The latter follows [[Approximation|approximately]] the [[steepest descent]] direction, behaving like the gradient method. SPSA, with its random search direction, does not follow the gradient path exactly. On average, however, it tracks the gradient closely, because the gradient approximation is an almost [[unbiased]] estimator of the gradient, as shown in the following lemma.
 
== Convergence lemma ==
Denote by
 
:<math>b_n = E[\hat{g}_n|u_n] -\nabla J(u_n) </math>
 
the bias in the estimator <math>\hat{g}_n</math>. Assume that <math>\{(\Delta_n)_i\}</math> are mutually independent zero-mean random variables with bounded second moments, and that <math>E(|(\Delta_n)_i|^{-1})</math> is uniformly bounded. Then <math>b_n</math>→0 w.p.&nbsp;1.
 
== Sketch of the proof ==
The main [[idea]] is to use conditioning on <math>\Delta_n</math> to express <math>E[(\hat{g}_n)_i]</math> and then to use a second-order Taylor expansion of <math>J(u_n+c_n\Delta_n)</math> and <math>J(u_n-c_n\Delta_n)</math>. After algebraic manipulations using the zero mean and the independence of <math>\{(\Delta_n)_i\}</math>, we get

:<math>E[(\hat{g}_n)_i]=(g_n)_i + O(c_n^2).</math>
 
The result follows from the [[hypothesis]] that <math>c_n</math>→0.
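
To make the key step explicit (a brief sketch of the algebra left implicit above, under the stated smoothness assumptions): since ''J'' is three times continuously differentiable, the even-order terms cancel in the symmetric difference,

:<math>J(u_n+c_n\Delta_n)-J(u_n-c_n\Delta_n)=2c_n\Delta_n^T g(u_n)+O(c_n^3),</math>

so that

:<math>(\hat{g}_n)_i = (g_n)_i + \sum_{j\neq i}\frac{(\Delta_n)_j}{(\Delta_n)_i}(g_n)_j + O(c_n^2).</math>

Taking expectations, the cross terms vanish because <math>(\Delta_n)_j</math> has zero mean and is independent of <math>(\Delta_n)_i</math>, leaving the <math>O(c_n^2)</math> bias above.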
 
Next we summarize some of the hypotheses under which <math>u_t</math> converges in [[probability]] to the set of global minima of <math>J(u)</math>. The efficiency of the method depends on the shape of <math>J(u)</math>, the values of the parameters <math>a_t</math> and <math>c_t</math>, and the distribution of the perturbation terms <math>\Delta_{ti}</math>. First, the algorithm parameters must satisfy the following conditions:
 
*  <math>a_t>0</math>, <math>a_t \to 0</math> as <math>t \to \infty</math>, and <math>\sum_{t=1}^{\infty} a_t = \infty </math>. A good choice would be <math>a_t=\frac{a}{t}</math>, ''a''>0;
*  <math>c_t=\frac{c}{t^\gamma}</math>, where ''c''>0, <math> \gamma \in \left[\frac{1}{6},\frac{1}{2}\right]</math>;
* <math>\sum_{t=1}^{\infty} \left(\frac {a_t}{c_t}\right)^2 < \infty </math>;
* <math> \Delta_{ti} </math> must be mutually independent zero-mean random variables, symmetrically distributed about zero, with <math>|\Delta_{ti}| \le a_1 < \infty </math>. The inverse first and second moments of the <math> \Delta_{ti} </math> must be finite.
A good choice for <math>\Delta_{ti}</math> is the Bernoulli ±1 distribution with probability 0.5 for each outcome (other choices are possible too). The uniform and normal distributions do not satisfy the finite inverse moment conditions, so they cannot be used. A minimal implementation using these parameter choices is sketched below.
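
For concreteness, here is a minimal SPSA loop using the gain sequences and Bernoulli perturbations suggested above (an illustrative sketch; the constants <code>a</code>, <code>c</code> and the iteration count are arbitrary choices that merely satisfy the stated conditions):

<syntaxhighlight lang="python">
import numpy as np

def spsa_minimize(J, u0, a=0.5, c=0.1, gamma=1.0/6.0, n_iter=2000, seed=0):
    """Minimal SPSA loop with a_t = a/t and c_t = c/t**gamma."""
    rng = np.random.default_rng(seed)
    u = np.asarray(u0, dtype=float)
    for t in range(1, n_iter + 1):
        a_t = a / t                # step sizes: positive, -> 0, non-summable
        c_t = c / t ** gamma       # gamma in [1/6, 1/2]
        delta = rng.choice([-1.0, 1.0], size=u.shape)  # Bernoulli +/-1 perturbation
        g_hat = (J(u + c_t * delta) - J(u - c_t * delta)) / (2.0 * c_t * delta)
        u = u - a_t * g_hat        # u_{n+1} = u_n - a_n * ghat_n(u_n)
    return u

# Example: a simple quadratic loss with minimizer (3, 3).
J = lambda u: float(np.sum((u - 3.0) ** 2))
print(spsa_minimize(J, u0=[0.0, 0.0]))  # converges toward [3.0, 3.0]
</syntaxhighlight>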
 
The loss function ''J(u)'' must be thrice continuously [[differentiable]] and the individual elements of the third derivative must be bounded: <math>|J^{(3)}(u)| < a_3 < \infty </math>. Also, <math>|J(u)| \to \infty</math> as <math>\|u\| \to \infty</math>.
 
In addition, <math>\nabla J</math> must be Lipschitz continuous and bounded, and the ODE <math> \dot{u}=g(u)</math> must have a unique solution for each initial condition.
Under these conditions and a few others, <math>u_t</math> [[Convergence (mathematics)|converges]] in probability to the set of global minima of ''J(u)'' (see Maryak and Chin, 2008).
 
==References==
* Bhatnagar, S., Prasad, H. L., and Prashanth, L. A. (2013), ''Stochastic Recursive Algorithms for Optimization: Simultaneous Perturbation Methods'', Springer.
* Hirokami, T., Maeda, Y., Tsukada, H. (2006) "Parameter estimation using simultaneous perturbation stochastic approximation", Electrical Engineering in Japan, 154 (2), 30–3 [http://dx.doi.org/10.1002/eej.20239]
* Maryak, J.L., and Chin, D.C. (2008), "Global Random Optimization by Simultaneous Perturbation Stochastic Approximation," ''IEEE Transactions on Automatic Control'', vol. 53, pp. 780–783.
* Spall, J. C. (1987), “A Stochastic Approximation Technique for Generating Maximum Likelihood Parameter Estimates,” ''Proceedings of the American Control Conference'', Minneapolis, MN, June 1987, pp. 1161–1167.
* Spall, J. C. (1992), “Multivariate Stochastic Approximation Using a Simultaneous Perturbation Gradient Approximation,” ''IEEE Transactions on Automatic Control'', vol. 37(3), pp. 332–341.
* Spall, J.C. (1998), "Overview of the Simultaneous Perturbation Method for Efficient Optimization" [http://www.jhuapl.edu/SPSA/PDF-SPSA/Spall_An_Overview.PDF]. ''Johns Hopkins APL Technical Digest'', 19(4), 482–492.
* Spall, J.C. (2003) ''Introduction to Stochastic Search and Optimization: Estimation, Simulation, and Control'', Wiley. ISBN 0-471-33052-3 (Chapter 7)
 
<references/>
 
[[Category:Numerical climate and weather models]]
[[Category:Stochastic algorithms]]
[[Category:Optimization algorithms and methods]]
