'''CMA-ES''' stands for Covariance Matrix Adaptation Evolution Strategy. [[Evolution strategies]] (ES) are [[stochastic]], [[derivative]]-free methods for [[numerical optimization]] of non-[[Linear map|linear]] or non-[[Convex function|convex]] [[continuous optimization]] problems. They belong to the class of [[evolutionary algorithms]] and [[evolutionary computation]]. An [[evolutionary algorithm]] is broadly based on the principle of [[biological evolution]], namely the repeated interplay of variation (via mutation and recombination) and selection: in each generation (iteration), new individuals (candidate solutions, denoted as <math>x</math>) are generated by variation, usually in a stochastic way, and then some individuals are selected for the next generation based on their fitness or [[objective function]] value <math>f(x)</math>. In this way, individuals with better and better <math>f</math>-values are generated over the generation sequence.
In an [[evolution strategy]], new candidate solutions are sampled according to a [[multivariate normal distribution]] in <math>\mathbb{R}^n</math>. Pairwise dependencies between the variables in this distribution are represented by a [[covariance matrix]]. The covariance matrix adaptation (CMA) is a method to update the [[covariance matrix]] of this distribution. This is particularly useful if the function <math>f</math> is [[ill-conditioned]].

Adaptation of the [[covariance matrix]] amounts to learning a second order model of the underlying [[objective function]], similar to the approximation of the inverse [[Hessian matrix]] in the [[Quasi-Newton method]] in classical [[Optimization (mathematics)|optimization]]. In contrast to most classical methods, fewer assumptions on the nature of the underlying objective function are made. Only the ranking between candidate solutions is exploited for learning the sample distribution; neither derivatives nor even the function values themselves are required by the method.
== Principles ==

[[Image:Concept of directional optimization in CMA-ES algorithm.png|thumb|right|400px|Illustration of an actual optimization run with covariance matrix adaptation on a simple two-dimensional problem. The spherical optimization landscape is depicted with solid lines of equal <math>f</math>-values. The population (dots) is much larger than necessary, but clearly shows how the distribution of the population (dotted line) changes during the optimization. On this simple problem, the population concentrates over the global optimum within a few generations.]]

Two main principles for the adaptation of parameters of the search distribution are exploited in the CMA-ES algorithm.

First, a [[maximum-likelihood]] principle, based on the idea of increasing the probability of successful candidate solutions and search steps. The mean of the distribution is updated such that the [[likelihood]] of previously successful candidate solutions is maximized. The [[covariance matrix]] of the distribution is updated (incrementally) such that the likelihood of previously successful search steps is increased. Both updates can be interpreted as a [[Information geometry#Natural gradient|natural gradient]] descent. In consequence, the CMA conducts an iterated [[principal components analysis]] of successful search steps while retaining ''all'' principal axes. [[Estimation of Distribution Algorithms|Estimation of distribution algorithms]] and the [[Cross-Entropy Method]] are based on very similar ideas, but estimate (non-incrementally) the covariance matrix by maximizing the likelihood of successful solution ''points'' instead of successful search ''steps''.

Second, two paths of the time evolution of the distribution mean of the strategy are recorded, called search or evolution paths. These paths contain significant information about the correlation between consecutive steps. Specifically, if consecutive steps are taken in a similar direction, the evolution paths become long. The evolution paths are exploited in two ways. One path is used for the covariance matrix adaptation procedure in place of single successful search steps and facilitates a possibly much faster variance increase of favorable directions. The other path is used to conduct an additional step-size control. This step-size control aims to make consecutive movements of the distribution mean orthogonal in expectation. The step-size control effectively prevents [[premature convergence]] while still allowing fast convergence to an optimum.

== Algorithm ==

In the following the most commonly used (μ/μ<sub>w</sub>, λ)-CMA-ES is outlined, where in each iteration step a weighted combination of the μ best out of λ new candidate solutions is used to update the distribution parameters. The main loop consists of three main parts: 1) sampling of new solutions, 2) re-ordering of the sampled solutions based on their fitness, 3) update of the internal state variables based on the re-ordered samples. A [[pseudocode]] of the algorithm looks as follows.
<code>
'''set''' <math>\lambda</math>   // number of samples per iteration, at least two, generally > 4
'''initialize''' <math>m</math>, <math>\sigma</math>, <math>C=I</math>, <math>p_\sigma=0</math>, <math>p_c=0</math>   // initialize state variables
'''while''' ''not terminate''   // iterate
    '''for''' <math>i</math> '''in''' <math>\{1...\lambda\}</math>   // sample <math>\lambda</math> new solutions and evaluate them
        <math>x_i</math> = sample_multivariate_normal(mean=<math>m</math>, covariance_matrix=<math>\sigma^2 C</math>)
        <math>f_i</math> = fitness(<math>x_i</math>)
    <math>x_{1...\lambda}</math> ← <math>x_{s(1)...s(\lambda)}</math> with <math>s(i)</math> = argsort(<math>f_{1...\lambda}</math>, <math>i</math>)   // sort solutions
    <math>m'</math> = <math>m</math>   // we need later <math>m - m'</math> and <math>x_i - m'</math>
    <math>m</math> ← update_m<math>(x_1, ..., x_\lambda)</math>   // move mean to better solutions
    <math>p_\sigma</math> ← update_ps<math>(p_\sigma,\ \sigma^{-1} C^{-1/2} (m - m'))</math>   // update isotropic evolution path
    <math>p_c</math> ← update_pc<math>(p_c,\ \sigma^{-1}(m - m'),\ ||p_\sigma||)</math>   // update anisotropic evolution path
    <math>C</math> ← update_C<math>(C,\ p_c,\ (x_1 - m')/\sigma, ..., (x_\lambda - m')/\sigma)</math>   // update covariance matrix
    <math>\sigma</math> ← update_sigma<math>(\sigma,\ ||p_\sigma||)</math>   // update step-size using isotropic path length
'''return''' <math>m</math> or <math>x_1</math>
</code>

The order of the five update assignments is relevant. In the following, the update equations for the five state variables are specified.
Given are the search space dimension <math>n</math> and the iteration step <math>k</math>. The five state variables are

: <math>m_k\in\mathbb{R}^n</math>, the distribution mean and current favorite solution to the optimization problem,
: <math>\sigma_k>0</math>, the step-size,
: <math>C_k</math>, a symmetric and [[Positive-definite matrix|positive definite]] <math>n\times n</math> [[covariance matrix]] with <math>C_0 = I</math> and
: <math>p_\sigma\in\mathbb{R}^n, p_c\in\mathbb{R}^n</math>, two evolution paths, initially set to the zero vector.

The iteration starts with sampling <math>\lambda>1</math> candidate solutions <math>x_i\in\mathbb{R}^n</math> from a [[multivariate normal distribution]] <math>\textstyle \mathcal{N}(m_k,\sigma_k^2 C_k)</math>, i.e. for <math>i=1,...,\lambda</math>

:: <math>
\begin{align}
x_i \ &\sim\ \mathcal{N}(m_k,\sigma_k^2 C_k)
\\&\sim\ m_k + \sigma_k\times\mathcal{N}(0,C_k)
\end{align}
</math>

The second line suggests the interpretation as perturbation (mutation) of the current favorite solution vector <math>m_k</math> (the distribution mean vector). The candidate solutions <math>x_i</math> are evaluated on the objective function <math>f:\mathbb{R}^n\to\mathbb{R}</math> to be minimized. Denoting the <math>f</math>-sorted candidate solutions as

: <math>
\{x_{i:\lambda}\;|\;i=1\dots\lambda\} = \{x_i\;|\;i=1\dots\lambda\} \;\;\text{and}\;\;
f(x_{1:\lambda})\le\dots\le f(x_{\mu:\lambda})\le f(x_{\mu+1:\lambda}) \dots,
</math>

the new mean value is computed as

:: <math>
\begin{align}
m_{k+1} &= \sum_{i=1}^{\mu} w_i\, x_{i:\lambda}
\\ &= m_k + \sum_{i=1}^{\mu} w_i\, (x_{i:\lambda} - m_k)
\end{align}
</math>

where the positive (recombination) weights <math>w_1 \ge w_2 \ge \dots \ge w_\mu > 0</math> sum to one. Typically, <math>\mu \le \lambda/2</math> and the weights are chosen such that <math>\textstyle \mu_w := 1 / \sum_{i=1}^\mu w_i^2 \approx \lambda/4</math>. The only feedback used from the objective function here and in the following is an ordering of the sampled candidate solutions due to the indices <math>i:\lambda</math>.
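The sampling and recombination steps can be written down directly. The following MATLAB/Octave sketch is purely illustrative (the dimension, the weights, the sphere function used as a stand-in objective and all start values are arbitrary choices, not prescriptions of the algorithm); it draws <math>\lambda</math> candidates from <math>\mathcal{N}(m,\sigma^2 C)</math> via an eigendecomposition of <math>C</math> and recombines the <math>\mu</math> best into the new mean.

<syntaxhighlight lang="matlab">
n = 10; lambda = 4 + floor(3*log(n)); mu = floor(lambda/2);   % illustrative sizes
w = log(mu + 1/2) - log(1:mu)'; w = w / sum(w);   % positive weights, summing to one
f = @(x) sum(x.^2);                               % stand-in objective (sphere)
m = randn(n,1); sigma = 0.5; C = eye(n);          % current state m_k, sigma_k, C_k

[B, D2] = eig(C); D = sqrt(diag(D2));             % C = B*diag(D.^2)*B'
X = zeros(n, lambda); fvals = zeros(1, lambda);
for i = 1:lambda
  X(:,i) = m + sigma * B * (D .* randn(n,1));     % x_i ~ N(m, sigma^2 C)
  fvals(i) = f(X(:,i));
end
[~, idx] = sort(fvals);                           % the ranking is the only feedback used
m_new = X(:, idx(1:mu)) * w;                      % m_{k+1} = sum_i w_i x_{i:lambda}
</syntaxhighlight>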
The step-size <math>\sigma_k</math> is updated using ''cumulative step-size adaptation'' (CSA), sometimes also denoted as ''path length control''. The evolution path (or search path) <math>p_\sigma</math> is updated first.

:: <math>
p_\sigma \gets \underbrace{(1-c_\sigma)}_{\text{discount factor}}\, p_\sigma
+ \overbrace{\sqrt{1 - (1-c_\sigma)^2}}^{\text{complements for discounted variance}}\ \underbrace{\sqrt{\mu_w}
\,C_k^{\;-1/2} \, \frac{\overbrace{m_{k+1} - m_k}^{\text{displacement of}\; m}}{\sigma_k}}_{\text{distributed as}\; \mathcal{N}(0,I)\;\text{under neutral selection}}
</math>

:: <math>
\sigma_{k+1} = \sigma_k \times \exp\bigg(\frac{c_\sigma}{d_\sigma}
\underbrace{\left(\frac{\|p_\sigma\|}{E\|\mathcal{N}(0,I)\|} - 1\right)}_{\text{unbiased about 0 under neutral selection}}\bigg)
</math>

where

: <math>c_\sigma^{-1}\approx n/3</math> is the backward time horizon for the evolution path <math>p_\sigma</math> and larger than one,
: <math>\mu_w=\left(\sum_{i=1}^\mu w_i^2\right)^{-1}</math> is the variance effective selection mass and <math>1 \le \mu_w \le \mu</math> by definition of <math>w_i</math>,
: <math>C_k^{\;-1/2} = \sqrt{C_k}^{\;-1} = \sqrt{C_k^{\;-1}}</math> is the unique symmetric [[Square root of a matrix|square root]] of the [[Invertible matrix|inverse]] of <math>C_k</math>, and
: <math>d_\sigma</math> is the damping parameter, usually close to one. For <math>d_\sigma=\infty</math> or <math>c_\sigma=0</math> the step-size remains unchanged.

The step-size <math>\sigma_k</math> is increased if and only if <math>\|p_\sigma\|</math> is larger than the [[expected value]]

: <math>\begin{align}E\|\mathcal{N}(0,I)\| &= \sqrt{2}\,\Gamma((n+1)/2)/\Gamma(n/2)
\\&\approx \sqrt{n}\,(1-1/(4\,n)+1/(21\,n^2)) \end{align}</math>

and decreased if it is smaller. For this reason, the step-size update tends to make consecutive steps [[Conjugate gradient#The conjugate gradient method as a direct method|<math>C_k^{-1}</math>-conjugate]], in that after the adaptation has been successful <math>\textstyle\left(\frac{m_{k+2}-m_{k+1}}{\sigma_{k+1}}\right)^T\! C_k^{-1} \frac{m_{k+1}-m_{k}}{\sigma_k} \approx 0</math>.<ref>{{Citation
 | first = N.
 | last = Hansen
 | chapter = The CMA evolution strategy: a comparing review
 | title = Towards a new evolutionary computation. Advances in the estimation of distribution algorithms
 | pages = 75–102
 | publisher = Springer
 | year = 2006
}}</ref>
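Continuing the illustrative sketch from above (reusing <code>n</code>, <code>w</code>, <code>B</code>, <code>D</code>, <code>m</code>, <code>m_new</code> and <code>sigma</code>), the two CSA formulas read roughly as follows in MATLAB/Octave; the constants follow commonly used settings but are shown only as an example, and the damping is simplified to its typical value.

<syntaxhighlight lang="matlab">
mueff = 1 / sum(w.^2);                         % variance effective selection mass mu_w
cs    = (mueff + 2) / (n + mueff + 5);         % c_sigma, 1/cs is roughly n/3
damps = 1 + cs;                                % d_sigma, simplified to its typical value
chiN  = sqrt(n) * (1 - 1/(4*n) + 1/(21*n^2));  % approximation of E||N(0,I)||

ps = zeros(n,1);                               % isotropic evolution path p_sigma, initially zero
invsqrtC = B * diag(1./D) * B';                % unique symmetric C^(-1/2)
ps = (1 - cs) * ps ...
     + sqrt(1 - (1 - cs)^2) * sqrt(mueff) * invsqrtC * (m_new - m) / sigma;
sigma_new = sigma * exp((cs/damps) * (norm(ps)/chiN - 1));  % sigma itself is overwritten
                                                            % only after the C update below
</syntaxhighlight>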
Finally, the [[covariance matrix]] is updated, where again the respective evolution path is updated first.

:: <math>
p_c \gets \underbrace{(1-c_c)}_{\text{discount factor}}\, p_c +
\underbrace{\mathbf{1}_{[0,\alpha\sqrt{n}]}(\|p_\sigma\|)}_{\text{indicator function}}
\overbrace{\sqrt{1 - (1-c_c)^2}}^{\text{complements for discounted variance}}
\underbrace{\sqrt{\mu_w} \, \frac{m_{k+1} - m_k}{\sigma_k}}_{\text{distributed as}\; \mathcal{N}(0,C_k)\;\text{under neutral selection}}
</math>

:: <math>
C_{k+1} = \underbrace{(1 - c_1 - c_\mu + c_s)}_{\text{discount factor}}
\, C_k + c_1 \underbrace{p_c p_c^T}_{\text{rank one matrix}}
+ \,c_\mu \underbrace{\sum_{i=1}^\mu w_i \frac{x_{i:\lambda} - m_k}{\sigma_k}
\left( \frac{x_{i:\lambda} - m_k}{\sigma_k} \right)^T}_{\text{rank} \;\min(\mu,n)\; \text{matrix}}
</math>

where <math>T</math> denotes the transpose and

: <math>c_c^{-1}\approx n/4</math> is the backward time horizon for the evolution path <math>p_c</math> and larger than one,
: <math>\alpha\approx 1.5</math> and the [[indicator function]] <math>\mathbf{1}_{[0,\alpha\sqrt{n}]}(\|p_\sigma\|)</math> evaluates to one [[if and only if|iff]] <math>\|p_\sigma\|\in[0,\alpha\sqrt{n}]</math> or, in other words, <math>\|p_\sigma\|\le\alpha\sqrt{n}</math>, which is usually the case,
: <math>c_s = (1 - \mathbf{1}_{[0,\alpha\sqrt{n}]}(\|p_\sigma\|)^2) \,c_1 c_c (2-c_c)</math> makes partly up for the small variance loss in case the indicator is zero,
: <math>c_1 \approx 2 / n^2</math> is the learning rate for the rank-one update of the [[covariance matrix]] and
: <math>c_\mu \approx \mu_w / n^2</math> is the learning rate for the rank-<math>\mu</math> update of the [[covariance matrix]] and must not exceed <math>1 - c_1</math>.

The [[covariance matrix]] update tends to increase the [[Likelihood function|likelihood]] for <math>p_c</math> and for <math>(x_{i:\lambda} - m_k)/\sigma_k</math> to be sampled from <math>\mathcal{N}(0,C_{k+1})</math>. This completes the iteration step.
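The covariance matrix update can be sketched in the same illustrative setting, continuing the variables from the two sketches above; <code>pc</code> starts as the zero vector, the constants again follow commonly used settings, and the indicator function is written with <math>\alpha = 1.5</math> as in the text.

<syntaxhighlight lang="matlab">
cc  = (4 + mueff/n) / (n + 4 + 2*mueff/n);     % c_c, 1/cc is roughly n/4
c1  = 2 / ((n + 1.3)^2 + mueff);               % rank-one learning rate, roughly 2/n^2
cmu = min(1 - c1, 2*(mueff - 2 + 1/mueff) / ((n + 2)^2 + mueff));  % rank-mu learning rate

pc   = zeros(n,1);                             % anisotropic evolution path p_c, initially zero
hsig = norm(ps) <= 1.5 * sqrt(n);              % indicator function with alpha = 1.5
pc   = (1 - cc) * pc ...
       + hsig * sqrt(1 - (1 - cc)^2) * sqrt(mueff) * (m_new - m) / sigma;

Y  = (X(:, idx(1:mu)) - m*ones(1,mu)) / sigma; % columns are (x_{i:lambda} - m_k)/sigma_k
cs_loss = (1 - hsig) * c1 * cc * (2 - cc);     % c_s, compensates variance loss if hsig == 0
C = (1 - c1 - cmu + cs_loss) * C ...
    + c1 * (pc * pc') ...                      % rank-one update
    + cmu * Y * diag(w) * Y';                  % rank-mu update

m = m_new; sigma = sigma_new;                  % the iteration step is complete
</syntaxhighlight>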
The number of candidate samples per iteration, <math>\lambda</math>, is not determined a priori and can vary in a wide range. Smaller values, for example <math>\lambda=10</math>, lead to more local search behavior. Larger values, for example <math>\lambda=10n</math> with default value <math>\mu_w \approx \lambda/4</math>, render the search more global. Sometimes the algorithm is repeatedly restarted with <math>\lambda</math> increasing by a factor of two for each restart.<ref>{{cite conference
 | first = A.
 | last = Auger
 | coauthors = N. Hansen
 | title = A Restart CMA Evolution Strategy With Increasing Population Size
 | booktitle = 2005 IEEE Congress on Evolutionary Computation, Proceedings
 | pages = 1769–1776
 | publisher = IEEE
 | year = 2005
 | url = http://www.lri.fr/~auger/cec-restartcma.pdf
}}</ref> Apart from setting <math>\lambda</math> (or possibly <math>\mu</math> instead, if for example <math>\lambda</math> is predetermined by the number of available processors), the above introduced parameters are not specific to the given objective function and therefore not meant to be modified by the user.
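A restart scheme with increasing population size can be wrapped around any implementation. The sketch below assumes a hypothetical helper <code>cmaes_run(f, x0, sigma0, lambda)</code> that runs one CMA-ES until its own termination criteria trigger and returns the best solution with its objective value; neither the helper nor the concrete numbers are part of the algorithm specification.

<syntaxhighlight lang="matlab">
f = @(x) sum(x.^2);                  % stand-in objective
n = 10; lambda = 4 + floor(3*log(n));
fbest = Inf; xbest = [];
for restart = 1:5                    % a few restarts, doubling lambda each time
  [x, fx] = cmaes_run(f, randn(n,1), 0.5, lambda);  % hypothetical helper (assumption)
  if fx < fbest, fbest = fx; xbest = x; end
  lambda = 2 * lambda;               % increase the population size by a factor of two
end
</syntaxhighlight>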
==Example code in MATLAB/Octave==
<syntaxhighlight lang="matlab">
function xmin=purecmaes   % (mu/mu_w, lambda)-CMA-ES

  % --------------------  Initialization --------------------------------
  % User defined input parameters (need to be edited)
  strfitnessfct = 'frosenbrock';  % name of objective/fitness function
  N = 20;               % number of objective variables/problem dimension
  xmean = rand(N,1);    % objective variables initial point
  sigma = 0.3;          % coordinate wise standard deviation (step size)
  stopfitness = 1e-10;  % stop if fitness < stopfitness (minimization)
  stopeval = 1e3*N^2;   % stop after stopeval number of function evaluations

  % Strategy parameter setting: Selection
  lambda = 4+floor(3*log(N));  % population size, offspring number
  mu = lambda/2;               % number of parents/points for recombination
  weights = log(mu+1/2)-log(1:mu)'; % muXone array for weighted recombination
  mu = floor(mu);
  weights = weights/sum(weights);     % normalize recombination weights array
  mueff=sum(weights)^2/sum(weights.^2); % variance-effectiveness of sum w_i x_i

  % Strategy parameter setting: Adaptation
  cc = (4+mueff/N) / (N+4 + 2*mueff/N);  % time constant for cumulation for C
  cs = (mueff+2) / (N+mueff+5);  % t-const for cumulation for sigma control
  c1 = 2 / ((N+1.3)^2+mueff);    % learning rate for rank-one update of C
  cmu = min(1-c1, 2 * (mueff-2+1/mueff) / ((N+2)^2+mueff));  % and for rank-mu update
  damps = 1 + 2*max(0, sqrt((mueff-1)/(N+1))-1) + cs; % damping for sigma
                                                      % usually close to 1
  % Initialize dynamic (internal) strategy parameters and constants
  pc = zeros(N,1); ps = zeros(N,1);   % evolution paths for C and sigma
  B = eye(N,N);                       % B defines the coordinate system
  D = ones(N,1);                      % diagonal D defines the scaling
  C = B * diag(D.^2) * B';            % covariance matrix C
  invsqrtC = B * diag(D.^-1) * B';    % C^-1/2
  eigeneval = 0;                      % track update of B and D
  chiN=N^0.5*(1-1/(4*N)+1/(21*N^2));  % expectation of
                                      %   ||N(0,I)|| == norm(randn(N,1))

  % -------------------- Generation Loop --------------------------------
  counteval = 0;  % the next 40 lines contain the 20 lines of interesting code
  while counteval < stopeval

    % Generate and evaluate lambda offspring
    for k=1:lambda,
      arx(:,k) = xmean + sigma * B * (D .* randn(N,1)); % m + sig * Normal(0,C)
      arfitness(k) = feval(strfitnessfct, arx(:,k));    % objective function call
      counteval = counteval+1;
    end

    % Sort by fitness and compute weighted mean into xmean
    [arfitness, arindex] = sort(arfitness); % minimization
    xold = xmean;
    xmean = arx(:,arindex(1:mu))*weights;   % recombination, new mean value

    % Cumulation: Update evolution paths
    ps = (1-cs)*ps ...
          + sqrt(cs*(2-cs)*mueff) * invsqrtC * (xmean-xold) / sigma;
    hsig = norm(ps)/sqrt(1-(1-cs)^(2*counteval/lambda))/chiN < 1.4 + 2/(N+1);
    pc = (1-cc)*pc ...
          + hsig * sqrt(cc*(2-cc)*mueff) * (xmean-xold) / sigma;

    % Adapt covariance matrix C
    artmp = (1/sigma) * (arx(:,arindex(1:mu))-repmat(xold,1,mu));
    C = (1-c1-cmu) * C ...                       % regard old matrix
         + c1 * (pc*pc' ...                      % plus rank one update
                 + (1-hsig) * cc*(2-cc) * C) ... % minor correction if hsig==0
         + cmu * artmp * diag(weights) * artmp'; % plus rank mu update

    % Adapt step size sigma
    sigma = sigma * exp((cs/damps)*(norm(ps)/chiN - 1));

    % Decomposition of C into B*diag(D.^2)*B' (diagonalization)
    if counteval - eigeneval > lambda/(c1+cmu)/N/10  % to achieve O(N^2)
      eigeneval = counteval;
      C = triu(C) + triu(C,1)'; % enforce symmetry
      [B,D] = eig(C);           % eigen decomposition, B==normalized eigenvectors
      D = sqrt(diag(D));        % D is a vector of standard deviations now
      invsqrtC = B * diag(D.^-1) * B';
    end

    % Break, if fitness is good enough or condition exceeds 1e14; better
    % termination methods are advisable
    if arfitness(1) <= stopfitness || max(D) > 1e7 * min(D)
      break;
    end

  end % while, end generation loop

  xmin = arx(:, arindex(1)); % Return best point of last iteration.
                             % Notice that xmean is expected to be even
                             % better.

% ---------------------------------------------------------------
function f=frosenbrock(x)
  if size(x,1) < 2, error('dimension must be greater one'); end
  f = 100*sum((x(1:end-1).^2 - x(2:end)).^2) + sum((x(1:end-1)-1).^2);
</syntaxhighlight>
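Saved as <code>purecmaes.m</code>, the listing above is self-contained and can be called without arguments; it then minimizes the 20-dimensional Rosenbrock function defined at the bottom of the file. A possible usage looks like this:

<syntaxhighlight lang="matlab">
% save the listing above as purecmaes.m, then:
xmin = purecmaes;        % minimizes the 20-dimensional Rosenbrock function
disp(norm(xmin - 1))     % distance to the known optimizer (1,...,1)
</syntaxhighlight>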
== Theoretical Foundations ==
Given the distribution parameters (mean, variances and covariances), the [[multivariate normal distribution|normal probability distribution]] for sampling new candidate solutions is the [[maximum entropy probability distribution]] over <math>\mathbb{R}^n</math>, that is, the sample distribution with the minimal amount of prior information built into the distribution. More considerations on the update equations of CMA-ES are made in the following.

=== Variable Metric ===
The CMA-ES implements a stochastic [[variable-metric]] method. In the very particular case of a convex-quadratic objective function

:: <math> f(x) = {\textstyle\frac{1}{2}}(x-x^*)^T H (x-x^*)</math>

the covariance matrix <math>C_k</math> adapts to the inverse of the [[Hessian matrix]] <math>H</math>, [[up to]] a scalar factor and small random fluctuations. More generally, also on the function <math>g \circ f</math>, where <math>g</math> is strictly increasing and therefore order preserving and <math>f</math> is convex-quadratic, the covariance matrix <math>C_k</math> adapts to <math>H^{-1}</math>, [[up to]] a scalar factor and small random fluctuations.

=== Maximum-Likelihood Updates ===

The update equations for mean and covariance matrix maximize a [[likelihood]] while resembling an [[Expectation maximization|expectation-maximization]] algorithm. The update of the mean vector <math>m</math> maximizes a log-likelihood, such that

:: <math> m_{k+1} = \arg\max_{m} \sum_{i=1}^\mu w_i \log p_\mathcal{N}(x_{i:\lambda} | m) </math>

where

:: <math> \log p_\mathcal{N}(x) =
- \frac{1}{2} \log\det(2\pi C) - \frac{1}{2} (x-m)^T C^{-1} (x-m) </math>

denotes the log-likelihood of <math>x</math> from a multivariate normal distribution with mean <math>m</math> and any positive definite covariance matrix <math>C</math>. To see that <math>m_{k+1}</math> is independent of <math>C</math>, note first that this is the case for any diagonal matrix <math>C</math>, because the coordinate-wise maximizer is independent of a scaling factor. Then, rotation of the data points or choosing a non-diagonal <math>C</math> are equivalent.

The rank-<math>\mu</math> update of the covariance matrix, that is, the rightmost summand in the update equation of <math>C_k</math>, maximizes a log-likelihood in that

:: <math> \sum_{i=1}^\mu w_i \frac{x_{i:\lambda} - m_k}{\sigma_k}
\left( \frac{x_{i:\lambda} - m_k}{\sigma_k} \right)^T
= \arg\max_{C} \sum_{i=1}^\mu w_i \log p_\mathcal{N}\left(\left.\frac{x_{i:\lambda} - m_k}{\sigma_k} \right| C\right) </math>

for <math>\mu\ge n</math> (otherwise <math>C</math> is singular, but substantially the same result holds for <math>\mu < n</math>). Here, <math>p_\mathcal{N}(x | C)</math> denotes the likelihood of <math>x</math> from a multivariate normal distribution with zero mean and covariance matrix <math>C</math>. Therefore, for <math>c_1=0</math> and <math>c_\mu=1</math>, <math>C_{k+1}</math> is the above [[maximum-likelihood]] estimator. See [[estimation of covariance matrices]] for details on the derivation.
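The maximum-likelihood property of the rank-<math>\mu</math> matrix can be checked numerically. The following sketch uses arbitrary illustrative data and weights (it is not part of the algorithm); the weighted log-likelihood of the closed-form estimator should never be smaller than that of any other positive definite matrix.

<syntaxhighlight lang="matlab">
n = 3; mu = 8;                                  % mu >= n so the estimator is nonsingular
Y = randn(n, mu);                               % columns play the role of (x_{i:lambda}-m_k)/sigma_k
w = log(mu + 1/2) - log(1:mu)'; w = w / sum(w); % CMA-ES-like weights, summing to one

S = Y * diag(w) * Y';                           % rank-mu estimator sum_i w_i y_i y_i'
loglik = @(C) sum(w .* (-0.5*log(det(2*pi*C)) - 0.5*diag(Y'*(C\Y))));  % weighted log-likelihood

A = randn(n); Cother = S + 0.1*(A*A');          % some other positive definite matrix
fprintf('estimator: %.4f   other: %.4f\n', loglik(S), loglik(Cother)); % first value is never smaller
</syntaxhighlight>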
=== Natural Gradient Descent in the Space of Sample Distributions ===

Akimoto ''et al.''<ref name=akimoto2010>{{cite conference
 | first = Y.
 | last = Akimoto
 | coauthors = Y. Nagata and I. Ono and S. Kobayashi
 | title = Bidirectional Relation between CMA Evolution Strategies and Natural Evolution Strategies
 | booktitle = Parallel Problem Solving from Nature, PPSN XI
 | pages = 154–163
 | publisher = Springer
 | year = 2010
}}</ref> found that the update of the distribution parameters resembles the descent in the direction of a sampled [[Information geometry#Natural gradient|natural gradient]] of the expected objective function value {{math|E <var>f</var>(<var>x</var>)}} (to be minimized), where the expectation is taken under the sample distribution. With the parameter setting of <math>c_\sigma=0</math> and <math>c_1=0</math>, i.e. without step-size control and rank-one update, CMA-ES can thus be viewed as an instantiation of [[Natural Evolution Strategies]] (NES).<ref name=akimoto2010/><ref name=glasmachers2010>{{cite conference
 | first = T.
 | last = Glasmachers
 | coauthors = T. Schaul, Y. Sun, D. Wierstra and J. Schmidhuber
 | title = Exponential Natural Evolution Strategies
 | booktitle = Genetic and Evolutionary Computation Conference GECCO
 | year = 2010
 | location = Portland, OR
 | url = http://www.idsia.ch/~tom/publications/xnes.pdf
}}</ref>
The natural gradient is independent of the parameterization of the distribution. Taken with respect to the parameters {{math|<var>θ</var>}} of the sample distribution {{math|<var>p</var>}}, the gradient of {{math|E <var>f</var>(<var>x</var>)}} can be expressed as

:: <math> \begin{align}
{\nabla}_{\!\theta} E(f(x) | \theta)
&= \nabla_{\!\theta} \int_{\mathbb R^n}f(x) p(x) \mathrm{d}x
\\ &= \int_{\mathbb R^n}f(x) \nabla_{\!\theta} p(x) \mathrm{d}x
\\ &= \int_{\mathbb R^n}f(x) p(x) \nabla_{\!\theta} \ln p(x) \mathrm{d}x
\\ &= E(f(x) \nabla_{\!\theta} \ln p(x|\theta))
\end{align}</math>

where <math>p(x)=p(x|\theta)</math> depends on the parameter vector <math>\theta</math>. The so-called [[Score (statistics)|score function]], <math>\nabla_{\!\theta} \ln p(x|\theta) = \frac{\nabla_{\!\theta} p(x)}{p(x)}</math>, indicates the relative sensitivity of {{math|<var>p</var>}} w.r.t. {{math|<var>θ</var>}}, and the expectation is taken with respect to the distribution {{math|<var>p</var>}}. The [[Information geometry#Natural gradient|''natural'' gradient]] of {{math|E <var>f</var>(<var>x</var>)}}, complying with the [[Fisher information metric]] (an informational distance measure between probability distributions and the curvature of the [[relative entropy]]), now reads

:: <math> \begin{align}
\tilde{\nabla} E(f(x) | \theta)
&= F^{-1}_\theta \nabla_{\!\theta} E(f(x) | \theta)
\end{align}</math>

where the [[Fisher information]] matrix <math> F_{\theta} </math> is the expectation of the [[Hessian matrix|Hessian]] of {{math|−ln <var>p</var>}} and renders the expression independent of the chosen parameterization. Combining the previous equalities we get

:: <math> \begin{align}
\tilde{\nabla} E(f(x) | \theta)
&= F^{-1}_\theta E(f(x) \nabla_{\!\theta} \ln p(x|\theta))
\\ &= E(f(x) F^{-1}_\theta \nabla_{\!\theta} \ln p(x|\theta))
\end{align}</math>

A Monte Carlo approximation of the latter expectation takes the average over {{math|<var>λ</var>}} samples from {{math|<var>p</var>}}

:: <math> \tilde{\nabla} \widehat{E}_\theta(f) := -\sum_{i=1}^\lambda \overbrace{w_i}^{\text{preference weight}} \underbrace{F^{-1}_\theta \nabla_{\!\theta} \ln p(x_{i:\lambda}|\theta)}_{\text{candidate direction from }x_{i:\lambda}}
\quad\mathrm{with~}w_i = -f(x_{i:\lambda})/\lambda</math>

where the notation <math>i:\lambda</math> from above is used, so the <math>w_i</math> are monotonically decreasing in <math>i</math>.
For a more robust approximation, we might rather use the <math>w_i</math> as defined in the CMA-ES and zero for {{math|<var>i</var> > <var>μ</var>}} (implementing a consistent estimator for the [[Cumulative distribution function|CDF]] of <math>f(X), X\sim p(.|\theta)</math> at the point <math>f(x_{i:\lambda})</math>, composed with a fixed monotonically decreasing transformation) and let

:: <math>\theta = [m_k^T\; \mathrm{vec}(C_k)^T\; \sigma_k]^T \in \mathbb{R}^{n+n^2+1}</math>

such that <math> p(.|\theta) </math> is the density of the [[multivariate normal distribution]] <math>\mathcal N(m_k,\sigma_k^2 C_k)</math>. Then, we have an explicit expression for the inverse of the Fisher information matrix where <math>\sigma_k</math> is fixed

:: <math>F^{-1}_{\theta | \sigma_k} = \left[\begin{array}{cc}\sigma_k^2 C_k&0\\ 0&2 C_k\otimes C_k\end{array}\right]</math>

and for

:: <math>\ln p(x|\theta) = \ln p(x|m_k,\sigma_k^2 C_k) = -\frac{1}{2}(x-m_k)^T \sigma_k^{-2} C_k^{-1} (x-m_k)
\,-\, \frac{1}{2}\ln\det(2\pi\sigma_k^2 C_k)</math>

and, after some calculations, the updates in the CMA-ES turn out as<ref name=akimoto2010/>

<span id="update_in_gradient_formulation">
:: <math> \begin{align}
m_{k+1}
&= m_k - \underbrace{[\tilde{\nabla} \widehat{E}_\theta(f)]_{1,\dots, n}}_{\text{natural gradient for mean}}
\\
&= m_k + \sum_{i=1}^\lambda w_i (x_{i:\lambda} - m_k)
\end{align} </math>

and

:: <math> \begin{align}
C_{k+1}
&= C_k + c_1(p_c p_c^T - C_k)
- c_\mu\,\mathrm{mat}(\overbrace{[\tilde{\nabla} \widehat{E}_\theta(f)]_{n+1,\dots,n+n^2}}^{\text{natural gradient for covariance matrix}})\\
&= C_k + c_1(p_c p_c^T - C_k)
+ c_\mu \sum_{i=1}^\lambda w_i \left(\frac{x_{i:\lambda} - m_k}{\sigma_k} \left(\frac{x_{i:\lambda} - m_k}{\sigma_k}\right)^T - C_k\right)
\end{align}</math>
</span>

where mat forms the proper matrix from the respective natural gradient sub-vector. That means, setting <math>c_1=c_\sigma=0</math>, the CMA-ES updates descend in the direction of the approximation <math> \tilde{\nabla} \widehat{E}_\theta(f)</math> of the natural gradient while using different step-sizes (learning rates) for the [[Fisher information#Orthogonal parameters|orthogonal parameters]] <math>m</math> and <math>C</math> respectively.
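The identity <math>\nabla_{\!\theta} E(f(x)|\theta) = E(f(x)\nabla_{\!\theta}\ln p(x|\theta))</math> used in the derivation can be illustrated for the mean vector of a Gaussian sample distribution, for which <math>\nabla_{m}\ln p(x) = \sigma^{-2}C^{-1}(x-m)</math>. The sketch below (all concrete values are arbitrary, and the identity covariance is chosen only for brevity) compares the resulting Monte Carlo estimate with a finite-difference estimate that reuses the same standard normal draws.

<syntaxhighlight lang="matlab">
n = 4; m = randn(n,1); sigma = 0.7;               % illustrative N(m, sigma^2 I), i.e. C = I
f = @(X) sum(X.^2, 1) + 0.5*X(1,:).*X(2,:);       % some smooth objective, applied column-wise
N = 2e5; Z = randn(n, N);                         % common random numbers
X = m*ones(1,N) + sigma*Z;                        % samples from the distribution
fx = f(X);

grad_score = (X - m*ones(1,N))/sigma^2 * fx'/N;   % Monte Carlo of E( f(x) grad_m ln p(x) )

eps_ = 1e-4; grad_fd = zeros(n,1);
for j = 1:n
  mp = m; mp(j) = mp(j) + eps_;                   % shift the mean, reuse the same draws Z
  grad_fd(j) = (mean(f(mp*ones(1,N) + sigma*Z)) - mean(fx)) / eps_;
end
disp([grad_score grad_fd])                        % the two columns agree up to Monte Carlo noise
</syntaxhighlight>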
=== Stationarity or Unbiasedness ===
It is comparatively easy to see that the update equations of CMA-ES satisfy some stationarity conditions, in that they are essentially unbiased. Under neutral selection, where <math>x_{i:\lambda} \sim \mathcal N(m_k,\sigma_k^2 C_k)</math>, we find that

:: <math> E(m_{k+1}\,|\, m_k) = m_k </math>

and under some mild additional assumptions on the initial conditions

:: <math> E(\log \sigma_{k+1} \,|\, \sigma_k) = \log \sigma_k </math>

and with an additional minor correction in the covariance matrix update for the case where the indicator function evaluates to zero, we find

:: <math> E(C_{k+1} \,|\, C_k) = C_k </math>
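The first of these identities can be illustrated numerically: under neutral selection the ranking is independent of the sampled points, so the selected and recombined steps average out. The following sketch (with arbitrary illustrative values, not part of the algorithm) replaces the objective by random numbers and compares the empirical mean of <math>m_{k+1}</math> with <math>m_k</math>.

<syntaxhighlight lang="matlab">
n = 5; lambda = 12; mu = 6;
w = log(mu + 1/2) - log(1:mu)'; w = w / sum(w);   % weights summing to one
m = randn(n,1); sigma = 1; trials = 1e4;
msum = zeros(n,1);
for t = 1:trials
  X = m*ones(1,lambda) + sigma*randn(n,lambda);   % candidates, C = I for simplicity
  [~, idx] = sort(rand(1,lambda));                % neutral selection: ranking independent of X
  msum = msum + X(:, idx(1:mu)) * w;              % m_{k+1} for this trial
end
disp([msum/trials m])                             % empirical E(m_{k+1}) versus m_k
</syntaxhighlight>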
=== Invariance ===
[[Invariant (mathematics)|Invariance properties]] imply uniform performance on a class of objective functions. They have been argued to be an advantage, because they allow one to generalize and predict the behavior of the algorithm and therefore strengthen the meaning of empirical results obtained on single functions. The following invariance properties have been established for CMA-ES.

* Invariance under order-preserving transformations of the objective function value <math>f</math>, in that for any <math>h:\mathbb{R}^n\to\mathbb{R}</math> the behavior is identical on <math>f:x\mapsto g(h(x))</math> for all strictly increasing <math>g:\mathbb{R}\to\mathbb{R}</math>. This invariance is easy to verify, because only the <math>f</math>-ranking is used in the algorithm, which is invariant under the choice of <math>g</math> (see also the sketch below).

* [[Scale-invariance]], in that for any <math>h:\mathbb{R}^n\to \mathbb{R}</math> the behavior is independent of <math>\alpha>0</math> for the objective function <math>f:x\mapsto h(\alpha x)</math>, given <math>\sigma_0\propto1/\alpha</math> and <math>m_0\propto1/\alpha</math>.

* Invariance under rotation of the search space, in that for any <math>h:\mathbb{R}^n\to \mathbb{R}</math> and any <math>z\in\mathbb{R}^n</math> the behavior on <math>f:x\mapsto h(R x)</math> is independent of the [[orthogonal matrix]] <math>R</math>, given <math>m_0=R^{-1} z</math>. More generally, the algorithm is also invariant under general linear transformations <math>R</math> when additionally the initial covariance matrix is chosen as <math>R^{-1}{R^{-1}}^T</math>.

Any serious parameter optimization method should be translation invariant, but most methods do not exhibit all the above described invariance properties. A prominent example with the same invariance properties is the [[Nelder–Mead method]], where the initial simplex must be chosen respectively.
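The first property can be checked directly, since only the ranking of the objective values enters the algorithm. The following sketch (with an arbitrary strictly increasing transformation, chosen only for illustration) verifies that the ranking, and hence every quantity computed by CMA-ES, is unchanged.

<syntaxhighlight lang="matlab">
fvals = randn(1, 20);                 % objective values of one population
g = @(y) exp(3*y) + 5;                % some strictly increasing transformation
[~, idx1] = sort(fvals);
[~, idx2] = sort(g(fvals));
disp(isequal(idx1, idx2))             % 1: identical ranking, hence identical CMA-ES behavior
</syntaxhighlight>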
=== Convergence ===

Conceptual considerations like the scale-invariance property of the algorithm, the analysis of simpler [[evolution strategies]], and overwhelming empirical evidence suggest that the algorithm converges quickly to the global optimum, denoted as <math>x^*</math>, on a large class of functions. On some functions, convergence occurs independently of the initial conditions with probability one. On some functions the probability is smaller than one and typically depends on the initial <math>m_0</math> and <math>\sigma_0</math>. Empirically, the fastest possible convergence rate in <math>k</math> for rank-based direct search methods can often be observed (depending on the context denoted as ''[[Rate of convergence#linear convergence|linear]]'', ''log-linear'' or ''exponential'' convergence). Informally, we can write

:: <math>\|m_k - x^*\| \;\approx\; \|m_0 - x^*\| \times e^{-ck}</math>

for some <math>c>0</math>, and more rigorously

:: <math>\frac{1}{k}\sum_{i=1}^k\log\frac{\|m_i - x^*\|}{\|m_{i-1} - x^*\|}
\;=\; \frac{1}{k}\log\frac{\|m_k - x^*\|}{\|m_{0} - x^*\|}
\;\to\; -c < 0 \quad\text{for}\; k\to\infty\;,
</math>

or similarly,

:: <math>E\log\frac{\|m_k - x^*\|}{\|m_{k-1} - x^*\|}
\;\to\; -c < 0 \quad\text{for}\; k\to\infty\;.
</math>

This means that on average the distance to the optimum decreases in each iteration by a "constant" factor, namely by <math>\exp(-c)</math>. The convergence rate <math>c</math> is roughly <math>0.1\lambda/n</math>, given <math>\lambda</math> is not much larger than the dimension <math>n</math>. Even with optimal <math>\sigma</math> and <math>C</math>, the convergence rate <math>c</math> cannot largely exceed <math>0.25\lambda/n</math>, given the above recombination weights <math>w_i</math> are all non-negative. The actual linear dependencies in <math>\lambda</math> and <math>n</math> are remarkable, and they are in both cases the best one can hope for in this kind of algorithm. Yet, a rigorous proof of convergence is missing.
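The (log-)linear convergence rate can be estimated from a logged run. The sketch below assumes a vector <code>dists</code> with <code>dists(k) = norm(m_k - xstar)</code> has been recorded during a run; the example code above does not log this quantity, so the vector is an assumption made only for this illustration.

<syntaxhighlight lang="matlab">
% dists(k) = norm(m_k - xstar), assumed to have been recorded during a run
k = numel(dists) - 1;
c_hat = -(log(dists(end)) - log(dists(1))) / k;  % empirical rate c; roughly 0.1*lambda/n is typical
semilogy(dists); xlabel('iteration k'); ylabel('distance to optimum');
% a straight line in this plot indicates linear (log-linear) convergence
</syntaxhighlight>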
===Interpretation as Coordinate System Transformation===
Using a non-identity covariance matrix for the [[multivariate normal distribution]] in [[evolution strategies]] is equivalent to a coordinate system transformation of the solution vectors,<ref name=hansen2008 /> mainly because the sampling equation

:<math>
\begin{align}
x_i &\sim\ m_k + \sigma_k\times\mathcal{N}(0,C_k)
\\
&\sim\ m_k + \sigma_k \times C_k^{1/2}\mathcal{N}(0,I)
\end{align}
</math>

can be equivalently expressed in an "encoded space" as

:<math>
\underbrace{C_k^{-1/2}x_i}_{\text{represented in the encoded space}}
\sim\ C_k^{-1/2} m_k + \sigma_k \times\mathcal{N}(0,I)
</math>

The covariance matrix defines a [[bijective]] transformation (encoding) for all solution vectors into a space, where the sampling takes place with identity covariance matrix. Because the update equations in the CMA-ES are invariant under linear coordinate system transformations, the CMA-ES can be re-written as an adaptive encoding procedure applied to a simple [[evolution strategy]] with identity covariance matrix.<ref name=hansen2008>{{cite conference
 | first = N.
 | last = Hansen
 | title = Adaptive Encoding: How to Render Search Coordinate System Invariant
 | booktitle = Parallel Problem Solving from Nature, PPSN X
 | pages = 205–214
 | publisher = Springer
 | year = 2008
 | url = http://hal.archives-ouvertes.fr/inria-00287351/en/
}}</ref>
This adaptive encoding procedure is not confined to algorithms that sample from a multivariate normal distribution (like evolution strategies), but can in principle be applied to any iterative search method.
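The equivalence can be illustrated numerically: when the same standard normal vector is used, the sample represented in the encoded space, decoded through <math>C_k^{1/2}</math>, coincides with the directly drawn sample. All concrete values in the sketch are arbitrary.

<syntaxhighlight lang="matlab">
n = 4; A = randn(n); C = A*A' + eye(n);        % some positive definite covariance matrix
[B, D2] = eig(C); sqrtC = B*sqrt(D2)*B';       % unique symmetric C^(1/2)
m = randn(n,1); sigma = 0.3; z = randn(n,1);   % one standard normal sample

x_direct  = m + sigma * sqrtC * z;             % x ~ m + sigma * C^(1/2) * N(0,I)
x_encoded = (sqrtC\m) + sigma * z;             % the same sample, represented in the encoded space
x_decoded = sqrtC * x_encoded;                 % decoding recovers the original sample
disp(norm(x_direct - x_decoded))               % ~0 up to rounding errors
</syntaxhighlight>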
== Performance in Practice ==

In contrast to most other [[evolutionary algorithms]], the CMA-ES is, from the user's perspective, quasi-parameter-free. However, the number of candidate samples λ (population size) can be adjusted by the user in order to change the characteristic search behavior (see above). CMA-ES has been empirically successful in hundreds of applications and is considered to be useful in particular on non-convex, non-separable, ill-conditioned, multi-modal or noisy objective functions. The search space dimension typically ranges between two and a few hundred. Assuming a black-box optimization scenario, where gradients are not available (or not useful) and function evaluations are the only considered cost of search, the CMA-ES method is likely to be outperformed by other methods under the following conditions:

* on low-dimensional functions, say <math>n < 5</math>, for example by the [[Nelder-Mead method|downhill simplex method]] or surrogate-based methods (like [[kriging]] with expected improvement);

* on separable functions without or with only negligible dependencies between the design variables, in particular in the case of multi-modality or large dimension, for example by [[differential evolution]];

* on (nearly) [[Convex function|convex]]-quadratic functions with low or moderate [[condition number]] of the [[Hessian matrix]], where [[BFGS method|BFGS]] or [[NEWUOA]] are typically ten times faster;

* on functions that can already be solved with a comparatively small number of function evaluations, say no more than <math>10 n</math>, where CMA-ES is often slower than, for example, [[NEWUOA]] or [[MCS algorithm|Multilevel Coordinate Search]] (MCS).

On separable functions, the performance disadvantage is likely to be most significant in that CMA-ES might not be able to find comparable solutions at all. On the other hand, on non-separable functions that are ill-conditioned or rugged or can only be solved with more than <math>100 n</math> function evaluations, the CMA-ES most often shows superior performance.
== Variations and Extensions ==
The (1+1)-CMA-ES<ref>{{cite conference
 | first = C.
 | last = Igel
 | coauthors = T. Suttorp and N. Hansen
 | title = A Computational Efficient Covariance Matrix Update and a (1+1)-CMA for Evolution Strategies
 | booktitle = Proceedings of the Genetic and Evolutionary Computation Conference (GECCO)
 | pages = 453–460
 | publisher = ACM Press
 | year = 2006
 | url = http://www.cs.york.ac.uk/rts/docs/GECCO_2006/docs/p453.pdf
}}</ref> generates only one candidate solution per iteration step, which becomes the new distribution mean if it is better than the current mean. For <math>c_c=1</math> the (1+1)-CMA-ES is a close variant of [[Gaussian adaptation]]. Some [[Natural Evolution Strategies]] are close variants of the CMA-ES with specific parameter settings. Natural Evolution Strategies do not utilize evolution paths (that means in the CMA-ES setting <math>c_c=c_\sigma=1</math>) and they formalize the update of variances and covariances on a [[Cholesky decomposition|Cholesky factor]] instead of a covariance matrix. The CMA-ES has also been extended to [[multiobjective optimization]] as MO-CMA-ES.<ref>{{cite journal
 | doi = 10.1162/evco.2007.15.1.1
 | first = C.
 | last = Igel
 | coauthors = N. Hansen and S. Roth
 | title = Covariance Matrix Adaptation for Multi-objective Optimization
 | journal = Evolutionary Computation
 | volume = 15
 | issue = 1
 | pages = 1–28
 | publisher = MIT press
 | year = 2007
 | url = http://www.mitpressjournals.org/doi/pdfplus/10.1162/evco.2007.15.1.1
 | pmid = 17388777
}}</ref> Another remarkable extension has been the addition of a negative update of the covariance matrix with the so-called active CMA.<ref>{{cite conference
 | first = G.A.
 | last = Jastrebski
 | coauthors = D.V. Arnold
 | title = Improving Evolution Strategies through Active Covariance Matrix Adaptation
 | booktitle = 2006 IEEE World Congress on Computational Intelligence, Proceedings
 | pages = 9719–9726
 | publisher = IEEE
 | year = 2006
 | doi = 10.1109/CEC.2006.1688662
}}</ref>
With the advent of niching methods in evolution strategies, the question of an optimal niche radius arises. An "adaptive individual niche radius" has been proposed for the CMA-ES by Shir and Bäck.<ref>{{cite book
 | first = Ofer M.
 | last = Shir
 | coauthors = Bäck, Thomas
 | title = Niche Radius Adaptation in the CMA-ES Niching Algorithm
 | booktitle = Parallel Problem Solving from Nature-PPSN IX
 | pages = 142–151
 | year = 2006
 | publisher = Springer
}}</ref>
== See also ==
* [[Global optimization]]
* [[Stochastic optimization]]

== References ==
{{Reflist}}

==Bibliography==
*Hansen N, Ostermeier A (2001). Completely derandomized self-adaptation in evolution strategies. [http://www.mitpressjournals.org/toc/evco/9/2 ''Evolutionary Computation'', '''9'''(2)] pp. 159–195. [http://www.lri.fr/~hansen/cmaartic.pdf]
*Hansen N, Müller SD, Koumoutsakos P (2003). Reducing the time complexity of the derandomized evolution strategy with covariance matrix adaptation (CMA-ES). [http://www.mitpressjournals.org/toc/evco/11/1 ''Evolutionary Computation'', '''11'''(1)] pp. 1–18. [http://mitpress.mit.edu/journals/pdf/evco_11_1_1_0.pdf]
*Hansen N, Kern S (2004). Evaluating the CMA evolution strategy on multimodal test functions. In Xin Yao et al., editors, ''Parallel Problem Solving from Nature - PPSN VIII'', pp. 282–291, Springer. [http://www.lri.fr/~hansen/ppsn2004hansenkern.pdf]
*Igel C, Hansen N, Roth S (2007). Covariance Matrix Adaptation for Multi-objective Optimization. [http://www.mitpressjournals.org/toc/evco/15/1 ''Evolutionary Computation'', '''15'''(1)] pp. 1–28. [http://www.mitpressjournals.org/doi/pdfplus/10.1162/evco.2007.15.1.1]

==External links==
* [http://www.lri.fr/~hansen/cmaesintro.html A short introduction to CMA-ES by N. Hansen]
* [http://www.lri.fr/~hansen/cmatutorial.pdf The CMA Evolution Strategy: A Tutorial]
* [http://www.lri.fr/~hansen/cmaes_inmatlab.html CMA-ES source code page]

{{Major subfields of optimization}}

{{DEFAULTSORT:Cma-Es}}
[[Category:Evolutionary algorithms]]
[[Category:Stochastic optimization]]
[[Category:Optimization algorithms and methods]]

[[fr:Stratégie d'évolution#CMA-ES]]