Duality (optimization)

{{Cleanup-reorganize|date=February 2012}}
{{Merge from|Weak duality|Strong duality|date=March 2012}}
In [[mathematical optimization]] theory, '''duality''' means that [[optimization problem]]s may be viewed from either of two perspectives, the primal problem or the dual problem (the '''duality principle'''). The solution to the dual problem provides a lower bound on the optimal value of the primal (minimization) problem.<ref name="Boyd">{{cite book|title=Convex Optimization|first1=Stephen P.|last1=Boyd|first2=Lieven|last2=Vandenberghe|year=2004|publisher=Cambridge University Press|isbn=978-0-521-83378-3|url=http://www.stanford.edu/~boyd/cvxbook/bv_cvxbook.pdf|format=pdf|accessdate=October 15, 2011}}</ref> However, in general the optimal values of the primal and dual problems need not be equal. Their difference is called the [[duality gap]]. For [[convex optimization]] problems, the duality gap is zero under a [[constraint qualification]] condition. Thus, a solution to the dual problem provides a bound on the value of the solution to the primal problem; when the problem is convex and satisfies a constraint qualification, the value of an optimal solution of the primal problem is given by the dual problem.
 
==Dual problem==
Usually the term ''dual problem'' refers to the ''Lagrangian dual problem'', but other dual problems are used, for example, the [[Wolfe dual problem]] and the [[Fenchel's duality theorem|Fenchel dual problem]]. The Lagrangian dual problem is obtained by forming the [[Lagrange multiplier|Lagrangian]], using nonnegative [[Lagrange multiplier]]s to add the constraints to the objective function, and then solving for the primal variable values that minimize the Lagrangian.  This solution gives the primal variables as functions of the Lagrange multipliers, which are called dual variables, so that the new problem is to maximize the objective function with respect to the dual variables under the derived constraints on the dual variables (including at least the nonnegativity constraints).
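For example, for the one-dimensional problem of minimizing <math>x^2</math> subject to <math>x \geq 1</math>, the Lagrangian is <math>\Lambda(x,\lambda) = x^2 + \lambda(1 - x)</math>; minimizing over <math>x</math> gives <math>x = \lambda/2</math>, so the dual problem is to maximize <math>g(\lambda) = \lambda - \lambda^2/4</math> over <math>\lambda \geq 0</math>. Its optimal value, <math>g(2) = 1</math>, coincides with the primal optimal value, which is attained at <math>x = 1</math>.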
 
In general given two [[dual pair]]s of [[separated space|separated]] [[locally convex space]]s <math>\left(X,X^*\right)</math> and <math>\left(Y,Y^*\right)</math> and the function <math>f: X \to \mathbb{R} \cup \{+\infty\}</math>, we can define the primal problem as finding <math>\hat{x}</math> such that <math>f(\hat{x}) = \inf_{x \in X} f(x). \,</math>
In other words, <math>f(\hat{x})</math> is the [[infimum]] (greatest lower bound) of the function <math>f</math>.
 
If there are constraint conditions, these can be built into the function <math>f</math> by letting <math>\tilde{f} = f + I_{\mathrm{constraints}}</math> where <math>I</math> is the [[Characteristic function (convex analysis)|indicator function]].  Then let <math>F: X \times Y \to \mathbb{R} \cup \{+\infty\}</math> be a [[perturbation function]] such that <math>F(x,0) = f(x)</math>.<ref name="BWG">{{cite book |title=Duality in Vector Optimization |author1=Boţ, Radu Ioan |author2=Wanka, Gert|author3=Grad, Sorin-Mihai |year=2009 |publisher=Springer |isbn=978-3-642-02885-4 }}</ref>
 
The [[duality gap]] is the difference of the right and left hand sides of the inequality
:<math>\sup_{y^* \in Y^*} -F^*(0,y^*) \le \inf_{x \in X} F(x,0), \,</math>
where <math>F^*</math> is the [[convex conjugate]] in both variables and <math>\sup</math> denotes the [[supremum]] (least upper bound).<ref name="BWG" /><ref>{{cite book |title=Overcoming the failure of the classical generalized interior-point regularity conditions in convex optimization. Applications of the duality theory to enlargements of maximal monotone operators |author=Csetnek, Ernö Robert |year=2010 |publisher=Logos Verlag Berlin GmbH |isbn=978-3-8325-2503-3 }}</ref><ref name="Zalinescu">{{cite book |last=Zălinescu |first=Constantin |title=Convex analysis in general vector spaces |publisher=World Scientific Publishing&nbsp;Co.,&nbsp;Inc. |pages=106–113 |isbn=981-238-067-1 |mr=1921556 |issue=J |year=2002 |location=River Edge, NJ }}</ref>
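For example, for the constrained problem of minimizing <math>f_0(x)</math> subject to <math>f_i(x) \leq 0,\ i = 1,\dots,m</math>, a standard choice of perturbation function is <math>F(x,y) = f_0(x)</math> if <math>f_i(x) \leq y_i</math> for all <math>i</math>, and <math>F(x,y) = +\infty</math> otherwise. A direct computation gives <math>-F^*(0,y^*) = \inf_{x} \left( f_0(x) - \textstyle\sum_{i=1}^m y_i^* f_i(x) \right)</math> when <math>y^* \leq 0</math>, and <math>-\infty</math> otherwise, so the abstract dual problem above reduces to the Lagrangian dual problem with multipliers <math>\lambda = -y^*</math>.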
 
=== Duality gap ===
{{main|duality gap}}
 
The duality gap is the difference between the values of any [[dual problem|primal solutions and any dual solutions]].  If <math>d^*</math> is the optimal dual value and <math>p^*</math> is the optimal primal value, then the duality gap is equal to <math>p^* - d^*</math>.  This value is always greater than or equal to 0.  The duality gap is zero if and only if [[strong duality]] holds; otherwise the gap is strictly positive and only [[weak duality]] holds.<ref>{{cite book|title=Techniques of Variational Analysis|last1=Borwein|first1=Jonathan|last2=Zhu|first2=Qiji|year=2005|publisher=Springer|isbn=978-1-4419-2026-3}}</ref>
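A standard example of a nonzero duality gap is the problem of minimizing <math>e^{-x}</math> subject to <math>x^2/y \leq 0</math> over the domain <math>\{(x,y) : y > 0\}</math>.<ref name="Boyd" /> Feasibility forces <math>x = 0</math>, so the primal optimal value is <math>p^* = 1</math>; on the other hand, <math>g(\lambda) = \inf_{y > 0,\, x} \left( e^{-x} + \lambda x^2/y \right) = 0</math> for every <math>\lambda \geq 0</math> (take <math>x \to \infty</math> with <math>y = x^4</math>), so <math>d^* = 0</math> and the duality gap equals <math>1</math>.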
 
In computational optimization, another "duality gap" is often reported: the difference in value between any dual solution and the value of a feasible but suboptimal iterate for the primal problem. Under regularity conditions, the value of the dual problem is equal to the value of the ''[[convex relaxation]]'' of the primal problem, where the convex relaxation is the problem that arises when a non-convex feasible set is replaced by its closed [[convex hull]] and a non-convex objective function is replaced by its convex [[lower semi-continuous|closure]], that is, the function whose [[epigraph (mathematics)|epigraph]] is the closed convex hull of the epigraph of the original primal objective function.<ref>{{cite book |author=[[Ravindra K. Ahuja|Ahuja, Ravindra K.]]; [[Thomas L. Magnanti|Magnanti, Thomas L.]]; and [[James B. Orlin|Orlin, James B.]] |title=Network Flows: Theory, Algorithms and Applications |publisher=Prentice Hall |year=1993 |isbn=0-13-617549-X }}</ref>
<ref>{{cite book |last1=Bertsekas |first1=Dimitri P. |year=1999 |title=Nonlinear Programming |edition=2nd |publisher=Athena Scientific |isbn=1-886529-00-0 }}</ref>
<ref>{{cite book |last1=Bonnans |first1=J.&nbsp;Frédéric |last2=Gilbert |first2=J.&nbsp;Charles |last3=Lemaréchal |first3=Claude |last4=Sagastizábal |first4=Claudia&nbsp;A. |title=Numerical optimization: Theoretical and practical aspects |url=http://www.springer.com/mathematics/applications/book/978-3-540-35445-1 |edition=Second revised ed. of translation of 1997 <!-- ''Optimisation numérique : Aspects théoriques et pratiques'' -->French |series=Universitext |publisher=Springer-Verlag |location=Berlin |year=2006 |pages=xiv+490 |isbn=3-540-35445-X |doi=10.1007/978-3-540-35447-5 |mr=2265882 |authorlink3=Claude Lemaréchal |ref=harv }}</ref>
<ref>{{cite book |last1=Hiriart-Urruty |first1=Jean-Baptiste |last2=Lemaréchal |first2=Claude |title=Convex analysis and minimization algorithms, Volume&nbsp;I: Fundamentals |series=Grundlehren der Mathematischen Wissenschaften [Fundamental Principles of Mathematical Sciences] |volume=305 |publisher=Springer-Verlag |location=Berlin |year=1993 |pages=xviii+417 |isbn=3-540-56850-6 |mr=1261420 }}</ref>
<ref>{{cite book |last1=Hiriart-Urruty |first1=Jean-Baptiste |last2=Lemaréchal |first2=Claude |chapter=14 Duality for Practitioners |title=Convex analysis and minimization algorithms, Volume&nbsp;II: Advanced theory and bundle methods |series=Grundlehren der Mathematischen Wissenschaften [Fundamental Principles of Mathematical Sciences] |volume=306 |publisher=Springer-Verlag |location=Berlin |year=1993 |pages=xviii+346 |isbn=3-540-56852-2 |mr=1295240 |authorlink2=Claude Lemaréchal }}</ref>
<ref>{{cite book |last=Lasdon |first=Leon&nbsp;S. |title=Optimization theory for large systems |publisher=Dover Publications, Inc. |location=Mineola, New York |year=2002 |origyear=Reprint of the 1970 Macmillan |pages=xiii+523 |mr=1888251 |ref=harv |isbn=978-0-486-41999-2 }}</ref>
<ref>{{cite book |last=Lemaréchal |first=Claude |chapter=Lagrangian relaxation |pages=112–156 |title=Computational combinatorial optimization: Papers from the Spring School held in Schloß Dagstuhl, May&nbsp;15–19,&nbsp;2000 |editor=Jünger, Michael; and Naddef, Denis |series=Lecture Notes in Computer Science (LNCS) |volume=2241 |publisher=Springer-Verlag |location=Berlin |year=2001 |isbn=3-540-42877-1 |mr=1900016 |doi=10.1007/3-540-45586-8_4 |authorlink=Claude Lemaréchal |ref=harv }}</ref>
<ref>{{cite book |last=Minoux |first=Michel |authorlink=Michel Minoux |title=Mathematical programming: Theory and algorithms |note=With a foreword by Egon Balas |edition=Translated by Steven Vajda from the (1983 Paris: Dunod) French |publisher=A Wiley-Interscience Publication. John Wiley & Sons, Ltd. |location=Chichester |year=1986 |pages=xxviii+489 |isbn=0-471-90170-9 |mr=868279 |ref=harv |id=(2008 Second ed., in French: ''Programmation mathématique : Théorie et algorithmes'', Éditions Tec & Doc, Paris, 2008. xxx+711 pp.  )}}</ref><ref>{{cite book|last=Shapiro|first=Jeremy F.|title=Mathematical programming: Structures and algorithms|publisher=Wiley-Interscience [John Wiley & Sons]|location=New York|year=1979|pages=xvi+388|isbn=0-471-77886-9|mr=544669|ref=harv}}</ref>
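
In practice, this computational duality gap serves as a stopping test: iteration halts once the gap between the current primal value and the current dual bound falls below a tolerance. The following is a minimal sketch of the idea; <code>primal_value</code>, <code>dual_value</code>, and <code>update</code> are hypothetical problem-specific callables, not a library API.

<syntaxhighlight lang="python">
def solve_with_gap_test(x, lam, primal_value, dual_value, update,
                        tol=1e-6, max_iter=1000):
    """Iterate a primal-dual method until the duality gap is below tol.

    primal_value(x)  -- objective value at a feasible primal iterate x
    dual_value(lam)  -- dual function value at the dual iterate lam
    update(x, lam)   -- one step of any primal-dual method (hypothetical)
    """
    for _ in range(max_iter):
        gap = primal_value(x) - dual_value(lam)  # nonnegative by weak duality
        if gap <= tol:
            break
        x, lam = update(x, lam)
    return x, lam
</syntaxhighlight>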
 
== The linear case ==
[[Linear programming]] problems are [[optimization (mathematics)|optimization]] problems in which the [[objective function]] and the [[Constraint (mathematics)|constraints]] are all [[linear]]. In the primal problem, the objective function is a linear combination of ''n'' variables. There are ''m'' constraints, each of which places an upper bound on a linear combination of the ''n'' variables. The goal is to maximize the value of the objective function subject to the constraints. A ''solution'' is a [[List (computing)|vector]] (a list) of ''n'' values that achieves the maximum value for the objective function.
 
In the dual problem, the objective function is a linear combination of the ''m'' values that are the limits in the ''m'' constraints from the primal problem. There are ''n'' dual constraints, each of which places a lower bound on a linear combination of ''m'' dual variables.
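
As a concrete sketch (assuming the [[SciPy]] library; the particular numbers are an arbitrary textbook-style example), the following solves a small primal linear program and its dual and verifies that the two optimal values coincide:

<syntaxhighlight lang="python">
import numpy as np
from scipy.optimize import linprog

# Primal: maximize c^T x  subject to  A x <= b, x >= 0.
# linprog minimizes, so the objective is negated.
c = np.array([3.0, 5.0])
A = np.array([[1.0, 0.0],
              [0.0, 2.0],
              [3.0, 2.0]])
b = np.array([4.0, 12.0, 18.0])

primal = linprog(-c, A_ub=A, b_ub=b, bounds=[(0, None)] * 2)

# Dual: minimize b^T y  subject to  A^T y >= c, y >= 0,
# rewritten as -A^T y <= -c for linprog's "<=" convention.
dual = linprog(b, A_ub=-A.T, b_ub=-c, bounds=[(0, None)] * 3)

print("primal optimal value:", -primal.fun)  # 36.0
print("dual optimal value:  ", dual.fun)     # 36.0, as strong duality predicts
</syntaxhighlight>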
 
=== Relationship between the primal problem and the dual problem ===
In the linear case, from each sub-optimal point of the primal problem that satisfies all the constraints, there is a direction or [[linear subspace|subspace]] of directions in which to move that increases the objective function. Moving in any such direction is said to remove slack between the candidate solution and one or more constraints. An ''infeasible'' value of the candidate solution is one that violates one or more of the constraints.
 
In the dual problem, the dual vector multiplies the constants that determine the positions of the constraints in the primal. Varying the dual vector in the dual problem is equivalent to revising the upper bounds in the primal problem. The lowest upper bound is sought. That is, the dual vector is minimized in order to remove slack between the candidate positions of the constraints and the actual optimum. An infeasible value of the dual vector is one that is too low: it places the candidate positions of one or more of the constraints where they exclude the actual optimum.
 
This intuition is made formal by the equations in [[Linear programming#Duality|Linear programming: Duality]].
 
=== Economic interpretation ===
If we interpret our primal LP problem as a classical "resource allocation" problem, its dual can be interpreted as a "resource valuation" problem: the optimal dual variables are the ''shadow prices'' of the resources, i.e. the rates at which the optimal objective value improves as the respective resource limits are relaxed.
 
== The non-linear case ==
In [[non-linear programming]], the constraints are not necessarily linear. Nonetheless, many of the same principles apply.
 
To ensure that the global optimum of a non-linear problem can be identified easily, the problem formulation often requires that the functions be convex and have compact lower level sets.
 
This is the significance of the [[Karush–Kuhn–Tucker conditions]]. They provide necessary conditions for identifying local optima of non-linear programming problems. There are additional conditions (constraint qualifications) that are necessary for it to be possible to define the direction to an ''optimal'' solution. An optimal solution here is one that is a local optimum, but possibly not a global optimum.
 
===The strong Lagrangian principle: Lagrange duality {{anchor|Lagrange duality}}===
<!-- linked from redirect [[Lagrange duality]] -->
Given a [[nonlinear programming]] problem in standard form
 
:<math>\begin{align}
\text{minimize }    &f_0(x) \\
\text{subject to } &f_i(x) \leq 0,\ i \in \left \{1,\dots,m \right \} \\
                    &h_i(x) = 0,\ i \in \left \{1,\dots,p \right \}
\end{align}</math>
 
with the domain <math>\mathcal{D} \subset \mathbb{R}^n</math> having non-empty interior, the ''Lagrangian function'' <math>\Lambda: \mathbb{R}^n \times \mathbb{R}^m \times \mathbb{R}^p \to \mathbb{R} </math> is defined as
 
: <math>\Lambda(x,\lambda,\nu) = f_0(x) + \sum_{i=1}^m \lambda_i f_i(x) + \sum_{i=1}^p \nu_i h_i(x).</math>
 
The vectors <math>\lambda</math> and <math>\nu</math> are called the ''dual variables'' or ''Lagrange multiplier vectors'' associated with the problem.  The ''Lagrange dual function'' <math>g:\mathbb{R}^m \times \mathbb{R}^p \to \mathbb{R} </math> is defined as
 
: <math>g(\lambda,\nu) = \inf_{x\in\mathcal{D}} \Lambda(x,\lambda,\nu) = \inf_{x\in\mathcal{D}} \left ( f_0(x) + \sum_{i=1}^m \lambda_i f_i(x) + \sum_{i=1}^p \nu_i h_i(x) \right ).</math>
 
The dual function ''g'' is concave, even when the initial problem is not convex.  The dual function yields lower bounds on the optimal value <math>p^*</math> of the initial problem; for any <math>\lambda \geq 0 </math> and any <math>\nu</math> we have <math>g(\lambda,\nu) \leq p^* </math>.  
 
If a [[constraint qualification]] such as [[Slater's condition]] holds and the original problem is convex, then we have [[strong duality]], i.e. <math>d^* = \max_{\lambda \ge 0, \nu} g(\lambda,\nu) = \inf f_0 = p^*</math>.
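
For example, consider the equality-constrained least-norm problem of minimizing <math>x^\top x</math> subject to <math>Ax = b</math> (a convex problem for which, assuming feasibility, strong duality holds). The Lagrangian <math>\Lambda(x,\nu) = x^\top x + \nu^\top (Ax - b)</math> is minimized over <math>x</math> at <math>x = -A^\top \nu / 2</math>, giving the concave dual function
: <math>g(\nu) = -\tfrac{1}{4} \nu^\top A A^\top \nu - b^\top \nu,</math>
and maximizing <math>g</math> over <math>\nu</math> recovers the primal optimal value.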
 
=== Convex problems ===
For a convex minimization problem with inequality constraints,
: <math>\begin{align}
&\underset{x}{\operatorname{minimize}}& & f(x) \\
&\operatorname{subject\;to}
& &g_i(x) \leq 0, \quad i = 1,\dots,m
\end{align}</math>
the Lagrangian dual problem is
: <math>\begin{align}
&\underset{u}{\operatorname{maximize}}& & \inf_x \left(f(x) + \sum_{j=1}^m u_j g_j(x)\right) \\
&\operatorname{subject\;to}
& &u_i \geq 0, \quad i = 1,\dots,m
\end{align}</math>
where the objective function is the Lagrange dual function. Provided that the functions <math>f</math> and <math>g_1, \ldots, g_m</math> are continuously differentiable, the infimum occurs where the gradient is equal to zero. The problem 
: <math>\begin{align}
&\underset{x, u}{\operatorname{maximize}}& & f(x) + \sum_{j=1}^m u_j g_j(x) \\
&\operatorname{subject\;to}
& & \nabla f(x) + \sum_{j=1}^m u_j \nabla g_j(x) = 0 \\
&&&u_i \geq 0, \quad i = 1,\dots,m
\end{align}</math>
is called the Wolfe dual problem. This problem may be difficult to deal with computationally, because the objective function is not concave in the joint variables <math>(u,x)</math>. Also, the equality constraint <math>\nabla f(x) + \sum_{j=1}^m u_j \nabla g_j(x) = 0</math> is nonlinear in general, so the Wolfe dual problem is typically a nonconvex optimization problem. In any case, [[weak duality]] holds.<ref>{{cite journal |last1=Geoffrion |first1=Arthur M. |title=Duality in Nonlinear Programming: A Simplified Applications-Oriented Development | jstor=2028848 |journal=SIAM Review |volume=13 |year=1971 |pages=1–37 |issue=1 |doi=10.1137/1013001}}</ref>
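
For a convex quadratic program, however, the Wolfe dual takes a simple explicit form: for the problem of minimizing <math>\tfrac{1}{2} x^\top Q x + c^\top x</math> (with <math>Q</math> positive definite) subject to <math>Ax \leq b</math>, the equality constraint <math>Qx + c + A^\top u = 0</math> can be used to eliminate the linear terms from the objective, and the Wolfe dual reduces to maximizing <math>-\tfrac{1}{2} x^\top Q x - b^\top u</math> subject to <math>Qx + c + A^\top u = 0</math> and <math>u \geq 0</math>, the classical dual of the quadratic program.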
 
== History ==
 
According to [[George Dantzig]], the duality theorem for linear optimization was conjectured by [[John von Neumann]] immediately after Dantzig presented the linear programming problem. Von Neumann noted that he was using information from his [[game theory]], and conjectured that the two-person zero-sum matrix game was equivalent to linear programming. Rigorous proofs were first published in 1948 by [[Albert W. Tucker]] and his group. (Dantzig's foreword to Nering and Tucker, 1993)
 
==See also==
* [[Duality (mathematics)|Duality]]
* [[Relaxation (approximation)]]
 
==Notes==
{{reflist}}
 
== References ==
 
===Books===
* {{cite book |author=[[Ravindra K. Ahuja|Ahuja, Ravindra K.]]; [[Thomas L. Magnanti|Magnanti, Thomas L.]]; and [[James B. Orlin|Orlin, James B.]] |title=Network Flows: Theory, Algorithms and Applications |publisher=Prentice Hall |year=1993 |isbn=0-13-617549-X }}
* {{cite book |last1=Bertsekas |first1=Dimitri P. |year=1999 |title=Nonlinear Programming |edition=2nd |publisher=Athena Scientific |isbn=1-886529-00-0 }}
* {{cite book |last1=Bonnans |first1=J.&nbsp;Frédéric |last2=Gilbert |first2=J.&nbsp;Charles |last3=Lemaréchal |first3=Claude |last4=Sagastizábal |first4=Claudia&nbsp;A. |title=Numerical optimization: Theoretical and practical aspects |url=http://www.springer.com/mathematics/applications/book/978-3-540-35445-1 |edition=Second revised ed. of translation of 1997 <!-- ''Optimisation numérique : Aspects théoriques et pratiques'' -->French |series=Universitext |publisher=Springer-Verlag |location=Berlin |year=2006 |pages=xiv+490 |isbn=3-540-35445-X |doi=10.1007/978-3-540-35447-5 |mr=2265882 |authorlink3=Claude Lemaréchal |ref=harv }}
*{{cite book |first1=William J. |last1=Cook|author1-link=William J. Cook |first2=William H. |last2=Cunningham |first3=William R. |last3=Pulleyblank |first4=Alexander |last4=Schrijver|author4-link=Alexander Schrijver |title=Combinatorial Optimization |publisher=John Wiley & Sons |edition=1st |date=November 12, 1997 |isbn =0-471-55894-X }}
* {{cite book |last1=Dantzig |first1=George B. |authorlink1=George Dantzig| year= 1963 |title=Linear Programming and Extensions |publisher=Princeton University Press |location=Princeton, NJ }}
* {{cite book |last1=Hiriart-Urruty |first1=Jean-Baptiste |last2=Lemaréchal |first2=Claude |title=Convex analysis and minimization algorithms, Volume&nbsp;I: Fundamentals |series=Grundlehren der Mathematischen Wissenschaften [Fundamental Principles of Mathematical Sciences] |volume=305 |publisher=Springer-Verlag |location=Berlin |year=1993 |pages=xviii+417 |isbn=3-540-56850-6 |mr=1261420 }}
*{{cite book |last1=Hiriart-Urruty |first1=Jean-Baptiste |last2=Lemaréchal |first2=Claude |chapter=14 Duality for Practitioners |title=Convex analysis and minimization algorithms, Volume&nbsp;II: Advanced theory and bundle methods |series=Grundlehren der Mathematischen Wissenschaften [Fundamental Principles of Mathematical Sciences] |volume=306 |publisher=Springer-Verlag |location=Berlin |year=1993 |pages=xviii+346 |isbn=3-540-56852-2 |mr=1295240 |authorlink2=Claude Lemaréchal }}
*{{cite book |last=Lasdon |first=Leon&nbsp;S. |title=Optimization theory for large systems |publisher=Dover Publications, Inc. |location=Mineola, New York |year=2002 |origyear=Reprint of the 1970 Macmillan |pages=xiii+523 |mr=1888251 |ref=harv |isbn=978-0-486-41999-2 }}
* {{cite book |last=Lawler |first=Eugene |authorlink=Eugene Lawler |title=Combinatorial Optimization: Networks and Matroids |chapter=4.5. Combinatorial Implications of Max-Flow Min-Cut Theorem, 4.6. Linear Programming Interpretation of Max-Flow Min-Cut Theorem |year=2001 |publisher=Dover |isbn=0-486-41453-1 |pages=117–120 }}
* {{cite book |last=Lemaréchal |first=Claude |chapter=Lagrangian relaxation |pages=112–156 |title=Computational combinatorial optimization: Papers from the Spring School held in Schloß Dagstuhl, May&nbsp;15–19,&nbsp;2000 |editor=Jünger, Michael; and Naddef, Denis |series=Lecture Notes in Computer Science (LNCS) |volume=2241 |publisher=Springer-Verlag |location=Berlin |year=2001 |isbn=3-540-42877-1 |mr=1900016 |doi=10.1007/3-540-45586-8_4 |authorlink=Claude Lemaréchal |ref=harv }}
* {{cite book |last=Minoux |first=Michel |authorlink=Michel Minoux |title=Mathematical programming: Theory and algorithms |note=With a foreword by Egon Balas |edition=Translated by Steven Vajda from the (1983 Paris: Dunod) French |publisher=A Wiley-Interscience Publication. John Wiley & Sons, Ltd. |location=Chichester |year=1986 |pages=xxviii+489 |isbn=0-471-90170-9 |mr=868279 |ref=harv |id=(2008 Second ed., in French: ''Programmation mathématique : Théorie et algorithmes'', Éditions Tec & Doc, Paris, 2008. xxx+711 pp.) }}
* {{cite book |last1=Nering |first1=Evar D. |last2= Tucker |first2=Albert W. |year=1993 |title=Linear Programming and Related Problems |publisher=Academic Press |location=Boston, MA |isbn=978-0-12-515440-6 }}
*{{cite book |first1=Christos H. |last1=Papadimitriou |first2=Kenneth |last2=Steiglitz |title=Combinatorial Optimization : Algorithms and Complexity |publisher=Dover |edition=Unabridged |date=July 1998 |year=1998 |isbn=0-486-40258-4 }}
* {{cite book|last=Ruszczyński|first=Andrzej|authorlink=Andrzej Piotr Ruszczyński|title=Nonlinear Optimization|publisher=[[Princeton University Press]]|location=Princeton, NJ|year=2006|pages=xii+454|isbn=978-0691119151 |mr=2199043}}
 
===Articles===
* {{cite journal |first=Hugh, III |last=Everett |authorlink=Hugh Everett |url=http://or.journal.informs.org/cgi/reprint/11/3/399 |title=Generalized Lagrange multiplier method for solving problems of optimum allocation of resources |doi=10.1287/opre.11.3.399 |journal=Operations Research |volume=11 |year=1963 |pages=399–417 |mr=152360 |jstor=168028 |ref=harv |issue=3 }}
* {{cite journal |last1=Kiwiel |first1=Krzysztof&nbsp;C. |last2=Larsson |first2=Torbjörn |last3=Lindberg |first3=P.&nbsp;O. |title=Lagrangian relaxation via ballstep subgradient methods |mr=2348241 |journal=Mathematics of Operations Research |volume=32 |year=2007 |url=http://mor.journal.informs.org/cgi/content/abstract/32/3/669 |issue=3 |pages=669–686 |month=August |doi=10.1287/moor.1070.0261 |ref=harv }}
 
[[Category:Mathematical optimization]]
[[Category:Linear programming]]
[[Category:Convex optimization]]
[[Category:Mathematical and quantitative methods (economics)]]
