Active set
:''"Active Set" redirects here. For the Wikipedia article on the band, see [[The Active Set]].'' | |||
In [[Optimization (mathematics)|optimization]], a problem is defined using an objective function to minimize or maximize, and a set of constraints | |||
:<math>g_1(x)\ge 0, \dots, g_k(x)\ge 0</math> | |||
that define the [[feasible region]], that is, the set of all ''x'' to search for the optimal solution. Given a point <math>x</math> in the feasible region, a constraint | |||
:<math>g_i(x) \ge 0</math> | |||
is called '''active''' at <math>x</math> if <math>g_i(x)=0</math> and '''inactive''' at <math>x</math> if <math>g_i(x)>0.</math> Equality constraints are always active. The '''active set''' at <math>x</math> is made up of those constraints <math>g_i(x)</math> that are active at the current point {{harv|Nocedal|Wright|2006|p=308}}. | |||
The active set is particularly important in optimization theory as it determines which constraints will influence the final result of optimization. For example, in solving the [[linear programming]] problem, the active set gives the hyperplanes that intersect at the solution point. In [[quadratic programming]], as the solution is not necessarily on one of the edges of the bounding polygon, an estimation of the active set gives us a subset of inequalities to watch while searching the solution, which reduces the complexity of the search. | |||
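Determining the active set at a given point is a direct computation: evaluate each constraint function and collect the indices whose value is (approximately) zero. The sketch below is a minimal illustration of this definition; the three constraint functions and the tolerance <code>eps</code> are invented for the example and do not come from any particular solver.
<syntaxhighlight lang="python">
import numpy as np

# Hypothetical constraint functions g_i(x) >= 0 for a two-variable problem.
constraints = [
    lambda x: x[0],                # g_1(x) = x_1            >= 0
    lambda x: x[1],                # g_2(x) = x_2            >= 0
    lambda x: 1.0 - x[0] - x[1],   # g_3(x) = 1 - x_1 - x_2  >= 0
]

def active_set(x, eps=1e-8):
    """Return the indices i with g_i(x) = 0, up to the tolerance eps."""
    return [i for i, g in enumerate(constraints) if abs(g(x)) <= eps]

print(active_set(np.array([0.0, 0.4])))  # [0]    : only g_1 is active
print(active_set(np.array([0.0, 1.0])))  # [0, 2] : g_1 and g_3 are active
</syntaxhighlight>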
==Active set methods==
In general, an active set algorithm has the following structure (a code sketch of this outline follows the steps below):
:Find a feasible starting point
:'''repeat until''' "optimal enough"
::''solve'' the equality problem defined by the active set (approximately)
::''compute'' the [[Lagrange multipliers]] of the active set
::''remove'' a subset of the constraints with negative Lagrange multipliers
::''search'' for infeasible constraints
:'''end repeat'''
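As an illustration only, the following sketch follows the outline above for a convex quadratic program, minimize <math>\tfrac{1}{2}x^T G x + c^T x</math> subject to <math>Ax \ge b</math>, assuming <math>G</math> is symmetric positive definite, the starting point is feasible, and the working-set constraint gradients remain linearly independent. The function name <code>active_set_qp</code> and its arguments are hypothetical names introduced here; the equality-constrained subproblem is solved through its KKT system, and the working-set updates correspond to the ''remove'' and ''search'' steps above.
<syntaxhighlight lang="python">
import numpy as np

def active_set_qp(G, c, A, b, x0, W0=(), tol=1e-9, max_iter=100):
    """Primal active-set sketch for  min 1/2 x'Gx + c'x  s.t.  A x >= b.

    Assumes G is symmetric positive definite, x0 is feasible, and the
    gradients of the working-set constraints stay linearly independent.
    """
    x = np.asarray(x0, dtype=float)
    n = x.size
    W = set(W0)                      # working set: constraints treated as equalities
    for _ in range(max_iter):
        g = G @ x + c                # gradient of the objective at x
        rows = sorted(W)
        Aw = A[np.asarray(rows, dtype=int)]
        m = Aw.shape[0]
        # Equality-constrained subproblem via its KKT system:
        #   G p - Aw' lam = -g,   Aw p = 0
        K = np.zeros((n + m, n + m))
        K[:n, :n] = G
        K[:n, n:] = -Aw.T
        K[n:, :n] = Aw
        sol = np.linalg.solve(K, np.concatenate([-g, np.zeros(m)]))
        p, lam = sol[:n], sol[n:]
        if np.linalg.norm(p) < tol:
            if m == 0 or np.all(lam >= -tol):
                return x, W          # all multipliers nonnegative: optimal
            # drop the constraint with the most negative multiplier
            W.remove(rows[int(np.argmin(lam))])
        else:
            # largest step that keeps every constraint outside W feasible
            alpha, blocking = 1.0, None
            for i in range(A.shape[0]):
                if i not in W and A[i] @ p < -tol:
                    step = (b[i] - A[i] @ x) / (A[i] @ p)
                    if step < alpha:
                        alpha, blocking = step, i
            x = x + alpha * p
            if blocking is not None:
                W.add(blocking)      # the blocking constraint becomes active
    return x, W                      # iteration limit reached
</syntaxhighlight>
Each pass either drops a working-set constraint whose Lagrange multiplier is negative, or moves along the subproblem direction until a previously inactive constraint blocks the step and is added to the working set; the working set thus serves as the current estimate of the active set at the solution.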
Methods that can be described as '''active set methods''' include{{Citation needed|date=November 2013}} (a short library-based usage sketch follows the list):
* [[Successive linear programming]] (SLP) <!-- acc. to: Leyffer... - alt: acc. to "MPS glossary", http://glossary.computing.society.informs.org/ver2/mpgwiki/index.php/Main_Page: Successive approximation -->
* [[Sequential quadratic programming]] (SQP) <!-- acc. to: Leyffer... - alt: acc. to "MPS glossary", http://glossary.computing.society.informs.org/ver2/mpgwiki/index.php/Main_Page: Successive approximation -->
* [[Sequential linear-quadratic programming]] (SLQP) <!-- acc. to: Leyffer... - alt: acc. to "MPS glossary", http://glossary.computing.society.informs.org/ver2/mpgwiki/index.php/Main_Page: Successive approximation -->
* [[Frank–Wolfe algorithm|Reduced gradient method]] (RG) <!-- acc. to: MPS glossary, http://glossary.computing.society.informs.org/ver2/mpgwiki/index.php/Main_Page - alt: acc. to "Optimization - Theory and Practice" (Forst, Hoffmann): Projection method -->
* [[Generalized Reduced Gradient|Generalized Reduced Gradient method]] (GRG) <!-- acc. to: MPS glossary, http://glossary.computing.society.informs.org/ver2/mpgwiki/index.php/Main_Page - alt: acc. to "Optimization - Theory and Practice" (Forst, Hoffmann): Projection method -->
<!-- ? Wilson's Lagrange-newton method -->
<!-- ? Method of feasible directions (MFD) -->
<!-- ? Gradient projection method - alt: acc. to "Optimization - Theory and Practice" (Forst, Hoffmann): Projection method -->
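In practice these methods are usually reached through a library rather than implemented by hand. As one possible illustration, the sketch below calls SciPy's <code>scipy.optimize.minimize</code> with the SLSQP option (a sequential least-squares variant of SQP) on a small toy problem whose data are invented for the example, and then reports which of the explicitly supplied inequality constraints are active at the returned point.
<syntaxhighlight lang="python">
import numpy as np
from scipy.optimize import minimize

# Toy problem (data invented for illustration):
#   min (x1 - 1)^2 + (x2 - 2.5)^2   s.t.   x1 >= 0,  x2 >= 0,  1 - x1 - x2 >= 0
objective = lambda x: (x[0] - 1.0) ** 2 + (x[1] - 2.5) ** 2
cons = [{'type': 'ineq', 'fun': lambda x: 1.0 - x[0] - x[1]}]

res = minimize(objective, x0=np.array([0.1, 0.1]), method='SLSQP',
               bounds=[(0.0, None), (0.0, None)], constraints=cons)

# Inequality constraints whose value is (approximately) zero at the returned
# point are the ones active at the computed solution.
active = [i for i, c in enumerate(cons) if abs(c['fun'](res.x)) < 1e-6]
print(res.x, active)
</syntaxhighlight>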
==References==
* {{cite book|last=Murty|first=K. G.|title=Linear complementarity, linear and nonlinear programming|series=Sigma Series in Applied Mathematics|volume=3|publisher=Heldermann Verlag|location=Berlin|year=1988|pages=xlviii+629 pp.|isbn=3-88538-403-5|url=http://ioe.engin.umich.edu/people/fac/books/murty/linear_complementarity_webbook/}} {{MR|949214}}
* {{Cite book | last1=Nocedal | first1=Jorge | last2=Wright | first2=Stephen J. | title=Numerical Optimization | publisher=[[Springer-Verlag]] | location=Berlin, New York | edition=2nd | isbn=978-0-387-30303-1 | year=2006 | ref=harv | postscript=<!--None-->}}.

[[Category:Mathematical optimization]]
[[Category:Optimization algorithms and methods]]