{{Multiple issues|
{{cleanup-reorganize|date=April 2012}}
{{refimprove|date=April 2012}}
{{Verify sources|date=April 2012}}
}}
'''Info-gap decision theory''' is a non-probabilistic [[decision theory]] that seeks to optimize [[robust statistics|robustness]] to failure – or opportuneness for windfall – under severe [[uncertainty]],<ref name="ybh igdt 2001">Yakov Ben-Haim, ''Information-Gap Theory: Decisions Under Severe Uncertainty,'' Academic Press, London, 2001.</ref><ref name="ybh igdt 2006">Yakov Ben-Haim, ''Info-Gap Theory: Decisions Under Severe Uncertainty,'' 2nd edition, Academic Press, London, 2006.</ref> in particular applying [[sensitivity analysis]] of the [[stability radius]] type<ref name="MS10">{{Cite journal | doi = 10.1108/15265941011043648 | last1 = Sniedovich | first1 = M. | year = 2010 | title = A bird's view of info-gap decision theory | url = | journal = Journal of Risk Finance | volume = 11 | issue = 3| pages = 268–283 }}</ref> to perturbations in the value of a given estimate of the parameter of interest. It has some connections with [[Wald's maximin model]]; some authors distinguish them, others consider them instances of the same principle.
 
It has been developed since the 1980s by [http://www.technion.ac.il/yakov/ Yakov Ben-Haim],<ref>[http://www.technion.ac.il/yakov/IGT/start-grow02.html How Did Info-Gap Theory Start? How Does it Grow?]</ref> has found many [[#Applications|applications]], and has been described as a theory for decision-making under "''severe'' uncertainty". It has been [[#Criticism|criticized]] as unsuited for this purpose, and [[#Alternatives|alternatives]] have been proposed, including such classical approaches as [[robust optimization]].
 
== Summary ==
Info-gap is a decision theory: it seeks to assist in decision-making under uncertainty. It does this by using three models, each of which builds on the last. One begins with a ''model'' of the situation, in which some ''parameter'' or parameters are unknown.
One then takes an ''estimate'' for the parameter, which is assumed to be ''substantially wrong,'' and one analyzes how ''sensitive'' the ''outcomes'' under the model are to the error in this estimate.
;Uncertainty model: Starting from the estimate, an uncertainty model measures how distant other values of the parameter are from the estimate: as uncertainty increases, the set of possible values increases – if one is ''this'' uncertain in the estimate, what other values of the parameter are possible?
;Robustness/opportuneness model: Given an uncertainty model and a minimum level of desired outcome, then for each decision, how uncertain can you be and still be assured of achieving this minimum level? (This is called the '''robustness''' of the decision.) Conversely, given a desired windfall outcome, how uncertain must you be for this desirable outcome to be possible? (This is called the '''opportuneness''' of the decision.)
;Decision-making model: To decide, one optimizes either the robustness or the opportuneness, on the basis of the robustness or opportuneness model. Given a desired minimum outcome, which decision is most robust (can stand the most uncertainty) and still give the desired outcome (the '''robust-satisficing action''')? Alternatively, given a desired windfall outcome, which decision requires the ''least'' uncertainty for the outcome to be achievable (the '''opportune-windfalling action''')?
 
=== Models ===
Info-gap theory models uncertainty <math>\alpha</math> (the '''horizon of uncertainty''') as nested subsets <math>\mathcal{U}(\alpha, \tilde{u})</math> around a [[point estimate]] <math>\tilde{u}</math> of a parameter: with no uncertainty, the estimate is correct, and as uncertainty increases, the subset grows, in general without bound. The subsets quantify uncertainty – the horizon of uncertainty measures the "[[#Sublevel sets|distance]]" between an estimate and a possibility – providing an intermediate measure between a single point (the [[point estimate]]) and the universe of all possibilities, and giving a measure for sensitivity analysis: how uncertain can an estimate be and a decision (based on this incorrect estimate) still yield an acceptable outcome – what is the [[margin of error]]?
 
Info-gap is a ''local'' decision theory, beginning with an estimate and considering ''deviations'' from it; this contrasts with ''global'' methods such as [[minimax]], which considers worst-case analysis over the entire space of outcomes, and probabilistic [[decision theory]], which considers all possible outcomes, and assigns some probability to them. In info-gap, the universe of possible outcomes under consideration is the union of all of the nested subsets: <math>\mathfrak{U} := \bigcup_\alpha \mathcal{U}(\alpha, \tilde{u}).</math>
 
Info-gap analysis gives answers to such questions as:
* under what level of uncertainty can specific requirements be reliably assured (robustness), and
* what level of uncertainty is necessary to achieve certain windfalls (opportuneness).
It can be used for [[satisficing]], as an alternative to [[optimizing]] in the presence of [[uncertainty]] or [[bounded rationality]]; see [[robust optimization]] for an alternative approach.
 
=== Comparison with classical decision theory ===
{{details|#Alternatives|Alternatives}}
In contrast to probabilistic [[decision theory]], info-gap analysis does not use probability distributions: it measures the deviation of errors (differences between the parameter and the estimate), but not the probability of outcomes – in particular, the estimate <math>\tilde{u}</math> is in no sense more or less likely than other points, as info-gap does not use probability. Info-gap, by not using probability distributions, is robust in that it is not sensitive to assumptions on probabilities of outcomes. However, the model of uncertainty does include a notion of "closer" and "more distant" outcomes, and thus includes some assumptions, and is not as robust as simply considering all possible outcomes, as in minimax. Further, it considers a fixed universe <math>\mathfrak{U},</math> so it is not robust to unexpected (not modeled) events.
 
The connection to [[minimax]] analysis has occasioned some controversy: (Ben-Haim 1999, pp.&nbsp;271–2) argues that info-gap's robustness analysis, while similar in some ways, is not minimax worst-case analysis, as it does not evaluate decisions over all possible outcomes, while (Sniedovich, 2007) argues that the robustness analysis can be seen as an example of maximin (not minimax), applied to maximizing the horizon of uncertainty. This is discussed in [[#Criticism|criticism]], below, and elaborated in the [[#Classical decision theory perspective|classical decision theory perspective]].
 
== Basic example: budget ==
As a simple example, consider a worker with uncertain income. They expect to make $100 per week; if they make under $60 they will be unable to afford lodging and will sleep in the street, while if they make over $150 they will be able to afford a night's entertainment.
 
Using the info-gap '''absolute error model:'''
:<math>
\mathcal{U}(\alpha, {\tilde{u}}) = \left \{ u : \
|u - {\tilde{u}} | \le \alpha \right \} , \qquad \alpha \ge 0
</math>
where <math>\tilde u = \$100,</math> one would conclude that the worker's robustness function <math>\hat\alpha</math> is $40, and their opportuneness function <math>\hat\beta</math> is $50: if they are certain that they will make $100, they will neither sleep in the street nor feast, and likewise if they make within $40 of $100. However, if they erred in their estimate by more than $40, they may find themselves on the street, while if they erred by more than $50, they may find themselves in clover.
 
As stated, this example is only ''descriptive,'' and does not enable any decision making – in applications, one considers alternative decision rules, and often situations with more complex uncertainty.
 
Consider now the worker thinking of moving to a different town, where the work pays less but lodgings are cheaper. Say that here they estimate that they will earn $80 per week, but lodging only costs $44, while entertainment still costs $150. In that case the robustness function will be $36, while the opportuneness function will be $70. If they make the same errors in both cases, the second case (moving) is both less robust and less opportune.
 
On the other hand, if one measures uncertainty by ''relative'' error, using the '''fractional error model:'''
:<math>
\mathcal{U}(\alpha, {\tilde{u}}) = \left \{ u : \
|u - {\tilde{u}} | \le \alpha \tilde u \right \} , \qquad \alpha \ge 0
</math>
in the first case robustness is 40% and opportuneness is 50%, while in the second case robustness is 45% and opportuneness is 87.5%, so moving is ''more'' robust and less opportune.
 
This example demonstrates the sensitivity of analysis to the model of uncertainty.
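The figures quoted above can be checked with a short computation. The following sketch is illustrative only – the helper function names are hypothetical and not part of any info-gap library – and simply evaluates the robustness and opportuneness of the two towns under the absolute and fractional error models:
<syntaxhighlight lang="python">
def robustness_absolute(estimate, critical):
    """Largest absolute error alpha such that every income u with
    |u - estimate| <= alpha still meets the critical level."""
    return max(estimate - critical, 0.0)

def opportuneness_absolute(estimate, windfall):
    """Smallest absolute error alpha at which some income u with
    |u - estimate| <= alpha reaches the windfall level."""
    return max(windfall - estimate, 0.0)

def robustness_fractional(estimate, critical):
    """The same quantity, measured as a fraction of the estimate."""
    return robustness_absolute(estimate, critical) / estimate

def opportuneness_fractional(estimate, windfall):
    return opportuneness_absolute(estimate, windfall) / estimate

for town, estimate, critical, windfall in [("stay", 100.0, 60.0, 150.0),
                                           ("move",  80.0, 44.0, 150.0)]:
    print(town,
          robustness_absolute(estimate, critical),        # 40 and 36
          opportuneness_absolute(estimate, windfall),     # 50 and 70
          robustness_fractional(estimate, critical),      # 0.40 and 0.45
          opportuneness_fractional(estimate, windfall))   # 0.50 and 0.875
</syntaxhighlight>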
 
=== Info-gap models ===
Info-gap can be applied to spaces of functions; in that case the uncertain parameter is a function <math>u(x),</math> with estimate <math>{\tilde u}(x),</math> and the nested subsets are sets of functions. One way to describe such a set of functions is by requiring values of ''u'' to be close to values of <math>{\tilde u}</math> for all ''x,'' using a family of info-gap models on the ''values.''
 
For example, the above fractional error model for values becomes the fractional error model for functions by adding a parameter ''x'' to the definition:
:<math>
\mathcal{U}(\alpha, {\tilde{u}}) = \left \{ u(x): \
|u(x) - {\tilde{u}}(x) | \le \alpha {\tilde{u}}(x), \ \mbox{for all}\ x \in X \right \} , \ \ \ \alpha \ge 0.
</math>
 
More generally, if <math>U(\alpha,y)</math> is a family of info-gap models of values, then one obtains an info-gap model of functions in the same way:
:<math>
\mathcal{U}(\alpha, {\tilde{u}}) = \left \{ u(x): \
u(x) \in U(\alpha,{\tilde{u}}(x)), \ \mbox{for all}\ x \in X \right \} , \ \ \ \alpha \ge 0.
</math>
 
== Motivation ==
 
It is common to make decisions under uncertainty.<ref group="note">Here are some examples: In many fields, including [[engineering]], [[economics]], [[management]], [[conservation biology|biological conservation]], [[medicine]], [[homeland security]], and more, analysts use models and data to evaluate and formulate [[decision theory|decisions]]. An '''info-gap''' is the disparity between what ''is known'' and what ''needs to be known'' in order to make a reliable and responsible decision. Info-gaps are [[Knightian uncertainty|Knightian uncertainties]]: a lack of knowledge, an incompleteness of understanding. Info-gaps are non-probabilistic and cannot be insured against or modelled [[probability theory|probabilistically]]. A common info-gap, though not the only kind, is uncertainty in the value of a parameter or of a vector of parameters, such as the durability of a new material or the future rates of return on stocks. Another common info-gap is uncertainty in the shape of a [[probability distribution]]. Another info-gap is uncertainty in the functional form of a property of the system, such as [[friction]] force in engineering, or the [[Phillips curve]] in economics. Another info-gap is in the shape and size of a set of possible vectors or functions. For instance, one may have very little knowledge about the relevant set of cardiac waveforms at the onset of heart failure in a specific individual.</ref> What can be done to make good (or at least the best possible) decisions under conditions of uncertainty? Info-gap '''robustness''' analysis evaluates each feasible decision by asking: how much deviation from an estimate of a parameter value, function, or set is permitted while still "guaranteeing" acceptable performance? In everyday terms, the "robustness" of a decision is set by the size of deviation from an estimate that still leads to performance within requirements when using that decision. It is sometimes difficult to judge how much robustness is needed or sufficient. However, according to info-gap theory, the ranking of feasible decisions in terms of their degree of robustness is independent of such judgments.
 
Info-gap theory also proposes an '''opportuneness''' function which evaluates the potential for windfall outcomes resulting from favorable uncertainty.
 
==Example: resource allocation==
 
Here is an illustrative example, which introduces the basic concepts of info-gap theory. A more rigorous description and discussion follow.
 
===Resource allocation===
Suppose you are a project manager, supervising two teams: red team and blue team. Each of the teams will yield some revenue at the end of the year. This revenue depends on the investment in the team – higher investments will yield higher revenues. You have a limited amount of resources, and you wish to decide how to allocate these resources between the two groups, so that the total revenues of the project will be as high as possible.
 
If you have an estimate of the correlation between the investment in the teams and their revenues, as illustrated in Figure 1, you can also estimate the total revenue as a function of the allocation. This is exemplified in Figure 2 – the left-hand side of the graph corresponds to allocating all resources to the red team, while the right-hand side of the graph corresponds to allocating all resources to the blue team. A simple optimization will reveal the optimal allocation – the allocation that, under your estimate of the revenue functions, will yield the highest revenue.
[[Image:IGT-example1.png|thumb|right|Figure 1 – Revenue per investment]]
[[Image:IGT-example2.png|thumb|right|Figure 2 – Revenue per allocation]]
 
===Introducing uncertainty===
However, this analysis does not take uncertainty into account. Since the revenue functions are only a (possibly rough) estimate, the actual revenue functions may be quite different. For any level of uncertainty (or ''horizon of uncertainty'') we can define an envelope within which we assume the actual revenue functions lie. Higher uncertainty corresponds to a more inclusive envelope. Two of these uncertainty envelopes, surrounding the revenue function of the red team, are represented in Figure 3. As illustrated in Figure 4, the actual revenue function may be any function within a given uncertainty envelope. Of course, some instances of the revenue functions are only possible when the uncertainty is high, while small deviations from the estimate are possible even when the uncertainty is small.
[[Image:IGT-example3.png|thumb|right|Figure 3 – Revenue uncertainty envelopes]]
[[Image:IGT-example4.png|thumb|right|Figure 4 – Revenue function instance]]
 
These envelopes are called ''info-gap models of uncertainty'', since they describe one's understanding of the uncertainty surrounding the revenue functions.
 
From the info-gap models (or uncertainty envelopes) of the revenue functions, we can determine an info-gap model for the total amount of revenues. Figure 5 illustrates two of the uncertainty envelopes defined by the info-gap model of the total amount of revenues.
[[Image:IGT-example5.png|thumb|right|Figure 5 – Total revenue uncertainty envelopes]]
 
===Robustness===
High revenues would typically earn a project manager the senior management's respect, but if the total revenues fall below a certain threshold, the project manager will lose their job. We will call such a threshold the ''critical revenue'', since total revenues beneath the critical revenue will be considered a failure.
 
For any given allocation, the ''robustness'' of the allocation, with respect to the critical revenue, is the maximal uncertainty that still guarantees that the total revenue exceeds the critical revenue. This is demonstrated in Figure 6. As the uncertainty increases, the envelope of uncertainty becomes more inclusive, eventually including instances of the total revenue function that, for the specific allocation, yield a revenue smaller than the critical revenue.
[[Image:IGT-example6.png|thumb|right|Figure 6 – Robustness]]
 
The robustness measures the immunity of a decision to failure. A ''robust satisficer'' is a decision maker who prefers choices with higher robustness.
 
If, for some allocation <math>q</math>, the robustness is plotted as a function of the critical revenue, the result is a graph somewhat similar to that in Figure 7. This graph, called the ''robustness curve'' of allocation <math>q</math>, has two important features that are common to (most) robustness curves:
[[Image:IGT-example7.png|thumb|right|Figure 7 – Robustness curve]]
 
# The curve is non-increasing. This captures the notion that when higher requirements (higher critical revenue) are in place, failure to meet the target is more likely (lower robustness). This is the tradeoff between quality and robustness.
 
# At the nominal revenue, that is, when the critical revenue equals the revenue under the nominal model (the estimate of the revenue functions), the robustness is zero. This is because even a slight deviation from the estimate may push the total revenue below the critical revenue.
 
When the robustness curves of two allocations <math>q</math> and <math>q'</math> are compared, the two curves may intersect, as illustrated in Figure 8. In this case, neither allocation is strictly more robust than the other: for critical revenues smaller than the crossing point, allocation <math>q'</math> is more robust than allocation <math>q</math>, while the opposite holds for critical revenues higher than the crossing point. That is, the preference between the two allocations depends on the criterion of failure – the critical revenue.
[[Image:IGT-example8.png|thumb|right|Figure 8 – Robustness curves cross]]
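A minimal numerical sketch of this crossing behaviour, assuming linear revenue functions with fractional uncertainty on their slopes (these assumptions are purely illustrative and are not the data behind the figures):
<syntaxhighlight lang="python">
def robustness(q, r_crit,
               slope_red=2.0, slope_blue=1.0,  # estimated revenue per unit invested
               w_red=1.0, w_blue=0.2):         # relative uncertainty of each slope
    """Largest horizon alpha for which the worst-case total revenue,
    nominal - alpha * exposure, still meets r_crit; q is the fraction of
    the budget allocated to the red team."""
    nominal = slope_red * q + slope_blue * (1 - q)
    exposure = w_red * slope_red * q + w_blue * slope_blue * (1 - q)
    return max(0.0, (nominal - r_crit) / exposure)

for r_crit in [0.5, 8.0 / 9.0, 1.5]:
    print(r_crit, robustness(1.0, r_crit), robustness(0.0, r_crit))
# For small r_crit the all-blue allocation is more robust; for large r_crit
# the all-red allocation is; the two robustness curves cross near r_crit = 8/9.
</syntaxhighlight>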
 
===Opportuneness===
Suppose that, in addition to the threat of losing your job, the senior management offers you a carrot: if the revenues are ''higher'' than some threshold, you will be awarded a considerable bonus. Although revenues below this threshold will not be considered a failure (you may still keep your job), revenues above it will be considered a windfall success. We will therefore call this threshold the ''windfall revenue''.
 
For any given allocation, the ''opportuneness'' of the allocation, with respect to the windfall revenue, is the minimal uncertainty for which it is possible for the total revenue to exceed the windfall revenue. This is demonstrated in Figure 9. As the uncertainty decreases, the envelope of uncertainty becomes less inclusive, eventually excluding all instances of the total revenue function that, for the specific allocation, yield a revenue higher than the windfall revenue.
[[Image:IGT-example9.png|thumb|right|Figure 9 - Opportuneness]]
 
The opportuneness may be considered the immunity to windfall success; therefore, a lower opportuneness value is preferred to a higher one.
 
If, for some allocation <math>q</math>, the opportuneness is plotted as a function of the windfall revenue, the result is a graph somewhat similar to Figure 10. This graph, called the ''opportuneness curve'' of allocation <math>q</math>, has two important features that are common to (most) opportuneness curves:
[[Image:IGT-example10.png|thumb|right|Figure 10 – Opportuneness curves]]
 
# The curve is non-decreasing. This captures the notion that when we have higher requirements (higher windfall revenue), we are more immune to windfall success (higher opportuneness, which is less desirable). That is, we need a more substantial deviation from the estimate in order to achieve our ambitious goal. This is the tradeoff between quality and opportuneness.
# At the nominal revenue, that is, when the windfall revenue equals the revenue under the nominal model (our estimate of the revenue functions), the opportuneness is zero. This is because no deviation from the estimate is needed in order to achieve the windfall revenue.
 
=== Treatment of severe uncertainty ===
 
The logic underlying the above illustration is that the (unknown) true revenue is somewhere in the immediate neighborhood of the (known) estimate of the revenue. For if this is not the case, what is the point of conducting the analysis exclusively in this neighborhood?
Therefore, to remind ourselves that info-gap's manifest objective is to seek robust solutions to problems that are subject to '''severe''' uncertainty, it is instructive to include in the display of the results also those associated with the '''true''' value of the revenue. Of course, given the severity of the uncertainty, we do not know the true value.
 
What we do know, however, is that according to our working assumptions the estimate we have is a '''poor''' indication of the true value of the revenue and is likely to be '''substantially wrong'''. So, methodologically speaking, we have to display the true value at a distance from its estimate. In fact, it would be even more enlightening to display a number of ''possible true values''.
 
In short, methodologically speaking, the picture is this:
 
[[Image:Investment example.png|650px]]
 
Note that in addition to the results generated by the estimate,  two "possible" true values of the revenue are also displayed at a distance from the estimate.
 
As indicated by the picture, since the info-gap robustness model applies its maximin analysis in an immediate neighborhood of the estimate, there is no assurance that the analysis is in fact conducted in the neighborhood of the true value of the revenue. In fact, under conditions of severe uncertainty this is, methodologically speaking, very unlikely.
 
This raises the question: how valid/useful/meaningful are the results? Aren't we sweeping the severity of the uncertainty under the carpet?
 
For example, suppose that a given allocation is found to be very fragile in the neighborhood of the estimate. Does this mean that this allocation is also fragile elsewhere in the region of uncertainty? Conversely, what guarantee is there that an allocation that is robust in the neighborhood of the estimate is also robust elsewhere in the region of uncertainty, indeed in the neighborhood of the true value of the revenue?
 
More fundamentally, given that the results generated by info-gap are based on a '''local''' revenue/allocation analysis in the neighborhood of an estimate that is likely to be substantially wrong, we have no other choice—methodologically speaking—but to assume that the results generated by this analysis are equally likely to be substantially wrong. In other words, in accordance with the universal [[Garbage In, Garbage Out|Garbage In - Garbage Out Axiom]], we have to assume that the quality of the results generated by info-gap's analysis is only as good as the quality of the estimate on which the results are based.
 
The picture speaks for itself.
 
What emerges, then, is that info-gap theory has yet to explain in what way, if any, it actually attempts to deal with the severity of the uncertainty under consideration. Subsequent sections of this article will address this '''severity''' issue and its methodological and practical implications.
 
A more detailed analysis of an illustrative numerical investment problem of this type can be found in Sniedovich (2007).
 
== Uncertainty models ==
Info-gaps are quantified by '''info-gap models of uncertainty.''' An info-gap model is an unbounded family of nested sets. A frequently encountered example is a family of nested [[ellipsoid]]s all having the same shape. The structure of the sets in an info-gap model derives from the information about the uncertainty. In general terms, the structure of an info-gap model of uncertainty is chosen to define the smallest or strictest family of sets whose elements are consistent with the prior information. Since there is usually no known worst case, the family of sets may be unbounded.
 
A common example of an info-gap model is the '''fractional error model.''' The best estimate of an uncertain function <math>u(x)\!\,</math> is <math>{\tilde{u}}(x)</math>, but the fractional error of this estimate is unknown. The following unbounded family of nested sets of functions is a fractional-error info-gap model:
:<math>
\mathcal{U}(\alpha, {\tilde{u}}) = \left \{ u(x): \
|u(x) - {\tilde{u}}(x) | \le \alpha {\tilde{u}}(x), \ \mbox{for all}\ x \right \} , \ \ \ \alpha \ge 0
</math>
At any '''horizon of uncertainty''' <math>\alpha</math>, the set <math>\mathcal{U}(\alpha, {\tilde{u}})</math> contains all functions <math>u(x)\!\,</math> whose fractional deviation from <math>{\tilde{u}}(x)</math> is no greater than <math>\alpha</math>. However, the horizon of uncertainty is unknown, so the info-gap model is an unbounded family of sets, and there is no worst case or greatest deviation.
 
There are many other types of info-gap models of uncertainty. All info-gap models obey two basic [[axiom]]s:
 
*'''Nesting.''' The info-gap model <math>\mathcal{U}(\alpha, {\tilde{u}})</math> is nested if <math>\alpha < \alpha^\prime</math> implies that:
::<math>
\mathcal{U}(\alpha, {\tilde{u}}) \ \subseteq \ \mathcal{U}(\alpha^\prime, {\tilde{u}})
</math>
 
*'''Contraction.''' The info-gap model <math>\mathcal{U}(0,{\tilde{u}})</math> is a singleton set containing its center point:
::<math>
\mathcal{U}(0,{\tilde{u}}) = \{ {\tilde{u}} \}
</math>
 
The nesting axiom imposes the property of "clustering" which is characteristic of info-gap uncertainty. Furthermore, the nesting axiom implies that the uncertainty sets <math>\mathcal{U}(\alpha, u)</math> become more inclusive as <math>\alpha</math> grows, thus endowing <math>\alpha</math> with its meaning as a horizon of uncertainty. The contraction axiom implies that, at horizon of uncertainty zero, the estimate <math>{\tilde{u}}</math> is correct.
 
Recall that the uncertain element <math>u</math> may be a parameter, vector, function or set. The info-gap model is then an unbounded family of nested sets of parameters, vectors, functions or sets.
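A minimal sketch of these two axioms for the fractional error model on scalar values (illustrative only; the helper names are hypothetical): the model is represented as a membership test, and nesting and contraction are spot-checked on a few sample points.
<syntaxhighlight lang="python">
def info_gap_set(alpha, u_tilde):
    """U(alpha, u_tilde) for the fractional error model on scalar values:
    all u with |u - u_tilde| <= alpha * u_tilde."""
    return lambda u: abs(u - u_tilde) <= alpha * u_tilde

u_tilde = 100.0
samples = [60.0, 90.0, 100.0, 130.0, 180.0]

# Contraction: at horizon zero only the estimate itself belongs to the set.
assert [u for u in samples if info_gap_set(0.0, u_tilde)(u)] == [u_tilde]

# Nesting: if alpha <= alpha', membership at alpha implies membership at alpha'.
for alpha, alpha_prime in [(0.1, 0.3), (0.3, 0.8)]:
    for u in samples:
        if info_gap_set(alpha, u_tilde)(u):
            assert info_gap_set(alpha_prime, u_tilde)(u)

print("nesting and contraction hold on the sampled points")
</syntaxhighlight>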
 
=== Sublevel sets ===
For a fixed point estimate <math>\tilde{u},</math> an info-gap model is often equivalent to a function <math>\phi\colon \mathfrak{U} \to [0,+\infty)</math> defined as:
:<math>\phi(u) := \min \{\alpha \mid u \in \mathcal{U}(\alpha,{\tilde{u}}) \}</math>
meaning "the uncertainty of a point ''u'' is the minimum uncertainty such that ''u'' is in the set with that uncertainty". In this case, the family of sets <math>\mathcal{U}(\alpha, \tilde{u})</math> can be recovered as the [[sublevel sets]] of <math>\phi</math>:
:<math>\mathcal{U}(\alpha, \tilde{u}) := \phi^{-1}([0,\alpha])</math>
meaning: "the nested subset with horizon of uncertainty <math>\alpha</math> consists of all points with uncertainty less than or equal to <math>\alpha</math>".
 
Conversely, given a function <math>\phi\colon \mathfrak{U} \to [0,+\infty),</math> satisfying the axiom <math>\phi^{-1}(0) = \{\tilde{u}\}</math> (equivalently, <math>\phi(u) = 0</math> if and only if <math>u = \tilde{u}</math>), it defines an info-gap model via the sublevel sets.
 
For instance, if the region of uncertainty is a [[metric space]], then the uncertainty function can simply be the distance, <math>\phi(u) := d(\tilde{u},u),</math> so the nested subsets are simply
:<math>\mathcal{U}(\alpha, \tilde{u}) = \{ u \mid d(\tilde{u},u) \leq \alpha \}.</math>
This always defines an info-gap model, as distances are always non-negative (axiom of non-negativity), and satisfies <math>\phi^{-1}(0) = \{\tilde{u}\}</math> (info-gap axiom of contraction) because the distance between two points is zero if and only if they are equal (the identity of indiscernibles); nesting follows by construction of sublevel set.
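A minimal sketch of this construction in the scalar case (the names are hypothetical): the uncertainty function is the absolute distance to the estimate, and membership in <math>\mathcal{U}(\alpha, \tilde{u})</math> is exactly the sublevel condition <math>\phi(u) \le \alpha</math>.
<syntaxhighlight lang="python">
def phi(u, u_tilde):
    """Uncertainty of a point u: the least horizon whose set contains u."""
    return abs(u - u_tilde)

def in_info_gap_set(alpha, u_tilde, u):
    """u belongs to U(alpha, u_tilde) exactly when phi(u, u_tilde) <= alpha."""
    return phi(u, u_tilde) <= alpha

u_tilde = 100.0
print(phi(130.0, u_tilde))                    # 30.0: the least alpha containing 130
print(in_info_gap_set(30.0, u_tilde, 130.0))  # True
print(in_info_gap_set(29.9, u_tilde, 130.0))  # False
</syntaxhighlight>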
 
Not all info-gap models arise as sublevel sets: for instance, if <math>u_1 \in \mathcal{U}(\alpha, \tilde{u})</math> for all <math>\alpha > 1,</math> but not for <math>\alpha = 1</math> (it has uncertainty "just more" than 1), then the minimum above is not defined; one can replace it by an [[infimum]], but then the resulting sublevel sets will not agree with the info-gap model: <math>u_1 \in \phi^{-1}([0,1]),</math> but <math>u_1 \not\in \mathcal{U}(1, \tilde{u}).</math> The effect of this distinction is very minor, however, as it modifies sets by less than changing the horizon of uncertainty by any positive number <math>\epsilon,</math> however small.
 
== Robustness and opportuneness ==
Uncertainty may be either [[wiktionary:pernicious|pernicious]] or [[wiktionary:propitious|propitious]]. That is, uncertain variations may be either adverse or favorable. Adversity entails the possibility of failure, while favorability is the opportunity for sweeping success. Info-gap decision theory is based on quantifying these two aspects of uncertainty, and choosing an action which addresses one or the other or both of them simultaneously. The pernicious and propitious aspects of uncertainty are quantified by two "immunity functions": the robustness function expresses the immunity to failure, while the opportuneness function expresses the immunity to windfall gain.
 
=== Robustness and opportuneness functions ===
The '''robustness function''' expresses the greatest level of uncertainty at which failure cannot occur; the '''opportuneness function''' is the least level of uncertainty which entails the possibility of sweeping success. The robustness and opportuneness functions address, respectively, the pernicious and propitious facets of uncertainty.
 
Let <math>q</math> be a decision vector of parameters such as design variables, time of initiation, model parameters or operational options. We can verbally express the robustness and opportuneness functions as the maximum or minimum of a set of values of the uncertainty parameter <math>\alpha</math> of an info-gap model:
:{| width="100%" border="0"
|<math>
{\hat{\alpha}}(q) = \max \{ \alpha: \ \mbox{minimal requirements are always satisfied}\}
</math>
| (robustness)
| (1a)<!-- eq 2 ig-->
|-
|<math>
{\hat{\beta}}(q) = \min \{ \alpha: \ \mbox{sweeping success is possible}\}
</math>
| (opportuneness)
| (2a)<!-- eq 1 ig-->
|}
Formally,
:{| width="100%" border="0"
|<math>
{\hat{\alpha}}(q) = \max \{ \alpha: \ \mbox{minimal requirements are satisfied for all } u \in \mathcal{U}(\alpha,\tilde u)\}
</math>
| (robustness)
| (1b)<!-- eq 2 ig-->
|-
|<math>
{\hat{\beta}}(q) = \min \{ \alpha: \ \mbox{windfall is achieved for at least one } u \in \mathcal{U}(\alpha,\tilde u)
\}
</math>
| (opportuneness)
| (2b)<!-- eq 1 ig-->
|}
 
We can "read" eq.&nbsp;(1) as follows. The robustness <math>{\hat{\alpha}}(q)</math> of decision vector <math>q</math> is the greatest value of the horizon of uncertainty <math>\alpha</math> for which specified minimal requirements are ''always'' satisfied. <math>{\hat{\alpha}}(q)</math> expresses robustness — the degree of resistance to uncertainty and immunity against failure — so a large value of <math>{\hat{\alpha}}(q)</math> is desirable. Robustness is defined as a ''worst-case'' scenario up to the horizon of uncertainty: how large can the horizon of uncertainty be and still, even in the worst case, achieve the critical level of outcome?
 
Eq.&nbsp;(2) states that the opportuneness <math>{\hat{\beta}}(q)</math>
is the least level of uncertainty <math>\alpha</math> which must be tolerated in order to enable the ''possibility'' of sweeping success as a result of decisions <math>q</math>. <math>{\hat{\beta}}(q)</math> is the immunity against windfall reward, so a small value of <math>{\hat{\beta}}(q)</math> is desirable. A small value of <math>{\hat{\beta}}(q)</math> reflects the opportune situation that
great reward is possible even in the presence of little ambient uncertainty. Opportuneness is defined as a ''best-case'' scenario up to the horizon of uncertainty: how small can the horizon of uncertainty be and still, in the best case, achieve the windfall reward?
 
The immunity functions <math>{\hat{\alpha}}(q)</math> and <math>{\hat{\beta}}(q)</math> are complementary and are defined in an anti-symmetric sense. Thus "bigger is better" for <math>{\hat{\alpha}}(q)</math> while "big is bad" for <math>{\hat{\beta}}(q)</math>. The immunity functions — robustness and opportuneness — are the basic decision functions in info-gap decision theory.
 
=== Optimization ===
The robustness function involves a maximization, but not of the performance or outcome of the decision: in general the outcome could be arbitrarily bad. Rather, it maximizes the level of uncertainty that would be required for the outcome to fail.
 
One seeks the greatest tolerable uncertainty at which decision <math>q</math> '''[[satisficing|satisfices]]''' the performance at a critical survival level. One may establish one's preferences among the available actions <math>q, \, q^\prime,\, \ldots </math> according to their robustnesses <math>{\hat{\alpha}}(q),\, {\hat{\alpha}}(q^\prime), \, \ldots </math>, whereby larger robustness engenders higher preference. In this way the robustness function underlies a satisficing decision algorithm which maximizes the immunity to pernicious uncertainty.
 
The opportuneness function in eq.&nbsp;(2) involves a minimization, however not, as might be expected, of the damage which can accrue from unknown adverse events. The least horizon of uncertainty is sought at which decision <math>q</math> enables (but does not necessarily guarantee) large windfall gain. Unlike the robustness function, the opportuneness function does not satisfice, it "windfalls". Windfalling preferences are those which prefer actions for which the opportuneness function takes a small value. When <math>{\hat{\beta}}(q)</math> is used to choose an action <math>q</math>, one is "windfalling" by optimizing the opportuneness from propitious uncertainty in an attempt to enable highly ambitious goals or rewards.
 
Given a scalar reward function <math>R(q,u)</math>, depending on the decision vector <math>q</math> and the info-gap-uncertain function <math>u</math>, the minimal requirement in eq.&nbsp;(1) is that the reward <math>R(q,u)</math> be no less than a critical value <math>{r_{\rm c}}</math>. Likewise, the sweeping success in eq. (2) is attainment of a "wildest dream" level of reward <math>{r_{\rm w}}</math> which is much greater than <math>{r_{\rm c}}</math>. Usually neither of these threshold values, <math>{r_{\rm c}}</math> and <math>{r_{\rm w}}</math>, is chosen irrevocably before performing the decision analysis. Rather, these parameters enable the decision maker to explore a range of options. In any case the windfall reward <math>{r_{\rm w}}</math> is greater, usually much greater, than the critical reward <math>{r_{\rm c}}</math>:
:<math>
{r_{\rm w}} > {r_{\rm c}}
</math>
 
The robustness and opportuneness functions of eqs.&nbsp;(1) and (2) can now be expressed more explicitly:
:{| border="0" width="100%"
| <math>
{\hat{\alpha}}(q, {r_{\rm c}}) = \max \left \{ \alpha :
r_{\rm c} \leq \min_{u \in \mathcal{U}(\alpha, \tilde{u})} R(q,u) \right \}
</math>
| (3)
|-
|<math>
{\hat{\beta}}(q, {r_{\rm w}}) = \min \left \{ \alpha :
r_{\rm w} \leq
\max_{u \in \mathcal{U}(\alpha, \tilde{u})} R(q,u) \right \}
</math>
| (4)
|}
<math>{\hat{\alpha}}(q, {r_{\rm c}})</math> is the greatest level of uncertainty consistent with guaranteed reward no less than the critical reward <math>{r_{\rm c}}</math>, while <math>{\hat{\beta}}(q, {r_{\rm w}})</math> is the least level of uncertainty which must be accepted in order to facilitate (but not guarantee) windfall as great as <math>{r_{\rm w}}</math>. The complementary or anti-symmetric structure of the immunity functions is evident from eqs.&nbsp;(3) and (4).
 
These definitions can be modified to handle multi-criterion reward functions. Likewise, analogous definitions apply when <math>R(q,u)</math> is a loss rather than a reward.
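A numerical sketch of eqs.&nbsp;(3) and (4) follows, using a hypothetical scalar reward function and a discretised fractional-error uncertainty set (neither choice comes from the source): the horizon of uncertainty is scanned, and the worst-case and best-case rewards over the sampled set determine the robustness and opportuneness.
<syntaxhighlight lang="python">
import numpy as np

u_tilde = 100.0                      # point estimate of the uncertain parameter
alphas = np.linspace(0.0, 1.0, 501)  # candidate horizons of uncertainty

def uncertainty_set(alpha, n=201):
    """Sampled fractional-error set: all u with |u - u_tilde| <= alpha * u_tilde."""
    return np.linspace(u_tilde * (1 - alpha), u_tilde * (1 + alpha), n)

def reward(q, u):
    """Hypothetical scalar reward R(q, u): revenue minus a quadratic cost."""
    return q * u - 0.5 * q ** 2 * u_tilde

def robustness(q, r_crit):
    """Greatest alpha whose worst-case reward still meets r_crit (eq. 3)."""
    ok = [a for a in alphas if reward(q, uncertainty_set(a)).min() >= r_crit]
    return max(ok) if ok else 0.0

def opportuneness(q, r_wind):
    """Least alpha whose best-case reward reaches r_wind (eq. 4)."""
    ok = [a for a in alphas if reward(q, uncertainty_set(a)).max() >= r_wind]
    return min(ok) if ok else np.inf

q = 0.8
print(robustness(q, r_crit=30.0))     # about 0.225 (up to the grid resolution)
print(opportuneness(q, r_wind=60.0))  # about 0.15 (up to the grid resolution)
</syntaxhighlight>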
 
== Decision rules ==
Based on these functions, one can then decide on a course of action by optimizing for uncertainty: choose the decision which is most robust (can withstand the greatest uncertainty; "satisficing"), or choose the decision which requires the least uncertainty to achieve a windfall.
 
Formally, optimizing for robustness or optimizing for opportuneness yields a [[preference relation]] on the set of decisions, and the [[decision rule]] is to optimize with respect to this preference.
 
In what follows, let <math>\mathcal{Q}</math> be the set of all available or feasible decision vectors <math>q</math>.
 
=== Robust-satisficing ===
The robustness function generates '''robust-satisficing preferences''' on the options: decisions are ranked in increasing order of robustness, for a given critical reward, i.e., by <math>{\hat{\alpha}}(q, {r_{\rm c}})</math> value, meaning <math>q \succ _{\rm r} q^\prime</math> if <math>{\hat{\alpha}}(q, {r_{\rm c}}) > {\hat{\alpha}}(q^\prime, {r_{\rm c}}).</math>
 
A robust-satisficing decision is one which ''maximizes'' the robustness and satisfices the performance at the critical level <math>{r_{\rm c}}</math>.
 
Denote the maximum robustness by <math>\hat{\alpha},</math> (formally <math>\hat{\alpha}({r_{\rm c}}),</math> for the maximum robustness for a given critical reward), and the corresponding decision (or decisions) by <math>\hat{q}_{{\rm c}}</math> (formally, <math>{\hat{q}_{{\rm c}}}({r_{\rm c}}),</math> the critical optimizing action for a given level of critical reward):
:<math>\begin{align}
\hat{\alpha}({r_{\rm c}}) &= \max_{q \in \mathcal{Q}} {\hat{\alpha}}(q, {r_{\rm c}})\\
{\hat{q}_{{\rm c}}}({r_{\rm c}}) &= \arg \max_{q \in \mathcal{Q}} {\hat{\alpha}}(q, {r_{\rm c}})
\end{align}</math>
Usually, though not invariably, the robust-satisficing action <math>{\hat{q}_{{\rm c}}}({r_{\rm c}})</math> depends on the critical reward <math>{r_{\rm c}}</math>.
 
=== Opportune-windfalling ===
Conversely, one may optimize opportuneness:
the opportuneness function generates '''opportune-windfalling preferences''' on the options: decisions are ranked in ''decreasing'' order of opportuneness, for a given windfall reward, i.e., by <math>{\hat{\beta}}(q, {r_{\rm w}})</math> value, meaning <math>q \succ _{\rm w} q^\prime</math> if <math>{\hat{\beta}}(q, {r_{\rm w}}) < {\hat{\beta}}(q^\prime, {r_{\rm w}}).</math>
 
The opportune-windfalling decision, <math>{\hat{q}_{{\rm w}}}({r_{\rm w}})</math>, ''minimizes'' the opportuneness function on the set of available decisions.
 
Denote the minimum opportuneness by <math>\hat{\beta},</math> (formally <math>\hat{\beta}({r_{\rm w}}),</math> for the minimum opportuneness for a given windfall reward), and the corresponding decision (or decisions) by <math>\hat{q}_{{\rm w}}</math> (formally, <math>{\hat{q}_{{\rm w}}}({r_{\rm w}}),</math> the windfall optimizing action for a given level of windfall reward):
:<math>\begin{align}
\hat{\beta}        ({r_{\rm w}})
  &=      \min_{q \in \mathcal{Q}} {\hat{\beta}}(q, {r_{\rm w}})\\
{\hat{q}_{{\rm w}}}({r_{\rm w}})
  &= \arg \min_{q \in \mathcal{Q}} {\hat{\beta}}(q, {r_{\rm w}})
\end{align}
</math>
 
The two preference rankings, as well as the corresponding optimal decisions <math>{\hat{q}_{{\rm c}}}({r_{\rm c}})</math> and <math>{\hat{q}_{{\rm w}}}({r_{\rm w}})</math>, may differ, and may vary depending on the values of <math>{r_{\rm c}}</math> and <math>{r_{\rm w}}.</math>
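A minimal sketch of the two decision rules over a small finite decision set (the decision names and immunity values are hypothetical; in practice they would be computed from eqs.&nbsp;(3) and (4) for given <math>{r_{\rm c}}</math> and <math>{r_{\rm w}}</math>):
<syntaxhighlight lang="python">
decisions = {
    "q1": {"robustness": 0.20, "opportuneness": 0.35},
    "q2": {"robustness": 0.35, "opportuneness": 0.50},
    "q3": {"robustness": 0.10, "opportuneness": 0.15},
}

# Robust-satisficing: prefer the decision that withstands the most uncertainty.
robust_satisficing = max(decisions, key=lambda q: decisions[q]["robustness"])

# Opportune-windfalling: prefer the decision needing the least uncertainty
# for the windfall to become possible.
opportune_windfalling = min(decisions, key=lambda q: decisions[q]["opportuneness"])

print(robust_satisficing)     # "q2"
print(opportune_windfalling)  # "q3"
# The two rules may select different decisions, and the selection can change
# as the critical or windfall reward varies.
</syntaxhighlight>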
 
== Applications ==
Info-gap theory has generated a substantial literature, and has been studied or applied in a range of fields, including engineering
<ref name="ybh rrms 1996"/>
<ref>{{Cite journal | doi = 10.1109/5326.798765 | last1 = Hipel | first1 = Keith W. | last2 = Ben-Haim | first2 = Yakov | year = 1999 | title = Decision making in an uncertain world: Information-gap modelling in water resources management | url = | journal = IEEE Trans., Systems, Man and Cybernetics | volume = 29 | issue = 4| pages = 506–517 }}</ref>
<ref name="ybh 2005 crc">Yakov Ben-Haim, 2005, Info-gap Decision Theory For Engineering Design. Or: Why `Good' is Preferable to `Best', appearing as chapter 11 in ''Engineering Design Reliability Handbook'', Edited by Efstratios Nikolaidis, Dan M.Ghiocel and Surendra Singhal, CRC Press, Boca Raton.</ref>
<ref name="kanno ijss 2005">{{Cite journal | doi = 10.1016/j.ijsolstr.2005.06.088 | last1 = Kanno | first1 = Y. | last2 = Takewaki | first2 = I. | year = 2006 | title = Robustness analysis of trusses with separable load and structural uncertainties | url = | journal = International Journal of Solids and Structures | volume = 43 | issue = 9| pages = 2646–2669 }}</ref>
<ref name = "wang 2005">
Kaihong Wang, 2005, Vibration Analysis of Cracked Composite Bending-torsion Beams for Damage Diagnosis, PhD thesis, Virginia Polytechnic Institute, Blacksburg, Virginia.</ref>
<ref name="kanno jota 2006">{{Cite journal | doi = 10.1007/s10957-006-9102-z | last1 = Kanno | first1 = Y. | last2 = Takewaki | first2 = I. | year = 2006 | title = Sequential semidefinite program for maximum robustness design of structures under load uncertainty | url = | journal = Journal of Optimization Theory and Applications | volume = 130 | issue = 2| pages = 265–287 }}</ref>
<ref name="pierce jsv 2006">{{Cite journal | doi = 10.1016/j.jsv.2005.09.029 | last1 = Pierce | first1 = S.G. | last2 = Worden | first2 = K. | last3 = Manson | first3 = G. | year = 2006 | title = A novel information-gap technique to assess reliability of neural network-based damage detection | url = | journal = Journal of Sound and Vibration | volume = 293 | issue = 1–2| pages = 96–111 }}</ref>
<ref>{{Cite journal | doi = 10.1109/TNN.2006.880363 | last1 = Pierce | first1 = Gareth | last2 = Ben-Haim | first2 = Yakov | last3 = Worden | first3 = Keith | last4 = Manson | first4 = Graeme | year = 2006 | title = Evaluation of neural network robust reliability using information-gap theory | url = | journal = IEEE Transactions on Neural Networks | volume = 17 | issue = 6| pages = 1349–1361 | pmid = 17131652 }}</ref>
<ref name="chetwynd rs 2006">{{Cite journal | last1 = Chetwynd | first1 = D. | last2 = Worden | first2 = K. | last3 = Manson | first3 = G. | year = 2074 | title = An application of interval-valued neural networks to a regression problem | url = | journal = [[Proceedings of the Royal Society A]] | volume = 462 | issue = | pages = 3097–3114 }}</ref>
<ref>{{Cite journal | doi = 10.1007/s10710-006-9013-7 | last1 = Lim | first1 = D. | last2 = Ong | first2 = Y. S. | last3 = Jin | first3 = Y. | last4 = Sendhoff | first4 = B. | last5 = Lee | first5 = B. S. | year = 2006 | title = Inverse Multi-objective Robust Evolutionary Design | url = | journal = Genetic Programming and Evolvable Machines | volume = 7 | issue = 4| pages = 383–404 }}</ref>
<ref name="vinot et al. 2005">{{Cite journal | doi = 10.1016/j.jsv.2005.07.007 | last1 = Vinot | first1 = P. | last2 = Cogan | first2 = S. | last3 = Cipolla | first3 = V. | year = 2005 | title = A robust model-based test planning procedure | url = | journal = Journal of Sound and Vibration | volume = 288 | issue = 3| pages = 571–585 }}</ref>
<ref name="takewaki and ybh 2005">{{Cite journal | doi = 10.1016/j.jsv.2005.07.005 | last1 = Takewaki | first1 = Izuru | last2 = Ben-Haim | first2 = Yakov | year = 2005 | title = Info-gap robust design with load and model uncertainties | url = | journal = Journal of Sound and Vibration | volume = 288 | issue = 3| pages = 551–570 }}</ref>
,<ref>
Izuru Takewaki and Yakov Ben-Haim, 2007, Info-gap robust design of passively controlled structures with load and model uncertainties, ''Structural Design Optimization Considering Uncertainties'', Yiannis Tsompanakis, Nikkos D. Lagaros and Manolis Papadrakakis, editors, Taylor and Francis Publishers.
</ref><ref>{{Cite journal | doi = 10.1016/j.ymssp.2004.03.001 | last1 = Hemez | first1 = Francois M. | last2 = Ben-Haim | first2 = Yakov | year = 2004 | title = Info-gap robustness for the correlation of tests and simulations of a nonlinear transient | url = | journal = Mechanical Systems and Signal Processing | volume = 18 | issue = 6| pages = 1443–1467 }}</ref>
biological conservation
<ref name="levy et al. 2000">{{Cite journal | doi = 10.1016/S0304-3800(00)00226-X | last1 = Levy | first1 = Jason K. | last2 = Hipel | first2 = Keith W. | last3 = Kilgour | first3 = Marc | year = 2000 | title = Using environmental indicators to quantify the robustness of policy alternatives to uncertainty | url = | journal = Ecological Modelling | volume = 130 | issue = 1–3| pages = 79–86 }}</ref>
<ref>{{Cite journal | doi = 10.1016/j.biocon.2005.11.006 | last1 = Moilanen | first1 = A. | last2 = Wintle | first2 = B.A. | year = 2006 | title = Uncertainty analysis favours selection of spatially aggregated reserve structures | url = | journal = Biological Conservation | volume = 129 | issue = 3| pages = 427–434 }}</ref>
<ref>{{Cite journal | doi = 10.1111/j.1461-0248.2005.00827.x | last1 = Halpern | first1 = Benjamin S. | last2 = Regan | first2 = Helen M. | last3 = Possingham | first3 = Hugh P. | last4 = McCarthy | first4 = Michael A. | year = 2006 | title = Accounting for uncertainty in marine reserve design | url = | journal = Ecology Letters | volume = 9 | issue = 1| pages = 2–11 | pmid = 16958861 }}</ref>
<ref name=rhino>{{Cite journal | doi = 10.1890/03-5419 | last1 = Regan | first1 = Helen M. | last2 = Ben-Haim | first2 = Yakov | last3 = Langford | first3 = Bill | last4 = Wilson | first4 = Will G. | last5 = Lundberg | first5 = Per | last6 = Andelman | first6 = Sandy J. | last7 = Burgman | first7 = Mark A. | year = 2005 | title = Robust decision making under severe uncertainty for conservation management | url = | journal = Ecological Applications | volume = 15 | issue = 4| pages = 1471–1477 }}</ref>
<ref>{{Cite journal | doi = 10.1007/s00267-006-0022-3 | last1 = McCarthy | first1 = M.A. | last2 = Lindenmayer | first2 = D.B. | year = 2007 | title = Info-gap decision theory for assessing the management of catchments for timber production and urban water supply | url = | journal = Environmental Management | volume = 39 | issue = 4| pages = 553–562 | pmid = 17318697 }}</ref>
<ref>{{Cite journal | doi = 10.1016/j.biocon.2007.06.007 | last1 = Crone | first1 = Elizabeth E. | last2 = Pickering | first2 = Debbie | last3 = Schultz | first3 = Cheryl B. | year = 2007 | title = Can captive rearing promote recovery of endangered butterflies? An assessment in the face of uncertainty | url = | journal = Biological Conservation | volume = 139 | issue = 1–2| pages = 103–112 }}</ref>
<ref>
L. Joe Moffitt, John K. Stranlund and Craig D. Osteen, 2007, Robust detection protocols for uncertain introductions of invasive species, ''Journal of Environmental Management'', In Press, Corrected Proof, Available online 27 August 2007.</ref>
<ref>{{Cite journal | doi = 10.1890/04-0906 | last1 = Burgman | first1 = M. A. | last2 = Lindenmayer | first2 = D.B. | last3 = Elith | first3 = J. | year = 2005 | title = Managing landscapes for conservation under uncertainty | url = | journal = Ecology | volume = 86 | issue = 8| pages = 2007–2017 }}</ref>
<ref>{{Cite journal | doi = 10.1111/j.1523-1739.2006.00560.x | last1 = Moilanen | first1 = A. | last2 = Elith | first2 = J. | last3 = Burgman | first3 = M. | last4 = Burgman | year = 2006 | first4 = M | title = Uncertainty analysis for regional-scale reserve selection | url = | journal = Conservation Biology | volume = 20 | issue = 6| pages = 1688–1697 | pmid = 17181804 }}</ref>
<ref name="atte et al. reserve 2006">{{Cite journal | doi = 10.1016/j.ecolmodel.2006.07.004 | last1 = Moilanen | first1 = Atte | last2 = Runge | first2 = Michael C. | last3 = Elith | first3 = Jane | last4 = Tyre | first4 = Andrew | last5 = Carmel | first5 = Yohay | last6 = Fegraus | first6 = Eric | last7 = Wintle | first7 = Brendan | last8 = Burgman | first8 = Mark | last9 = Benhaim | year = 2006 | first9 = Y | title = Planning for robust reserve networks using uncertainty analysis | url = | journal = Ecological Modelling | volume = 199 | issue = 1| pages = 115–124 }}</ref>
,<ref>{{Cite journal | doi = 10.1890/1051-0761(2007)017[0251:MCDUUF]2.0.CO;2 | last1 = Nicholson | first1 = Emily | last2 = Possingham | first2 = Hugh P. | year = 2007| title = Making conservation decisions under uncertainty for the persistence of multiple species | url = | journal = Ecological Applications | volume = 17 | issue = 1| pages = 251–265 | pmid = 17479849 }}</ref><ref name="burgman 2005 book">
Burgman, Mark, 2005, ''Risks and Decisions for Conservation and Environmental Management'', Cambridge University Press, Cambridge.
</ref> theoretical biology,<ref>{{Cite journal | doi = 10.1086/491691 | last1 = Carmel | first1 = Yohay | last2 = Ben-Haim | first2 = Yakov | year = 2005 | title = Info-gap robust-satisficing model of foraging behavior: Do foragers optimize or satisfice? | url = | journal = American Naturalist | volume = 166 | issue = 5| pages = 633–641 | pmid = 16224728 }}</ref> homeland security,<ref>{{Cite journal | doi = 10.2202/1547-7355.1134 | last1 = Moffitt | first1 = Joe | last2 = Stranlund | first2 = John K. | last3 = Field | first3 = Barry C. | year = 2005 | title = Inspections to Avert Terrorism: Robustness Under Severe Uncertainty | url = http://www.bepress.com/jhsem/vol2/iss3/3 | journal = Journal of Homeland Security and Emergency Management | volume = 2 | issue = 3| page = 3 }}</ref> economics
<ref name="colin 2007">{{Cite journal | doi = 10.1108/15265940710721055 | last1 = Beresford-Smith | first1 = Bryan | last2 = Thompson | first2 = Colin J. | year = 2007 | title = Managing credit risk with info-gap uncertainty | url = | journal = The Journal of Risk Finance | volume = 8 | issue = 1| pages = 24–34 }}</ref>
,<ref>John K. Stranlund and Yakov Ben-Haim, (2007), Price-based vs. quantity-based environmental regulation under Knightian uncertainty: An info-gap robust satisficing perspective, ''Journal of Environmental Management'', In Press, Corrected Proof, Available online 28 March 2007.</ref><ref name="ybh 2005 var">{{Cite journal | doi = 10.1108/15265940510633460 | last1 = Ben-Haim | first1 = Yakov | year = 2005 | title = Value at risk with Info-gap uncertainty | url = | journal = Journal of Risk Finance | volume = 6 | issue = 5| pages = 388–403 }}</ref>
project management
<ref>{{Cite journal | doi = 10.1061/(ASCE)0733-9364(1998)124:2(125) | last1 = Ben-Haim | first1 = Yakov | authorlink2 = Alexander Laufer | last2 = Laufer | first2 = Alexander | year = 1998 | title = Robust reliability of projects with activity-duration uncertainty | url = | journal = ASCE Journal of Construction Engineering and Management | volume = 124 | issue = 2| pages = 125–132 }}</ref>
<ref name="tahan ben-asher 2005">{{Cite journal | doi = 10.1002/sys.20021 | last1 = Tahan | first1 = Meir | last2 = Ben-Asher | first2 = Joseph Z. | year = 2005 | title = Modeling and analysis of integration processes for engineering systems | url = | journal = Systems Engineering | volume = 8 | issue = 1| pages = 62–77 }}</ref>
<ref>{{Cite journal | last1 = Regev | first1 = Sary | last2 = Shtub | first2 = Avraham | last3 = Ben-Haim | first3 = Yakov | year = 2006 | title = Managing project risks as knowledge gaps | url = | journal = Project Management Journal | volume = 37 | issue = 5| pages = 17–25 }}</ref>
and statistics
.<ref>{{Cite journal | doi = 10.1002/env.811 | last1 = Fox | first1 = D.R. | last2 = Ben-Haim | first2 = Y. | last3 = Hayes | first3 = K.R. | last4 = McCarthy | first4 = M. | last5 = Wintle | first5 = B. | last6 = Dunstan | first6 = P. | year = 2007 | title = An Info-Gap Approach to Power and Sample-size calculations | url = | journal = Environmetrics | volume = 18 | issue = 2| pages = 189–203 }}</ref> Foundational issues related to info-gap theory have also been studied
<ref>{{Cite journal | last1 = Ben-Haim | first1 = Yakov | year = 1994 | title = Convex models of uncertainty: Applications and Implications | url = | journal = Erkenntnis: an International Journal of Analytic Philosophy | volume = 41 | issue = 2| pages = 139–156 | doi = 10.1007/BF01128824 }}</ref>
<ref>{{Cite journal | doi = 10.1016/S0016-0032(99)00024-1 | last1 = Ben-Haim | first1 = Yakov | year = 1999 | title = Set-models of information-gap uncertainty: Axioms and an inference scheme | url = | journal = Journal of the Franklin Institute | volume = 336 | issue = 7| pages = 1093–1117 }}</ref>
<ref>{{Cite journal | doi = 10.1016/S0016-0032(00)00016-8 | last1 = Ben-Haim | first1 = Yakov | year = 2000 | title = Robust rationality and decisions under severe uncertainty | url = | journal = Journal of the Franklin Institute | volume = 337 | issue = 2–3| pages = 171–199 }}</ref>
<ref>{{Cite journal | doi = 10.1016/j.ress.2004.03.015 | last1 = Ben-Haim | first1 = Yakov | year = 2004 | title = Uncertainty, probability and information-gaps | url = | journal = Reliability Engineering and System Safety | volume = 85 | issue = | pages = 249–266 }}</ref>
<ref>
George J. Klir, 2006, ''Uncertainty and Information: Foundations of Generalized Information Theory'', Wiley Publishers.</ref>
.<ref>
Yakov Ben-Haim, 2007, Peirce, Haack and Info-gaps, in ''Susan Haack, A Lady of Distinctions: The Philosopher Responds to Her Critics'', edited by Cornelis de Waal, Prometheus Books.</ref>
 
The remainder of this section describes in a little more detail the kinds of uncertainty addressed by info-gap theory. Although many published works are mentioned below, no attempt is made here to present insights from these papers. The emphasis is not upon elucidation of the concepts of info-gap theory, but upon the contexts in which it is used and the goals pursued.
 
=== Engineering ===
A typical engineering application is the vibration analysis of a cracked beam, where the location, size, shape and orientation of the crack are unknown and greatly influence the vibration dynamics.<ref name = "wang 2005"/> Very little is usually known about these spatial and geometrical uncertainties. The info-gap analysis allows one to model these uncertainties, and to determine the degree of robustness – to these uncertainties – of properties such as vibration amplitude, natural frequencies, and natural modes of vibration. Another example is the structural design of a building subject to uncertain loads such as from wind or earthquakes.<ref name="kanno ijss 2005"/><ref name="kanno jota 2006"/> The response of the structure depends strongly on the spatial and temporal distribution of the loads. However, storms and earthquakes are highly idiosyncratic events, and the interaction between the event and the structure involves very site-specific mechanical properties which are rarely known. The info-gap analysis enables the design of the structure to enhance structural immunity against uncertain deviations from design-base or estimated worst-case loads.{{Citation needed|date=February 2008}} Another engineering application involves the design of a neural net for detecting faults in a mechanical system, based on real-time measurements. A major difficulty is that faults are highly idiosyncratic, so that training data for the neural net will tend to differ substantially from data obtained from real-time faults after the net has been trained. The info-gap robustness strategy enables one to design the neural net to be robust to the disparity between training data and future real events.<ref name="pierce jsv 2006"/><ref name="chetwynd rs 2006"/>
 
=== Biology ===
Biological systems are vastly more complex and subtle than our best models, so the conservation biologist faces substantial info-gaps in using biological models. For instance, Levy ''et al.'' <ref name="levy et al. 2000"/> use an info-gap robust-satisficing "methodology for identifying management alternatives that are robust to environmental uncertainty, but nonetheless meet specified socio-economic and environmental goals." They use info-gap robustness curves to select among management options for spruce-budworm populations in Eastern Canada. [[Mark Burgman|Burgman]]
<ref>
Burgman, Mark, 2005, ''Risks and Decisions for Conservation and Environmental Management'', Cambridge University Press, Cambridge, p.&nbsp;399.</ref> uses the fact that the robustness curves of different alternatives can intersect to illustrate a change in preference between conservation strategies for the orange-bellied parrot.
 
=== Project management ===
Project management is another area where info-gap uncertainty is common. The project manager often has very limited information about the duration and cost of some of the tasks in the project, and info-gap robustness can assist in project planning and integration.<ref name="tahan ben-asher 2005"/> Financial economics is another area where the future is fraught with surprises, which may be either pernicious or propitious. Info-gap robustness and opportuneness analyses can assist in portfolio design, [[credit rationing]], and other applications.<ref name="colin 2007"/>
 
== Limitations ==
In applying info-gap theory, one must remain aware of certain limitations.
 
Firstly, info-gap makes assumptions, namely on the universe in question and on the degree of uncertainty – the info-gap model is a model of degrees of uncertainty or similarity of various assumptions, within a given universe. Info-gap does not make probability assumptions within this universe – it is non-probabilistic – but it does quantify a notion of "distance from the estimate". In brief, info-gap makes ''fewer'' assumptions than a probabilistic method, but it does make some assumptions.
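For concreteness, the following is a minimal sketch of one such non-probabilistic model of "distance from the estimate" – a hypothetical fractional-error info-gap model with invented numbers; it illustrates the idea rather than any implementation from the info-gap literature.
<syntaxhighlight lang="python">
# Sketch of a fractional-error info-gap uncertainty model (hypothetical example):
#   U(alpha, u_est) = { u : |u - u_est| <= alpha * |u_est| },  alpha >= 0.
# The sets are nested: a larger horizon of uncertainty alpha admits more values.

def in_uncertainty_set(u, u_est, alpha):
    """True if the value u is admitted at horizon of uncertainty alpha."""
    return abs(u - u_est) <= alpha * abs(u_est)

u_est = 100.0                      # invented point estimate
for alpha in (0.1, 0.5, 1.0):
    lo, hi = u_est * (1 - alpha), u_est * (1 + alpha)
    print(f"alpha={alpha}: admitted interval [{lo:.1f}, {hi:.1f}]")

# No probabilities are assigned inside these intervals; alpha only orders
# values by their distance from the estimate.
</syntaxhighlight>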
 
Further, unforeseen events (those not in the universe <math>\mathfrak{U}</math>) are not incorporated: info-gap addresses ''modeled'' uncertainty, not unexpected uncertainty, as in [[black swan theory]], particularly the [[ludic fallacy]]. This is not a problem when the possible events by definition fall in a given universe, but in real-world applications significant events may be "outside the model". For instance, a simple model of daily stock market returns – which by definition fall in the range <math>[-100\%,+\infty\%)</math> – may include extreme moves such as [[Black Monday (1987)]] but might not model the market breakdowns following the [[September 11 attacks]]: it considers the "known unknowns", not the "[[unknown unknowns]]". This is a general criticism of much [[decision theory]], and is by no means specific to info-gap, but info-gap is not immune to it either.
 
Secondly, there is no natural scale: is uncertainty of <math>\alpha = 1</math> small or large? Different models of uncertainty give different scales, and require judgment and understanding of the domain and the model of uncertainty. Similarly, measuring differences between outcomes requires judgment and understanding of the domain.
 
Thirdly, if the universe under consideration is larger than a significant horizon of uncertainty, and outcomes for these distant points are significantly different from those near the estimate, then the conclusion of a robustness or opportuneness analysis will generally be of the form: "one must be very confident of one's assumptions, else outcomes may be expected to vary significantly from projections" – a cautionary conclusion.
 
===Disclaimer and Summary===
The robustness and opportuneness functions can inform decisions. For example, a change in decision that increases robustness may increase or decrease opportuneness. From a subjective stance, robustness and opportuneness both trade off against aspiration for outcome: robustness and opportuneness deteriorate as the decision maker's aspirations increase. Robustness is zero for model-best anticipated outcomes. Robustness curves of alternative decisions may cross as a function of aspiration, implying reversal of preference.
 
Various theorems identify conditions under which larger info-gap robustness implies larger probability of success, regardless of the underlying probability distribution. However, these conditions are technical and do not translate into common-sense, verbal recommendations, which limits such applications of info-gap theory by non-experts.
 
== Criticism ==
A general criticism of non-probabilistic decision rules, discussed in detail at [[Decision_theory#Alternatives_to_probability_theory|decision theory: alternatives to probability theory]], is that optimal decision rules (formally, [[admissible decision rule]]s) can ''always'' be derived by probabilistic methods, with a suitable [[utility function]] and [[prior distribution]] (this is the statement of the complete class theorems), and thus that non-probabilistic methods such as info-gap are unnecessary and do not yield new or better decision rules.
 
A more general criticism of decision making under uncertainty is the impact of outsized, unexpected events, ones that are not captured by the model. This is discussed particularly in [[black swan theory]]; info-gap, used in isolation, is vulnerable to this, as are, ''a fortiori'', all decision theories that use a fixed universe of possibilities, notably probabilistic ones.
 
In criticism specific to info-gap, Sniedovich<ref name="sniedo 2007"/> raises two objections to info-gap decision theory, one substantive, one scholarly:
;1. the info-gap uncertainty model is flawed and oversold: Info-gap models uncertainty via a nested family of subsets around a [[point estimate]], and is touted as applicable under situations of "''severe'' uncertainty". Sniedovich argues that under severe uncertainty, one should not start from a point estimate, which is assumed to be seriously flawed: instead the set one should consider is the universe of possibilities, not subsets thereof. Stated alternatively, under severe uncertainty, one should use ''global'' decision theory (consider the entire region of uncertainty), not ''local'' decision theory (starting with a point estimate and considering deviations from it).
;2. info-gap is maximin: Ben-Haim (2006, p.xii) claims that info-gap is "radically different from all current theories of decision under uncertainty," while Sniedovich argues that info-gap's robustness analysis is precisely maximin analysis of the horizon of uncertainty. By contrast, Ben-Haim states (Ben-Haim 1999, pp.&nbsp;271–2) that "robust reliability is emphatically not a [min-max] worst-case analysis". Note that Ben-Haim compares info-gap to ''minimax,'' while Sniedovich considers it a case of ''maximin.''
 
Sniedovich has challenged the validity of info-gap theory for making decisions under severe uncertainty. He questions the effectiveness of info-gap theory in situations where the best estimate <math>\displaystyle \tilde{u}</math> is a poor indication of the true value of <math>\displaystyle u</math>. Sniedovich notes that the info-gap robustness function is "local" to the region around <math>\displaystyle \tilde{u}</math>, where <math>\displaystyle \tilde{u}</math> is likely to be substantially in error. He concludes that therefore the info-gap robustness function is an unreliable assessment of immunity to error.
 
=== Maximin ===
Sniedovich argues that info-gap's robustness model is a [[Maximin_(decision_theory)|maximin]] analysis not of the outcome but of the horizon of uncertainty: one selects the decision that maximizes the horizon of uncertainty <math>\alpha</math> at which the minimal (critical) outcome is still guaranteed, assuming the worst-case outcome within each horizon. Symbolically, max <math>\alpha</math> assuming min (worst-case) outcome, or maximin.
 
In other words, while it is not a maximin analysis of outcome over the universe of uncertainty, it is a maximin analysis over a properly construed decision space.
 
Ben-Haim argues that info-gap's robustness model is not min-max/maximin analysis because it is not a worst-case analysis of ''outcomes:'' it is a [[satisficing]] model, not an optimization model – a (straightforward) maximin analysis would consider worst-case outcomes over the entire space, which, since uncertainty is often potentially unbounded, would yield an unboundedly bad worst case.
 
=== Stability radius ===
 
Sniedovich<ref name="MS10" /> has shown that info-gap's robustness model is a simple [[stability radius]] model, namely a local stability model of the generic form
 
:<math>\hat{\rho}(\tilde{p}):= \max \ \{\rho\ge 0: p\in P(s),\forall p\in B(\rho,\tilde{p})\}</math>
 
where <math>B(\rho,\tilde{p})</math> denotes a [[Ball (mathematics)|ball]] of radius <math>\rho</math> centered at <math>\tilde{p}</math> and <math>P(s)</math> denotes the set of values of <math>p</math> that satisfy  pre-determined stability conditions.
 
In other words, info-gap's robustness model is a stability radius model characterized by a stability requirement of the form <math>r_{c}\le R(q,p)</math>. Since stability radius models are designed for the analysis of small perturbations in a given nominal value of a parameter, Sniedovich<ref name="MS10" /> argues that info-gap's robustness model is unsuitable for the treatment of severe uncertainty characterized by a poor estimate and a vast uncertainty space.
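To see what such a stability radius looks like in the simplest possible setting, consider a made-up one-dimensional example (not taken from the sources cited above): a scalar performance function <math>R(q,p)=q-p</math>, a requirement <math>r_{c}\le R(q,p)</math>, and an estimate <math>\tilde{p}</math>. The worst value of <math>p</math> in the ball <math>[\tilde{p}-\rho,\,\tilde{p}+\rho]</math> is <math>\tilde{p}+\rho</math>, so

:<math>\hat{\rho}(\tilde{p}) = \max\{\rho\ge 0:\ q-p\ge r_{c}\ \ \forall p\in[\tilde{p}-\rho,\,\tilde{p}+\rho]\} = q-r_{c}-\tilde{p},\qquad \text{provided } q-\tilde{p}\ge r_{c}.</math>

The radius grows linearly with the performance slack at the estimate, and it says nothing about values of <math>p</math> farther than <math>\hat{\rho}(\tilde{p})</math> from <math>\tilde{p}</math>, which is precisely the locality Sniedovich objects to.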
 
== Discussion ==
 
===Satisficing and bounded rationality ===
It is correct that the info-gap robustness function is local, and has restricted quantitative value in some cases. However, a major purpose of decision analysis is to provide focus for subjective judgments: regardless of the formal analysis, a framework for discussion is provided. Without committing to any particular framework, or to the characteristics of frameworks in general, the discussion below concerns proposals for such frameworks.
 
Simon <ref>{{Cite journal | last1 = Simon | first1 = Herbert A. | year = 1959 | title = Theories of decision making in economics and behavioral science | url = | journal = American Economic Review | volume = 49 | issue = | pages = 253–283 }}</ref>
introduced the idea of [[bounded rationality]]. Limitations on knowledge, understanding, and computational capability constrain the ability of decision makers to identify optimal choices. Simon advocated [[satisficing]] rather than optimizing: seeking adequate (rather than optimal) outcomes given available resources. Schwartz,<ref>Schwartz, Barry, 2004, ''The Paradox of Choice: Why More Is Less'', Harper Perennial.</ref>
Conlisk
<ref>{{Cite journal | last1 = Conlisk | first1 = John | year = 1996 | title = Why bounded rationality? | url = | journal = Journal of Economic Literature | volume = XXXIV | issue = | pages = 669–700 }}</ref>
and others discuss extensive evidence for the phenomenon of bounded rationality among human decision makers, as well as for the advantages of satisficing when knowledge and understanding are deficient. The info-gap robustness function provides a means of implementing a satisficing strategy under bounded rationality. For instance, in discussing bounded rationality and satisficing in conservation and environmental management, Burgman notes that "Info-gap theory ... can function sensibly when there are 'severe' knowledge gaps." The info-gap robustness and opportuneness functions provide "a formal framework to explore the kinds of speculations that occur intuitively when examining decision options."
<ref>
Burgman, Mark, 2005, ''Risks and Decisions for Conservation and Environmental Management'', Cambridge University Press, Cambridge, pp.391, 394.</ref> Burgman then proceeds to develop an info-gap robust-satisficing strategy for protecting the endangered orange-bellied parrot. Similarly, Vinot, Cogan and Cipolla <ref name = "Vinot, Cogan 2005">{{Cite journal | last1 = Vinot | first1 = P. | last2 = Cogan | first2 = S. | last3 = Cipolla | first3 = V. | year = 2005 | title = A robust model-based test planning procedure | url = | journal = Journal of Sound and Vibration | volume = 288 | issue = 3| page = 572 }}</ref> discuss engineering design and note that "the downside of a model-based analysis lies in the knowledge that the model behavior is only an approximation to the real system behavior. Hence the question of the honest designer: how sensitive is my measure of design success to uncertainties in my system representation? ... It is evident that if model-based analysis is to be used with any level of confidence then ... [one must] attempt to satisfy an acceptable sub-optimal level of performance while remaining maximally robust to the system uncertainties."<ref name = "Vinot, Cogan 2005" /> They proceed to develop an info-gap robust-satisficing design procedure for an aerospace application.
 
==Alternatives==
Of course, decision-making in the face of uncertainty is nothing new, and attempts to deal with it have a long history. A number of authors have noted and discussed similarities and differences between info-gap robustness and [[minimax]] or worst-case methods.<ref name="ybh 2005 crc"/><ref name="takewaki and ybh 2005" /><ref name="ybh 2005 var"/><ref name="tahan ben-asher 2005"/><ref name="C. Eldar 2005">Z. Ben-Haim and Y. C. Eldar, Maximum set estimators with bounded estimation error, ''IEEE Trans. Signal Processing'', vol. 53, no. 8, August 2005, pp. 3172–3182.</ref><ref>
Babuška, I., F. Nobile and R. Tempone, 2005, Worst case scenario analysis for elliptic problems with uncertainty, ''Numerische Mathematik'', vol. 101, pp. 185–219.</ref>
Sniedovich <ref name="sniedo 2007">{{Cite journal | last1 = Sniedovich | first1 = M. | year = 2007 | title = The art and science of modeling decision-making under severe uncertainty | url = | journal = Decision-Making in Manufacturing and Services | volume = 1 | issue = 1–2| pages = 109–134 |url=http://journals.bg.agh.edu.pl/DECISION/2007-01-02/DM_2007_1_2_07.pdf}}</ref> 
has demonstrated formally that the info-gap robustness function can be represented as a maximin optimization, and is thus related to [[Wald's maximin model]]. Sniedovich <ref name="sniedo 2007"/> has claimed that info-gap's robustness analysis is conducted in the neighborhood of an estimate that is likely to be substantially wrong, concluding that the resulting robustness function is equally likely to be substantially wrong.
 
On the other hand, the estimate is the best one has, so it is useful to know whether it can err greatly and still yield an acceptable outcome. This critical question raises the issue of whether robustness (as defined by info-gap theory) is qualified to judge whether confidence is warranted,<ref name="ybh rrms 1996">Yakov Ben-Haim, ''Robust Reliability in the Mechanical Sciences,'' Springer, Berlin, 1996.</ref><ref>{{Cite journal | doi = 10.1006/mssp.1996.0137 | last1 = Ben-Haim | first1 = Yakov | last2 = Cogan | first2 = Scott | last3 = Sanseigne | first3 = Laetitia | year = 1998 | title = Usability of Mathematical Models in Mechanical Decision Processes | url = | journal = Mechanical Systems and Signal Processing | volume = 12 | issue = | pages = 121–134 }}</ref>
<ref>(See also chapter 4 in Yakov Ben-Haim, Ref. 2.)</ref> and how it compares to methods used to inform decisions under uncertainty using considerations '''not''' limited to the neighborhood of a bad initial guess. Answers to these questions vary with the particular problem at hand. Some general comments follow.
 
=== Sensitivity analysis ===
{{main|Sensitivity analysis}}
 
[[Sensitivity analysis]] – how sensitive conclusions are to input assumptions – can be performed independently of a model of uncertainty: most simply, one may take two different assumed values for an input and compare the conclusions. From this perspective, info-gap can be seen as a technique of sensitivity analysis, though by no means the only one.
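A minimal sketch of this "two assumed values" comparison follows; the model and all numbers are invented purely for illustration.
<syntaxhighlight lang="python">
# One-at-a-time sensitivity check: re-derive the conclusion under two assumed
# values of an uncertain input and compare (hypothetical model and numbers).

def net_revenue(allocation, demand):
    """Invented model: revenue grows with met demand, cost with allocation."""
    return 12.0 * min(allocation, demand) - 5.0 * allocation

for demand in (80, 120):                       # two assumed values of the uncertain input
    best = max(range(0, 201, 10), key=lambda a: net_revenue(a, demand))
    print(f"assumed demand={demand}: best allocation={best}, "
          f"revenue={net_revenue(best, demand):.1f}")

# If the recommended allocation changes substantially between the two runs,
# the conclusion is sensitive to the assumption about demand.
</syntaxhighlight>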
 
=== Robust optimization ===
{{main|Robust optimization}}
The robust optimization literature <ref name="rosenhead 72">{{Cite journal | doi = 10.1057/jors.1972.72 | last1 = Rosenhead | first1 = M.J. | last2 = Elton | first2 = M. | last3 = Gupta | first3 = S.K. | year = 1972 | title = Robustness and Optimality as Criteria for Strategic Decisions | url = | journal = Operational Research Quarterly | volume = 23 | issue = 4| pages = 413–430 }}</ref><ref name="rosenblatt 87">{{Cite journal | doi = 10.1080/00207548708919855 | last1 = Rosenblatt | first1 = M.J. | last2 = Lee | first2 = H.L. | year = 1987 | title = A robustness approach to facilities design | url = | journal = International Journal of Production Research | volume = 25 | issue = 4| pages = 479–486 }}</ref><ref name="kouvelis 97">P. Kouvelis and G. Yu, 1997, Robust Discrete Optimization and Its Applications, Kluwer.</ref><ref name="rustem 02">B. Rustem and M. Howe, 2002, Algorithms for Worst-case Design and Applications to Risk Management, Princeton University Press.</ref><ref name="lempert 03">R.J. Lempert, S.W. Popper, and S.C. Bankes, 2003, Shaping the Next One Hundred Years: New Methods for Quantitative, Long-Term Policy Analysis, The Rand Corporation.</ref><ref name="ben-tal 06">A. Ben-Tal, L. El Ghaoui, and A. Nemirovski, 2006, Mathematical Programming, Special issue on Robust Optimization, Volume 107(1-2).</ref> provides methods and techniques that take a '''global''' approach to robustness analysis. These methods directly address decision under '''severe''' uncertainty, and have been used for this purpose for more than thirty years now. [[Abraham Wald|Wald]]'s [[minimax|Maximin]] model is the main instrument used by these methods. 
 
The principal difference between the [[minimax|Maximin]] model employed by info-gap and the various [[minimax|Maximin]] models employed by robust optimization methods is in the manner in which the total region of uncertainty is incorporated in the robustness model. Info-gap takes a local approach that concentrates on the immediate neighborhood of the estimate. In sharp contrast, robust optimization methods set out to incorporate in the analysis the entire region of uncertainty, or at least an adequate representation thereof. In fact, some of these methods do not even use an estimate.
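The contrast can be made concrete with a small sketch: a local analysis takes the worst case over a small ball around the estimate, while a global analysis takes it over the whole region. The outcome model and all numbers below are invented for illustration.
<syntaxhighlight lang="python">
# Local vs. global worst-case analysis for a hypothetical outcome model.
import numpy as np

def outcome(decision, u):
    """Invented model with an uncertain parameter u."""
    return decision * u - 0.5 * decision ** 2

u_est = 1.0
u_global = np.linspace(-2.0, 4.0, 601)               # entire region of uncertainty
u_local = u_est + np.linspace(-0.2, 0.2, 41)         # small ball around the estimate

for d in (0.5, 1.0, 2.0):
    worst_local = min(outcome(d, u) for u in u_local)
    worst_global = min(outcome(d, u) for u in u_global)
    print(f"decision={d}: local worst={worst_local:.2f}, global worst={worst_global:.2f}")

# Here the local analysis favours d=1.0 while the global analysis favours d=0.5:
# the ranking of decisions by their worst case depends on which region is used.
</syntaxhighlight>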
 
===Comparative analysis===
 
Classical decision theory<ref name="resnik 87">Resnik, M.D., ''Choices: an Introduction to Decision Theory,'' University of Minnesota Press, Minneapolis, MN, 1987.</ref><ref name="french 88">French, S.D., ''Decision Theory,'' Ellis Horwood, 1988.</ref> offers two approaches to decision-making under severe uncertainty, namely [[maximin (decision theory)|maximin]] and Laplace's [[principle of insufficient reason]] (assume all outcomes equally likely); these may be considered alternative solutions to the problem info-gap addresses.
 
Further, as discussed at [[Decision_theory#Alternatives_to_probability_theory|decision theory: alternatives to probability theory]], [[List of mathematical probabilists|probabilists]], particularly Bayesian probabilists, argue that optimal decision rules (formally, [[admissible decision rule]]s) can ''always'' be derived by probabilistic methods (this is the statement of the [[complete class theorems]]), and thus that non-probabilistic methods such as info-gap are unnecessary and do not yield new or better decision rules.
 
==== Maximin ====
As attested by the rich literature on [[robust optimization]], maximin provides a wide range of methods for decision making in the face of severe uncertainty.
 
Indeed, as discussed in [[criticism of info-gap decision theory]], info-gap's robustness model can be interpreted as an instance of the general maximin model.
 
==== Bayesian analysis ====
As for Laplace's [[principle of insufficient reason]], in this context it is convenient to view it as an instance of [[Bayesian probability|Bayesian analysis]].
 
The essence of [[Bayesian probability|Bayesian analysis]] is assigning probabilities to the different possible realizations of the uncertain parameters. In the case of [[Knightian uncertainty|Knightian (non-probabilistic) uncertainty]], these probabilities represent the decision maker's "degree of belief" in a specific realization.
 
In our example, suppose there are only five possible realizations of the uncertain revenue-to-allocation function. The decision maker believes that the estimated function is the most likely, and that the likelihood decreases as the difference from the estimate increases. Figure 11 exemplifies such a probability distribution.
[[Image:IGT-example11.png|thumb|right|Figure 11 – Probability distribution of the revenue function realizations]]
 
Now, for any allocation, one can construct a probability distribution of the revenue, based on these prior beliefs. The decision maker can then choose the allocation with the highest expected revenue, with the lowest probability of an unacceptable revenue, and so on.
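A minimal sketch of such a computation follows. The five realizations, the prior probabilities (analogous to the distribution sketched in Figure 11) and the candidate allocations are all invented for illustration.
<syntaxhighlight lang="python">
# Expected revenue of candidate allocations under a discrete prior over five
# possible realizations of the revenue function (all numbers hypothetical).

def make_revenue_fn(scale):
    """One possible realization of the revenue-to-allocation function."""
    return lambda allocation: scale * allocation ** 0.5

realizations = [make_revenue_fn(s) for s in (6.0, 8.0, 10.0, 12.0, 14.0)]
prior        = [0.10, 0.20, 0.40, 0.20, 0.10]   # the estimate (scale 10) deemed most likely

for allocation in (25.0, 49.0, 100.0):
    expected  = sum(p * f(allocation) for p, f in zip(prior, realizations))
    shortfall = sum(p for p, f in zip(prior, realizations) if f(allocation) < 80.0)
    print(f"allocation={allocation}: expected revenue={expected:.1f}, "
          f"P(revenue < 80)={shortfall:.2f}")

# The decision maker can pick the allocation with the highest expected revenue,
# or the one with the lowest probability of an unacceptably low revenue.
</syntaxhighlight>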
 
The most problematic step of this analysis is the choice of the probabilities of the realizations. When there is extensive and relevant past experience, an expert may use this experience to construct a probability distribution. But even with extensive past experience, when some parameters change, the expert may only be able to estimate that <math>A</math> is more likely than <math>B</math>, but will not be able to reliably quantify this difference. Furthermore, when conditions change drastically, or when there is no past experience at all, it may prove difficult even to estimate whether <math>A</math> is more likely than <math>B</math>.
 
Nevertheless, methodologically speaking, this difficulty is not as problematic as basing the analysis of a problem subject to severe uncertainty on a single point estimate and its immediate neighborhood, as done by info-gap. What is more, contrary to info-gap, this approach is global rather than local.
 
Still, it must be stressed that Bayesian analysis does not expressly concern itself with the question of robustness.
 
It should also be noted that Bayesian analysis raises the issue of ''learning from experience'' and adjusting probabilities accordingly. In other words, decision-making is not a one-shot process, but profits from a sequence of decisions and observations.
 
== Classical decision theory perspective ==
 
In the framework of classical [[decision theory]], info-gap's robustness model can be construed as an instance of [[Abraham Wald|Wald]]'s [[minimax|Maximin]] model and its opportuneness model is an instance of the classical Minimin model. Both operate in the neighborhood of an estimate of the parameter of interest whose true value is subject to ''severe'' uncertainty and therefore is likely to be ''substantially wrong''. Moreover, the considerations brought to bear upon the decision process itself also originate in the locality of this unreliable estimate, and so may or may not be reflective of the entire range of decisions and uncertainties.
 
=== Background, working assumptions, and a look ahead ===
 
Decision under severe uncertainty is a formidable task and the development of methodologies capable of handling this task is an even more arduous undertaking.  Indeed, over the past sixty years an enormous effort has gone into the development of such methodologies.  Yet, for all the knowledge and expertise that have accrued in this area of decision theory, no fully satisfactory general methodology is available to date.
 
Now, as portrayed in the info-gap literature, info-gap was designed expressly as a methodology for solving decision problems that are subject to severe uncertainty.  What is more, its aim is to seek solutions that are '''robust'''.
 
Thus, to have a clear picture of info-gap's modus operandi and its role and place in decision theory and robust optimization, it is imperative to examine it within this context.  In other words, it is necessary to establish info-gap's relation to classical decision theory and robust optimization.
To this end, the following questions must be addressed:
* What are the characteristics of decision problems that are subject to severe uncertainty?
* What difficulties arise in the modelling and solution of such problems?
* What type of robustness is sought?
* How does info-gap theory address these issues?
* In what way is info-gap decision theory similar to and/or different from other theories for decision under uncertainty?
 
Two important points need to be elucidated in this regard at the outset:
* Considering the '''severity''' of the uncertainty that info-gap was designed to tackle, it is essential to clarify the difficulties posed by severe uncertainty.
* Since info-gap is a '''non-probabilistic''' method that seeks to '''maximize robustness''' to uncertainty, it is imperative to compare it to the single most important "non-probabilistic" model in classical decision theory, namely Wald's '''Maximin''' paradigm (Wald 1945, 1950).  After all, this paradigm has dominated the scene in classical decision theory for well over sixty years now.
 
So, first let us clarify the assumptions that are implied by '''severe''' uncertainty.
 
==== Working assumptions ====
 
Info-gap decision theory employs three simple constructs to capture the uncertainty associated with decision problems:
# A parameter <math>\displaystyle u</math> whose true value is subject to severe uncertainty.
# A region of uncertainty <math>\displaystyle \mathfrak{U}\ </math> where the true value of <math>\displaystyle u \ </math> lies.
# An estimate <math>\ \displaystyle \tilde{u}\ </math> of the true value of <math>\displaystyle u \ </math>.
 
It should be pointed out, though, that as such these constructs are generic: they can be employed to model situations where the uncertainty is not severe but mild, indeed very mild. To give apt expression to the '''severity''' of the uncertainty, the info-gap framework therefore gives these three constructs the following specific meaning.
<blockquote style="background:beige;font-size:115%; padding:5px; border:1px dashed darkcyan">
[[Image:Assumption.png|right]]
<center>''Working Assumptions''</center>
# The region of uncertainty <math>\displaystyle \mathfrak{U}\ </math> is '''relatively large.'''<br> In fact, Ben-Haim (2006, p.&nbsp;210) indicates that in the context of info-gap decision theory most of the commonly encountered regions of uncertainty are '''unbounded.'''
# The estimate <math>\displaystyle \tilde{u}\ </math> is a '''poor''' approximation of the true value of <math>\displaystyle \ u\ </math>.<br> That is, the estimate is a '''poor''' indication of the true value of <math>\displaystyle \ u\ </math> (Ben-Haim, 2006, p.&nbsp;280) and is likely to be '''substantially wrong''' (Ben-Haim, 2006, p.&nbsp;281).
 
In the picture <math>\displaystyle  u^{\circ}\ </math> represents the true (unknown) value of <math>\ \displaystyle u\ </math>.
 
The point to note here is that conditions of severe uncertainty entail that the estimate <math>\displaystyle  \tilde{u}\ </math>  can—relatively speaking—be very distant from the true value <math>\displaystyle  u^{\circ}\ </math>. This is particularly pertinent for methodologies, like info-gap,  that seek '''robustness''' to uncertainty. Indeed, assuming otherwise  would—methodologically speaking—be tantamount to engaging in wishful thinking.
</blockquote>
 
In short, the situations that info-gap is designed to take on are demanding in the extreme.  Hence, the challenge that one faces conceptually, methodologically and technically is considerable.  It is essential therefore to examine whether info-gap robustness analysis succeeds in this task, and whether the tools that it deploys in this effort are different from those made available by Wald's (1945) Maximin paradigm especially for robust optimization.
 
So let us take a quick look at this stalwart of classical decision theory and robust optimization.
 
==== Wald's Maximin paradigm ====
 
The basic idea behind this famous paradigm can be expressed in plain language as follows:
<blockquote style="background:beige;font-size:115%; padding:5px; border:1px dashed darkcyan">
<center>''Maximin Rule''</center>
The maximin rule tells us to rank alternatives by their worst possible outcomes: we are to adopt the alternative the worst outcome of which is superior to the worst outcome of the others.<div align="right">[[John Rawls|Rawls]] <ref name="rawls">Rawls, J. Theory of Justice, 1971, Belknap Press, Cambridge, MA.</ref>(1971, p.&nbsp;152)</div>
 
</blockquote>
 
Thus,  according to this paradigm,  in the framework of decision-making under severe uncertainty, the robustness of an alternative is a measure of how well this alternative can cope with the '''worst uncertain outcome''' that it can generate. Needless to say, this attitude towards severe uncertainty often leads to the selection of highly '''conservative''' alternatives.  This is precisely the reason that this paradigm is not always a satisfactory methodology for decision-making under severe uncertainty (Tintner 1952).
 
As indicated in the overview, info-gap's robustness model is a Maximin model in disguise. More specifically, it is a simple instance of Wald's Maximin model where:
# The region of uncertainty associated with an alternative decision is an immediate neighborhood of the estimate <math>\displaystyle \tilde{u}\ </math>.
# The uncertain outcomes of an alternative are determined by a characteristic function of the performance requirement under consideration.
 
Thus, aside from the '''conservatism''' issue, a far more serious issue must be addressed.  This is the '''validity''' issue arising from  the '''local'''  nature of info-gap's robustness analysis.
 
==== Local vs global robustness ====
 
[[Image:Maximin assumption.png|right]]
The validity of the results generated by info-gap's robustness analysis is crucially contingent on the quality of the estimate <math>\displaystyle \tilde{u}\ </math>.  Alas, according to info-gap's own working assumptions, this estimate is poor and likely to be substantially wrong (Ben-Haim, 2006, pp.&nbsp;280–281).
 
The trouble with this feature of info-gap's robustness model is brought out more forcefully by the picture. The white circle represents the immediate neighborhood of the estimate <math>\ \displaystyle \tilde{u}\ </math> on which the Maximin analysis is conducted. Since the region of uncertainty is large and the quality of the estimate is poor, it is very likely that the true value of <math>\ \displaystyle u\ </math> is distant from the point at which the Maximin analysis is conducted.
 
So given the severity of the uncertainty under consideration, how valid/useful can this type of Maximin analysis really be?
 
The critical issue here is, then, to what extent a '''local''' robustness analysis a la Maximin, conducted in the immediate neighborhood of a poor estimate, can aptly represent a large region of uncertainty.
 
It should be pointed out that, in comparison, robust optimization methods invariably take a far more global view of robustness, so much so that '''scenario planning''' and '''scenario generation''' are central issues in this area.  This reflects a strong commitment to an adequate representation of the entire region of uncertainty in the definition of robustness and in the robustness analysis itself.
 
And finally there is another reason why the intimate relation to Maximin is crucial to this discussion. This has to do with the portrayal of info-gap's contribution to the state of the art in decision theory, and its role and place vis-a-vis other methodologies.
 
==== Role and place in decision theory ====
 
Info-gap is emphatic about its advancement of the state of the art in decision theory (color is used here for emphasis):
<blockquote style="background:beige;font-size:115%; padding:5px; border:1px dashed darkcyan">
Info-gap decision theory is <span style="color:darkcyan;">radically different from all current theories</span> of decision under uncertainty. The difference originates in the <span style="color:darkcyan;">modelling of uncertainty</span> as an information gap <span style="color:darkcyan;">rather than as a probability</span>. <div align="right"> Ben-Haim (2006, p.xii)</div>
 
In this book we concentrate on the fairly <span style="color:darkcyan;">new concept</span> of information-gap uncertainty, whose <span style="color:darkcyan;">differences</span> from more classical approaches to uncertainty are <span style="color:darkcyan;">real and deep</span>. Despite the power of classical decision theories, in many areas such as engineering, economics, management, medicine and public policy, a need has arisen for a <span style="color:darkcyan;">different format</span> for decisions based on <span style="color:darkcyan;">severely uncertain</span> evidence. <div align="right">Ben-Haim (2006, p.&nbsp;11)</div> 
</blockquote>
 
These strong claims must be substantiated. In particular, a clear-cut, unequivocal answer must be given to the following question: in what way is info-gap's generic robustness model different, indeed <span style="color:darkcyan;">radically different</span>, from <span style="color:darkcyan;">worst-case analysis</span> a la  <span style="color:darkcyan;">Maximin</span>?
 
Subsequent sections of this article describe various aspects of info-gap decision theory and its applications, how it  proposes to cope with the working assumptions outlined above, the local nature of info-gap's robustness analysis and its intimate  relationship with Wald's classical Maximin paradigm and worst-case analysis.
 
=== Invariance property ===
 
The main point to keep in mind here is that info-gap's raison d'être is to provide a methodology for decision under '''severe''' uncertainty.  This means that its primary test is the efficacy with which it handles and copes with '''severe''' uncertainty.  To this end, it must first be established how info-gap's robustness/opportuneness models behave as the '''severity''' of the uncertainty is increased or decreased.
 
Second, it must be established whether info-gap's robustness/opportuneness models give adequate expression to the potential variability of the performance function over the entire region of uncertainty.  This is particularly important because info-gap is usually concerned with relatively large, indeed unbounded, regions of uncertainty.
 
So, let <math>\ \displaystyle \mathfrak{U} \ </math> denote the total region of uncertainty and consider these key questions:
<blockquote style="font-size:110%">
* How does the robustness/opportuneness analysis respond to an increase/decrease in the size of <math>\ \displaystyle \mathfrak{U} \ </math>?
* How does an increase/decrease in the size of <math>\ \displaystyle \mathfrak{U} \ </math> affect the robustness or opportuneness of a decision?
* How representative are the results generated by info-gap's robustness/opportuneness analysis of what occurs in the relatively large total region of uncertainty <math>\ \displaystyle \mathfrak{U} \ </math>?
</blockquote>
 
[[Image:Invariance gray1.png|right]]
 
Suppose then that the robustness <math>\ \displaystyle \hat{\alpha}(q,r_{c}) \ </math> has been computed for a decision <math>\ \displaystyle q\in \mathcal{Q}\ </math>  and it is observed that <math>\ \displaystyle \ \mathcal{U}(\alpha^{*},\tilde{u}) \subseteq \mathfrak{U}\ </math> where <math>\ \displaystyle \alpha^{*}=\hat{\alpha}(q,r_{c}) + \varepsilon \ </math>&nbsp; for some <math>\ \displaystyle \varepsilon > 0\ </math>.
 
The question is then: how would the robustness of <math>\ \displaystyle q \ </math>, namely  <math>\ \displaystyle \hat{\alpha}(q,r_{c}) \ </math>, be affected if the region of uncertainty were, say, twice as large as <math>\ \displaystyle \mathfrak{U} \ </math>, or perhaps even 10 times as large?
 
Consider then the following result, which is a direct consequence of the local nature of info-gap's robustness/opportuneness analysis and of the nesting property of info-gap's regions of uncertainty (Sniedovich 2007):
 
==== Invariance Theorem ====
The robustness of decision <math>\ \displaystyle q \ </math> is invariant with the size of the total region of uncertainty <math>\ \displaystyle \mathfrak{U} \ </math> for all <math>\ \displaystyle \mathfrak{U} \ </math> such that
:{| width="70%" border="0"
| (7)<!-- eq 7 ig-->
|<math>\mathcal{U}(\hat{\alpha}(q,r_{c})+\varepsilon,\tilde{u}) \subseteq \mathfrak{U}\ </math>&nbsp; for some <math>\ \displaystyle \varepsilon > 0\ .</math> &nbsp; &nbsp; &nbsp; &nbsp;&nbsp;&nbsp;  &nbsp;&nbsp;&nbsp;<math> \Box</math>
|}
 
In other words, for any given decision, info-gap's analysis yields the same results for all total regions of uncertainty that contain <math>\ \displaystyle \ \mathcal{U}(\alpha^{*},\tilde{u}) \ </math>. This applies to both the robustness and opportuneness models.
 
This is illustrated in the picture: the robustness of a given decision does not change notwithstanding an increase in the region of uncertainty from <math>\ \displaystyle \mathfrak{U} \ </math> to  <math>\ \displaystyle \mathfrak{U}''' \ </math>.
 
In short, by dint of focusing exclusively on the immediate neighborhood of the estimate <math>\ \displaystyle \tilde{u} \ </math>, info-gap's robustness/opportuneness models are inherently '''local'''.  For this reason they are, '''in principle''', incapable of incorporating in the analysis of  <math>\ \displaystyle \hat{\alpha}(q,r_{c}) \ </math> and <math>\ \displaystyle \hat{\beta}(q,r_{c}) \ </math> regions of uncertainty that lie outside the neighborhoods <math>\mathcal{U}(\hat{\alpha}(q,r_{c}),\tilde{u})\ </math> and <math>\mathcal{U}(\hat{\beta}(q,r_{c}),\tilde{u})\ </math> of the estimate <math>\ \displaystyle \tilde{u} \ </math>, respectively.
 
To illustrate, consider a simple numerical example where the total region of uncertainty is <math>\mathfrak{U}=(-\infty,\infty),\ </math> the estimate is <math>\ \displaystyle \tilde{u}=0 \ </math>  and for some decision <math>\ \displaystyle \hat{q} \ </math> we obtain <math>\mathcal{U}(\hat{\alpha}(\hat{q},r_{c}),\tilde{u})=(-2,2)</math>. The picture is this:
[[Image:Nomansland.png|center]]
 
where the term '' "No man's land" ''&nbsp; refers to the part of the total region of uncertainty that is outside the region <math>\ \displaystyle \mathcal{U}(\hat{\alpha}(q,r_{c})+\varepsilon,\tilde{u}) \ </math>.
 
Note that in this case the robustness of decision <math>\ \displaystyle \hat{q} \ </math> is based on its (worst-case) performance over no more than a minuscule part of the total region of uncertainty, namely an immediate neighborhood of the estimate <math>\ \displaystyle \tilde{u} \ </math>. Since info-gap's total region of uncertainty is usually unbounded, this illustration represents the ''usual'' case rather than an exception.
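The invariance can be checked numerically with a small sketch. The performance function and the interval uncertainty model below are invented, chosen so that the robustness region is roughly <math>(-2,2)</math> as in the illustration; a grid search stands in for the exact optimization.
<syntaxhighlight lang="python">
# Numerical check of the invariance property: the robustness of a decision is
# unchanged when the total region of uncertainty is enlarged, as long as it
# already contains the ball of radius alpha_hat + epsilon (hypothetical model).
import numpy as np

r_c, u_est = 0.0, 0.0

def reward(u):                        # invented reward model for a fixed decision q
    return 4.0 - u ** 2               # requirement r_c <= R holds exactly for |u| <= 2

def robustness(total_bound):
    """Largest alpha such that the whole ball of radius alpha around u_est,
    truncated to the total region [-total_bound, total_bound], satisfies r_c <= R."""
    best = 0.0
    for alpha in np.linspace(0.0, 10.0, 2001):
        lo = max(u_est - alpha, -total_bound)
        hi = min(u_est + alpha, total_bound)
        if np.all(reward(np.linspace(lo, hi, 401)) >= r_c):
            best = alpha
    return best

for bound in (5.0, 50.0, 5000.0):     # ever larger total regions of uncertainty
    print(f"total region [-{bound}, {bound}]  ->  robustness = {robustness(bound):.2f}")

# The printed robustness stays 2.00 in every case: enlarging the "no man's land"
# outside the ball of radius ~2 has no effect on the info-gap analysis.
</syntaxhighlight>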
 
The thing to note, then, is that info-gap's robustness and opportuneness are '''by definition local properties.'''  As such they cannot assess the performance of decisions over the total region of uncertainty. For this reason it is not clear how info-gap's robustness/opportuneness models can provide a meaningful/sound/useful basis for decision under severe uncertainty, where the estimate is poor and is likely to be substantially wrong.
 
This crucial issue is addressed in subsequent sections of this article.
 
=== Maximin/Minimin: playing robustness/opportuneness games with Nature ===
 
For well over sixty years now [[Abraham Wald|Wald]]'s [[minimax|Maximin]] model has figured in classical [[decision theory]] and related areas – such as [[robust optimization]] – as the foremost non-probabilistic paradigm for modeling and treatment of severe uncertainty.
 
Info-gap is propounded (e.g. Ben-Haim 2001, 2006) as a new non-probabilistic theory that is radically different from all current theories for decision under uncertainty. So, it is imperative to examine in this discussion in what way, if any, info-gap's robustness model is radically different from [[minimax|Maximin]]. For one thing, there is a well-established assessment of the utility of [[minimax|Maximin]]. For example, Berger (Chapter 5)<ref name=Berger>
{{cite book
|author=James O Berger
|title=Statistical decision theory and Bayesian analysis
|year=1985
|edition=Second
|publisher=Springer Science + Business Media
|location=New York
|isbn=0-387-96098-8
|url=http://books.google.com/?id=oY_x7dE15_AC&pg=PA100&dq=isbn=0387960988#PPA331,M1}}
</ref> suggests that even in situations where no prior information is available (a best case for [[minimax|Maximin]]), [[minimax|Maximin]] can lead to bad decision rules and be hard to implement. He recommends [[Bayesian inference|Bayesian methodology]]. And as indicated above,
<blockquote style="font-size:120%;width=700px">
It should also be remarked that the minimax principle even if it is applicable leads to an extremely conservative policy.
<div align="right">Tintner (1952, p.&nbsp;25)<ref name="tintner 52">{{Cite journal | doi = 10.1214/aoms/1177729482 | last1 = Tintner | first1 = G. | year = 1952 | title = Abraham Wald's contributions to econometrics | url = | journal = The Annals of Mathematical Statistics | volume = 23 | issue = 1| pages = 21–28 }}</ref></div>
</blockquote>
 
However, quite apart from the ramifications that establishing this point might have for the utility of info-gap's robustness model, the reason that it behooves us to clarify the relationship between info-gap and [[minimax|Maximin]] is the centrality of the latter in decision theory.  After all, this is a major classical decision methodology.  So, any theory claiming to furnish a new non-probabilistic methodology for decision under severe uncertainty would be expected to be compared to this stalwart of decision theory.  And yet, not only is a comparison of info-gap's robustness model to [[minimax|Maximin]] absent from the three books expounding info-gap (Ben-Haim 1996, 2001, 2006), but [[minimax|Maximin]] is not even mentioned in them as the major decision-theoretic methodology for severe uncertainty that it is.
Elsewhere in the info-gap literature, one can find discussions dealing with similarities and differences between these two paradigms, as well as discussions on the relationship between info-gap and worst-case analysis.<ref name="ybh 2005 crc"/><ref name="takewaki and ybh 2005"/><ref name="ybh 2005 var"/><ref name="tahan ben-asher 2005"/><ref name="C. Eldar 2005"/><ref>{{Cite journal | doi = 10.1007/s00211-005-0601-x | last1 = Babuška | first1 = I. | last2 = Nobile | first2 = F. | last3 = Tempone | first3 = R. | year = 2005 | title = Worst case scenario analysis for elliptic problems with uncertainty | url = | journal = Numerische Mathematik | volume = 101 | issue = 2| pages = 185–219 }}</ref>
However, the general impression is that the intimate connection between these two paradigms has not been identified.  Indeed, the opposite is argued. For instance, Ben-Haim (2005<ref name="ybh 2005 var"/>) argues that info-gap's robustness model is similar to [[minimax|Maximin]] but is not a [[minimax|Maximin]] model.
 
The following quote eloquently expresses Ben-Haim's assessment of info-gap's relationship to Maximin and it provides ample motivation for the analysis that follows.
<blockquote style="font-size:110%;border:1px dashed darkcyan;background-color:beige;padding:5px">We note that robust reliability is emphatically '' not '' a worst-case analysis. In classical worst-case min-max analysis the designer minimizes the impact of the maximally damaging case. But an info-gap model of uncertainty is an unbounded family of nested sets: <math> \ \displaystyle \mathcal{U}(\alpha,\tilde{u}) \ </math>, for all  <math>\ \displaystyle \alpha\ge 0 \ </math>. Consequently, there is no worst case: any adverse occurrence is less damaging than some other more extreme event occurring at a larger value of <math>\ \displaystyle \alpha \ </math>. What Eq. (1) expresses is the greatest level of uncertainty consistent with no-failure. When the designer chooses q to maximize  <math>\ \displaystyle \hat{\alpha}(q, r_{c}) \ </math> he is maximizing his immunity to an unbounded ambient uncertainty. The closest this comes to "min-maxing" is that the design is chosen so that "bad" events (causing reward <math>\ \displaystyle  R\ </math> less than <math>\ \displaystyle  r_{c}\ </math>) occur as "far away" as possible (beyond a maximized value of <math>\ \displaystyle \hat{\alpha} \ </math>).
<div align="right"> Ben-Haim , 1999, pp.&nbsp;271–2<ref name="YBH 99">{{Cite journal | last1 = Ben-Haim | first1 = Y. | year = 1999 | title = Design certification with information-gap uncertainty | url = | journal = Structural Safety | volume = 2 | issue = | pages = 269–289 }}</ref></div>
</blockquote>
 
The point to note here is that this statement misses the fact that the horizon of uncertainty <math>\ \displaystyle \alpha \ </math> is bounded above (implicitly) by the performance requirement
<center>
<math> r_{c} \le R(q,u),\forall u\in \mathcal{U}(\alpha,\tilde{u})</math>
</center>
and that info-gap conducts its worst-case analysis – one analysis at a time for a given <math>\ \displaystyle \alpha \ge 0 \ </math> – within each of the regions of uncertainty <math>\displaystyle \ \mathcal{U}(\alpha,\tilde{u}), \alpha\ge 0 \ </math>.
 
In short, given the discussions in the info-gap literature on this issue, it is obvious that the kinship between info-gap's robustness model and [[Abraham Wald|Wald's]] [[minimax|Maximin]] model, as well as info-gap's kinship with other models of classical decision theory, must be brought to light. So, the objective in this section is to place info-gap's robustness and opportuneness models in their proper context, namely within the wider frameworks of classical [[decision theory]] and [[robust optimization]].
 
The discussion is based on the classical decision theoretic perspective outlined by Sniedovich (2007<ref name="sniedo 07">{{Cite journal | last1 = Sniedovich | first1 = M. | year = 2007 | title = The art and science of modeling decision-making under severe uncertainty | url = http://www.dmms.agh.edu.pl/Volume_1_2/Sniedovich.pdf | journal = Decision-Making in Manufacturing and Services | volume = 1 | issue = 1–2| pages = 111–136 }}</ref>) and on standard texts in this area (e.g. Resnik 1987,<ref name="resnik 87"/> French 1988<ref name="french 88"/>).
 
<center>
<div style="border:1px red dashed;background-color:beige;font-size:115%;padding:5px;color:#333333"> Certain parts of the exposition that follows have a mathematical slant. <br> This is unavoidable because info-gap's models are mathematical. </div>
</center>
 
==== Generic models ====
 
The basic conceptual framework that classical decision theory provides for dealing with uncertainty is that of a two-player game.  The two players are the decision maker (DM) and '''Nature,''' where Nature represents uncertainty. More specifically, Nature represents the DM's attitude towards uncertainty and risk.
 
Note that a clear distinction is made in this regard between a '''pessimistic''' decision maker and an '''optimistic''' decision maker,  namely between a '''worst-case''' attitude and a '''best-case''' attitude. A pessimistic decision maker assumes that Nature plays '''against''' him whereas an optimistic decision maker assumes that Nature plays '''with''' him.
 
To express these intuitive notions mathematically, classical [[decision theory]] uses a simple model consisting of the following three constructs:
<blockquote style="font-size:115%">
* A set <math>\ \displaystyle D</math> representing the ''decision space'' available to the DM.
* A set of sets <math>\ \displaystyle \{S(d): d\in D\}\ </math> representing ''state spaces'' associated with the decisions in  <math>\ \displaystyle D </math>.
* A function <math>\ \displaystyle g=g(d,s)</math> stipulating the ''outcomes'' generated by the  decision-state pairs <math>\ \displaystyle (d,s)\ </math>.
</blockquote>
 
The function <math>\ \displaystyle g \ </math> is called the ''objective function'', ''payoff function'', ''return function'', ''cost function'', etc.
 
The decision-making process (game) defined by these objects consists of three steps:
<blockquote style="font-size:115%">
* '''Step 1:''' The DM selects a decision  <math>\ \displaystyle d\in D \ </math>.
* '''Step 2:''' In response, given <math>\ \displaystyle d\ </math>,  Nature  selects a state <math>\ \displaystyle s\in S(d)\ </math>.
* '''Step 3:''' The outcome <math>\ \displaystyle g(d,s)</math> is allotted to the DM.
</blockquote>
 
Note that in contrast to games considered in classical [[game theory]], here the first player (the DM) moves first, so that the second player (Nature) knows what decision was selected by the first player prior to selecting her own. Thus, the conceptual and technical complications regarding the existence of a [[Nash equilibrium|Nash equilibrium point]] are not pertinent here. Nature is not an independent player; it is a conceptual device describing the DM's attitude towards uncertainty and risk.
 
At first sight, the simplicity of this framework may strike one as naive.  Yet, as attested by the variety of specific instances that it encompasses, it is rich in possibilities, flexible, and versatile. For the purposes of this discussion it suffices to consider the following classical generic setup:
<center>
<math>
\begin{array}{cccc}
z^{*}= & \stackrel{DM}{\mathop{Opt}}&\stackrel{Nature}{\mathop{opt}}\quad & g(d,s)\\[-0.05in]
& d\in D & s\in S(d) &
\end{array}
</math>
</center>
 
where <math>\ \displaystyle \mathop{Opt} \ </math> and <math> \displaystyle \mathop{opt}\ </math> represent the DM's and Nature's optimality criteria, respectively, that is, each is equal to either <math>\ \displaystyle \max\ </math>  or <math>\ \displaystyle \min\ </math>.
 
If <math>\ \displaystyle \mathop{Opt} = \mathop{opt}\ </math> then the game is '''cooperative,''' and if <math>\ \displaystyle \mathop{Opt} \neq \mathop{opt}\ </math> then the game is '''non-cooperative.'''  Thus, this format represents four cases: two non-cooperative games (Maximin and Minimax) and two cooperative games (Minimin and Maximax). The respective formulations are as follows:
<center>
<math>
\begin{array}{c||c}
\textit{Worst-Case\ Pessimism} & \textit{Best-Case\ Optimism}\\
\hline
Maximin \ \ \ \ \ \ \ \ \ \ \ Minimax & Minimin \ \ \ \ \ \ \ \ \ \ \ \ \ Maximax\\
\displaystyle \max_{d\in D}\,\min_{s\in S(d)}\,g(d,s) \ \ \  \displaystyle \min_{d\in D}\,\max_{s\in S(d)}\,g(d,s)  & \displaystyle \min_{d\in D}\,\min_{s\in S(d)}\,g(d,s) \ \ \ \displaystyle \max_{d\in D}\,\max_{s\in S(d)}\,g(d,s)
\end{array}
</math>
</center>
 
Each case is specified by a pair of optimality criteria employed by the DM and Nature. For example, [[minimax|Maximin]] depicts a situation where the DM strives to maximize the outcome and Nature strives to minimize it. Similarly, the Minimin paradigm represents situations where both the DM and Nature strive to minimize the outcome.
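The four cases can be spelled out with a tiny sketch over finite decision and state spaces; the payoff numbers are invented, and the state space is taken to be the same for every decision purely for simplicity.
<syntaxhighlight lang="python">
# Maximin, Minimax, Minimin and Maximax over small finite spaces.
# g[d][s] is the outcome of decision d in state s (hypothetical numbers).
g = {
    "d1": {"s1": 3, "s2": 7, "s3": 1},
    "d2": {"s1": 4, "s2": 4, "s3": 4},
    "d3": {"s1": 0, "s2": 9, "s3": 6},
}

maximin = max(g, key=lambda d: min(g[d].values()))   # pessimist, outcome is a gain
minimax = min(g, key=lambda d: max(g[d].values()))   # pessimist, outcome is a cost
minimin = min(g, key=lambda d: min(g[d].values()))   # optimist, outcome is a cost
maximax = max(g, key=lambda d: max(g[d].values()))   # optimist, outcome is a gain

print("Maximin choice:", maximin)   # d2: its worst outcome (4) beats 1 and 0
print("Minimax choice:", minimax)   # d2: its largest outcome (4) is the smallest maximum
print("Minimin choice:", minimin)   # d3: can reach an outcome as low as 0
print("Maximax choice:", maximax)   # d3: can reach an outcome as high as 9
</syntaxhighlight>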
 
Of particular interest to this discussion are the Maximin and Minimin paradigms because they subsume info-gap's robustness and opportuneness models, respectively. So, here they are:
<blockquote style="font-size:115%">
{|
|- valign="top"
|  &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Maximin Game: &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;
| <math>\ \displaystyle \max_{d\in D}\,\min_{s\in S(d)}\,g(d,s)</math>
|}
 
* '''Step 1:''' The DM selects a decision  <math>\ \displaystyle d\in D \ </math> with a view to <span style="color:darkgreen;">maximize</span> the outcome <math>\ \displaystyle g(d,s) \ </math>.
* '''Step 2:''' In response, given <math>\ \displaystyle d\ </math>,  Nature  selects a state in <math>\ \displaystyle S(d)\ </math> that minimizes <math> \ \displaystyle g(d,s) \ </math> over <math>\ \displaystyle S(d) \ </math>.
* '''Step 3:''' The outcome <math>\ \displaystyle g(d,s)</math> is allotted to the DM.
</blockquote>
 
<blockquote style="font-size:115%">
{|
|- valign="top"
|  &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Minimin Game: &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;
| <math>\ \displaystyle \min_{d\in D}\,\min_{s\in S(d)}\,g(d,s)</math>
|}
 
* '''Step 1:''' The DM selects a decision  <math>\ \displaystyle d\in D \ </math> with a view to <span style="color:darkgreen;">minimize</span> the outcome <math>\ \displaystyle g(d,s) \ </math>.
* '''Step 2:''' In response, given <math>\ \displaystyle d\ </math>,  Nature  selects a state in <math>\ \displaystyle S(d)\ </math> that minimizes <math> \ \displaystyle g(d,s) \ </math> over <math>\ \displaystyle S(d) \ </math>.
* '''Step 3:''' The outcome <math>\ \displaystyle g(d,s)</math> is allotted to the DM.
</blockquote>
 
With this in mind,  consider now info-gap's robustness and opportuneness models.
 
==== Info-gap's robustness model ====
 
From a classical decision-theoretic point of view, info-gap's robustness model is a game between the DM and Nature, where the DM selects the value of <math>\ \displaystyle \alpha \ </math> (aiming for the largest possible) whereas Nature selects the worst value of <math>\ \displaystyle  u \ </math> in <math>\ \displaystyle \mathcal{U}(\alpha,\tilde{u}) \ </math>. In this context the worst value of <math>\ \displaystyle u \ </math> pertaining to a given <math>\ \displaystyle (q,\alpha) \ </math> pair is a <math>\ \displaystyle  u\in \mathcal{U}(\alpha,\tilde{u}) \ </math> that violates the performance requirement <math>\ \displaystyle r_{c} \le R(q,u) \ </math>, if such a value exists. Nature searches for it by minimizing <math>\ \displaystyle R(q,u)\ </math> over <math>\ \displaystyle \mathcal{U}(\alpha,\tilde{u})\ </math>.
 
There are various ways to incorporate the DM's objective and Nature's antagonistic response in a single outcome. For instance, one can use the following characteristic function for this purpose:
<center>
<math>
\varphi(q,\alpha,u):=\begin{cases}
\quad \alpha &, \ \ r_{c} \le R(q,u) \\
-\infty &, \ \ r_{c} > R(q,u)
\end{cases}  \ , \  q\in \mathcal{Q}, \alpha\ge 0, u\in \mathcal{U}(\alpha,\tilde{u})
</math>
</center>
 
Note that, as desired, for any triplet <math>\ \ (q,\alpha,u)\ </math> of interest we have
<center>
<math>
r_{c} \le R(q,u) \ \ \ \longleftrightarrow \ \ \ \alpha \le \varphi(q,\alpha,u)
</math>
</center>
 
hence from the DM's point of view satisficing the performance constraint is equivalent to maximizing &nbsp; <math>\ \displaystyle \varphi(q,\alpha,u)\ </math>.
 
In short, 
<blockquote style="font-size:115%">
{|
|- valign="top"
|  &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Info-gap's Maximin Robustness Game for decision <math>\ \displaystyle q \ </math>: &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;
| <math>\ \displaystyle \hat{\alpha}(q,r_{c}):=\max_{\alpha \ge 0}\,\min_{u\in \mathcal{U}(\alpha,\tilde{u})}\,\varphi(q,\alpha,u)</math>
|}
 
* '''Step 1:''' The DM selects a horizon of uncertainty  <math>\ \displaystyle \alpha\ge 0 \ </math> with a view to <span style="color:darkgreen;">maximize</span> the outcome <math>\ \displaystyle \varphi(q,\alpha,u) \ </math>.
* '''Step 2:''' In response, given <math>\ \displaystyle \alpha \ </math>, Nature  selects a <math>\ \displaystyle u \in \mathcal{U}(\alpha,\tilde{u})\ </math> that minimizes <math> \ \displaystyle \varphi(q,\alpha,u) \ </math> over <math>\ \displaystyle \mathcal{U}(\alpha,\tilde{u}) \ </math>.
* '''Step 3:''' The outcome <math>\ \displaystyle \varphi(q,\alpha,u)</math> is allotted to the DM.
</blockquote>
Clearly, the DM's optimal alternative is to select the largest value of <math>\ \displaystyle \alpha \ </math> such that the worst <math>\ \displaystyle u\in \mathcal{U}(\alpha,\tilde{u})\ </math> satisfies the performance requirement.
 
==== Maximin Theorem ====
As shown in Sniedovich (2007),<ref name="sniedo 2007"/> Info-gap's robustness model is a simple instance of [[Wald's maximin model]]. Specifically,
<center><math>
{\hat{\alpha}}(q, {r_{c}}) = \max \left \{ \alpha: \  {r_{\rm c}} \le  \min_{u \in \mathcal{U}(\alpha, \tilde{u})} R(q,u) \right \} = \max_{\alpha \ge 0} \min_{u \in \mathcal{U}(\alpha,\tilde{u})} \varphi(q,\alpha,u) \quad \quad \Box
</math></center>
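
The equivalence asserted by the theorem can be checked numerically on a toy example. The following is a minimal sketch (not part of the formal theory), assuming a hypothetical scalar reward function <math>R(q,u)=qu</math> and the interval uncertainty model <math>\mathcal{U}(\alpha,\tilde{u})=[\tilde{u}-\alpha,\tilde{u}+\alpha]</math>; it evaluates both the info-gap format and the Maximin format of the robustness on a grid and obtains the same value.
<syntaxhighlight lang="python">
# A minimal numerical sketch of the Maximin theorem, assuming a hypothetical
# scalar model: reward R(q,u) = q*u, interval uncertainty
# U(alpha, u_tilde) = [u_tilde - alpha, u_tilde + alpha].
import numpy as np

q, u_tilde, r_c = 2.0, 5.0, 4.0          # illustrative values only

def R(q, u):
    return q * u

def U(alpha, n=201):
    return np.linspace(u_tilde - alpha, u_tilde + alpha, n)

def phi(q, alpha, u):
    # characteristic function: alpha if the requirement holds at u, -inf otherwise
    return alpha if r_c <= R(q, u) else -np.inf

alphas = np.linspace(0.0, 10.0, 1001)

# Info-gap format: largest alpha whose worst-case reward still meets r_c
robust_ig = max(a for a in alphas
                if r_c <= min(R(q, u) for u in U(a)))

# Classical Maximin format: alpha maximizing the worst-case value of phi
robust_mm = max(alphas, key=lambda a: min(phi(q, a, u) for u in U(a)))

print(robust_ig, robust_mm)   # both approx u_tilde - r_c/q = 3.0
</syntaxhighlight>
For these illustrative values the analytic robustness is <math>\tilde{u}-r_{c}/q=3</math>, and both formats recover it.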
 
==== Info-gap's opportuneness model ====
 
By the same token,  info-gap's opportuneness model is a simple instance of the generic Minimin model. That is,
<center><math>
{\hat{\beta}}(q, {r_{w}}) = \min \left \{ \alpha: \  {r_{w}} \le  \max_{u \in \mathcal{U}(\alpha, \tilde{u})} R(q,u) \right \} = \min_{\alpha \ge 0} \min_{u \in \mathcal{U}(\alpha,\tilde{u})} \psi(q,\alpha,u)
</math></center>
where
<center><math>
\psi(q,\alpha,u):= \begin{cases}
\quad \alpha &, \ \ r_{w} \le R(q,u) \\
\ \ \infty &, \ \ r_{w} > R(q,u)
\end{cases} \ , \ \alpha \ge 0, u \in \mathcal{U}(\alpha,\tilde{u})
</math></center>
 
observing that, as desired, for any triplet <math>\ \ (q,\alpha,u)\ </math> of interest we have
<center>
<math>
r_{w} \le R(q,u) \ \ \ \longleftrightarrow \ \ \ \alpha \ge \psi(q,\alpha,u)
</math>
</center>
hence, for a given pair <math>\ \displaystyle (q,\alpha)\ </math>, the DM would satisfy the performance requirement by  minimizing the outcome <math>\ \displaystyle \psi(q,\alpha,u)\ </math> over <math>\ \displaystyle \mathcal{U}(\alpha,\tilde{u}) \ </math>. Nature's behavior is a reflection of her sympathetic stance here.
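
On the same toy model used above (the hypothetical reward <math>R(q,u)=qu</math> with interval uncertainty), the two formats of the opportuneness also coincide. The following minimal sketch computes both, using an illustrative windfall aspiration <math>r_{w}</math>.
<syntaxhighlight lang="python">
# A minimal sketch of the opportuneness (Minimin) model under the same hypothetical
# assumptions: R(q,u) = q*u, U(alpha) = [u_tilde - alpha, u_tilde + alpha].
import numpy as np

q, u_tilde, r_w = 2.0, 5.0, 14.0          # r_w is an illustrative windfall aspiration

def R(q, u):
    return q * u

def U(alpha, n=201):
    return np.linspace(u_tilde - alpha, u_tilde + alpha, n)

def psi(q, alpha, u):
    # alpha if the windfall is attainable at u, +inf otherwise
    return alpha if r_w <= R(q, u) else np.inf

alphas = np.linspace(0.0, 10.0, 1001)

# Info-gap format: smallest alpha whose best-case reward already reaches r_w
opp_ig = min(a for a in alphas if r_w <= max(R(q, u) for u in U(a)))

# Classical Minimin format: min over alpha of the best-case (min) value of psi
opp_mm = min(min(psi(q, a, u) for u in U(a)) for a in alphas)

print(opp_ig, opp_mm)   # both approx r_w/q - u_tilde = 2.0
</syntaxhighlight>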
 
'''Remark:''' This attitude towards risk and uncertainty, which assumes that  Nature will play '' with us,''&nbsp; is rather naive. As noted by Resnik (1987, p.&nbsp;32<ref name="resnik 87"/>), "... But that rule surely would have few adherents ...". Nevertheless, it is often used in combination with the [[minimax|Maximin]] rule in the formulation of [[Hurwicz]]'s '' optimism-pessimism ''&nbsp; rule (Resnik 1987,<ref name="resnik 87"/> French 1988<ref name="french 88"/>) with a view to mitigating the extreme conservatism of [[minimax|Maximin]].
 
==== Mathematical programming formulations ====
 
To bring out more forcefully that info-gap's robustness model is an instance of the generic [[minimax|Maximin]] model, and info-gap's opportuneness model an instance of the generic Minimin model, it is instructive to examine the equivalent so-called ''Mathematical Programming'' (MP) formats of these generic models (Ecker and Kupferschmid,<ref name="ecker 88">Ecker J.G. and Kupferschmid, M., ''Introduction to Operations Research,'' Wiley, 1988.</ref> 1988, pp.&nbsp;24–25; Thie 1988<ref name="thie 88">Thie, P., ''An Introduction to Linear Programming and Game Theory,'' Wiley, NY, 1988.</ref> pp.&nbsp;314–317;  Kouvelis and Yu,<ref name="kouvelis 97"/> 1997, p.&nbsp;27):
<center>
<math>
\begin{array}{c|c|c}
\textit{Model} & \textit{Classical\  Format} &  \textit{MP\ Format}  \\
\hline
\textit{Maximin:} & \displaystyle \max_{d\in D}\ \min_{s\in S(d)}\ g(d,s) &
\displaystyle \max_{d\in D,\alpha\in \mathbb{R}}\{\alpha: \alpha \le \min_{s\in S(d)} g(d,s)\} \\
\textit{Minimin:} & \displaystyle \min_{d\in D}\ \min_{s\in S(d)}\ g(d,s) &
\displaystyle \min_{d\in D,\alpha\in \mathbb{R}}\{\alpha: \alpha \ge \min_{s\in S(d)} g(d,s)\}
\end{array}
</math>
</center>
 
Thus, in the case of info-gap we have
<center>
<math>
\begin{array}{c|c|c|c}
\textit{Model} & \textit{Info-Gap\ Format} & \textit{MP\ Format} &  \textit{Classical\ Format}  \\
\hline
\textit{Robustness} &\displaystyle \max\{\alpha: r_{c}\le \min_{u\in \mathcal{U}(\alpha,\tilde{u})} R(q,u)\}  &\displaystyle \max\{\alpha: \alpha \le \min_{u\in \mathcal{U}(\alpha,\tilde{u})}\varphi(q,\alpha,u)\} & \displaystyle \max_{\alpha\ge 0}\ \min_{u\in \mathcal{U}(\alpha,\tilde{u})}\ \varphi(q,\alpha,u) \\
\textit{Opportuneness} &\displaystyle \min\{\alpha: r_{w}\le \max_{u\in \mathcal{U}(\alpha,\tilde{u})} R(q,u)\}  &\displaystyle \min\{\alpha: \alpha \ge \min_{u\in \mathcal{U}(\alpha,\tilde{u})}\psi(q,\alpha,u)\} & \displaystyle \min_{\alpha\ge 0}\ \min_{u\in \mathcal{U}(\alpha,\tilde{u})}\ \psi(q,\alpha,u)
\end{array}
</math>
</center>
 
To verify the equivalence between info-gap's formats and the respective decision theoretic formats, recall that, by construction, for any triplet <math>\ \displaystyle (q,\alpha,u)\ </math> of interest we have
<center>
<math>
\alpha \le \varphi(q,\alpha,u)\ \ \  \longleftrightarrow \ \ \  r_{c} \le R(q,u) </math>
 
<math>
\alpha \ge \psi(q,\alpha,u) \ \ \ \longleftrightarrow \ \ \ r_{w} \le R(q,u)
</math>
</center>
 
This means that in the case of robustness/[[minimax|Maximin]], an antagonistic Nature will (effectively) minimize <math>\ \displaystyle R(q,u) \ </math> by minimizing  <math>\ \displaystyle \varphi(q,\alpha,u) \ </math> whereas in the case of  opportuneness/Minimin a sympathetic Nature will (effectively) maximize <math>\ \displaystyle R(q,u) \ </math> by minimizing <math>\ \displaystyle \psi(q,\alpha,u) \ </math>.
 
==== Summary ====
 
Info-gap's robustness analysis stipulates that given a pair <math>\ \displaystyle (q,\alpha)\ </math>, the <span style="color:darkgreen;">'''worst'''</span> element of <math>\ \displaystyle \mathcal{U}(\alpha,\tilde{u})\ </math> is realized. This of course is a typical [[minimax|Maximin]] analysis. In the parlance of classical [[decision theory]]:
<blockquote style="font-size:120%;background-color:beige;padding:3x;border:1px dashed darkcyan">
The '''Robustness''' of decision <math>\ \displaystyle q \ </math> is the <span style="color:darkgreen;">largest</span> horizon of uncertainty, <math>\ \displaystyle \alpha \ </math>,  such that the <span style="color:darkgreen;">'''worst'''</span> value of <math>\ \displaystyle u \ </math> in <math>\ \displaystyle \mathcal{U}(\alpha,\tilde{u}) \ </math> satisfies the performance requirement <math>\ \displaystyle r_{c} \le R(q,u) \ </math>.
</blockquote>
 
Similarly, info-gap's opportuneness analysis stipulates that given a pair <math>\ \displaystyle (q,\alpha)\ </math>, the <span style="color:darkgreen;">'''best'''</span> element of <math>\ \displaystyle \mathcal{U}(\alpha,\tilde{u})\ </math> is realized. This of course is a typical Minimin analysis. In the parlance of classical [[decision theory]]:
<blockquote style="font-size:120%;background-color:beige;padding:3x;border:1px dashed darkcyan">
The '''Opportuneness''' of decision <math>\ \displaystyle q \ </math>  is the <span style="color:darkgreen;">smallest</span> horizon of uncertainty, <math>\ \displaystyle \alpha \ </math> ,  such that the <span style="color:darkgreen;">'''best'''</span> value of <math>\ \displaystyle u \ </math> in <math>\ \displaystyle \mathcal{U}(\alpha,\tilde{u}) \ </math> satisfies the performance requirement <math>\ \displaystyle r_{w} \le R(q,u) \ </math>.
</blockquote>
 
The mathematical formulations of these concepts are straightforward, resulting in typical Maximin and Minimin models, respectively.
 
Far from being restrictive, the lean structure of the generic Maximin/Minimin models is in fact an asset. The main point here is that the abstract character of the three basic constructs of the generic models
<blockquote>
* Decision
* State
* Outcome
</blockquote>
 
in effect allows for great flexibility in modeling.
 
A more detailed analysis is therefore required to bring out the full force of the relationship between info-gap and generic classical decision theoretic models. See [[#Notes on the art of math modeling]].
 
=== Treasure hunt ===
The following is a pictorial summary of Sniedovich's (2007) discussion on local vs global robustness. For illustrative purposes it is cast here as a ''Treasure Hunt.'' It shows how the elements of info-gap's robustness model relate to one another and how the severe uncertainty is treated in the model.
 
<center>
{| width=100% border=0 cellpadding=5 cellspacing=0
|- valign="top" align="left"
| width=145px style="border-bottom:1px solid skyblue" valign="top" | [[File:Australia plain.png]]
| width=280px valign="top" style="border-bottom:1px solid skyblue" | (1) You are in charge of a treasure hunt on a large island somewhere in the Asia/Pacific region. You consult a portfolio of search strategies. You need to decide which strategy would be best for this particular expedition.
| width=145px style="border-left:1px solid skyblue;border-bottom:1px solid skyblue" valign="top" | [[File:Australia q.png]]
| width=280px valign="top" style="border-bottom:1px solid skyblue" | (2) The difficulty is that the treasure's exact location on the island is unknown. There is a severe gap between what you need to know—the true location of the treasure—and what you actually know—a poor estimate of the true location.
| width=145px style="border-left:1px solid skyblue;border-bottom:1px solid skyblue;" valign="top" | [[File:Australia dot.png]]
| width=280px style="border-bottom:1px solid skyblue" valign="top" | (3) Somehow you compute an estimate of the true location of the treasure. Since we are dealing here with severe uncertainty, we assume—methodologically speaking—that this estimate is a poor indication of the true location and is likely to be substantially wrong.
|- valign="top" align="left"
| width=145px style="border-left:1px solid skyblue" valign="top" | [[File:Australia regions.png]]
| width=280px valign="top" | (4) To determine the robustness of a given strategy, you conduct a local worst-case analysis in the immediate neighborhood of the poor estimate. Specifically, you compute the largest safe deviation from the poor estimate that does not violate the performance requirement.
| width=145px style="border-left:1px solid skyblue" valign="top" | [[File:Australia max.png]]
| width=280px valign="top" | (5) You compute the robustness of each search strategy in your portfolio and you select the one whose robustness is the largest.
| valign="top" align="left" colspan=2 style="border-left:1px solid skyblue" | (6) To remind yourself and the financial backers of the expedition that this analysis is subject to severe uncertainty in the true location of the treasure, it is important—methodologically speaking—to display the '''true location''' on the map. Of course, you do not know the true location. But given the severity of the uncertainty, you place it at some distance from the poor estimate. The more severe the uncertainty, the greater the distance (gap) between the true location and the estimate should be.
|-
| width=145px style="border-top:1px solid skyblue" valign="top" | [[File:Australia true.png]]
| valign="top" align="left" colspan=5 style="border-top:1px solid skyblue" | '''Epilogue:'''<br>
According to Sniedovich (2007) this is an important reminder of the central issue in decision-making under severe uncertainty. The estimate we have is a poor indication of the true value of the parameter of interest and is likely to be substantially wrong. Therefore, in the case of info-gap it is important to show the gap on the map by displaying the true value of <math>\ \displaystyle u \ </math> somewhere in the region of uncertainty.
 
The small red <math>\ \clubsuit\ </math> represents the true (unknown) location of the treasure.
|}
</center>
 
'''In summary:'''
 
Info-gap's robustness model is a mathematical representation of  a local worst-case analysis in the neighborhood of a given estimate of the true value of the parameter of interest. Under severe uncertainty the estimate is assumed to be a poor indication of the true value of the parameter and is likely to be substantially wrong.
 
The fundamental question therefore is: Given the
<blockquote style="font-size:115%">
*{{color|darkcyan|Severity}} of the uncertainty
*{{color|darkcyan|Local}} nature of the analysis
*{{color|darkcyan|Poor}} quality of the estimate
</blockquote>
how meaningful and useful are the results generated by the analysis, and how sound is the methodology as a whole?
 
More on this criticism can be found on [http://www.moshe-online.com/infogap Sniedovich's web site.]
 
=== Notes on the art of math modeling ===
 
 
==== Constraint satisficing vs payoff  optimization ====
 
Any satisficing problem can be formulated as an optimization problem. To see that this is so, let the objective function of the optimization problem be the [[indicator function]] of the constraints pertaining to the satisficing problem. Thus, if our concern is to identify a worst-case scenario pertaining to a constraint, this can be done via a suitable Maximin/Minimax worst-case analysis of the indicator function of the constraint.
 
This means that the generic decision theoretic models can handle outcomes that are induced by '''constraint satisficing''' requirements rather than by say '''payoff maximization.'''
 
In particular, note the equivalence
<center>
<math> r \le f(x) \ \ \longleftrightarrow \ \ 1 \le I(x)
</math>
</center>
where
<center>
<math> I(x):= \begin{cases}
1 &, \ \  r \le f(x) \\
0 &,\ \ r > f(x)
\end{cases}\ , \ x\in X
</math>
</center>
and therefore
<center>
<math>
x\in X,\ r \le f(x) \ \ \ \longleftrightarrow \ \ \ x \in \arg\max_{x\in X}\, I(x)
</math>
</center>
 
In practical terms, this means that an antagonistic Nature will aim to select a state that will violate the constraint whereas a sympathetic Nature will aim to select a state that will satisfy the constraint. As for the outcome, the penalty for violating the constraint is such that the decision maker will refrain from selecting a decision that will allow Nature to violate the constraint within the state space pertaining to the selected decision.
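
The following minimal sketch illustrates this device with made-up decisions, states and payoffs: the constraint <math>r \le g(d,s)</math> is replaced by its indicator, and an antagonistic Nature's worst case over the states reveals whether a decision satisfies the constraint throughout its state space.
<syntaxhighlight lang="python">
# A minimal sketch of recasting constraint satisficing as optimization of an
# indicator function, with hypothetical decisions, states, and payoffs.
r = 10.0                                   # required performance level (illustrative)

decisions = ["d1", "d2"]
states = {"d1": [1, 2, 3], "d2": [1, 2]}   # the state space S(d) depends on d
payoff = {                                 # g(d, s), made-up numbers
    ("d1", 1): 12.0, ("d1", 2): 9.0, ("d1", 3): 15.0,
    ("d2", 1): 11.0, ("d2", 2): 10.0,
}

def indicator(d, s):
    # 1 if the constraint r <= g(d,s) holds in state s, 0 otherwise
    return 1 if r <= payoff[(d, s)] else 0

# Antagonistic Nature: worst-case (min over states) value of the indicator.
# A decision satisfies the constraint in every state iff this value is 1.
worst_case = {d: min(indicator(d, s) for s in states[d]) for d in decisions}
best = max(decisions, key=lambda d: worst_case[d])

print(worst_case)   # {'d1': 0, 'd2': 1}
print(best)         # 'd2': the only decision Nature cannot push below r
</syntaxhighlight>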
 
==== The role of "min" and "max" ====
 
It should be stressed that what accords info-gap's robustness model its typical [[minimax|Maximin]] character is not the presence of both <math>\ \displaystyle \min \ </math> and <math>\ \displaystyle \max \ </math> in the formulation of the info-gap model. Rather, the reason is a deeper one: it goes to the heart of the conceptual framework that the [[minimax|Maximin]] model captures, namely Nature playing against the DM. This is what is crucial here.
 
To see that this is so, let us generalize info-gap's robustness model and consider the following modified model instead:
<center>
<math>
z(q):= \max\{\alpha: R(q,u) \in C, \forall u \in \mathcal{U}(\alpha,\tilde{u})\}
</math>
</center>
 
where in this context <math>\ \displaystyle C \ </math> is some set and <math>\ R\  </math> is some function on <math>\ \displaystyle \mathcal{Q}\times \mathfrak{U} </math>. Note that it is not assumed that <math>\ \displaystyle R \ </math> is a real-valued function.  Also note that "min" is absent from this model.
 
All we need to do to incorporate a ''min''&nbsp; into this model is to express the constraint
<center>
<math>
R(q,u) \in C \ , \ \forall u \in \mathcal{U}(\alpha,\tilde{u})
</math>
</center>
 
as a worst-case requirement. This is a straightforward task, observing that for any triplet <math>\ \displaystyle  (q,\alpha,u)\ </math> of interest we have 
<center>
<math>
R(q,u) \in C \ \ \ \longleftrightarrow \ \ \ \alpha \le I(q,\alpha,u)
</math>
</center>
 
where
<center>
<math>
I(q,\alpha,u):= \begin{cases}
\quad \alpha &, \ \  R(q,u) \in C\\
-\infty &, \ \ R(q,u) \notin C
\end{cases} \ , \ q\in \mathcal{Q}, u\in \mathcal{U}(\alpha,\tilde{u})
</math>
</center>
 
hence,
 
<center>
<math>
\begin{array}{ccl}
\max\{\alpha: R(q,u) \in C, \forall u \in \mathcal{U}(\alpha,\tilde{u})\} &=& \max\{\alpha: \alpha \le I(q,\alpha,u), \forall u \in \mathcal{U}(\alpha,\tilde{u})\} \\
&=& \max\{\alpha: \alpha \le\displaystyle  \min_{u \in \mathcal{U}(\alpha,\tilde{u})} I(q,\alpha,u)\}
\end{array}
</math>
</center>
 
which, of course, is a [[minimax|Maximin]] model a la Mathematical Programming.
 
In short,
 
<center>
<math>
\max\{\alpha: R(q,u) \in C, \forall u \in \mathcal{U}(\alpha,\tilde{u})\} = \max_{\alpha\ge 0}\ \min_{u \in \mathcal{U}(\alpha,\tilde{u})} I(q,\alpha,u)
</math>
</center>
 
Note that although the model on the left does not include an explicit "min", it is nevertheless a typical Maximin model. The feature rendering it a [[minimax|Maximin]] model is the <math>\ \displaystyle \forall  \ </math> requirement which lends itself to an intuitive  worst-case formulation and interpretation.
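
This can be verified numerically. The following is a minimal sketch under hypothetical assumptions (a scalar model <math>R(q,u)=qu</math>, interval uncertainty, and a set-membership requirement <math>R(q,u)\in C=[4,20]</math>): the format with the <math>\forall</math> quantifier and the Maximin format with the inner min return the same value.
<syntaxhighlight lang="python">
# A minimal sketch of the "forall" constraint as an implicit worst case, under
# hypothetical assumptions: R(q,u) = q*u, U(alpha) = [u_tilde - alpha, u_tilde + alpha],
# and a set-membership requirement R(q,u) in C = [4, 20].
import numpy as np

q, u_tilde = 2.0, 5.0
C = (4.0, 20.0)                              # illustrative performance set

def R(q, u):
    return q * u

def in_C(v):
    return C[0] <= v <= C[1]

def U(alpha, n=201):
    return np.linspace(u_tilde - alpha, u_tilde + alpha, n)

def I(q, alpha, u):
    # alpha if R(q,u) lies in C, -inf otherwise
    return alpha if in_C(R(q, u)) else -np.inf

alphas = np.linspace(0.0, 10.0, 1001)

# "Forall" format: no explicit min, just a universal quantifier over U(alpha)
z_forall = max(a for a in alphas if all(in_C(R(q, u)) for u in U(a)))

# Equivalent Maximin format: the quantifier becomes an inner min over U(alpha)
z_maximin = max(alphas, key=lambda a: min(I(q, a, u) for u in U(a)))

print(z_forall, z_maximin)   # both approx 3.0
</syntaxhighlight>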
 
In fact, the presence of a double "max" in an info-gap robustness model does not necessarily alter the fact that this model is a [[minimax|Maximin]] model. For instance, consider the robustness model
<center>
<math>
\max\{\alpha: r_{c}\ge \max_{u\in \mathcal{U}(\alpha,\tilde{u})} R(q,u)\}
</math>
</center>
 
This is an instance of the following [[minimax|Maximin]] model
 
<center>
<math>
\max_{\alpha \ge 0} \min_{u\in \mathcal{U}(\alpha,\tilde{u})} \vartheta(q,\alpha,u)
</math>
</center>
where
<center>
<math>
\vartheta(q,\alpha,u):= \begin{cases}
\quad \alpha  &, \ \  r_{c} \ge R(q,u)\\
-\infty &,\ \  r_{c} < R(q,u)
\end{cases}
</math>
</center>
 
The "inner min" indicates that Nature plays against the DM—the "max" player—hence the model is a robustness model.
 
==== The nature of the info-gap/Maximin/Minimin connection  ====
 
This modeling issue is discussed here because claims have been made that although there is a close relationship between info-gap's robustness and opportuneness models and the generic [[minimax|Maximin]] and Minimin models, respectively, the description of info-gap as an '' instance of ''&nbsp; these models is too strong.  The argument put forward is that although it is true that info-gap's robustness model can be expressed as a [[minimax|Maximin]] model, the former is not an instance of the latter.
 
This objection apparently stems from the fact that any optimization problem can be formulated as a Maximin model by a simple employment of ''dummy''&nbsp; variables. That is, clearly
<center>
<math>
\min_{x\in X} f(x) = \max_{y\in Y}\min_{x\in X} g(y,x)
</math>
</center>
where
<center>
<math>
g(y,x) = f(x) \ , \ \forall x\in X, y\in Y
</math>
</center>
for any arbitrary non-empty set <math>\ \displaystyle Y \ </math>.
 
The point of this objection seems to be that we are running the risk of watering down the meaning of the term '' instance ''&nbsp; if we thus contend that any minimization problem is an instance of the [[minimax|Maximin]] model.
 
It must therefore be pointed out that this concern is utterly unwarranted in the case of the info-gap/Maximin/Minimin relation. The correspondence between info-gap's robustness model and the generic [[minimax|Maximin]] model is neither contrived nor formulated with the aid of dummy objects. The correspondence is immediate, intuitive, and compelling, and is hence aptly described by the term '' instance of ''.
 
Specifically, as shown above, info-gap's robustness model is an instance of the generic Maximin model specified by the following constructs:
<center>
<math>
\begin{array}{rccl}
\text{Decision Space} & D & = & [0,\infty)\\
\text{State Spaces} & S(d) & = & \mathcal{U}(d,\tilde{u})\\
\text{Outcomes} & g(d,s) & = & \varphi(q,d,s)
\end{array}
</math>
</center>
 
Furthermore, those objecting to the use of the term '' instance of ''&nbsp;  should note that the Maximin model formulated above has an equivalent so-called '' Mathematical Programming ''&nbsp; (MP) formulation deriving from the fact that
<center><math>
\begin{array}{ccc}
\text{Classical Maximin Format}&& \text{MP Maximin Format}\\
\displaystyle \max_{d\in D} \ \min_{s \in S(d)}\ g(d,s) &=&  \displaystyle \max_{d\in D,\alpha \in \mathbb{R}}\{\alpha: \alpha \le  \min_{s\in S(d)} g(d,s)\}
\end{array}
</math>
</center>
 
where <math>\ \mathbb{R} \ </math> denotes the real line.
 
So here are side by side info-gap's robustness model and the two equivalent formulations of the generic [[minimax|Maximin]] paradigm:
<center>
<math>
\begin{array}{c}\textit{Robustness\  Model}
\end{array}</math><br>
&nbsp;<br>
<math>
\begin{array}{c|c|c}
\text{Info-gap Format}& \text{MP Maximin Format}&\text{Classical Maximin Format}\\
\hline \\[-0.18in]
\displaystyle \max\{\alpha: r_{c} \le \min_{u\in \mathcal{U}(\alpha,\tilde{u})} R(q,u)\}&\displaystyle \max\{\alpha: \alpha \le \min_{u \in \mathcal{U}(\alpha,\tilde{u})}\ \varphi(q,\alpha,u)\}&\displaystyle \max_{\alpha\ge 0} \ \min_{u\in \mathcal{U}(\alpha,\tilde{u})} \varphi(q,\alpha,u)
\end{array}
</math>
</center>
 
Note that the equivalence between these three representations of the same decision-making situation makes no use of dummy variables. It is based on the equivalence
<center>
<math>
r_{c} \le R(q,u)  \longleftrightarrow \alpha \le \varphi(q,\alpha,u)
</math>
</center>
deriving directly from the definition of the characteristic function <math>\ \displaystyle \varphi \ </math>.
 
Clearly then, info-gap's robustness model is an instance of the generic [[minimax|Maximin]] model.
 
Similarly, for info-gap's opportuneness model we have
 
<center>
<math>
\begin{array}{c}\textit{Opportuneness\  Model}
\end{array}</math><br>
&nbsp;<br>
<math>
\begin{array}{c|c|c}
\text{Info-gap Format}& \text{MP Minimin Format}&\text{Classical Minimin Format}\\
\hline \\[-0.18in]
\displaystyle \min\{\alpha: r_{w} \le \max_{u\in \mathcal{U}(\alpha,\tilde{u})} R(q,u)\} & \displaystyle \min\{\alpha: \alpha \ge \min_{u \in \mathcal{U}(\alpha,\tilde{u})}\ \psi(q,\alpha,u)\} & \displaystyle \min_{\alpha\ge 0} \ \min_{u\in \mathcal{U}(\alpha,\tilde{u})} \psi(q,\alpha,u)
\end{array}
</math>
</center>
 
Again, it should be stressed that the equivalence between these three representations of the same decision-making situation makes no use of dummy variables. It is based on the equivalence
<center>
<math>
r_{w} \le R(q,u)  \longleftrightarrow \alpha \ge \psi(q,\alpha,u)
</math>
</center>
deriving directly from the definition of the characteristic function <math>\ \displaystyle \psi \ </math>.  
 
Thus, to "help" the DM minimize <math>\ \displaystyle \alpha \ </math>, a sympathetic Nature will select a <math>u \in \mathcal{U}(\alpha,\tilde{u})\ </math> that minimizes <math>\ \psi(q,\alpha,u) \ </math> over <math>\ \displaystyle  \mathcal{U}(\alpha,\tilde{u})\ </math> .
 
Clearly, info-gap's opportuneness model is an instance of the generic Minimin model.
 
==== Other formulations ====
 
There are of course other valid representations of the robustness/opportuneness models.  For instance, in the case of the robustness model, the outcomes can be defined as follows (Sniedovich 2007<ref name="sniedo 07" />):
<center>
<math>
g(\alpha,u):= \alpha \cdot \left(r_{c} \preceq R(q,u)\right)
</math>
</center>
where the binary operation <math>\ \ \preceq \ \ </math>  is defined as follows:
<center>
<math>
a \preceq b := \begin{cases}
1 &, \ \ a\le b \\
0 &,\ \  a>b
\end{cases}
</math>
</center>
 
The corresponding MP format of the [[minimax|Maximin]] model would then be as follows:
<center>
<math>
\max\{\alpha: \alpha \le \min_{u\in \mathcal{U}(\alpha,\tilde{u})} \alpha \cdot \left(r_{c} \preceq R(q,u)\right) \} = \max\{\alpha: 1 \le \min_{u\in \mathcal{U}(\alpha,\tilde{u})} \left(r_{c} \preceq R(q,u)\right)\}
</math>
</center>
 
In words, to maximize the robustness, the DM selects the largest value of <math>\ \alpha \ </math> such that the performance  constraint <math>\ r_{c} \le R(q,u) \ </math> is satisfied by all <math>\ u\in \mathcal{U}(\alpha,\tilde{u})\ </math>. In plain language: the DM selects the largest value of  <math>\ \displaystyle \alpha \ </math> whose worst outcome in the region of uncertainty of size  <math>\ \displaystyle \alpha \ </math> satisfies the performance requirement.
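
The following minimal sketch evaluates this alternative formulation on the same hypothetical scalar model used in the earlier sketches: the 0/1 outcome of the <math>\preceq</math> operation replaces the <math>\pm\infty</math> values of the characteristic function, but the computed robustness is unchanged.
<syntaxhighlight lang="python">
# A minimal sketch of the alternative outcome g(alpha,u) = alpha * (r_c <= R(q,u)),
# under the same hypothetical scalar model R(q,u) = q*u used above.
import numpy as np

q, u_tilde, r_c = 2.0, 5.0, 4.0

def prec(a, b):
    # the binary operation: 1 if a <= b, 0 otherwise
    return 1 if a <= b else 0

def U(alpha, n=201):
    return np.linspace(u_tilde - alpha, u_tilde + alpha, n)

alphas = np.linspace(0.0, 10.0, 1001)

# MP Maximin format with the 0/1 outcome: the worst case of the 0/1 term must equal 1
robust = max(a for a in alphas
             if 1 <= min(prec(r_c, q * u) for u in U(a)))

print(robust)   # approx 3.0, as with the characteristic-function formulation
</syntaxhighlight>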
 
==== Simplifications ====
 
As a rule the classical [[minimax|Maximin]] formulations are not particularly useful when it comes to '''solving''' the problems they represent, as no "general purpose" [[minimax|Maximin]] solver is available (Rustem and Howe 2002<ref name="rustem 02"/>).
 
It is therefore common practice to simplify the classical formulation with a view to deriving a formulation that is readily amenable to solution. This is a problem-specific task that involves exploiting the specific features of the problem at hand. The mathematical programming format of [[minimax|Maximin]] is often more user-friendly in this regard.
 
The best example is of course the classical [[minimax|Maximin]] model of [[Game theory|2-person zero-sum games]] which after streamlining is reduced to a standard [[linear programming]] model (Thie 1988,<ref name="thie 88"/> pp.&nbsp;314–317) that is readily solved by [[linear programming]] [[Simplex algorithm|algorithms]].
 
To reiterate,  this [[linear programming]] model is an instance of the generic [[Minimax|Maximin]] model obtained via simplification of the classical [[minimax|Maximin]] formulation of the [[Game theory|2-person zero-sum game]].
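
As an illustration of this reduction, the following minimal sketch solves the Maximin problem of a small zero-sum game by linear programming, using <code>scipy.optimize.linprog</code>; the 2×2 payoff matrix is illustrative and not taken from the cited texts.
<syntaxhighlight lang="python">
# A minimal sketch of reducing the Maximin model of a 2-person zero-sum game to a
# linear program, for an illustrative 2x2 payoff matrix.
import numpy as np
from scipy.optimize import linprog

A = np.array([[3.0, -1.0],
              [-2.0, 4.0]])          # payoff to the row (maximizing) player
m, n = A.shape

# Variables: the row player's mixed strategy x (m entries) and the game value v.
# Maximize v subject to  (x^T A)_j >= v for every column j,  sum(x) = 1,  x >= 0.
c = np.zeros(m + 1); c[-1] = -1.0                     # linprog minimizes, so minimize -v
A_ub = np.hstack([-A.T, np.ones((n, 1))])             # v - (x^T A)_j <= 0 for each column j
b_ub = np.zeros(n)
A_eq = np.append(np.ones(m), 0.0).reshape(1, -1)      # probabilities sum to one
b_eq = [1.0]
bounds = [(0, None)] * m + [(None, None)]             # the game value may be negative

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
print(res.x[:m], -res.fun)   # optimal mixed strategy (0.6, 0.4) and game value 1.0
</syntaxhighlight>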
 
Another example is [[dynamic programming]] where the Maximin paradigm is incorporated in the dynamic programming functional equation representing sequential decision processes that are subject to severe uncertainty (e.g. Sniedovich 2003<ref name="sniedo 03">{{Cite journal | doi = 10.1287/ited.3.2.32 | last1 = Sniedovich | first1 = M. | year = 2003 | title = OR/MS Games: 3. The Counterfeit coin problem | url = | journal = INFORMS Transactions in Education | volume = 3 | issue = 2| pages = 32–41 }}</ref><ref name="sniedo 03a">{{Cite journal | doi = 10.1287/ited.4.1.48 | last1 = Sniedovich | first1 = M. | year = 2003 | title = OR/MS Games: 4. The joy of egg-dropping in Braunschweig and Hong Kong | url = | journal = INFORMS Transactions on Education | volume = 4 | issue = 1| pages = 48–64 }}</ref>).
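
The following is a minimal sketch of such a worst-case functional equation, using the well-known egg-dropping puzzle as the sequential decision process (an illustrative choice, not a transcription of the cited papers): at every stage the DM chooses a floor to drop from, and Nature, playing the worst case, decides whether the egg breaks.
<syntaxhighlight lang="python">
# A minimal sketch of a Maximin (minimax) dynamic programming functional equation,
# illustrated with the classic egg-dropping puzzle.
from functools import lru_cache

@lru_cache(maxsize=None)
def trials(floors, eggs):
    """Fewest trials guaranteeing the critical floor is found, assuming Nature
    (the worst case) decides whether each dropped egg breaks."""
    if floors == 0:
        return 0
    if eggs == 1:
        return floors                      # must test floor by floor
    # DM picks the drop floor x; Nature then picks the worse of the two outcomes
    return 1 + min(max(trials(x - 1, eggs - 1),   # egg breaks: search below x
                       trials(floors - x, eggs))  # egg survives: search above x
                   for x in range(1, floors + 1))

print(trials(100, 2))   # 14 trials suffice in the worst case
</syntaxhighlight>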
 
==== Summary ====
Recall that in plain language the [[minimax|Maximin]] paradigm maintains the following:
<blockquote style="background:beige;font-size:115%; padding:5px; border:1px dashed darkcyan">
<center>''Maximin Rule''</center>
The maximin rule tells us to rank alternatives by their worst possible outcomes: we are to adopt the alternative the worst outcome of which is superior to the worst outcome of the others.<div align="right">Rawls (1971, p.&nbsp;152)</div>
</blockquote>
 
Info-gap's robustness model is a simple instance of this paradigm that is characterized by a specific decision space, state spaces and objective function, as discussed above.
 
Much can be gained by viewing info-gap's theory in this light.
 
==See also==
{{colbegin}}
*[[Decision theory]]
*[[Decision analysis]]
*[[Bayesian inference]]
*[[Bayesian probability]]
*[[Bayesian estimation]]
*[[Hierarchical Bayes model]]
*[[List of publications in statistics]]
*[[Markov chain Monte Carlo]]
*[[Robust decision making]]
*[[Robust optimization]]
*[[Robust statistics]]
*[[Sensitivity analysis]]
*[[Stability radius]]
{{colend}}
 
== External links ==
* [http://www.technion.ac.il/yakov/IGT/igt.htm Info-Gap Theory and Its Applications], further information on info-gap theory
*:[http://www.technion.ac.il/yakov/IGT/wiigt02.html What is Info-Gap Theory?] informal introduction
*:[http://www.technion.ac.il/yakov/IGT/aaahall-ybh2007.pdf Making Responsible Decisions (When it Seems that You Can't): Engineering Design and Strategic Planning Under Severe Uncertainty]
*:[http://www.technion.ac.il/yakov/IGT/start-grow02.html How Did Info-Gap Theory Start? How Does it Grow?]
*:[http://www.technion.ac.il/yakov/IGT/faqs01.pdf Frequently Asked Questions about info-gap theory]
* [http://www.moshe-online.com/infogap/ Info-Gap Campaign], further analysis and critique of info-gap
*:[http://info-gap.moshe-online.com/faqs.html Frequently Asked Questions about Info-Gap Decision Theory] ([http://info-gap.moshe-online.com/faqs_about_infogap.pdf PDF])
 
== Notes ==
{{reflist|group=note}}
 
== References ==
{{reflist|1}}
 
{{DEFAULTSORT:Info-Gap Decision Theory}}
[[Category:Decision theory]]
[[Category:Robust statistics]]
