[[File:3SAT reduced too VC.svg|thumb|300px|Example of a reduction from the [[boolean satisfiability problem]] (''A'' ∨ ''B'') ∧ (¬''A'' ∨ ¬''B'' ∨ ¬''C'') ∧ (¬''A'' ∨ ''B'' ∨ ''C'') to a [[vertex cover problem]]. The blue vertices form a minimum vertex cover, and the blue vertices in the gray oval correspond to a satisfying [[truth assignment]] for the original formula.]]
In [[computability theory]] and [[computational complexity theory]], a '''reduction''' is an [[algorithm]] for transforming one [[computational problem|problem]] into another problem. A reduction from one problem to another may be used to show that the second problem is as difficult as the first. The mathematical structure generated on a set of problems by the reductions of a particular type generally forms a [[preorder]], whose [[equivalence class]]es may be used to define [[Turing degree|degrees of unsolvability]] and [[complexity class]]es.
 
Intuitively, problem A is reducible to problem B if an algorithm for solving problem B efficiently (if it existed) could also be used as a subroutine to solve problem A efficiently. When this is true, solving A cannot be harder than solving B. We write A ≤<sub>m</sub> B, usually with a subscript on the ≤ to indicate the type of reduction being used (m : mapping reduction, p : polynomial reduction).
 
==Introduction==
Often we find ourselves trying to solve a problem that is similar to one we have already solved. In these cases, a quick way of solving the new problem is often to transform each instance of the new problem into an instance of the old problem, solve those instances using our existing solution, and then use the answers to obtain our final solution. This is perhaps the most obvious use of reductions.
 
Another, more subtle use is this: suppose we have a problem that we've proven is hard to solve, and we have a similar new problem. We might suspect that it, too, is hard to solve. We argue by contradiction: suppose the new problem is easy to solve. Then, if we can show that ''every'' instance of the old problem can be solved easily by transforming it into instances of the new problem and solving those, we have a contradiction. This establishes that the new problem is also hard.
 
A very simple example of a reduction is from ''multiplication'' to ''squaring''. Suppose all we know how to do is to add, subtract, take squares, and divide by two. We can use this knowledge, combined with the following formula, to obtain the product of any two numbers:
 
: <math>a \times b = \frac{\left(\left(a + b\right)^{2} - a^{2} - b^{2}\right)}{2}</math>
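This reduction can be sketched directly (a minimal illustration; the helper names are ours, and squaring is treated as the only "hard" primitive available):

```python
def square(x):
    """The one nontrivial primitive we assume is available."""
    return x ** 2

def multiply(a, b):
    # Reduce multiplication to three squarings plus addition,
    # subtraction, and division by two:
    #   a*b = ((a+b)^2 - a^2 - b^2) / 2
    # The numerator 2ab is always even, so integer division is exact.
    return (square(a + b) - square(a) - square(b)) // 2
```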
 
We also have a reduction in the other direction; obviously, if we can multiply two numbers, we can square a number. This seems to imply that these two problems are equally hard. This kind of reduction corresponds to [[Turing reduction]].
 
However, the reduction becomes much harder if we add the restriction that we can only use the squaring function one time, and only at the end. In this case, even if we're allowed to use all the basic arithmetic operations, including multiplication, no reduction exists in general, because we may have to compute an [[irrational number]] like <math>\sqrt{2}</math> from rational numbers. Going in the other direction, however, we can certainly square a number with just one multiplication, only at the end. Using this limited form of reduction, we have shown the unsurprising result that multiplication is harder in general than squaring. This corresponds to [[many-one reduction]].
 
== Definition ==
Given two [[subset]]s ''A'' and ''B'' of '''[[natural number|N]]''' and a set of [[function (mathematics)|function]]s ''F'' from '''N''' to '''N''' which is closed under [[function composition|composition]], ''A'' is called '''reducible''' to ''B'' under ''F'' if
:<math>\exists f \in F \mbox{ . } \forall x \in \mathbb{N} \mbox{ . } x \in A \Leftrightarrow f(x) \in B</math>
 
We write
:<math>A \leq_{F} B</math>
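As a toy illustration of the definition (the sets and the map are chosen purely for the example), take ''A'' = the even numbers, ''B'' = the odd numbers, and ''f''(''x'') = ''x'' + 1; then ''f'' witnesses ''A'' ≤<sub>''F''</sub> ''B'' for any class ''F'' containing it:

```python
def f(x):
    # Candidate reduction from A (evens) to B (odds).
    return x + 1

def in_A(x):
    return x % 2 == 0   # membership in A

def in_B(x):
    return x % 2 == 1   # membership in B

# Spot-check the defining condition: x in A  <=>  f(x) in B.
ok = all(in_A(x) == in_B(f(x)) for x in range(100))
```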
 
Let ''S'' be a [[subset]] of '''P'''('''N''') and let ≤ be a reduction. Then ''S'' is called '''closed''' under ≤ if
:<math>\forall s \in S \mbox{ . } \forall A \in P(\mathbb{N}) \mbox{ . } A \leq s \Rightarrow A \in S</math>
 
A subset ''A'' of '''N''' is called '''hard''' for ''S'' if
:<math>\forall s \in S \mbox{ . } s \leq A</math>
A subset ''A'' of '''N''' is called '''[[Complete (complexity)|complete]]''' for ''S'' if ''A'' is hard for ''S'' and ''A'' is in ''S''.
 
==Properties==
A reduction is a [[preorder]], that is, a [[reflexive relation|reflexive]] and [[transitive relation]] on '''P'''('''N'''), where '''P'''('''N''') is the [[power set]] of the [[natural number]]s.
 
== Types and applications of reductions ==
As described in the example above, there are two main types of reductions used in computational complexity: the [[many-one reduction]] and the [[Turing reduction]]. Many-one reductions map ''instances'' of one problem to ''instances'' of another; Turing reductions ''compute'' the solution to one problem, assuming the other problem is easy to solve. A many-one reduction is a more restrictive special case of a Turing reduction, and is more effective at separating problems into distinct complexity classes. However, the added restrictions on many-one reductions make them more difficult to find.
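The contrast can be sketched with an assumed oracle for the toy problem EVEN (all names here are illustrative): a many-one reduction asks a single transformed query and returns the oracle's answer verbatim, while a Turing reduction may query freely and post-process the answers. Both sketches decide MULT4, the multiples of four:

```python
def even_oracle(x):
    # Assumed oracle for the problem EVEN.
    return x % 2 == 0

def is_mult4_many_one(x):
    # Many-one style: one transformed query, answer returned verbatim.
    # Map x to x//2 when x is even; odd x maps to a fixed "no" instance.
    y = x // 2 if x % 2 == 0 else 1
    return even_oracle(y)

def is_mult4_turing(x):
    # Turing style: several oracle calls, answers combined afterwards.
    return even_oracle(x) and even_oracle(x // 2)
```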
 
A problem is [[complete (complexity)|complete]] for a complexity class if every problem in the class reduces to that problem, and it is also in the class itself. In this sense the problem represents the class, since any solution to it can, in combination with the reductions, be used to solve every problem in the class.
 
However, in order to be useful, reductions must be ''easy''. For example, it's quite possible to reduce a difficult-to-solve [[NP-complete]] problem like the [[boolean satisfiability problem]] to a trivial problem, like determining if a number equals zero, by having the reduction machine solve the problem in exponential time and output zero only if there is a solution. However, this does not achieve much, because even though we can solve the new problem, performing the reduction is just as hard as solving the old problem. Likewise, a reduction computing a [[computable function|noncomputable function]] can reduce an [[undecidable problem]] to a decidable one. As Michael Sipser points out in ''Introduction to the Theory of Computation'': "The reduction must be easy, relative to the complexity of typical problems in the class [...] If the reduction itself were difficult to compute, an easy solution to the complete problem wouldn't necessarily yield an easy solution to the problems reducing to it."
 
Therefore, the appropriate notion of reduction depends on the complexity class being studied. When studying the complexity class [[NP (complexity)|NP]] and harder classes such as the [[polynomial hierarchy]], [[polynomial-time reduction]]s are used. When studying classes within P such as [[NC (complexity)|NC]] and [[NL (complexity)|NL]], [[log-space reduction]]s are used. Reductions are also used in [[computability theory]] to show whether problems are or are not solvable by machines at all; in this case, reductions are restricted only to [[computable function]]s.
 
In the case of optimization (maximization or minimization) problems, we often think in terms of [[approximation-preserving]] reductions. Suppose we have two optimization problems such that instances of one problem can be mapped onto instances of the other, in such a way that nearly optimal solutions to instances of the latter problem can be transformed back to yield nearly optimal solutions to the former. Then, if we have an optimization algorithm (or [[approximation algorithm]]) that finds near-optimal (or optimal) solutions to instances of problem B, and an efficient approximation-preserving reduction from problem A to problem B, by composition we obtain an optimization algorithm that yields near-optimal solutions to instances of problem A. Approximation-preserving reductions are often used to prove [[hardness of approximation]] results: if some optimization problem A is hard to approximate (under some complexity assumption) within a factor better than α for some α, and there is a β-approximation-preserving reduction from problem A to problem B, we can conclude that problem B is hard to approximate within factor α/β.
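The final claim is just the contrapositive of this composition (a one-step sketch): if ''B'' had a <math>\rho</math>-approximation algorithm with <math>\rho < \alpha/\beta</math>, composing it with the β-approximation-preserving reduction would yield an approximation for ''A'' within factor
: <math>\beta \cdot \rho \;<\; \beta \cdot \frac{\alpha}{\beta} \;=\; \alpha,</math>
contradicting the assumption that ''A'' is hard to approximate within any factor better than α.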
 
==Examples==
* To show that a [[decision problem]] P is [[Undecidable problem|undecidable]] we must find a reduction from a decision problem which is already known to be undecidable to P. That reduction function must be a [[computable function]]. In particular, we often show that a problem P is undecidable by showing that the [[halting problem]] reduces to P.
* The complexity classes [[P (complexity)|P]], [[NP (complexity)|NP]] and [[PSPACE]] are closed under (many-one, "Karp") [[polynomial-time reduction]]s.
* The complexity classes [[L (complexity)|L]], [[NL (complexity)|NL]], [[P (complexity)|P]], [[NP (complexity)|NP]] and [[PSPACE]] are closed under [[log-space reduction]].
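The reduction pictured at the top of the article can be sketched concretely (a toy sketch using the textbook gadgets: an edge per variable, a clique per clause; the encoding and helper names are ours, and the brute-force cover check is only for tiny instances):

```python
from itertools import combinations

def sat_to_vc(clauses, variables):
    """Textbook SAT -> Vertex Cover reduction.

    clauses: list of clauses, each a list of literals ('A', True)
             meaning A, or ('A', False) meaning NOT A.
    Returns (vertices, edges, k): the formula is satisfiable iff
    the graph has a vertex cover of size at most k.
    """
    vertices, edges = [], []
    # Variable gadget: an edge between the x and NOT-x vertices.
    for v in variables:
        pos, neg = ('var', v, True), ('var', v, False)
        vertices += [pos, neg]
        edges.append((pos, neg))
    # Clause gadget: a clique on the clause's literal occurrences,
    # each occurrence wired to the matching variable vertex.
    for i, clause in enumerate(clauses):
        occ = [('cl', i, j) for j in range(len(clause))]
        vertices += occ
        edges += list(combinations(occ, 2))
        for j, (v, sign) in enumerate(clause):
            edges.append((occ[j], ('var', v, sign)))
    # Cover budget: one vertex per variable gadget,
    # all but one vertex per clause gadget.
    k = len(variables) + sum(len(c) - 1 for c in clauses)
    return vertices, edges, k

def has_cover(vertices, edges, k):
    # Exhaustive check; fine only for toy instances.
    for r in range(k + 1):
        for cand in combinations(vertices, r):
            s = set(cand)
            if all(a in s or b in s for a, b in edges):
                return True
    return False

# The formula from the figure: (A∨B) ∧ (¬A∨¬B∨¬C) ∧ (¬A∨B∨C).
clauses = [[('A', True), ('B', True)],
           [('A', False), ('B', False), ('C', False)],
           [('A', False), ('B', True), ('C', True)]]
vertices, edges, k = sat_to_vc(clauses, ['A', 'B', 'C'])
```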
 
===Detailed example===
The following example shows how to use reduction from the halting problem to prove that a language is undecidable. Suppose ''H''(''M'', ''w'') is the problem of determining whether a given [[Turing machine]] ''M'' halts (by accepting or rejecting) on input string ''w''. This language is known to be undecidable. Suppose ''E''(''M'') is the problem of determining whether the language a given Turing machine ''M'' accepts is empty (in other words, whether ''M'' accepts any strings at all). We show that ''E'' is undecidable by a reduction from ''H''.
 
To obtain a contradiction, suppose ''R'' is a decider for ''E''. We will use this to produce a decider ''S'' for ''H'' (which we know does not exist). Given input ''M'' and ''w'' (a Turing machine and some input string), define ''S''(''M'', ''w'') with the following behavior: ''S'' creates a Turing machine ''N'' that accepts only if the input string to ''N'' is ''w'' and ''M'' halts on input ''w'', and does not halt otherwise. The decider ''S'' can now evaluate ''R''(''N'') to check whether the language accepted by ''N'' is empty. If ''R'' accepts ''N'', then the language accepted by ''N'' is empty, so in particular ''M'' does not halt on input ''w'', so ''S'' can reject. If ''R'' rejects ''N'', then the language accepted by ''N'' is nonempty, so ''M'' does halt on input ''w'', so ''S'' can accept. Thus, if we had a decider ''R'' for ''E'', we would be able to produce a decider ''S'' for the halting problem ''H''(''M'', ''w'') for any machine ''M'' and input ''w''. Since we know that such an ''S'' cannot exist, it follows that the language ''E'' is also undecidable.
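The machine ''N'' built by ''S'' can be sketched if we model "Turing machines" as Python functions that return upon halting (a toy model; no actual decider ''R'' can exist, so only the construction of ''N'' is shown):

```python
def make_N(M, w):
    """Build N from M and w: N accepts its input x only when
    x == w and M halts on w; on any other input, N runs forever."""
    def N(x):
        if x != w:
            while True:       # loop forever: N accepts nothing but w
                pass
        M(w)                  # returns only if M halts on input w
        return True           # reached exactly when M halted on w
    return N

# If a decider R for emptiness existed, S would simply be:
#   S(M, w) = reject if R(make_N(M, w)) reports "empty", else accept
```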
 
== See also ==
* [[Gadget (computer science)]]
* [[Many-one reduction]]
* [[Reduction (recursion theory)]]
* [[Truth table reduction]]
* [[Turing reduction]]
 
== References ==
*Thomas H. Cormen, Charles E. Leiserson, Ronald L. Rivest and Clifford Stein, Introduction to Algorithms, MIT Press, 2001, ISBN 978-0-262-03293-3
*Hartley Rogers, Jr.: Theory of Recursive Functions and Effective Computability, McGraw-Hill, 1967, ISBN 978-0-262-68052-3.
*Peter Bürgisser: Completeness and Reduction in Algebraic Complexity Theory, Springer, 2000, ISBN 978-3-540-66752-0.
*E.R. Griffor: Handbook of Computability Theory, North Holland, 1999, ISBN 978-0-444-89882-1.
 
{{DEFAULTSORT:Reduction (Complexity)}}
[[Category:Computational complexity theory]]
[[Category:Structural complexity theory]]

Revision as of 04:44, 31 January 2014
