Within [[computer science]] and [[operations research]],
many [[combinatorial optimization]] problems are computationally [[intractability (complexity)|intractable]] to solve exactly (to optimality).
Many such problems do admit fast ([[polynomial time]]) [[approximation algorithms]]—that is, algorithms that are guaranteed to return an approximately optimal solution given any input.
 
'''Randomized rounding'''
{{harv|Raghavan|Tompson|1987}}
is a widely used approach for designing and analyzing such [[approximation algorithms]].<ref name="MotwaniRaghavan1995">
{{cite book
|first1=Rajeev |last1=Motwani |authorlink1=Rajeev Motwani
|first2=Prabhakar |last2=Raghavan |authorlink2=Prabhakar Raghavan
|title=Randomized algorithms
|url=http://books.google.com/books?id=QKVY4mDivBEC&q=randomized+rounding#v=snippet&q=randomized%20rounding&f=false
|publisher=[[Cambridge University Press]]
|isbn=978-0-521-47465-8}}
</ref><ref name="Vazirani2001">
{{cite book
|first=Vijay |last=Vazirani
|authorlink=Vijay Vazirani
|title=Approximation algorithms
|url=http://books.google.com/books?id=EILqAmzKgYIC&dq=vazirani+approximation+algorithms+doi+isbn&printsec=frontcover&source=bn&hl=en&ei=3PSzS4-BO4TUNZSF4Z0J&sa=X&oi=book_result&ct=result&resnum=5&ved=0CCYQ6AEwBA#v=snippet&q=%22randomized%20rounding%22&f=false
|publisher=[[Springer Verlag]]
|isbn=978-3-540-65367-7}}
</ref><!-- book references generated by http://reftag.appspot.com -->
The basic idea is to use the [[probabilistic method]]
to convert an optimal solution of a [[linear programming relaxation|relaxation]]
of the problem into an approximately optimal solution to the original problem.
 
== Overview ==
The basic approach has three steps:
# Formulate the problem to be solved as an [[integer linear program]] (ILP).
# Compute an optimal fractional solution <math>x</math> to the [[linear programming relaxation]] (LP) of the ILP.
# Round the fractional solution <math>x</math> of the LP to an integer solution <math>x'</math> of the ILP.
 
(Although the approach is most commonly applied with linear programs,
other kinds of relaxations are sometimes used.
For example, see Goemans and Williamson's [[semi-definite programming]]-based
[[Semi-definite_programming#Example_3_.28Goemans-Williamson_MAX_CUT_approximation_algorithm.29|Max-Cut approximation algorithm]].)
 
The challenge in the first step is to choose a suitable integer linear program.
Familiarity with linear programming is required, in particular, familiarity with
how to model problems using linear programs and integer linear programs.
But, for many problems, there is a natural integer linear program that works well,
such as in the Set Cover example below. (The integer linear program should have a small
[[Linear_programming_relaxation#Approximation_and_integrality_gap|integrality gap]];
indeed randomized rounding is often used to prove bounds on integrality gaps.)
 
In the second step, the optimal fractional solution can typically  be computed
in [[Polynomial_time#Polynomial_time|polynomial time]]
using any standard [[linear programming]] algorithm.
 
In the third step, the fractional solution must be converted into an integer solution
(and thus a solution to the original problem).
This is called ''rounding'' the fractional solution.
The resulting integer solution should (provably) have cost
not much larger than the cost of the fractional solution.
This will ensure that the cost of the integer solution
is not much larger than the cost of the optimal integer solution.
 
The main technique used to do the third step (rounding) is to use randomization,
and then to use probabilistic arguments to bound the increase in cost due to the rounding
(following the [[probabilistic method]] from combinatorics).
There, probabilistic arguments are used to show the existence of discrete structures with
desired properties. In this context, one uses such arguments to show the following:
: ''Given any fractional solution <math>x</math> of the LP, with positive probability the randomized rounding process produces an integer solution <math>x'</math> that approximates <math>x</math>'' according to some desired criterion.
 
Finally, to make the third step computationally efficient,
one either shows that <math>x'</math> approximates <math>x</math>
with high probability (so that the step can remain randomized)
or one [[Derandomization|derandomizes]] the rounding step,
typically using the [[method of conditional probabilities]].
The latter method converts the randomized rounding process
into an efficient deterministic process that is guaranteed
to reach a good outcome.
 
== Comparison to other applications of the probabilistic method ==
The randomized rounding step differs from most applications of the [[probabilistic method]] in two respects:
# The [[computational complexity]] of the rounding step is important.  It should be implementable by a fast (e.g. [[polynomial time]]) [[algorithm]].
# The probability distribution underlying the random experiment is a function of the solution <math>x</math> of a [[linear programming relaxation|relaxation]] of the problem instance.  This fact is crucial to proving the [[Approximation_algorithm#Performance_guarantees|performance guarantee]] of the approximation algorithm; that is, for any problem instance, the algorithm returns a solution that approximates the ''optimal solution for that specific instance''.  In comparison, [[Probabilistic method|applications of the probabilistic method in combinatorics]] typically show the existence of structures whose features depend on other parameters of the input.  For example, consider [[Turán's theorem]], which can be stated as "any [[Graph (mathematics)|graph]] with <math>n</math> vertices of average degree <math>d</math> must have an [[Independent set (graph theory)|independent set]] of size at least <math>n/(d+1)</math>".  (See [[Method_of_conditional_probabilities#Turán.27s_theorem|a probabilistic proof of Turán's theorem]].)  While there are graphs for which this bound is tight, there are also graphs with independent sets much larger than <math>n/(d+1)</math>.  Thus, the size of the independent set shown to exist by Turán's theorem may, in general, be much smaller than the maximum independent set of that graph.
 
== Set Cover example ==
The method is best illustrated by example.
The following example illustrates how randomized rounding
can be used to design an approximation algorithm for the [[Set Cover]] problem.
 
Fix any instance <math>\langle c, \mathcal S\rangle</math>
of the Set Cover problem over a universe <math>\mathcal U</math>.
 
For step 1, let IP be the
[[set cover#Integer linear program formulation|standard integer linear program for set cover]]
for this instance.
 
For step 2, let LP be the [[linear programming relaxation]] of IP,
and compute an optimal solution <math>x^*</math> to LP
using any standard [[linear programming]] algorithm.
(This takes time polynomial in the input size.)
 
(The feasible solutions to LP are the vectors <math>x</math>
that assign each set <math>s \in\mathcal S</math>
a non-negative weight <math>x_s</math>,
such that, for each element <math>e\in\mathcal U</math>,
<math>x</math> ''covers'' <math>e</math>:
the total weight assigned to the sets containing <math>e</math>
is at least 1, that is,
:: <math>\sum_{s\ni e} x_s \ge 1.</math>
The optimal solution <math>x^*</math>
is a feasible solution whose cost
::  <math>\sum_{s\in\mathcal S} c(s)x^*_s</math>
is as small as possible.)
 
----
Note that any set cover <math>\mathcal C\subseteq\mathcal S</math>
gives a feasible solution <math>x</math>
(where <math>x_s=1</math> for <math>s\in\mathcal C</math>,
<math>x_s=0</math> otherwise).
The cost of this <math>\mathcal C</math> equals the cost of <math>x</math>, that is,
::  <math>\sum_{s\in\mathcal C} c(s) = \sum_{s\in\mathcal S} c(s) x_s.</math>
In other words, the linear program LP is a [[linear programming relaxation|relaxation]]
of the given set-cover problem.
 
Since <math>x^*</math> has minimum cost among feasible solutions to the LP,
''the cost of <math>x^*</math> is a lower bound on the cost of the optimal set cover''.
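
For concreteness, here is a minimal Python sketch of step 2 on a small, hypothetical instance. It assumes <code>scipy</code> is available and uses <code>scipy.optimize.linprog</code>; any LP solver would do.

<syntaxhighlight lang="python">
import numpy as np
from scipy.optimize import linprog

# Hypothetical instance: universe {0,1,2,3}, four sets, unit costs.
universe = [0, 1, 2, 3]
sets = [{0, 1}, {1, 2}, {2, 3}, {0, 3}]
costs = [1.0, 1.0, 1.0, 1.0]

# Coverage constraints: for each element e, sum_{s containing e} x_s >= 1,
# written as -sum x_s <= -1 to fit linprog's A_ub @ x <= b_ub form.
A_ub = np.array([[-1.0 if e in s else 0.0 for s in sets] for e in universe])
b_ub = -np.ones(len(universe))

res = linprog(costs, A_ub=A_ub, b_ub=b_ub, bounds=(0, 1), method="highs")
x_star = res.x              # optimal fractional set cover
print(x_star, res.fun)      # optimal cost is 2 on this instance
</syntaxhighlight>

On this instance every element lies in exactly two sets, so every feasible solution has total weight at least 2; the solver returns an optimal fractional cover of cost 2 (one optimum is <math>x^*_s = 1/2</math> for every set, though the solver may return an integral vertex of the same cost).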
 
=== Step 3: The randomized rounding step ===
Here is a description of the third step—the rounding step,
which must convert the minimum-cost fractional set cover <math>x^*</math>
into a feasible integer solution <math>x'</math> (corresponding to a true set cover).
 
The rounding step should produce an <math>x'</math> that, with positive probability,
has cost  within a small factor of the cost of <math>x^*</math>.
Then (since the cost of <math>x^*</math> is a lower bound on the cost of the optimal set cover),
the cost of <math>x'</math> will be within a small factor of the optimal cost.
 
As a starting point, consider the most natural rounding scheme:
:: ''For each set <math>s\in\mathcal S</math> in turn, take <math>x'_s = 1</math> with probability <math>\min(1,x^*_s)</math>, otherwise take <math>x'_s = 0</math>.''
 
With this rounding scheme,
the expected cost of the chosen sets is at most <math>\sum_s c(s) x^*_s</math>,
the cost of the fractional cover.
This is good.  Unfortunately the coverage is not good.
When the variables <math>x^*_s</math> are small,
the probability that an element <math>e</math> is not covered is about
 
:  <math>
\prod_{s\ni e} (1-x^*_s)
\approx
\prod_{s\ni e} \exp(-x^*_s)
=
\exp\Big(-\sum_{s\ni e}x^*_s\Big)
\approx \exp(-1).
</math>
 
So, in expectation, a constant fraction of the elements will be left uncovered.
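
This back-of-the-envelope estimate is easy to check empirically. The following sketch simulates a hypothetical element contained in <math>k</math> sets, each with fractional value <math>x^*_s = 1/k</math>, and estimates the probability that the naive scheme leaves it uncovered:

<syntaxhighlight lang="python">
import random

k, trials = 50, 100_000     # element e lies in k sets, each with x*_s = 1/k
uncovered = sum(
    all(random.random() >= 1 / k for _ in range(k))   # no set containing e chosen
    for _ in range(trials)
)
print(uncovered / trials)   # ~ (1 - 1/k)^k, close to exp(-1) ~ 0.368
</syntaxhighlight>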
 
To make <math>x'</math> cover every element with high probability,
the standard rounding scheme
first ''scales up'' the rounding probabilities
by an appropriate factor <math>\lambda > 1 </math>.
Here is the standard rounding scheme:
::  ''Fix a parameter <math>\lambda \ge 1</math>.  For each set <math>s\in\mathcal S</math> in turn,''
::  ''take <math>x'_s = 1</math> with probability <math>\min(\lambda x^*_s, 1)</math>, otherwise take <math>x'_s = 0</math>.''
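
In code, the scheme is a one-line loop over the sets. A minimal sketch (the particular value of <math>\lambda</math> suggested in the comment anticipates the lemma below):

<syntaxhighlight lang="python">
import math
import random

def scaled_randomized_rounding(x_star, lam):
    """Take x'_s = 1 with probability min(lam * x*_s, 1), independently per set."""
    return [1 if random.random() < min(lam * xs, 1.0) else 0 for xs in x_star]

# e.g., with lam = ln(2|U|), as chosen in the lemma below:
# x_prime = scaled_randomized_rounding(x_star, math.log(2 * len(universe)))
</syntaxhighlight>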
 
Scaling the probabilities up by <math>\lambda</math>
increases the expected cost by a factor of <math>\lambda</math>,
but makes coverage of all elements likely.
The idea is to choose <math>\lambda</math> as small
as possible so that all elements are provably
covered with non-zero probability.
Here is a detailed analysis.
 
----
 
==== lemma (approximation guarantee for rounding scheme) ====
:: ''Fix <math>\lambda = \ln (2|\mathcal U|)</math>.  With positive probability, the rounding scheme returns a set cover <math>x'</math> of cost at most <math>2\ln(2|\mathcal U|) c\cdot x^*</math> (and thus of cost <math>O(\log |\mathcal U|)</math> times the cost of the optimal set cover).''
 
(Note: with care the  <math>O(\log |\mathcal U|)</math>
can be reduced to <math>\ln(|\mathcal U|)+O(\log\log|\mathcal U|)</math>.)
 
==== proof ====
The output <math>x'</math> of the random rounding scheme has the desired properties
as long as none of the following "bad" events occur:
# the cost <math>c\cdot x'</math> of <math>x'</math> exceeds <math>2\lambda c\cdot x^*</math>, or
# for some element <math>e</math>, <math>x'</math> fails to cover <math>e</math>.
 
The expectation of each  <math>x'_s</math> is at most <math>\lambda x_s^*</math>.
By [[Expected_value#Linearity|linearity of expectation]],
the expectation of  <math>c\cdot x'</math>
is at most <math>\sum_s c(s)\lambda x_s^*=\lambda c\cdot x^*</math>.
Thus, by [[Markov's inequality]], the probability of the first bad event
above is at most <math>1/2</math>.
 
For the remaining bad events (one for each element <math>e</math>), note that,
since <math>\sum_{s\ni e} x^*_s \ge 1</math> for any given element <math>e</math>,
the probability that <math>e</math> is not covered is
 
:  <math>
\begin{align}
\prod_{s\ni e} \big(1-\min(\lambda x^*_s,1) \big)
& < \prod_{s\ni e} \exp({-}\lambda x^*_s)
= \exp\Big({-}\lambda \sum_{s\ni e} x^*_s \Big)
\\
& \le \exp({-}\lambda)
= 1/(2|\mathcal U|).
\end{align}
</math>
 
(This uses the inequality <math>1+z\le e^z</math>,
which is strict for <math>z \ne 0</math>.)
 
Thus, for each of the <math>|\mathcal U|</math> elements,
the probability that the element is not covered is less than <math>1/(2|\mathcal U|)</math>.
 
By the [[union bound]],
the probability that one of the <math>1+|\mathcal U|</math> bad events happens
is less than <math>1/2 + |\mathcal U|/(2|\mathcal U|)=1</math>.
Thus, with positive probability there are no bad events
and <math>x'</math> is a set cover of cost at most <math>2\lambda c\cdot x^*</math>.
QED
 
=== Derandomization using the method of conditional probabilities ===
<!-- Note: this subsection is linked to from [[method_of_conditional_probabilities]]. 
    If you change the title here, please update that link -->
 
The lemma above shows the ''existence'' of a set cover
of cost <math>O(\log|\mathcal U|)\,c\cdot x^*</math>.
In this context our goal is an efficient approximation algorithm,
not just an existence proof, so we are not done.
 
One approach would be to increase <math>\lambda</math>
a little bit, then show that the probability of success is at least, say, 1/4.
With this modification, repeating the random rounding step a few times
is enough to ensure a successful outcome with high probability.
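
A sketch of that variant: since both failure modes (cost too high, element uncovered) are easy to test, the rounding can simply be repeated until a trial passes both tests. The function and parameter names here are illustrative, not standard.

<syntaxhighlight lang="python">
import random

def round_with_retries(sets, costs, universe, x_star, lam, max_tries=100):
    """Repeat the randomized rounding until x' covers the universe and has
    cost at most 2*lam*(c . x*).  With lam slightly enlarged, each trial
    succeeds with constant probability, so few retries are needed."""
    budget = 2 * lam * sum(c * xs for c, xs in zip(costs, x_star))
    for _ in range(max_tries):
        x = [1 if random.random() < min(lam * xs, 1.0) else 0 for xs in x_star]
        covered = all(any(xi and e in s for xi, s in zip(x, sets))
                      for e in universe)
        if covered and sum(c for c, xi in zip(costs, x) if xi) <= budget:
            return x
    raise RuntimeError("exceeded retry budget (unlikely for suitable lam)")
</syntaxhighlight>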
 
That approach weakens the approximation ratio.
We next describe a different approach that yields
a deterministic algorithm that is guaranteed to
match the approximation ratio of the existence proof above.
The approach is called the [[method of conditional probabilities]].
 
The deterministic algorithm emulates the randomized rounding scheme:
it considers each set <math>s'\in\mathcal S</math> in turn,
and chooses <math>x'_{s'} \in\{0,1\}</math>.
But instead of making each choice ''randomly'' based on <math>x^*</math>,
it makes the choice ''deterministically'', so as to
''keep the conditional probability of failure, given the choices so far, below 1''.
 
==== Bounding the conditional probability of failure ====
We want to be able to set each variable <math>x'_s</math> in turn
so as to keep the conditional probability of failure below 1.
To do this, we need a good bound on the conditional probability of failure.
The bound will come by refining the original existence proof.
That proof implicitly bounds the probability of failure
by the expectation of the random variable
:  <math>F = \frac{c\cdot x'}{2\lambda c\cdot x^*} + |\mathcal U^{(m)}|</math>,
where
:  <math>\mathcal U^{(m)}= \Big\{ e  : \prod_{s\ni e} (1-x'_s) = 1\Big\}</math>
is the set of elements left uncovered at the end.
 
The random variable <math>F</math> may appear a bit mysterious,
but it mirrors the probabilistic proof in a systematic way.
The first term in <math>F</math> comes from applying [[Markov's inequality]]
to bound the probability of the first bad event (the cost is too high).
It contributes at least 1 to <math>F</math> if the cost of <math>x'</math> is too high.
The second term
counts the number of bad events of the second kind (uncovered elements).
It contributes at least 1 to <math>F</math> if <math>x'</math> leaves any element uncovered.
Thus, in any outcome where <math>F</math> is less than 1,
<math>x'</math> must cover all the elements
and have cost meeting the desired bound from the lemma.
In short, if the rounding step fails, then <math>F \ge 1</math>.
This implies (by [[Markov's inequality]]) that
''<math>E[F]</math> is an upper bound on the probability of failure.''
Note that the argument above is implicit already in the proof of the lemma,
which also shows by calculation that <math>E[F] < 1</math>.
 
To apply the method of conditional probabilities,
we need to extend the argument to bound the ''conditional'' probability of failure
as the rounding step proceeds.
Usually, this can be done in a systematic way,
although it can be technically tedious.
 
So, what about the ''conditional'' probability of failure as the rounding step iterates through the sets?
Since <math>F \ge 1</math> in any outcome where the rounding step fails,
by [[Markov's inequality]], the ''conditional'' probability of failure
is at most the ''conditional'' expectation of <math>F</math>.
 
Next we calculate the conditional expectation of <math>F</math>,
much as we calculated the unconditioned expectation of <math>F</math> in the original proof.
Consider the state of the rounding process at the end of some iteration <math>t</math>.
Let <math>S^{(t)}</math> denote the sets considered so far
(the first <math>t</math> sets in <math>\mathcal S</math>).
Let <math>x^{(t)}</math> denote the (partially assigned) vector <math>x'</math>
(so <math>x^{(t)}_s</math> is determined only if <math>s\in S^{(t)}</math>).
For each set <math>s\not\in S^{(t)}</math>,
let <math>p_s = \min(\lambda x^*_s, 1)</math>
denote the probability with which <math>x'_s</math> will be set to 1.
Let <math>\mathcal U^{(t)}</math> contain the not-yet-covered elements.
Then the conditional expectation of <math>F</math>,
given the choices made so far, that is, given <math>x^{(t)}</math>, is
 
: <math>
E[F | x^{(t)}]
~=~
\frac{\sum_{s\in S^{(t)}} c(s) x'_s
+ \sum_{s\not\in S^{(t)}} c(s) p_s}{2\lambda c\cdot x^*}
~+~
\sum_{e\in \mathcal U^{(t)}}\prod_{s\not\in S^{(t)}, s\ni e} (1-p_s).
</math>
 
Note that <math>E[F | x^{(t)}]</math> is determined only after iteration <math>t</math>.
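
As a sketch, this quantity can be computed directly from the partial assignment. Here <code>decided</code> is a hypothetical representation of <math>x^{(t)}</math>: a dictionary mapping the index of each set in <math>S^{(t)}</math> to its chosen value.

<syntaxhighlight lang="python">
import math

def cond_expectation_F(sets, costs, universe, x_star, lam, decided):
    """E[F | x^(t)] for the partial assignment `decided` (set index -> 0 or 1)."""
    denom = 2 * lam * sum(c * xs for c, xs in zip(costs, x_star))
    p = [min(lam * xs, 1.0) for xs in x_star]
    # first term: committed cost plus expected cost of the undecided sets
    exp_cost = sum(costs[i] * (decided[i] if i in decided else p[i])
                   for i in range(len(sets)))
    # second term: for each not-yet-covered element, the probability that
    # none of the undecided sets containing it gets chosen
    not_covered = [e for e in universe
                   if not any(decided.get(i) == 1 and e in s
                              for i, s in enumerate(sets))]
    miss = sum(math.prod(1 - p[i] for i, s in enumerate(sets)
                         if i not in decided and e in s)
               for e in not_covered)
    return exp_cost / denom + miss
</syntaxhighlight>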
 
==== Keeping the conditional probability of failure below 1 ====
To keep the conditional probability of failure below 1,
it suffices to keep the conditional expectation of <math>F</math> below 1.
To do this, it suffices to keep the conditional expectation of <math>F</math> from increasing.
This is what the algorithm will do.
It will set <math>x'_s</math> in each iteration to ensure that
:: <math>E[F|x^{(m)}] \le E[F|x^{(m-1)}] \le \cdots \le E[F|x^{(1)}] \le E[F|x^{(0)}] < 1 </math>
(where <math>m=|\mathcal S|</math>).
 
In the <math>t</math>th iteration,
how can the algorithm set <math>x'_{s'}</math> (where <math>s'</math> denotes the <math>t</math>th set)
to ensure that <math>E[F|x^{(t)}] \le E[F|x^{(t-1)}]</math>?
It turns out that it can simply set <math>x'_{s'}</math>
so as to ''minimize'' the resulting value of <math>E[F|x^{(t)}]</math>.
 
To see why, focus on the point in time when iteration <math>t</math> starts.
At that time, <math>E[F|x^{(t-1)}]</math> is determined,
but <math>E[F|x^{(t)}]</math> is not yet determined
--- it can take two possible values depending on how <math>x'_{s'}</math>
is set in iteration <math>t</math>.
Let <math>E^{(t-1)}</math> denote the value of <math>E[F|x^{(t-1)}]</math>.
Let <math>E^{(t)}_0</math> and <math>E^{(t)}_1</math>
denote the two possible values of <math>E[F|x^{(t)}]</math>,
depending on whether <math>x'_{s'}</math> is set to 0 or 1, respectively.
By the definition of conditional expectation,
:: <math>
E^{(t-1)} ~=~
\Pr[x'_{s'}=0] E^{(t)}_0
+
\Pr[x'_{s'}=1] E^{(t)}_1.
</math>
Since a weighted average of two quantities
is always at least the minimum of those two quantities,
it follows that
:: <math>
E^{(t-1)} ~\ge~ \min( E^{(t)}_0, E^{(t)}_1 ).
</math>
Thus, setting <math>x'_{s'}</math>
so as to minimize the resulting value of
<math>E[F | x^{(t)}]</math>
will guarantee that
<math>E[F | x^{(t)}] \le E[F | x^{(t-1)}]</math>.
This is what the algorithm will do.
 
In detail, what does this mean?
Considered as a function of <math>x'_{s'}</math>
(with all other quantities fixed)
<math>E[F | x^{(t)}]</math>
is a linear function of <math>x'_{s'}</math>,
and the coefficient of <math>x'_{s'}</math> in that function is
:  <math>\frac{c(s')}{2\lambda c\cdot x^*}
~-~
\sum_{e\in s'\cap \mathcal U^{(t-1)}}\prod_{s\not\in S^{(t)},\, s\ni e} (1-p_s).
</math>
 
Thus, the algorithm should set <math>x'_{s'}</math> to 0 if this expression is positive,
and 1 otherwise.  This gives the following algorithm.
 
=== Randomized-rounding algorithm for Set Cover ===
 
'''input:''' set system <math>\mathcal S</math>, universe <math>\mathcal U</math>, cost vector <math>c</math>
 
'''output:'''  set cover <math>x'</math> (a solution to the standard integer linear program for set cover)
----
# Compute a min-cost fractional set cover <math>x^*</math> (an optimal solution to the LP relaxation).
# Let <math>\lambda \leftarrow \ln(2|\mathcal U|)</math>.  Let <math>p_s \leftarrow \min(\lambda x^*_{s},1)</math> for each <math>s\in\mathcal S</math>.
# For each <math>s'\in\mathcal S</math> do:
## Let <math>\mathcal S \leftarrow \mathcal S - \{s'\}</math>.  &nbsp; (<math>\mathcal S</math> contains the not-yet-decided sets.)
## If &nbsp;&nbsp; <math>
\frac{c(s')}{2\lambda c\cdot x^*}
>
\sum_{e\in s'\cap\mathcal U} \prod_{s\in \mathcal S, s\ni e}(1-p_s)
</math>
##: then set <math>x'_{s'}\leftarrow 0</math>,
##: else set <math>x'_{s'}\leftarrow 1</math> and <math>\mathcal U\leftarrow\mathcal U - s'</math>.
##:  &nbsp;&nbsp;(<math>\mathcal U</math> contains the not-yet-covered elements.)
# Return <math>x'</math>.
----
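
The pseudocode translates almost line for line into Python. A minimal sketch, assuming <code>sets</code> is a list of Python <code>set</code>s, <code>costs</code> a parallel list of positive costs, and <code>x_star</code> an optimal fractional cover (for example, the output of the LP sketch earlier):

<syntaxhighlight lang="python">
import math

def derandomized_set_cover(sets, costs, universe, x_star):
    """Method of conditional probabilities: decide each set so that the
    conditional expectation of F never increases."""
    lam = math.log(2 * len(universe))
    denom = 2 * lam * sum(c * xs for c, xs in zip(costs, x_star))
    p = [min(lam * xs, 1.0) for xs in x_star]
    undecided = set(range(len(sets)))    # indices of not-yet-decided sets
    uncovered = set(universe)            # not-yet-covered elements
    x = [0] * len(sets)
    for i in range(len(sets)):           # consider each set s' in turn
        undecided.discard(i)
        # total miss probability that choosing s' would eliminate
        gain = sum(math.prod(1 - p[j] for j in undecided if e in sets[j])
                   for e in sets[i] & uncovered)
        if costs[i] / denom > gain:
            x[i] = 0                     # choosing s' would increase E[F]
        else:
            x[i] = 1
            uncovered -= sets[i]
    assert not uncovered                 # guaranteed by the lemma below
    return x
</syntaxhighlight>

On the toy instance solved earlier, this returns a cover of cost 2, matching the LP lower bound; in general, the lemma below guarantees cost at most <math>2\ln(2|\mathcal U|)</math> times the optimal fractional cost.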
 
==== lemma (approximation guarantee for algorithm) ====
 
:: ''The algorithm above returns a set cover <math>x'</math> of cost at most <math>2\ln(2|\mathcal U|)</math> times the minimum cost of any (fractional) set cover.''
 
==== proof ====
The algorithm ensures that the conditional expectation of <math>F</math>,
<math>E[F \,|\, x^{(t)}]</math>, does not increase at each iteration.
Since this conditional expectation is initially less than 1 (as shown previously),
the algorithm ensures that the conditional expectation stays below 1.
Since the conditional probability of failure
is at most the conditional expectation of <math>F</math>,
the algorithm thereby
ensures that the conditional probability of failure stays below 1.
Thus, at the end, when all choices are determined,
the algorithm reaches a successful outcome.
That is, the algorithm above returns a set cover <math>x'</math>
of cost at most <math>2\ln(2|\mathcal U|)</math> times
the minimum cost of any (fractional) set cover.
 
=== Remarks ===
In the example above, the algorithm was guided by the conditional expectation of a random variable <math>F</math>.
In some cases, instead of an exact conditional expectation,
an ''upper bound'' (or sometimes a lower bound)
on the conditional expectation is used.
This is called a [[pessimistic estimator]].
 
== See also ==
* [[Method of conditional probabilities]]
 
== References ==
{{Reflist}}
 
* {{Citation | title= Randomized rounding: A technique for provably good algorithms and algorithmic proofs| first1 = Prabhakar| last1 = Raghavan |authorlink1=Prabhakar Raghavan | first2=Clark D.|last2 =Tompson|journal=[[Combinatorica]]|volume=7|issue=4|year=1987|pages=365–374|doi=10.1007/BF02579324}}.
* {{Citation | title= Probabilistic construction of deterministic algorithms: approximating packing integer programs | first = Prabhakar | last = Raghavan | authorlink=Prabhakar Raghavan | journal=[[Journal of Computer and System Sciences]] | volume=37|issue=2|pages=130–143|year = 1988 | doi = 10.1016/0022-0000(88)90003-7}}.
 
==Further reading==
* {{citation|last=Althöfer|first=Ingo|title=On sparse approximations to randomized strategies and convex combinations|journal=Linear Algebra and its Applications|volume=199|year=1994|pages=339–355|mr=1274423|ref=harv|doi=10.1016/0024-3795(94)90357-3}}
* {{citation|last1=Hofmeister|first1=Thomas|last2=Lefmann|first2=Hanno|title=Computing sparse approximations deterministically|journal=Linear Algebra and its Applications|volume=240|year=1996|pages=9–19|mr=1387283|ref=harv}}
* {{citation|last1=Lipton|first1=Richard J.|last2=Young|first2=Neal E.|chapter=Simple strategies for large zero-sum games with applications to complexity theory|title =STOC '94: Proceedings of the twenty-sixth annual ACM symposium on theory of computing|year=1994|isbn=0-89791-663-8|pages=734–740|location=Montreal, Quebec, Canada|doi=10.1145/195058.195447|publisher =[[Association for Computing Machinery|ACM]]|address=New York, NY|ref=harv}}
 
 
{{DEFAULTSORT:Randomized Rounding}}
[[Category:Algorithms]]
[[Category:Probabilistic arguments]]
