<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en">
	<id>https://en.formulasearchengine.com/index.php?action=history&amp;feed=atom&amp;title=Time_deviation</id>
	<title>Time deviation - Revision history</title>
	<link rel="self" type="application/atom+xml" href="https://en.formulasearchengine.com/index.php?action=history&amp;feed=atom&amp;title=Time_deviation"/>
	<link rel="alternate" type="text/html" href="https://en.formulasearchengine.com/index.php?title=Time_deviation&amp;action=history"/>
	<updated>2026-04-10T04:38:13Z</updated>
	<subtitle>Revision history for this page on the wiki</subtitle>
	<generator>MediaWiki 1.43.0-wmf.28</generator>
	<entry>
		<id>https://en.formulasearchengine.com/index.php?title=Time_deviation&amp;diff=266655&amp;oldid=prev</id>
		<title>en&gt;Omnipaedista: standardized punctuation</title>
		<link rel="alternate" type="text/html" href="https://en.formulasearchengine.com/index.php?title=Time_deviation&amp;diff=266655&amp;oldid=prev"/>
		<updated>2014-07-29T14:43:35Z</updated>

		<summary type="html">&lt;p&gt;standardized punctuation&lt;/p&gt;
&lt;a href=&quot;https://en.formulasearchengine.com/index.php?title=Time_deviation&amp;amp;diff=266655&amp;amp;oldid=25583&quot;&gt;Show changes&lt;/a&gt;</summary>
		<author><name>en&gt;Omnipaedista</name></author>
	</entry>
	<entry>
		<id>https://en.formulasearchengine.com/index.php?title=Time_deviation&amp;diff=25583&amp;oldid=prev</id>
		<title>en&gt;Kvng: reorder paragraphs. link improvements. bait maximum time interval error.</title>
		<link rel="alternate" type="text/html" href="https://en.formulasearchengine.com/index.php?title=Time_deviation&amp;diff=25583&amp;oldid=prev"/>
		<updated>2012-05-24T17:38:53Z</updated>

		<summary type="html">&lt;p&gt;reorder paragraphs. link improvements. bait &lt;a href=&quot;/index.php?title=Maximum_time_interval_error&amp;amp;action=edit&amp;amp;redlink=1&quot; class=&quot;new&quot; title=&quot;Maximum time interval error (page does not exist)&quot;&gt;maximum time interval error&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;&lt;b&gt;New page&lt;/b&gt;&lt;/p&gt;&lt;div&gt;{{Refimprove|date=January 2012}}&lt;br /&gt;
An &amp;#039;&amp;#039;&amp;#039;arbitrarily varying channel (AVC)&amp;#039;&amp;#039;&amp;#039; is a communication [[channel model]] used in [[coding theory]], and was first introduced by Blackwell, Breiman, and Thomasian.  This particular [[Communication channel|channel]] has unknown parameters that can change over time, and these changes may not have a uniform pattern during the transmission of a [[codeword]]. &amp;lt;math&amp;gt;\textstyle n&amp;lt;/math&amp;gt; uses of this [[Channel model|channel]] can be described using a [[stochastic matrix]] &amp;lt;math&amp;gt;\textstyle W^n: X^n \times S^n \rightarrow Y^n&amp;lt;/math&amp;gt;, where &amp;lt;math&amp;gt;\textstyle X&amp;lt;/math&amp;gt; is the input alphabet, &amp;lt;math&amp;gt;\textstyle Y&amp;lt;/math&amp;gt; is the output alphabet, and &amp;lt;math&amp;gt;\textstyle W^n (y | x, s)&amp;lt;/math&amp;gt; is the probability, over a given set of states &amp;lt;math&amp;gt;\textstyle S&amp;lt;/math&amp;gt;, that the transmitted input &amp;lt;math&amp;gt;\textstyle x = (x_1, \ldots, x_n)&amp;lt;/math&amp;gt; leads to the received output &amp;lt;math&amp;gt;\textstyle y = (y_1, \ldots, y_n)&amp;lt;/math&amp;gt;.  The state &amp;lt;math&amp;gt;\textstyle s_i&amp;lt;/math&amp;gt; in set &amp;lt;math&amp;gt;\textstyle S&amp;lt;/math&amp;gt; can vary arbitrarily at each time unit &amp;lt;math&amp;gt;\textstyle i&amp;lt;/math&amp;gt;.  This [[Channel model|channel]] was developed as an alternative to [[Claude Shannon|Shannon&amp;#039;s]] [[Binary symmetric channel|Binary Symmetric Channel]] (BSC), in which the entire nature of the [[Channel model|channel]] is known, and is intended to be a more realistic model of actual [[Communication channel|network channel]] situations.&lt;br /&gt;
&lt;br /&gt;
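Readers who prefer a concrete model may find a short simulation helpful. The following Python sketch runs a toy binary AVC for a few time units; the two-state [[stochastic matrix]] and the state sequence are illustrative assumptions, not part of the formal definition above.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
import random&lt;br /&gt;
&lt;br /&gt;
# Toy AVC: binary input/output alphabets and two states.&lt;br /&gt;
# W[s][x][y] gives W(y|x, s): state 0 acts like a BSC(0.1),&lt;br /&gt;
# state 1 like a BSC(0.4).  Both are illustrative assumptions.&lt;br /&gt;
W = {&lt;br /&gt;
    0: {0: {0: 0.9, 1: 0.1}, 1: {0: 0.1, 1: 0.9}},&lt;br /&gt;
    1: {0: {0: 0.6, 1: 0.4}, 1: {0: 0.4, 1: 0.6}},&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
def transmit(x, s):&lt;br /&gt;
    # Send codeword x while the state s[i] varies arbitrarily&lt;br /&gt;
    # at each time unit i.&lt;br /&gt;
    y = []&lt;br /&gt;
    for xi, si in zip(x, s):&lt;br /&gt;
        p1 = W[si][xi][1]        # probability of receiving a 1&lt;br /&gt;
        y.append(1 if random.random() &amp;lt; p1 else 0)&lt;br /&gt;
    return y&lt;br /&gt;
&lt;br /&gt;
print(transmit([0, 1, 1, 0, 1], [0, 1, 1, 0, 0]))  # one run of the channel&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;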
==Capacities and associated proofs==&lt;br /&gt;
&lt;br /&gt;
===Capacity of deterministic AVCs===&lt;br /&gt;
An AVC&amp;#039;s [[Channel capacity|capacity]] can vary depending on certain parameters.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\textstyle R&amp;lt;/math&amp;gt; is an achievable [[Information theory#Rate|rate]] for a deterministic AVC [[Channel coding|code]] if it is larger than &amp;lt;math&amp;gt;\textstyle 0&amp;lt;/math&amp;gt;, and if for every positive &amp;lt;math&amp;gt;\textstyle \varepsilon&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\textstyle \delta&amp;lt;/math&amp;gt;, and very large &amp;lt;math&amp;gt;\textstyle n&amp;lt;/math&amp;gt;, length-&amp;lt;math&amp;gt;\textstyle n&amp;lt;/math&amp;gt; [[block code]]s exist that satisfy the following equations: &amp;lt;math&amp;gt;\textstyle \frac{1}{n}\log N &amp;gt; R - \delta&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\displaystyle \max_{s \in S^n} \bar{e}(s) \leq \varepsilon&amp;lt;/math&amp;gt;, where &amp;lt;math&amp;gt;\textstyle N&amp;lt;/math&amp;gt; is the number of [[codeword]]s in the code and &amp;lt;math&amp;gt;\textstyle \bar{e}(s)&amp;lt;/math&amp;gt; is the average probability of error for a state sequence &amp;lt;math&amp;gt;\textstyle s&amp;lt;/math&amp;gt;.  The largest achievable [[Information theory#Rate|rate]] &amp;lt;math&amp;gt;\textstyle R&amp;lt;/math&amp;gt; represents the [[Channel capacity|capacity]] of the AVC, denoted by &amp;lt;math&amp;gt;\textstyle c&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
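For example, taking logarithms to base 2 for concreteness, a length-&amp;lt;math&amp;gt;\textstyle n&amp;lt;/math&amp;gt; [[block code]] with &amp;lt;math&amp;gt;\textstyle n = 1000&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\textstyle N = 2^{900}&amp;lt;/math&amp;gt; codewords gives &amp;lt;math&amp;gt;\textstyle \frac{1}{n}\log N = 0.9&amp;lt;/math&amp;gt;, so the first equation holds for any rate &amp;lt;math&amp;gt;\textstyle R &amp;lt; 0.9 + \delta&amp;lt;/math&amp;gt;; such a code witnesses the rate only if its worst-case average error probability over all state sequences also stays below &amp;lt;math&amp;gt;\textstyle \varepsilon&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;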
The only useful situations are those in which the [[Channel capacity|capacity]] of the AVC is greater than &amp;lt;math&amp;gt;\textstyle 0&amp;lt;/math&amp;gt;, because then the [[Channel model|channel]] can reliably transmit data at rates &amp;lt;math&amp;gt;\textstyle \leq c&amp;lt;/math&amp;gt; with arbitrarily small probability of error.  We therefore start with a [[theorem]] that shows when &amp;lt;math&amp;gt;\textstyle c&amp;lt;/math&amp;gt; is positive in an AVC; the [[theorem]]s discussed afterward narrow down the [[Range (mathematics)|range]] of &amp;lt;math&amp;gt;\textstyle c&amp;lt;/math&amp;gt; for different circumstances.&lt;br /&gt;
&lt;br /&gt;
Before stating Theorem 1, a few definitions need to be addressed:&lt;br /&gt;
&lt;br /&gt;
* An AVC is &amp;#039;&amp;#039;symmetric&amp;#039;&amp;#039; if &amp;lt;math&amp;gt;\displaystyle \sum_{s \in S}W(y|x, s)U(s|x&amp;#039;) = \sum_{s \in S}W(y|x&amp;#039;, s)U(s|x)&amp;lt;/math&amp;gt; for every &amp;lt;math&amp;gt;\textstyle (x, x&amp;#039;, y,s)&amp;lt;/math&amp;gt;, where &amp;lt;math&amp;gt;\textstyle x,x&amp;#039;\in X&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;\textstyle y \in Y&amp;lt;/math&amp;gt;, and &amp;lt;math&amp;gt;\textstyle U(s|x)&amp;lt;/math&amp;gt; is a channel function &amp;lt;math&amp;gt;\textstyle U: X \rightarrow S&amp;lt;/math&amp;gt;.&lt;br /&gt;
* &amp;lt;math&amp;gt;\textstyle X_r&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;\textstyle S_r&amp;lt;/math&amp;gt;, and &amp;lt;math&amp;gt;\textstyle Y_r&amp;lt;/math&amp;gt; are all [[random variable]]s in sets &amp;lt;math&amp;gt;\textstyle X&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;\textstyle S&amp;lt;/math&amp;gt;, and &amp;lt;math&amp;gt;\textstyle Y&amp;lt;/math&amp;gt; respectively.&lt;br /&gt;
* &amp;lt;math&amp;gt;\textstyle P_{X_r}(x)&amp;lt;/math&amp;gt; is equal to the probability that the [[random variable]] &amp;lt;math&amp;gt;\textstyle X_r&amp;lt;/math&amp;gt; is equal to &amp;lt;math&amp;gt;\textstyle x&amp;lt;/math&amp;gt;.&lt;br /&gt;
* &amp;lt;math&amp;gt;\textstyle P_{S_r}(s)&amp;lt;/math&amp;gt; is equal to the probability that the [[random variable]] &amp;lt;math&amp;gt;\textstyle S_r&amp;lt;/math&amp;gt; is equal to &amp;lt;math&amp;gt;\textstyle s&amp;lt;/math&amp;gt;.&lt;br /&gt;
* &amp;lt;math&amp;gt;\textstyle P_{X_{r}S_{r}Y_{r}}&amp;lt;/math&amp;gt; is the joint [[probability mass function]] (pmf) of &amp;lt;math&amp;gt;\textstyle X_r&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;\textstyle S_r&amp;lt;/math&amp;gt;, and &amp;lt;math&amp;gt;\textstyle Y_r&amp;lt;/math&amp;gt;, built from &amp;lt;math&amp;gt;\textstyle P_{X_r}(x)&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;\textstyle P_{S_r}(s)&amp;lt;/math&amp;gt;, and &amp;lt;math&amp;gt;\textstyle W(y|x,s)&amp;lt;/math&amp;gt;.  It is defined formally as &amp;lt;math&amp;gt;\textstyle P_{X_{r}S_{r}Y_{r}}(x,s,y) = P_{X_r}(x)P_{S_r}(s)W(y|x,s)&amp;lt;/math&amp;gt;.&lt;br /&gt;
* &amp;lt;math&amp;gt;\textstyle H(X_r)&amp;lt;/math&amp;gt; is the [[Information entropy|entropy]] of &amp;lt;math&amp;gt;\textstyle X_r&amp;lt;/math&amp;gt;.&lt;br /&gt;
* &amp;lt;math&amp;gt;\textstyle H(X_r|Y_r)&amp;lt;/math&amp;gt; is the [[conditional entropy]] of &amp;lt;math&amp;gt;\textstyle X_r&amp;lt;/math&amp;gt; given &amp;lt;math&amp;gt;\textstyle Y_r&amp;lt;/math&amp;gt;: the average uncertainty that remains about &amp;lt;math&amp;gt;\textstyle X_r&amp;lt;/math&amp;gt; once the value of &amp;lt;math&amp;gt;\textstyle Y_r&amp;lt;/math&amp;gt; is known.&lt;br /&gt;
* &amp;lt;math&amp;gt;\textstyle I(X_r \land Y_r)&amp;lt;/math&amp;gt; is the [[Mutual information|mutual information]] of &amp;lt;math&amp;gt;\textstyle X_r&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\textstyle Y_r&amp;lt;/math&amp;gt;, and is equal to &amp;lt;math&amp;gt;\textstyle H(X_r) - H(X_r|Y_r)&amp;lt;/math&amp;gt;.&lt;br /&gt;
* &amp;lt;math&amp;gt;\displaystyle I(P) = \min_{Y_r} I(X_r \land Y_r)&amp;lt;/math&amp;gt;, where the minimum is over all random variables &amp;lt;math&amp;gt;\textstyle Y_r&amp;lt;/math&amp;gt; such that &amp;lt;math&amp;gt;\textstyle X_r&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;\textstyle S_r&amp;lt;/math&amp;gt;, and &amp;lt;math&amp;gt;\textstyle Y_r&amp;lt;/math&amp;gt; are distributed in the form of &amp;lt;math&amp;gt;\textstyle P_{X_{r}S_{r}Y_{r}}&amp;lt;/math&amp;gt;.&lt;br /&gt;
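&lt;br /&gt;
As a concrete illustration of the symmetry condition in the first definition, the following Python sketch checks the defining equation by brute force for a small toy channel; the particular &amp;lt;math&amp;gt;\textstyle W&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\textstyle U&amp;lt;/math&amp;gt; are arbitrary illustrative choices, not taken from the references. Exhibiting any &amp;lt;math&amp;gt;\textstyle U&amp;lt;/math&amp;gt; that satisfies the equation certifies that the AVC is symmetric.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
from itertools import product&lt;br /&gt;
&lt;br /&gt;
X, S, Y = [0, 1], [0, 1], [0, 1]   # toy alphabets&lt;br /&gt;
&lt;br /&gt;
def W(y, x, s):&lt;br /&gt;
    # W(y|x,s): state 0 is a BSC(0.1), state 1 is a BSC(0.9).&lt;br /&gt;
    return 0.9 - 0.8 * s if y == x else 0.1 + 0.8 * s&lt;br /&gt;
&lt;br /&gt;
def U(s, x):&lt;br /&gt;
    # A candidate channel U(s|x); this one ignores x entirely.&lt;br /&gt;
    return 0.5&lt;br /&gt;
&lt;br /&gt;
def certifies_symmetry(W, U):&lt;br /&gt;
    # Check sum_s W(y|x,s)U(s|x2) == sum_s W(y|x2,s)U(s|x)&lt;br /&gt;
    # for every triple (x, x2, y).&lt;br /&gt;
    for x, x2, y in product(X, X, Y):&lt;br /&gt;
        lhs = sum(W(y, x, s) * U(s, x2) for s in S)&lt;br /&gt;
        rhs = sum(W(y, x2, s) * U(s, x) for s in S)&lt;br /&gt;
        if abs(lhs - rhs) &amp;gt; 1e-12:&lt;br /&gt;
            return False&lt;br /&gt;
    return True&lt;br /&gt;
&lt;br /&gt;
print(certifies_symmetry(W, U))    # True: this toy AVC is symmetric&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;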
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;Theorem 1:&amp;#039;&amp;#039;&amp;#039; &amp;lt;math&amp;gt;\textstyle c &amp;gt; 0&amp;lt;/math&amp;gt; if and only if the AVC is not symmetric.  If &amp;lt;math&amp;gt;\textstyle c &amp;gt; 0&amp;lt;/math&amp;gt;, then &amp;lt;math&amp;gt;\displaystyle c = \max_P I(P)&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;Proof of first part for symmetry:&amp;#039;&amp;#039; If we can prove that &amp;lt;math&amp;gt;\textstyle I(P)&amp;lt;/math&amp;gt; is positive whenever the AVC is not symmetric, and then prove that &amp;lt;math&amp;gt;\textstyle c = \max_P I(P)&amp;lt;/math&amp;gt;, we will have proved Theorem 1.  Assume &amp;lt;math&amp;gt;\textstyle I(P)&amp;lt;/math&amp;gt; is equal to &amp;lt;math&amp;gt;\textstyle 0&amp;lt;/math&amp;gt;.  From the definition of &amp;lt;math&amp;gt;\textstyle I(P)&amp;lt;/math&amp;gt;, this would make &amp;lt;math&amp;gt;\textstyle X_r&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\textstyle Y_r&amp;lt;/math&amp;gt; [[Independence (probability theory)|independent]] [[random variable]]s for some &amp;lt;math&amp;gt;\textstyle S_r&amp;lt;/math&amp;gt;, because neither [[random variable]]&amp;#039;s [[Information entropy|entropy]] would then depend on the other&amp;#039;s value.  By using the equation for &amp;lt;math&amp;gt;\textstyle P_{X_{r}S_{r}Y_{r}}&amp;lt;/math&amp;gt; (and remembering that &amp;lt;math&amp;gt;\textstyle P_{X_r} = P&amp;lt;/math&amp;gt;), we get:&lt;br /&gt;
&lt;br /&gt;
:&amp;lt;math&amp;gt;\displaystyle P_{Y_r}(y) = \sum_{x\in X} \sum_{s\in S} P(x)P_{S_r}(s)W(y|x,s)&amp;lt;/math&amp;gt;&lt;br /&gt;
:&amp;lt;math&amp;gt;\textstyle \equiv (&amp;lt;/math&amp;gt;since &amp;lt;math&amp;gt;\textstyle X_r&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\textstyle Y_r&amp;lt;/math&amp;gt; are [[Independence (probability theory)|independent]] [[random variable]]s, &amp;lt;math&amp;gt;\textstyle W(y|x, s) = W&amp;#039;(y|s)&amp;lt;/math&amp;gt; for some &amp;lt;math&amp;gt;\textstyle W&amp;#039;)&amp;lt;/math&amp;gt;&lt;br /&gt;
:&amp;lt;math&amp;gt;\displaystyle P_{Y_r}(y) = \sum_{x\in X} \sum_{s\in S} P(x)P_{S_r}(s)W&amp;#039;(y|s)&amp;lt;/math&amp;gt;&lt;br /&gt;
:&amp;lt;math&amp;gt;\textstyle \equiv (&amp;lt;/math&amp;gt;because only &amp;lt;math&amp;gt;\textstyle P(x)&amp;lt;/math&amp;gt; depends on &amp;lt;math&amp;gt;\textstyle x&amp;lt;/math&amp;gt; now&amp;lt;math&amp;gt;\textstyle )&amp;lt;/math&amp;gt;&lt;br /&gt;
:&amp;lt;math&amp;gt;\displaystyle P_{Y_r}(y) = \sum_{s\in S} P_{S_r}(s)W&amp;#039;(y|s) \left[\sum_{x\in X} P(x)\right]&amp;lt;/math&amp;gt;&lt;br /&gt;
:&amp;lt;math&amp;gt;\textstyle \equiv (&amp;lt;/math&amp;gt;because &amp;lt;math&amp;gt;\displaystyle \sum_{x\in X} P(x) = 1)&amp;lt;/math&amp;gt;&lt;br /&gt;
:&amp;lt;math&amp;gt;\displaystyle P_{Y_r}(y) = \sum_{s\in S} P_{S_r}(s)W&amp;#039;(y|s)&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
So now we have a [[probability distribution]] on &amp;lt;math&amp;gt;\textstyle Y_r&amp;lt;/math&amp;gt; that is [[Independence (probability theory)|independent]] of &amp;lt;math&amp;gt;\textstyle X_r&amp;lt;/math&amp;gt;.  The definition of a symmetric AVC can now be rewritten as follows: &amp;lt;math&amp;gt;\displaystyle \sum_{s \in S}W&amp;#039;(y|s)P_{S_r}(s) = \sum_{s \in S}W&amp;#039;(y|s)P_{S_r}(s)&amp;lt;/math&amp;gt;; since &amp;lt;math&amp;gt;\textstyle U(s|x)&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\textstyle W(y|x, s)&amp;lt;/math&amp;gt; are both functions of &amp;lt;math&amp;gt;\textstyle x&amp;lt;/math&amp;gt;, they have been replaced with functions of &amp;lt;math&amp;gt;\textstyle s&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\textstyle y&amp;lt;/math&amp;gt; only.  Both sides are now equal to the &amp;lt;math&amp;gt;\textstyle P_{Y_r}(y)&amp;lt;/math&amp;gt; calculated above, so the AVC is indeed symmetric when &amp;lt;math&amp;gt;\textstyle I(P)&amp;lt;/math&amp;gt; is equal to &amp;lt;math&amp;gt;\textstyle 0&amp;lt;/math&amp;gt;.  Therefore, &amp;lt;math&amp;gt;\textstyle I(P)&amp;lt;/math&amp;gt; can only be positive if the AVC is not symmetric.&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;Proof of second part for capacity&amp;#039;&amp;#039;:  See the paper &amp;quot;The capacity of the arbitrarily varying channel revisited: positivity, constraints,&amp;quot; referenced below for full proof.&lt;br /&gt;
&lt;br /&gt;
===Capacity of AVCs with input and state constraints===&lt;br /&gt;
&lt;br /&gt;
The next [[theorem]] deals with the [[Channel capacity|capacity]] of AVCs with input and/or state constraints.  These constraints help to decrease the very large [[Range (mathematics)|range]] of possibilities for transmission and error on an AVC, making it easier to analyze how the AVC behaves.&lt;br /&gt;
&lt;br /&gt;
Before we go on to Theorem 2, a few definitions and [[Lemma (mathematics)|lemmas]] need to be stated:&lt;br /&gt;
&lt;br /&gt;
For such AVCs, the following are defined:&amp;lt;br&amp;gt;&lt;br /&gt;
:- An input constraint &amp;lt;math&amp;gt;\textstyle \Gamma&amp;lt;/math&amp;gt; based on the equation &amp;lt;math&amp;gt;\displaystyle g(x) = \frac{1}{n}\sum_{i=1}^n g(x_i)&amp;lt;/math&amp;gt;, where &amp;lt;math&amp;gt;\textstyle x \in X^n&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\textstyle x = (x_1,\dots,x_n)&amp;lt;/math&amp;gt;.&lt;br /&gt;
:- A state constraint &amp;lt;math&amp;gt;\textstyle \Lambda&amp;lt;/math&amp;gt;, based on the equation &amp;lt;math&amp;gt;\displaystyle l(s) = \frac{1}{n}\sum_{i=1}^n l(s_i)&amp;lt;/math&amp;gt;, where &amp;lt;math&amp;gt;\textstyle s \in S^n&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\textstyle s = (s_1,\dots,s_n)&amp;lt;/math&amp;gt;.&lt;br /&gt;
:- &amp;lt;math&amp;gt;\displaystyle \Lambda_0(P) = \min \sum_{x \in X, s \in S}P(x)l(s)&amp;lt;/math&amp;gt;&lt;br /&gt;
:- &amp;lt;math&amp;gt;\textstyle I(P, \Lambda)&amp;lt;/math&amp;gt; is very similar to the &amp;lt;math&amp;gt;\textstyle I(P)&amp;lt;/math&amp;gt; equation mentioned previously, &amp;lt;math&amp;gt;\displaystyle I(P, \Lambda) = \min_{Y_r} I(X_r \land Y_r)&amp;lt;/math&amp;gt;, but now any state &amp;lt;math&amp;gt;\textstyle s&amp;lt;/math&amp;gt; or &amp;lt;math&amp;gt;\textstyle S_r&amp;lt;/math&amp;gt; in the equation must obey the state restriction &amp;lt;math&amp;gt;\textstyle l(s) \leq \Lambda&amp;lt;/math&amp;gt;.&lt;br /&gt;
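&lt;br /&gt;
The averaging in the first two definitions is straightforward to compute; the following Python sketch uses per-symbol Hamming weight as the cost for binary sequences, with cost functions and constraint values chosen purely for illustration (see also the remark below on how little the theory fixes about &amp;lt;math&amp;gt;\textstyle g&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\textstyle l&amp;lt;/math&amp;gt;).&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
def avg_cost(seq, per_symbol):&lt;br /&gt;
    # g(x) = (1/n) * sum_i g(x_i); likewise l(s) for a state sequence.&lt;br /&gt;
    return sum(per_symbol(v) for v in seq) / len(seq)&lt;br /&gt;
&lt;br /&gt;
x = [0, 1, 1, 0, 1, 1]             # a binary codeword&lt;br /&gt;
s = [0, 0, 1, 0, 1, 0]             # a binary state sequence&lt;br /&gt;
Gamma, Lambda = 0.5, 0.4           # example input and state constraints&lt;br /&gt;
weight = lambda v: v               # per-symbol cost: Hamming weight&lt;br /&gt;
&lt;br /&gt;
print(avg_cost(x, weight) &amp;lt;= Gamma)   # False: 4/6 exceeds 0.5&lt;br /&gt;
print(avg_cost(s, weight) &amp;lt;= Lambda)  # True: 2/6 is below 0.4&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;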
&lt;br /&gt;
Assume &amp;lt;math&amp;gt;\textstyle g(x)&amp;lt;/math&amp;gt; is a given non-negative-valued function on &amp;lt;math&amp;gt;\textstyle X&amp;lt;/math&amp;gt;, that &amp;lt;math&amp;gt;\textstyle l(s)&amp;lt;/math&amp;gt; is a given non-negative-valued function on &amp;lt;math&amp;gt;\textstyle S&amp;lt;/math&amp;gt;, and that the minimum values of both are &amp;lt;math&amp;gt;\textstyle 0&amp;lt;/math&amp;gt;.  The literature on this subject does not formally describe the exact definitions of &amp;lt;math&amp;gt;\textstyle g(x)&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\textstyle l(s)&amp;lt;/math&amp;gt; for a single symbol &amp;lt;math&amp;gt;\textstyle x_i&amp;lt;/math&amp;gt;.  The usefulness of the input constraint &amp;lt;math&amp;gt;\textstyle \Gamma&amp;lt;/math&amp;gt; and the state constraint &amp;lt;math&amp;gt;\textstyle \Lambda&amp;lt;/math&amp;gt; is based on these equations.&lt;br /&gt;
&lt;br /&gt;
For AVCs with input and/or state constraints, the [[Information_theory#Rate|rate]] &amp;lt;math&amp;gt;\textstyle R&amp;lt;/math&amp;gt; is now limited to [[codeword]]s of format &amp;lt;math&amp;gt;\textstyle x_1,\dots,x_N&amp;lt;/math&amp;gt; that satisfy &amp;lt;math&amp;gt;\textstyle g(x_i) \leq \Gamma&amp;lt;/math&amp;gt;, and now the state &amp;lt;math&amp;gt;\textstyle s&amp;lt;/math&amp;gt; is limited to all states that satisfy &amp;lt;math&amp;gt;\textstyle l(s) \leq \Lambda&amp;lt;/math&amp;gt;.  The largest [[Information_theory#Rate|rate]] is still considered the [[Channel capacity|capacity]] of the AVC, and is now denoted as &amp;lt;math&amp;gt;\textstyle c(\Gamma, \Lambda)&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;Lemma 1:&amp;#039;&amp;#039;  Any [[Channel coding|codes]] where &amp;lt;math&amp;gt;\textstyle \Lambda&amp;lt;/math&amp;gt; is greater than &amp;lt;math&amp;gt;\textstyle \Lambda_0(P)&amp;lt;/math&amp;gt; cannot be considered &amp;quot;good&amp;quot; [[Channel coding|codes]], because such [[Channel coding|codes]] have a maximum average probability of error greater than or equal to &amp;lt;math&amp;gt;\textstyle \frac{N-1}{2N} - \frac{1}{n}\frac{l_{max}^2}{n(\Lambda - \Lambda_0(P))^2}&amp;lt;/math&amp;gt;, where &amp;lt;math&amp;gt;\textstyle l_{max}&amp;lt;/math&amp;gt; is the maximum value of &amp;lt;math&amp;gt;\textstyle l(s)&amp;lt;/math&amp;gt;.  This maximum average error probability is poor because it is fairly large: &amp;lt;math&amp;gt;\textstyle \frac{N-1}{2N}&amp;lt;/math&amp;gt; is close to &amp;lt;math&amp;gt;\textstyle \frac{1}{2}&amp;lt;/math&amp;gt;, and the subtracted term is very small, since the &amp;lt;math&amp;gt;\textstyle (\Lambda - \Lambda_0(P))&amp;lt;/math&amp;gt; value is squared and &amp;lt;math&amp;gt;\textstyle \Lambda&amp;lt;/math&amp;gt; is set to be larger than &amp;lt;math&amp;gt;\textstyle \Lambda_0(P)&amp;lt;/math&amp;gt;.  It would therefore be very unlikely to receive a [[codeword]] without error.  This is why the &amp;lt;math&amp;gt;\textstyle \Lambda_0(P)&amp;lt;/math&amp;gt; condition is present in Theorem 2.&lt;br /&gt;
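&lt;br /&gt;
To see how large this bound is, take the illustrative values &amp;lt;math&amp;gt;\textstyle N = 100&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;\textstyle n = 1000&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;\textstyle l_{max} = 1&amp;lt;/math&amp;gt;, and &amp;lt;math&amp;gt;\textstyle \Lambda - \Lambda_0(P) = 0.1&amp;lt;/math&amp;gt;: evaluating the bound exactly as stated gives &amp;lt;math&amp;gt;\textstyle \frac{99}{200} - \frac{1}{1000}\cdot\frac{1}{1000 \cdot 0.01} = 0.495 - 0.0001 \approx 0.495&amp;lt;/math&amp;gt;, so on average a [[codeword]] would be decoded incorrectly nearly half the time.&lt;br /&gt;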
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;Theorem 2:&amp;#039;&amp;#039;&amp;#039; Given a positive &amp;lt;math&amp;gt;\textstyle \Lambda&amp;lt;/math&amp;gt; and arbitrarily small &amp;lt;math&amp;gt;\textstyle \alpha &amp;gt; 0&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;\textstyle \beta &amp;gt; 0&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;\textstyle \delta &amp;gt; 0&amp;lt;/math&amp;gt;, for any block length &amp;lt;math&amp;gt;\textstyle n \geq n_0&amp;lt;/math&amp;gt; and for any type &amp;lt;math&amp;gt;\textstyle P&amp;lt;/math&amp;gt; with conditions &amp;lt;math&amp;gt;\textstyle \Lambda_0(P) \geq \Lambda + \alpha&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\displaystyle \min_{x \in X}P(x) \geq \beta&amp;lt;/math&amp;gt;, and where &amp;lt;math&amp;gt;\textstyle P_{X_r} = P&amp;lt;/math&amp;gt;, there exists a [[Channel coding|code]] with [[codeword]]s &amp;lt;math&amp;gt;\textstyle x_1,\dots,x_N&amp;lt;/math&amp;gt;, each of type &amp;lt;math&amp;gt;\textstyle P&amp;lt;/math&amp;gt;, that satisfies the following equations: &amp;lt;math&amp;gt;\textstyle \frac{1}{n}\log N &amp;gt; I(P,\Lambda) - \delta&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\displaystyle \max_{l(s) \leq \Lambda} \bar{e}(s) \leq \exp(-n\gamma)&amp;lt;/math&amp;gt;, where the positive constants &amp;lt;math&amp;gt;\textstyle n_0&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\textstyle \gamma&amp;lt;/math&amp;gt; depend only on &amp;lt;math&amp;gt;\textstyle \alpha&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;\textstyle \beta&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;\textstyle \delta&amp;lt;/math&amp;gt;, and the given AVC.&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;Proof of Theorem 2&amp;#039;&amp;#039;: See the paper &amp;quot;The capacity of the arbitrarily varying channel revisited: positivity, constraints,&amp;quot; referenced below for full proof.&lt;br /&gt;
&lt;br /&gt;
===Capacity of randomized AVCs===&lt;br /&gt;
The next [[theorem]] deals with AVCs with randomized [[Channel coding|codes]].  For such AVCs the [[Channel coding|code]] is a [[random variable]] with values from a family of length-n [[block code]]s, and these [[Channel coding|codes]] are not allowed to depend on the actual value of the [[codeword]].  These [[Channel coding|codes]] have the same maximum and average error probability for any [[Channel model|channel]] because of their random nature.  These types of [[Channel coding|codes]] also help to make certain properties of the AVC more clear.&lt;br /&gt;
&lt;br /&gt;
Before we go on to Theorem 3, a couple of important terms need to be defined:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\displaystyle W_{\zeta}(y|x) = \sum_{s \in S} W(y|x, s)P_{S_r}(s)&amp;lt;/math&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;math&amp;gt;\textstyle I(P, \zeta)&amp;lt;/math&amp;gt; is very similar to the &amp;lt;math&amp;gt;\textstyle I(P)&amp;lt;/math&amp;gt; equation mentioned previously, &amp;lt;math&amp;gt;\displaystyle I(P, \zeta) = \min_{Y_r} I(X_r \land Y_r)&amp;lt;/math&amp;gt;, but now the [[Probability mass function|pmf]] &amp;lt;math&amp;gt;\textstyle P_{S_r}(s)&amp;lt;/math&amp;gt; is added to the equation, making the minimum of &amp;lt;math&amp;gt;\textstyle I(P, \zeta)&amp;lt;/math&amp;gt; based on a new form of &amp;lt;math&amp;gt;\textstyle P_{X_{r}S_{r}Y_{r}}&amp;lt;/math&amp;gt;, where &amp;lt;math&amp;gt;\textstyle W_{\zeta}(y|x)&amp;lt;/math&amp;gt; replaces &amp;lt;math&amp;gt;\textstyle W(y|x, s)&amp;lt;/math&amp;gt;.&lt;br /&gt;
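&lt;br /&gt;
The averaged channel &amp;lt;math&amp;gt;\textstyle W_{\zeta}(y|x)&amp;lt;/math&amp;gt; can be computed directly from &amp;lt;math&amp;gt;\textstyle W&amp;lt;/math&amp;gt; and the state pmf. The Python sketch below does so for a toy two-state binary channel and then evaluates the mutual information of the resulting averaged channel for one fixed input distribution; both distributions are illustrative assumptions.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
from math import log2&lt;br /&gt;
&lt;br /&gt;
# W[s][x][y] = W(y|x,s) for a toy two-state binary AVC.&lt;br /&gt;
W = {&lt;br /&gt;
    0: {0: {0: 0.9, 1: 0.1}, 1: {0: 0.1, 1: 0.9}},&lt;br /&gt;
    1: {0: {0: 0.6, 1: 0.4}, 1: {0: 0.4, 1: 0.6}},&lt;br /&gt;
}&lt;br /&gt;
P_S = {0: 0.7, 1: 0.3}             # an assumed state pmf P_{S_r}(s)&lt;br /&gt;
P_X = {0: 0.5, 1: 0.5}             # an assumed input distribution P&lt;br /&gt;
&lt;br /&gt;
def W_zeta(y, x):&lt;br /&gt;
    # W_zeta(y|x) = sum_s W(y|x,s) * P_{S_r}(s)&lt;br /&gt;
    return sum(W[s][x][y] * P_S[s] for s in P_S)&lt;br /&gt;
&lt;br /&gt;
# Mutual information of the averaged channel, using the&lt;br /&gt;
# equivalent form I(X ^ Y) = H(Y) - H(Y|X).&lt;br /&gt;
P_Y = {y: sum(P_X[x] * W_zeta(y, x) for x in P_X) for y in (0, 1)}&lt;br /&gt;
H_Y = -sum(p * log2(p) for p in P_Y.values())&lt;br /&gt;
H_Y_given_X = -sum(P_X[x] * W_zeta(y, x) * log2(W_zeta(y, x))&lt;br /&gt;
                   for x in P_X for y in (0, 1))&lt;br /&gt;
print(H_Y - H_Y_given_X)           # about 0.3 bits for these choices&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;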
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;Theorem 3:&amp;#039;&amp;#039;&amp;#039; The [[Channel capacity|capacity]] for randomized [[Channel coding|codes]] of the AVC is &amp;lt;math&amp;gt;\displaystyle c = \max_P I(P, \zeta)&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;Proof of Theorem 3&amp;#039;&amp;#039;:  See the paper &amp;quot;The Capacities of Certain Channel Classes Under Random Coding&amp;quot; referenced below for full proof.&lt;br /&gt;
&lt;br /&gt;
==See also==&lt;br /&gt;
* [[Binary symmetric channel]]&lt;br /&gt;
* [[Binary erasure channel]]&lt;br /&gt;
* [[Z-channel (information theory)]]&lt;br /&gt;
* [[Channel model]]&lt;br /&gt;
* [[Information theory]]&lt;br /&gt;
* [[Coding theory]]&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
* Ahlswede, Rudolf and Blinovsky, Vladimir, &amp;quot;Classical Capacity of Classical-Quantum Arbitrarily Varying Channels,&amp;quot;  http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&amp;amp;arnumber=4069128&lt;br /&gt;
* Blackwell, David, Breiman, Leo, and Thomasian, A. J.,  &amp;quot;The Capacities of Certain Channel Classes Under Random Coding,&amp;quot;  http://www.jstor.org/stable/2237566&lt;br /&gt;
* Csiszar, I. and Narayan, P., &amp;quot;Arbitrarily varying channels with constrained inputs and states,&amp;quot; http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&amp;amp;arnumber=2598&amp;amp;isnumber=154&lt;br /&gt;
* Csiszar, I. and Narayan, P., &amp;quot;Capacity and Decoding Rules for Classes of Arbitrarily Varying Channels,&amp;quot; http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&amp;amp;arnumber=32153&amp;amp;isnumber=139&lt;br /&gt;
* Csiszar, I. and Narayan, P., &amp;quot;The capacity of the arbitrarily varying channel revisited: positivity, constraints,&amp;quot; http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&amp;amp;arnumber=2627&amp;amp;isnumber=155&lt;br /&gt;
* Lapidoth, A. and Narayan, P., &amp;quot;Reliable communication under channel uncertainty,&amp;quot; http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&amp;amp;arnumber=720535&amp;amp;isnumber=15554&lt;br /&gt;
&lt;br /&gt;
[[Category:Coding theory]]&lt;/div&gt;</summary>
		<author><name>en&gt;Kvng</name></author>
	</entry>
</feed>