In [[information theory]], the '''Rényi entropy''' generalizes the [[Shannon entropy]], the [[Hartley entropy]], the '''min-entropy''', and the '''collision entropy'''.
Entropies quantify the diversity, uncertainty, or randomness of a system.
The Rényi entropy is named after [[Alfréd Rényi]].<ref name=Renyi61>{{harvtxt|Rényi|1961}}</ref>

The Rényi entropy is important in ecology and statistics as an [[diversity indices|index of diversity]].
The Rényi entropy is also important in [[quantum information]], where it can be used as a measure of [[Quantum entanglement|entanglement]].
In the Heisenberg XY spin chain model, the Rényi entropy as a function of ''α'' can be calculated explicitly because it is an [[automorphic function]] with respect to a particular subgroup of the [[modular group]].<ref>{{harvtxt|Franchini|2008}}</ref><ref>{{harvtxt|Its|2010}}</ref>
In [[theoretical computer science]], the min-entropy is used in the context of [[randomness extractor]]s.

== Definition ==

The Rényi entropy of order <math>\alpha</math>, where <math>\alpha \geq 0 </math> and <math>\alpha \neq 1 </math>, is defined as

:<math>H_\alpha(X) = \frac{1}{1-\alpha}\log\Bigg(\sum_{i=1}^n p_i^\alpha\Bigg).</math><ref name=Renyi61 />

Here, <math>X</math> is a discrete random variable with possible outcomes <math>1,2,\dots,n</math> and corresponding probabilities <math>p_i \doteq \Pr(X=i)</math> for <math>i=1,\dots,n</math>, and the logarithm is base 2.

If the probabilities are <math>p_i=1/n</math> for all <math>i=1,\dots,n</math>, then all the Rényi entropies of the distribution are equal: <math>H_\alpha(X)=\log n</math>.

In general, for all discrete random variables <math>X</math>, <math>H_\alpha(X)</math> is a non-increasing function in <math>\alpha</math>.
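
The following short Python sketch is illustrative only (the helper <code>renyi_entropy</code> is not a standard library function); it evaluates <math>H_\alpha</math> directly from the definition, recovering <math>\log_2 n</math> for a uniform distribution and showing the non-increasing behaviour in <math>\alpha</math>:

<syntaxhighlight lang="python">
import numpy as np

def renyi_entropy(p, alpha):
    """Rényi entropy (in bits) of a discrete distribution p for order alpha."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]                      # outcomes with p_i = 0 contribute nothing
    if np.isclose(alpha, 1.0):        # Shannon limit (see below)
        return -np.sum(p * np.log2(p))
    return np.log2(np.sum(p ** alpha)) / (1.0 - alpha)

# Uniform distribution on n outcomes: every order gives log2(n).
n = 8
uniform = np.full(n, 1.0 / n)
print([round(renyi_entropy(uniform, a), 3) for a in (0, 0.5, 1, 2, 100)])  # all 3.0

# Non-uniform distribution: H_alpha is non-increasing in alpha.
p = [0.5, 0.25, 0.125, 0.125]
print([round(renyi_entropy(p, a), 3) for a in (0, 0.5, 1, 2, 100)])
</syntaxhighlight>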

Applications often exploit the following relation between the Rényi entropy and the [[Lp space#The p-norm in finite dimensions|p-norm]]:

:<math>H_\alpha(X)=\frac{\alpha}{1-\alpha} \log \left(\|X\|_\alpha\right).</math>

Here, the discrete probability distribution <math>X</math> is interpreted as a vector in <math>\R^n</math> with <math>X_i=p_i\geq 0</math> and <math>\sum_{i=1}^{n} X_i =1</math>.
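
This identity follows directly from the definition, since <math>\|X\|_\alpha = \left(\sum_{i=1}^n p_i^\alpha\right)^{1/\alpha}</math>:

:<math>\frac{\alpha}{1-\alpha}\log\|X\|_\alpha = \frac{\alpha}{1-\alpha}\cdot\frac{1}{\alpha}\log\sum_{i=1}^n p_i^\alpha = \frac{1}{1-\alpha}\log\sum_{i=1}^n p_i^\alpha = H_\alpha(X).</math>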

The Rényi entropy for any <math>\alpha \geq 0 </math> is [[Schur-concave_function|Schur concave]].

== Special cases of the Rényi entropy ==

As <math>\alpha</math> approaches zero, the Rényi entropy weighs all possible events more and more equally, regardless of their probabilities.
In the limit <math>\alpha\to 0</math>, the Rényi entropy is just the logarithm of the size of the support of <math>X</math>.
The limit as <math>\alpha\to 1</math> is the [[Shannon entropy]], which has special properties.
As <math>\alpha</math> approaches infinity, the Rényi entropy is increasingly determined by the events of highest probability.

=== Hartley entropy ===

Provided the probabilities are nonzero,<ref>[http://tools.ietf.org/html/rfc4086.html#page-6 RFC 4086, page 6]</ref> <math>H_0</math> is the logarithm of the [[cardinality]] of ''X'', sometimes called the [[Hartley entropy]] of ''X'':

:<math>H_0 (X) = \log n = \log |X|.</math>

=== Shannon entropy ===

In the limit <math>\alpha \rightarrow 1</math>, <math>H_\alpha</math> converges to the [[Shannon entropy]]:<ref>{{harvtxt|Bromiley|Thacker|Bouhova-Thacker|2004}}</ref>

:<math>H_1 (X) = - \sum_{i=1}^n p_i \log p_i. </math>
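
Since both the numerator and the denominator in the definition vanish as <math>\alpha \to 1</math>, the limit can be evaluated with [[l'Hôpital's rule]]:

:<math>\lim_{\alpha\to 1} H_\alpha(X) = \lim_{\alpha\to 1}\frac{\log\sum_{i=1}^n p_i^\alpha}{1-\alpha} = \lim_{\alpha\to 1}\frac{\sum_{i=1}^n p_i^\alpha \log p_i \,/\, \sum_{i=1}^n p_i^\alpha}{-1} = -\sum_{i=1}^n p_i\log p_i.</math>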

=== Collision entropy ===

'''Collision entropy''', sometimes just called "Rényi entropy", refers to the case <math>\alpha = 2</math>,

:<math>H_2 (X) = - \log \sum_{i=1}^n p_i^2 = - \log P(X = Y),</math>

where ''X'' and ''Y'' are [[independent and identically distributed]].
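
The name refers to the collision probability <math>P(X=Y)=\sum_i p_i^2</math>, the chance that two independent draws from the distribution coincide. A small Python sketch (illustrative only; it estimates the collision probability by sampling with NumPy) makes the identity concrete:

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(0)
p = np.array([0.5, 0.25, 0.125, 0.125])

# Exact collision entropy from the definition.
h2_exact = -np.log2(np.sum(p ** 2))

# Monte Carlo estimate of P(X = Y) for two independent draws X, Y ~ p.
N = 200_000
x = rng.choice(len(p), size=N, p=p)
y = rng.choice(len(p), size=N, p=p)
h2_sampled = -np.log2(np.mean(x == y))

print(h2_exact, h2_sampled)  # both close to 1.54 bits
</syntaxhighlight>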

=== Min-entropy ===

{{main|Min entropy}}

In the limit as <math>\alpha \rightarrow \infty </math>, the Rényi entropy <math>H_\alpha</math> converges to the '''min-entropy''' <math>H_\infty</math>:

:<math>H_\infty(X) \doteq \min_i (-\log p_i) = -(\max_i \log p_i) = -\log \max_i p_i .</math>

Equivalently, the min-entropy <math>H_\infty(X)</math> is the largest real number <math>b</math> such that all events occur with probability at most <math>2^{-b}</math>.

The name ''min-entropy'' stems from the fact that it is the smallest entropy measure in the family of Rényi entropies.
In this sense, it is the strongest way to measure the information content of a discrete random variable.
In particular, the min-entropy is never larger than the [[Shannon entropy]].

The min-entropy has important applications for [[randomness extractor]]s in [[theoretical computer science]]:
extractors are able to extract randomness from random sources that have a large min-entropy; merely having a large [[Shannon entropy]] does not suffice for this task.

== Inequalities between different values of ''α'' ==

The Rényi entropy <math>H_\alpha</math> is non-increasing in <math>\alpha</math> for any given distribution of probabilities <math>p_i</math>, which can be proven by differentiation,<ref name=Beck1993>{{harvtxt|Beck|1993}}</ref> as

:<math>-\frac{d H_\alpha}{d\alpha} = \frac{1}{(1-\alpha)^2} \sum_{i=1}^n z_i \log(z_i / p_i),</math>

which is proportional to the [[Kullback–Leibler divergence]] of the distribution <math>z</math> from <math>p</math>, and hence always non-negative, where <math>z_i = p_i^\alpha / \sum_{j=1}^n p_j^\alpha</math>.

In particular cases, inequalities can also be proven by [[Jensen's inequality]]:

:<math>\log n=H_0\geq H_1 \geq H_2 \geq H_\infty.</math><ref><math> H_1 \ge H_2 </math> holds because <math> \sum\limits_{i = 1}^n {p_i \log p_i } \le \log \sum\limits_{i = 1}^n {p_i^2 } </math>.</ref><ref><math> H_\infty \le H_2 </math> holds because <math>\log \sum\limits_{i = 1}^n {p_i^2 } \le \log \left( \sup_i p_i \sum\limits_{i = 1}^n {p_i } \right) = \log \sup_i p_i</math>.</ref>

For values of <math>\alpha>1</math>, inequalities in the other direction also hold. In particular, we have

:<math> H_2 \le 2H_\infty. </math><ref><math> H_2 \le 2H_\infty </math> holds because <math> \log \sum\limits_{i = 1}^n {p_i^2 } \ge \log \sup_i p_i^2 = 2\log \sup_i p_i </math>.</ref>{{citation needed|date=August 2012}}

On the other hand, the Shannon entropy <math>H_1</math> can be arbitrarily high for a random variable <math>X</math> that has a constant min-entropy.{{citation needed|date=August 2012}}
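
For example, consider a distribution that puts probability 1/2 on one outcome and spreads the remaining 1/2 uniformly over <math>n-1</math> further outcomes: the min-entropy stays at one bit, while the Shannon entropy grows without bound (roughly as <math>\tfrac{1}{2}\log n</math>). A short numerical sketch (the helper functions are ad hoc, not standard library calls):

<syntaxhighlight lang="python">
import numpy as np

def shannon(p):
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def min_entropy(p):
    return -np.log2(np.max(p))

for n in (4, 16, 256, 65536):
    # probability 1/2 on one outcome, the rest shared uniformly
    p = np.concatenate(([0.5], np.full(n - 1, 0.5 / (n - 1))))
    print(n, round(shannon(p), 3), min_entropy(p))
# Shannon entropy keeps growing; min-entropy is always exactly 1 bit.
</syntaxhighlight>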

== Rényi divergence ==

As well as the absolute Rényi entropies, Rényi also defined a spectrum of divergence measures generalising the [[Kullback–Leibler divergence]].

The '''Rényi divergence''' of order ''α'', where 0 < ''α'' < ∞ and ''α'' ≠ 1, of a distribution ''P'' from a distribution ''Q'' is defined to be

:<math>D_\alpha (P \| Q) = \frac{1}{\alpha-1}\log\Bigg(\sum_{i=1}^n \frac{p_i^\alpha}{q_i^{\alpha-1}}\Bigg) = \frac{1}{\alpha-1}\log \sum_{i=1}^n p_i^\alpha q_i^{1-\alpha};</math>

the values ''α'' = 0, 1, ∞ are defined by taking limits.

Like the Kullback–Leibler divergence, the Rényi divergences are non-negative for ''α'' > 0. This divergence is also known as the alpha-divergence (<math>\alpha</math>-divergence).

Some special cases:

:<math>D_0(P \| Q) = - \log Q(\{i : p_i > 0\})</math> : minus the log of the probability under ''Q'' that ''p''<sub>''i''</sub> > 0;

:<math>D_{1/2}(P \| Q) = -2 \log \sum_{i=1}^n \sqrt{p_i q_i} </math> : minus twice the logarithm of the [[Bhattacharyya coefficient]];

:<math>D_1(P \| Q) = \sum_{i=1}^n p_i \log \frac{p_i}{q_i}</math> : the Kullback–Leibler divergence;

:<math>D_2(P \| Q) = \log \Big\langle \frac{p_i}{q_i} \Big\rangle </math> : the log of the expected ratio of the probabilities (expectation taken under ''P'');

:<math>D_\infty(P \| Q) = \log \sup_i \frac{p_i}{q_i} </math> : the log of the maximum ratio of the probabilities.
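
A brief Python sketch (illustrative only; the helper <code>renyi_divergence</code> is not a standard library function) evaluates <math>D_\alpha</math> from the formula above and checks two of these special cases numerically, the Bhattacharyya case at <math>\alpha = 1/2</math> and the Kullback–Leibler limit as <math>\alpha \to 1</math>:

<syntaxhighlight lang="python">
import numpy as np

def renyi_divergence(p, q, alpha):
    """Rényi divergence D_alpha(P || Q) in bits, for alpha > 0 and alpha != 1."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    return np.log2(np.sum(p ** alpha * q ** (1 - alpha))) / (alpha - 1)

p = np.array([0.4, 0.4, 0.2])
q = np.array([0.2, 0.3, 0.5])

# alpha = 1/2: minus twice the log of the Bhattacharyya coefficient.
print(renyi_divergence(p, q, 0.5), -2 * np.log2(np.sum(np.sqrt(p * q))))

# alpha -> 1: approaches the Kullback-Leibler divergence.
print(renyi_divergence(p, q, 1.000001), np.sum(p * np.log2(p / q)))
</syntaxhighlight>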

== Why α=1 is special ==

The value ''α'' = 1, which gives the [[Shannon entropy]] and the [[Kullback–Leibler divergence]], is special because it is only at ''α'' = 1 that the [[Conditional information#Chain rule|chain rule of conditional probability]] holds exactly:

:<math>H(A,X) = H(A) + \mathbb{E}_{a \sim A} \big[ H(X| A=a) \big]</math>

for the absolute entropies, and

:<math>D_\mathrm{KL}(p(x|a)p(a)\|m(x,a)) = D_\mathrm{KL}(p(a)\|m(a)) + \mathbb{E}_{p(a)}\{D_\mathrm{KL}(p(x|a)\|m(x|a))\},</math>

for the relative entropies.

The latter in particular means that if we seek a distribution ''p''(''x'',''a'') which minimizes the divergence from some underlying prior measure ''m''(''x'',''a''), and we acquire new information which only affects the distribution of ''a'', then the distribution of ''p''(''x''|''a'') remains ''m''(''x''|''a''), unchanged.

The other Rényi divergences satisfy the criteria of being positive and continuous; being invariant under 1-to-1 co-ordinate transformations; and of combining additively when ''A'' and ''X'' are independent, so that if ''p''(''A'',''X'') = ''p''(''A'')''p''(''X''), then

:<math>H_\alpha(A,X) = H_\alpha(A) + H_\alpha(X)</math>

and

:<math>D_\alpha(P(A)P(X)\|Q(A)Q(X)) = D_\alpha(P(A)\|Q(A)) + D_\alpha(P(X)\|Q(X)).</math>
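
Both additivity properties are easy to verify numerically; a minimal sketch (the helpers <code>H</code> and <code>D</code> are ad hoc re-implementations of the formulas above):

<syntaxhighlight lang="python">
import numpy as np

def H(p, a):  # Rényi entropy in bits (a != 1), as defined above
    return np.log2(np.sum(np.asarray(p, float) ** a)) / (1 - a)

def D(p, q, a):  # Rényi divergence in bits (a != 1), as defined above
    p, q = np.asarray(p, float), np.asarray(q, float)
    return np.log2(np.sum(p ** a * q ** (1 - a))) / (a - 1)

pa, px = np.array([0.7, 0.3]), np.array([0.5, 0.3, 0.2])
qa, qx = np.array([0.4, 0.6]), np.array([0.2, 0.2, 0.6])
joint_p = np.outer(pa, px).ravel()   # p(A, X) = p(A) p(X)
joint_q = np.outer(qa, qx).ravel()   # q(A, X) = q(A) q(X)

a = 3.0
print(H(joint_p, a), H(pa, a) + H(px, a))                    # equal
print(D(joint_p, joint_q, a), D(pa, qa, a) + D(px, qx, a))   # equal
</syntaxhighlight>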

The stronger properties of the ''α'' = 1 quantities, which allow the definition of [[conditional information]] and [[mutual information]] from communication theory, may be very important in other applications, or entirely unimportant, depending on those applications' requirements.

== Exponential families ==

The Rényi entropies and divergences for an [[exponential family]] admit simple expressions (Nielsen & Nock, 2011):

:<math>H_\alpha(p_F(x;\theta)) = \frac{1}{1-\alpha} \left(F(\alpha\theta)-\alpha F(\theta)+\log E_p\left[e^{(\alpha-1)k(x)}\right]\right)</math>

and

:<math>D_\alpha(p:q) = \frac{J_{F,\alpha}(\theta:\theta')}{1-\alpha},</math>

where

:<math>J_{F,\alpha}(\theta:\theta')= \alpha F(\theta)+(1-\alpha) F(\theta')- F(\alpha\theta+(1-\alpha)\theta')</math>

is a Jensen difference divergence.
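
Here <math>F</math> is the log-normalizer and <math>k(x)</math> the carrier term of the family. As an illustration (not taken from the cited paper), for the Bernoulli family in natural form, <math>p(x;\theta)=\exp(\theta x - F(\theta))</math> with <math>F(\theta)=\log(1+e^\theta)</math> and <math>k(x)=0</math>, both closed forms can be checked against direct summation:

<syntaxhighlight lang="python">
import numpy as np

F = lambda t: np.log1p(np.exp(t))        # log-normalizer of the Bernoulli family
theta, theta2, alpha = 0.8, -0.4, 2.5    # arbitrary natural parameters and order

# Probabilities p(x) = exp(theta*x - F(theta)) for x in {0, 1}
p = np.exp(np.array([0.0, theta]) - F(theta))
q = np.exp(np.array([0.0, theta2]) - F(theta2))

# Rényi entropy (in nats, since F is built on the natural log): closed form vs direct sum
h_closed = (F(alpha * theta) - alpha * F(theta)) / (1 - alpha)
h_direct = np.log(np.sum(p ** alpha)) / (1 - alpha)
print(h_closed, h_direct)                # agree

# Rényi divergence (also in nats): Jensen-difference closed form vs direct sum
J = alpha * F(theta) + (1 - alpha) * F(theta2) - F(alpha * theta + (1 - alpha) * theta2)
d_closed = J / (1 - alpha)
d_direct = np.log(np.sum(p ** alpha * q ** (1 - alpha))) / (alpha - 1)
print(d_closed, d_direct)                # agree
</syntaxhighlight>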

== See also ==

* [[Diversity indices]]
* [[Tsallis entropy]]
* [[Generalized entropy index]]

== Notes ==

{{Reflist}}

== References ==

* {{cite book |last1=Beck |first1=Christian |last2=Schlögl |first2=Friedrich |year=1993 |title=Thermodynamics of chaotic systems: an introduction |publisher=Cambridge University Press |isbn=0521433673 |ref=harv}}
* {{cite journal |last=Jizba |first=P. |coauthors=Arimitsu, T. |title=The world according to Rényi: Thermodynamics of multifractal systems |journal=Annals of Physics |volume=312 |pages=17–59 |publisher=Elsevier |year=2004 |doi=10.1016/j.aop.2004.01.002 |arxiv=cond-mat/0207707 |bibcode=2004AnPhy.312...17J |ref=harv}}
* {{cite journal |last=Jizba |first=P. |coauthors=Arimitsu, T. |title=On observability of Rényi's entropy |journal=Physical Review E |volume=69 |pages=026128-1 – 026128-12 |publisher=APS |year=2004 |doi=10.1103/PhysRevE.69.026128 |arxiv=cond-mat/0307698 |bibcode=2004PhRvE..69b6128J |ref=harv}}
* {{citation |first1=P.A. |last1=Bromiley |first2=N.A. |last2=Thacker |first3=E. |last3=Bouhova-Thacker |title=Shannon Entropy, Renyi Entropy, and Information |url=http://www.tina-vision.net/docs/memos/2004-004.pdf |year=2004 |ref=harv}}
* {{cite journal |last=Franchini |first=F. |coauthors=Its, A.R., Korepin, V.E. |title=Rényi entropy as a measure of entanglement in quantum spin chain |journal=Journal of Physics A: Mathematical and Theoretical |volume=41 |issue=2 |pages=025302 |publisher=IOPScience |year=2008 |doi=10.1088/1751-8113/41/2/025302 |arxiv=0707.2534 |bibcode=2008JPhA...41b5302F |ref=harv}}
* {{springer|title=Rényi test|id=p/r081270}}
* {{cite journal |first1=A. O. |last1=Hero |first2=O. |last2=Michael |first3=J. |last3=Gorman |title=Alpha-divergences for Classification, Indexing and Retrieval |year=2002 |url=http://www.eecs.umich.edu/~hero/Preprints/cspl-328.pdf |ref=harv}}
* {{cite journal |last=Its |first=A. R. |coauthors=Korepin, V. E. |title=Generalized entropy of the Heisenberg spin chain |journal=Theoretical and Mathematical Physics |volume=164 |issue=3 |pages=1136–1139 |publisher=Springer |year=2010 |url=http://www.springerlink.com/content/vn2qt54344320m2g |doi=10.1007/s11232-010-0091-6 |accessdate=7 Mar 2012 |bibcode=2010TMP...164.1136I |ref=harv}}
* {{cite arXiv |first1=F. |last1=Nielsen |first2=S. |last2=Boltz |title=The Burbea-Rao and Bhattacharyya centroids |year=2010 |arxiv=1004.5049 |ref=harv}}
* {{cite journal |first1=Frank |last1=Nielsen |first2=Richard |last2=Nock |title=A closed-form expression for the Sharma-Mittal entropy of exponential families |journal=Journal of Physics A: Mathematical and Theoretical |year=2012 |volume=45 |issue=3 |pages=032003 |doi=10.1088/1751-8113/45/3/032003 |url=http://iopscience.iop.org/1751-8121/45/3/032003/ |arxiv=1112.4221 |bibcode=2012JPhA...45c2003N |ref=harv}}
* {{cite arXiv |last1=Nielsen |first1=Frank |last2=Nock |first2=Richard |title=On Rényi and Tsallis entropies and divergences for exponential families |year=2011 |arxiv=1105.3259 |ref=harv}}
* {{cite conference |first=Alfréd |last=Rényi |authorlink=Alfréd Rényi |title=On measures of information and entropy |booktitle=Proceedings of the fourth Berkeley Symposium on Mathematics, Statistics and Probability 1960 |year=1961 |pages=547–561 |url=http://digitalassets.lib.berkeley.edu/math/ucb/text/math_s4_v1_article-27.pdf |ref=harv}}
* Rosso, O.A., "EEG analysis using wavelet-based information tools", ''Journal of Neuroscience Methods'', 153 (2006) 163–182.
* {{cite arXiv |last1=van Erven |first1=Tim |last2=Harremoës |first2=Peter |title=Rényi Divergence and Kullback-Leibler Divergence |year=2012 |arxiv=1206.2459 |ref=harv}}

{{DEFAULTSORT:Renyi entropy}}
[[Category:Information theory]]
[[Category:Entropy and information]]