In [[statistics]], '''''G''-tests''' are [[likelihood ratio test|likelihood-ratio]] or [[maximum likelihood]] [[statistical significance]] tests that are increasingly being used in situations where [[chi-squared test]]s were previously recommended.{{citation needed|date=June 2012}}


The general formula for ''G'' is
:<math> G = 2\sum_{i} {O_{i} \cdot \ln(O_{i}/E_{i}) }, </math>


where O<sub>i</sub> is the observed frequency in a cell, E<sub>i</sub> is the expected frequency under the null hypothesis, ln denotes the [[natural logarithm]] (log to the base ''[[e (mathematical constant)|e]]''), and the sum is taken over all non-empty cells.
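
As a concrete illustration, the following is a minimal Python sketch of this formula; the observed and expected counts are made up for illustration, and NumPy is assumed to be available.

<syntaxhighlight lang="python">
import numpy as np

# Hypothetical observed counts and the corresponding expected counts under the null hypothesis.
observed = np.array([12, 5, 9, 24])
expected = np.array([10, 10, 10, 20])

# G = 2 * sum over non-empty cells of O_i * ln(O_i / E_i)
nonempty = observed > 0
G = 2.0 * np.sum(observed[nonempty] * np.log(observed[nonempty] / expected[nonempty]))
print(G)   # about 4.30 for these counts
</syntaxhighlight>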
 
''G''-tests have come into increasing use, having been recommended at least since the 1981 edition of the popular statistics textbook by Sokal and Rohlf.<ref>[[Robert R. Sokal|Sokal, R. R.]] and Rohlf, F. J. (1981). ''Biometry: the principles and practice of statistics in biological research.'', New York: Freeman. ISBN 0-7167-2411-1.</ref>
 
==Distribution and usage==
Given the null hypothesis that the observed frequencies result from random sampling from a distribution with the given expected frequencies, the [[probability distribution|distribution]] of ''G'' is approximately a [[chi-squared distribution]], with the same number of [[degrees of freedom (statistics)|degrees of freedom]] as in the corresponding chi-squared test.
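
For example, the tail probability can be read off the chi-squared distribution. The sketch below (SciPy assumed) continues the hypothetical four-cell goodness-of-fit example above, which has 4 &minus; 1 = 3 degrees of freedom.

<syntaxhighlight lang="python">
from scipy.stats import chi2

# G ~ 4.30 from the hypothetical four-cell goodness-of-fit example above,
# with k - 1 = 3 degrees of freedom (no parameters estimated from the data).
G, df = 4.30, 3
p_value = chi2.sf(G, df)   # upper-tail probability of the chi-squared distribution
print(p_value)             # roughly 0.23, so no evidence against the null at the 5% level
</syntaxhighlight>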
 
For very small samples the [[multinomial test]] for goodness of fit, [[Fisher's exact test]] for contingency tables, or even Bayesian hypothesis selection are preferable to the ''G''-test.{{citation needed|date=August 2011}}
 
==Relation to the chi-squared test==
The commonly used [[chi-squared test]]s for goodness of fit to a distribution and for independence in [[contingency table]]s are in fact approximations of the [[log-likelihood ratio]] on which the G-tests are based. The general formula for Pearson's chi-squared test statistic is
:<math> \chi^2 = \sum_{i} {(O_{i} - E_{i})^2 \over E_{i}} .</math>
The approximation of ''G'' by chi-squared is obtained by a second-order Taylor expansion of the natural logarithm around 1. This approximation was developed by [[Karl Pearson]] because at the time it was unduly laborious to calculate log-likelihood ratios.  With the advent of electronic calculators and personal computers, this is no longer a problem.  A derivation of how the chi-squared test is related to the ''G''-test and likelihood ratios, including a full Bayesian solution, is provided by Hoey (2012).<ref>Hoey, J. (2012). ''[http://arxiv.org/abs/1206.4881v2 The Two-Way Likelihood Ratio (G) Test and Comparison to Two-Way Chi-Squared Test]''</ref>
 
For samples of a reasonable size, the ''G''-test and the chi-squared test will lead to the same conclusions. However, the approximation to the theoretical chi-squared distribution is better for the ''G''-test than for the Pearson chi-squared test.<ref>Harremoës, P. and Tusnády, G. (2012). ''[http://arxiv.org/abs/1202.1125 Information Divergence is more chi squared distributed than the chi squared statistic]'', Proceedings ISIT 2012, pp. 538-543.</ref> In cases where <math> O_i > 2 \cdot E_i </math> for some cell, the ''G''-test is always better than the chi-squared test.{{citation needed|date=August 2011}}
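
To make the comparison concrete, the sketch below (NumPy assumed) computes both statistics for a made-up 2×2 contingency table; the expected counts under independence are the products of the marginal totals divided by the grand total.

<syntaxhighlight lang="python">
import numpy as np

# Hypothetical 2x2 contingency table of observed counts.
observed = np.array([[30, 10],
                     [20, 40]])

# Expected counts under independence: row total * column total / grand total.
N = observed.sum()
expected = np.outer(observed.sum(axis=1), observed.sum(axis=0)) / N

G = 2.0 * np.sum(observed * np.log(observed / expected))
chi_squared = np.sum((observed - expected) ** 2 / expected)
print(G, chi_squared)   # about 17.3 and 16.7: close, but not identical
</syntaxhighlight>

For cross-checking, <code>scipy.stats.chi2_contingency(observed, correction=False, lambda_="log-likelihood")</code> should return the same ''G'' statistic.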
 
For testing goodness of fit the ''G''-test is infinitely more efficient than the chi-squared test in the sense of Bahadur, but the two tests are equally efficient in the sense of Pitman or in the sense of Hodges and Lehmann.<ref>Quine, M. P. and Robinson, J. (1985). Efficiencies of chi-square and likelihood ratio goodness-of-fit tests. Ann. Statist. 13, 727-742.</ref><ref>Harremoës, P. and Vajda, I. (2008). On the Bahadur-efficient testing of uniformity by means of the entropy. IEEE Trans. Inform. Theory, vol. 54, pp. 321-331.</ref>
 
==Relation to Kullback–Leibler divergence==
The ''G''-test quantity is proportional to the [[Kullback–Leibler divergence]] of the empirical distribution from the theoretical distribution.
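
Explicitly, writing the empirical proportions as <math>\hat{o}_i = O_i/N</math> and the theoretical proportions as <math>e_i = E_i/N</math> (with <math>N = \sum_i O_i</math>, and assuming the expected frequencies also sum to <math>N</math>), the constant of proportionality is <math>2N</math>:
:<math> G = 2\sum_i O_i \ln\frac{O_i}{E_i} = 2 N \sum_i \hat{o}_i \ln\frac{\hat{o}_i}{e_i} = 2 N \, D_{\mathrm{KL}}(\hat{o} \,\|\, e) .</math>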
 
==Relation to mutual information==
For analysis of contingency tables the value of ''G'' can also be expressed in terms of [[mutual information]].
 
Let
:<math>N = \sum_{ij}{O_{ij}} \; </math> , <math> \; \pi_{ij} = {O_{ij} \over N} \;</math> , <math>\; \pi_{i.} = {\sum_j O_{ij} \over N} \; </math> and <math>\; \pi_{. j} = {\sum_i O_{ij} \over N} \;</math> .
 
Then ''G'' can be expressed in several alternative forms:
 
:<math> G = 2 \cdot N \cdot \sum_{ij}{\pi_{ij} \left( \ln(\pi_{ij})-\ln(\pi_{i.})-\ln(\pi_{.j}) \right)} ,</math>
 
:<math> G = 2 \cdot N \cdot \left[ H(row) + H(col) - H(row,col) \right] , </math>
 
:<math> G = 2 \cdot N \cdot MI(row,col) \, ,</math>
 
where the [[Entropy (information theory)|entropy]] of a discrete random variable <math>X \,</math> is defined as
:<math> H(X) = - {\sum_x p(x) \log p(x)} \, ,</math>
and where
:<math> MI(row,col)= H(row) + H(col) - H(row,col) \, </math>
is the [[mutual information]] between the row vector and the column vector of the contingency table.
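
A short Python sketch (NumPy assumed) checks the entropy form against the direct definition, using the same made-up 2×2 table as in the chi-squared comparison above.

<syntaxhighlight lang="python">
import numpy as np

# Hypothetical 2x2 contingency table (same made-up counts as above).
O = np.array([[30., 10.],
              [20., 40.]])
N = O.sum()
pi = O / N                   # joint proportions pi_ij
pi_row = pi.sum(axis=1)      # row marginals pi_i.
pi_col = pi.sum(axis=0)      # column marginals pi_.j

def entropy(p):
    """Shannon entropy in nats, ignoring empty cells."""
    p = p[p > 0]
    return -np.sum(p * np.log(p))

mutual_info = entropy(pi_row) + entropy(pi_col) - entropy(pi.ravel())
G_from_mi = 2.0 * N * mutual_info

# Direct computation from the definition, for comparison.
expected = np.outer(pi_row, pi_col) * N
G_direct = 2.0 * np.sum(O * np.log(O / expected))
print(G_from_mi, G_direct)   # both about 17.3
</syntaxhighlight>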
 
It can also be shown{{citation needed|date=August 2011}} that the inverse document frequency weighting commonly used for text retrieval is an approximation of ''G'' applicable when the row sum for the query is much smaller than the row sum for the remainder of the corpus.  Similarly, Bayesian inference comparing a single multinomial distribution for all rows of the contingency table taken together against the more general alternative of a separate multinomial per row produces results very similar to the ''G'' statistic.{{citation needed|date=August 2011}}
 
==Application==
* The [[McDonald–Kreitman test]] in [[statistical genetics]] is an application of the G-test.
* Dunning<ref>Dunning, Ted (1993). ''[http://acl.ldc.upenn.edu/J/J93/J93-1003.pdf Accurate Methods for the Statistics of Surprise and Coincidence]'', Computational Linguistics, Volume 19, issue 1 (March, 1993).</ref> introduced the test to the [[computational linguistics]] community where it is now widely used.
 
==Statistical software==
* The [[R programming language]] has the [http://rforge.net/doc/packages/Deducer/likelihood.test.html likelihood.test] function in the [http://rforge.net/doc/packages/Deducer/html/00Index.html Deducer] package.
*In [[SAS System|SAS]], one can conduct a G-test by applying the <code>/chisq</code> option in <code>proc freq</code>.<ref>[http://udel.edu/~mcdonald/statgtestind.html G-test of independence], [http://udel.edu/~mcdonald/statgtestgof.html G-test for goodness-of-fit] in Handbook of Biological Statistics, University of Delaware. (pp. 46-51, 64-69 in: McDonald, J.H. (2009) ''Handbook of Biological Statistics'' (2nd ed.). Sparky House Publishing, Baltimore, Maryland.)</ref>
*Fisher's G-Test in the [http://cran.r-project.org/web/packages/GeneCycle/ GeneCycle Package] of the [[R programming language]] (fisher.g.test) does not implement the G-test as described in this article, but rather Fisher's exact test of Gaussian white-noise in a time series.<ref>Fisher, R.A. (1929) "Tests of significance in harmonic analysis" Proceedings of the Royal Society of London. Series A, Volume 125, Issue 796, pp. 54-59.</ref>
 
==References==
<references/>
 
==External links==
* [http://ucrel.lancs.ac.uk/llwizard.html G<sup>2</sup>/Log-likelihood calculator]
 
{{DEFAULTSORT:G-Test}}
[[Category:Categorical data]]
[[Category:Statistical tests]]
