{{About|the statistical concept}}
 
In [[statistics]] and [[probability theory]], the '''median''' is the numerical value separating the higher half of a data [[Sample (statistics)|sample]], a [[statistical population|population]], or a [[probability distribution]], from the lower half. The ''median'' of a finite list of numbers can be found by arranging all the observations from lowest value to highest value and picking the middle one (e.g., the median of {3, 3, 5, 9, 11} is 5). If there is an even number of observations, then there is no single middle value; the median is then usually defined to be the [[arithmetic mean|mean]] of the two middle values
<ref name="StatisticalMedian">{{MathWorld |urlname=StatisticalMedian |title=Statistical Median }}</ref>
<ref>[http://www.stat.psu.edu/old_resources/ClassNotes/ljs_07/sld008.htm Simon, Laura J.; "Descriptive statistics"], ''Statistical Education Resource Kit'', Pennsylvania State Department of Statistics</ref>
(the median of {3, 5, 7, 9} is (5 + 7) / 2 = 6), which corresponds to interpreting the median as the fully [[trimmed estimator|trimmed]] [[mid-range]]. The median is of central importance<!-- pun not entirely unintended --> in [[robust statistics]], as it is the most [[resistant statistic]], having a [[breakdown point]] of 50%: so long as no more than half the data is contaminated, the median will not give an arbitrarily large result.
A median is only defined on [[Weak ordering|ordered]] one-dimensional data, and is independent of any distance metric. A [[geometric median]], on the other hand, is defined in any number of dimensions.
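The two conventions above can be checked with a minimal sketch in Python, whose standard-library <code>statistics.median</code> also averages the two middle values for an even-sized sample:

<syntaxhighlight lang="python">
import statistics

print(statistics.median([3, 3, 5, 9, 11]))  # odd count: the middle value, 5
print(statistics.median([3, 5, 7, 9]))      # even count: mean of 5 and 7, 6.0
</syntaxhighlight>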
 
In a sample of data, or a finite population, there may be no member of the sample whose value is identical to the median (in the case of an even sample size); if there is such a member, there may be more than one, so that the median may not uniquely identify a sample member. Nonetheless, the value of the median is uniquely determined with the usual definition. A related concept, in which the outcome is forced to correspond to a member of the sample, is the [[medoid]].
At most half the population have values strictly less than the median, and at most half have values strictly greater than the median. If each group contains less than half the population, then some of the population is exactly equal to the median. For example, if ''a''&nbsp;<&nbsp;''b''&nbsp;<&nbsp;''c'', then the median of the list {''a'',&nbsp;''b'',&nbsp;''c''} is ''b'', and, if ''a''&nbsp;<&nbsp;''b''&nbsp;<&nbsp;''c''&nbsp;<&nbsp;''d'', then the median of the list {''a'',&nbsp;''b'',&nbsp;''c'',&nbsp;''d''} is the mean of ''b'' and ''c''; i.e., it is (''b''&nbsp;+&nbsp;''c'')/2.
 
The median can be used as a measure of [[location parameter|location]] when a distribution is [[skewness|skewed]], when end-values are not known, or when one requires reduced importance to be attached to [[outlier]]s, e.g., because they may be measurement errors.
 
In terms of notation, some authors represent the median of a variable ''x'' either as <math>\tilde{x}</math> or as <math>\mu_{1/2},</math><ref name="StatisticalMedian" /> sometimes also ''M''.<ref name="Sheskin2003">{{cite book|author=David J. Sheskin|title=Handbook of Parametric and Nonparametric Statistical Procedures: Third Edition|url=http://books.google.com/books?id=bmwhcJqq01cC&pg=PA7|accessdate=25 February 2013|date=27 August 2003|publisher=CRC Press|isbn=978-1-4200-3626-8|pages=7–}}</ref> There is no widely accepted standard notation for the median,<ref name="Bissell1994">{{cite book|author=Derek Bissell|title=Statistical Methods for Spc and Tqm|url=http://books.google.com/books?id=cTwwtyBX7PAC&pg=PA26|accessdate=25 February 2013|year=1994|publisher=CRC Press|isbn=978-0-412-39440-9|pages=26–}}</ref> so the use of these or other symbols for the median needs to be explicitly defined when they are introduced.
 
The median is the 2nd [[quartile]], 5th [[decile]], and 50th [[percentile]].
 
==Measures of location and dispersion==
 
The median is one of a number of ways of summarising the typical values associated with members of a statistical population; thus, it is a possible [[location parameter]].
 
When the median is used as a location parameter in descriptive statistics, there are several choices for a measure of variability: the [[Range (statistics)|range]], the [[interquartile range]], the mean [[absolute deviation]], and the [[median absolute deviation]]. Since the median is the same as the ''second quartile'', its calculation is illustrated in the article on [[quartile]]s.
 
For practical purposes, different measures of location and dispersion are often compared on the basis of how well the corresponding population values can be estimated from a sample of data. The median, estimated using the sample median, has good properties in this regard. While it is not usually optimal if a given population distribution is assumed, its properties are always reasonably good. For example, a comparison of the [[Efficiency (statistics)|efficiency]] of candidate estimators shows that the sample mean is more statistically efficient than the sample median when data are uncontaminated by data from heavy-tailed distributions or from mixtures of distributions, but less efficient otherwise, and that the efficiency of the sample median remains reasonably high for a wide range of distributions. More specifically, the median has a 64% efficiency compared to the minimum-variance mean (for large normal samples), which is to say the variance of the median will be about 57% greater than the variance of the mean; see [[Efficiency (statistics)#Asymptotic efficiency]] and references therein.
 
==Medians of probability distributions==
For any [[probability distribution]] on the [[real number|real]] line '''R''' with [[cumulative distribution function]]&nbsp;''F'', whether it is a continuous probability distribution (in particular, an [[absolute continuity|absolutely continuous distribution]], which has a [[probability density function]]) or a discrete probability distribution, a median is by definition any real number&nbsp;''m'' that satisfies the inequalities
 
:<math>\operatorname{P}(X\leq m) \geq \frac{1}{2}\text{ and }\operatorname{P}(X\geq m) \geq \frac{1}{2}\,\!</math>
 
or, equivalently, the inequalities
 
:<math>\int_{(-\infty,m]} dF(x) \geq \frac{1}{2}\text{ and }\int_{[m,\infty)} dF(x) \geq \frac{1}{2}\,\!</math>
 
in which a [[Lebesgue–Stieltjes integral]] is used. For an absolutely continuous probability distribution with [[probability density function]] ''ƒ'', the median satisfies
 
:<math>\operatorname{P}(X\leq m) = \operatorname{P}(X\geq m)=\int_{-\infty}^m f(x)\, dx=\frac{1}{2}.\,\!</math>
 
Any [[probability distribution]] on '''R''' has at least one median, but there may be more than one median. Where exactly one median exists, statisticians speak of "the median" correctly; even when the median is not unique, some statisticians speak of "the median" informally.
 
===Medians of particular distributions===
The medians of certain types of distributions can be easily calculated from their parameters:
 
* The median of a symmetric distribution with mean μ is μ.
** The median of a [[normal distribution]] with mean μ and variance σ<sup>2</sup> is&nbsp;μ. In fact, for a normal distribution, mean = median = mode.
** The median of a [[uniform distribution (continuous)|uniform distribution]] in the interval [''a'',&nbsp;''b''] is (''a''&nbsp;+&nbsp;''b'')&nbsp;/&nbsp;2, which is also the mean.
* The median of a [[Cauchy distribution]] with location parameter ''x''<sub>0</sub> and scale parameter ''γ'' is&nbsp;''x''<sub>0</sub>, the location parameter.
* The median of an [[exponential distribution]] with [[rate parameter]] ''λ'' is the natural logarithm of 2 divided by the rate parameter: ''λ''<sup>−1</sup>ln&nbsp;2.
* The median of a [[Weibull distribution]] with shape parameter ''k'' and scale parameter ''λ'' is&nbsp;''λ''(ln&nbsp;2)<sup>1/''k''</sup>.
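These closed forms can be sanity-checked numerically. The sketch below assumes SciPy's parametrisations (for example, <code>expon</code> takes <code>scale</code>&nbsp;=&nbsp;1/''λ''); the parameter values are arbitrary:

<syntaxhighlight lang="python">
import numpy as np
from scipy import stats

lam, k = 2.0, 1.5  # illustrative rate and shape parameters

print(stats.norm(loc=3, scale=2).median())               # 3.0, the mean
print(stats.uniform(loc=1, scale=4).median())            # (1 + 5)/2 = 3.0
print(stats.cauchy(loc=1, scale=3).median())             # 1.0, the location x0
print(stats.expon(scale=1/lam).median(), np.log(2)/lam)  # both ln(2)/lambda
print(stats.weibull_min(k, scale=4).median(), 4 * np.log(2) ** (1/k))
</syntaxhighlight>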
 
==Medians in descriptive statistics==
[[File:Comparison mean median mode.svg|thumb|300px|Comparison of [[mean]], median and [[mode (statistics)|mode]] of two [[log-normal distribution]]s with different [[skewness]].]]
The median is used primarily for [[skewness|skewed]] distributions, which it summarizes differently from the [[arithmetic mean]]. Consider the [[multiset]] { 1, 2, 2, 2, 3, 14 }. The median is 2 in this case (as is the [[mode (statistics)|mode]]), and it might be seen as a better indication of [[central tendency]] (less susceptible to the exceptionally large value in the data) than the [[arithmetic mean]] of 4.
 
Calculation of medians is a popular technique in [[summary statistics]] and [[summarizing statistical data]], since it is simple to understand and easy to calculate, while also giving a measure that is more robust in the presence of [[outlier]] values than is the [[mean]].
 
==Medians for populations==
 
===An optimality property===
The ''mean absolute error'' of a real variable ''c'' with respect to the [[random variable]]&nbsp;''X'' is
:<math>E(\left|X-c\right|).\,</math>
Provided that the probability distribution of ''X'' is such that the above expectation exists, ''m'' is a median of ''X'' if and only if ''m'' is a minimizer of the mean absolute error with respect to ''X''.<ref>{{cite book |last=Stroock |first=Daniel |title=Probability Theory |year=2011 |publisher=Cambridge University Press |isbn=978-0-521-13250-3 |pages=43 }}</ref> In particular, ''m'' is a sample median if and only if ''m'' minimizes the arithmetic mean of the absolute deviations.
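This optimality property can be illustrated numerically; a Monte Carlo sketch (the skewed log-normal sample is an arbitrary choice):

<syntaxhighlight lang="python">
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(0)
x = rng.lognormal(size=100_000)   # a skewed sample

# Empirical mean absolute error c -> E|X - c|, minimised over c
mae = lambda c: np.mean(np.abs(x - c))
res = minimize_scalar(mae, bounds=(x.min(), x.max()), method="bounded")

print(res.x, np.median(x))        # the minimiser agrees with the sample median
</syntaxhighlight>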
 
See also [[k-medians clustering|''k''-medians clustering]]<!-- The "k" in that article title needs to be in LOWER CASE because it's mathematical notation. -->.
 
===Unimodal distributions===
 
It can be shown for a unimodal distribution that the median <math>\tilde{X}</math> and the  mean <math>\bar{X}</math> lie within (3/5)<sup>1/2</sup> ≈ 0.7746 standard deviations of each other.<ref name="unimodal">http://www.se16.info/hgb/cheb2.htm#3unimodalinequalities</ref> In symbols,
 
: <math>\frac{\left|\tilde{X} - \bar{X}\right|}{\sigma} \le (3/5)^{1/2}</math>
 
where |.| is the absolute value.
 
A similar relation holds between the median and the mode: they lie within 3<sup>1/2</sup> ≈ 1.732 standard deviations of each other:
 
: <math>\frac{\left|\tilde{X} - \mathrm{mode}\right|}{\sigma} \le 3^{1/2}.</math>
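For example, for an [[exponential distribution]] with rate ''λ'' (which is unimodal) the mean is 1/''λ'', the median is (ln&nbsp;2)/''λ'' and the standard deviation is 1/''λ'', so |median&nbsp;−&nbsp;mean|/''σ''&nbsp;=&nbsp;1&nbsp;−&nbsp;ln&nbsp;2&nbsp;≈&nbsp;0.31, well within the bound of (3/5)<sup>1/2</sup>&nbsp;≈&nbsp;0.7746.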
 
===An inequality relating means and medians===
If the distribution has finite variance, then the distance between the median and the mean is bounded by one [[standard deviation]].
 
This bound was proved by Mallows,<ref>{{cite journal |last=Mallows |first=Colin |title=Another comment on O'Cinneide |journal=The American Statistician |date=August 1991 |volume=45 |issue=3 |pages=257 }}</ref> who used [[Jensen's inequality]] twice, as follows. We have
 
: <math>
\begin{align}
\left| \mu-m\right| = \left|\mathrm{E}(X-m)\right| & \leq \mathrm{E}\left(\left|X-m\right|\right) \\
& \leq \mathrm{E}\left(\left|X-\mu\right|\right)  \\
& \leq \sqrt{\mathrm{E}((X-\mu)^2)} = \sigma.
\end{align}
</math>
 
The first and third inequalities come from Jensen's inequality applied to the absolute-value function and the square function, which are each  convex. The second inequality comes from the fact that a median minimizes the [[absolute deviation]] function
 
:<math>a \mapsto \mathrm{E}(\left|X-a\right|).\,</math>
 
This proof can easily be generalized to obtain a multivariate version of the inequality,<ref name=PicheRandomVectorsSequences>{{cite book|last=Piché|first=Robert|title=Random Vectors and Random Sequences|year=2012|publisher=Lambert Academic Publishing|isbn=978-3659211966}}</ref> as follows:
:  <math>
\begin{align}
\left\|\mu-m\right\|
= \left\| \mathrm{E} (X-m) \right\|
& \leq  \mathrm{E} \|X-m\|  \\
& \leq  \mathrm{E} (\left\| X-\mu \right\| ) \\
& \leq \sqrt{ \mathrm{E} ( \| X-\mu \|^2 ) }
= \sqrt{ \mathrm{trace} (\mathrm{var} (X) ) }
\end{align}
</math>
where ''m'' is a [[spatial median]], that is, a minimizer of the function
<math>a \mapsto \mathrm{E}(\left\|X-a\right\|).\,</math> The spatial median is unique when the data-set's dimension is two or more.<ref name="Kemperman">{{cite journal |first=Johannes H. B. |last=Kemperman |chapter=The median of a finite measure on a Banach space |title=Statistical data analysis based on the L1-norm and related methods: Papers from the First International Conference held at Neuchâtel, August 31–September 4, 1987 |editor-first=Yadolah |editor-last=Dodge |publisher=North-Holland Publishing Co. |location=Amsterdam |pages=217–230 |mr=949228 |ref=harv |year=1987 }}</ref><ref name="MilasevicDucharme">{{cite journal |first=Philip |last=Milasevic |first2=Gilles R. |last2=Ducharme |title=Uniqueness of the spatial median |journal=[[Annals of Statistics]] |volume=15 |year=1987 |number=3 |pages=1332–1333 |mr=902264 |ref=harv }}</ref> An alternative proof uses the one-sided Chebyshev inequality; it appears in [[An inequality on location and scale parameters#An application: distance between the mean and the median|an inequality on location and scale parameters]].
 
==Jensen's inequality for medians==
 
Jensen's inequality states that for any random variable ''x'' with a finite expectation ''E''(''x'') and for any convex function ''f''
 
: <math> f( E( x ) ) \le E( f( x ) ) </math>
 
It has been shown<ref name=Merkle2005>{{cite journal |last=Merkle |first=M. |year=2005 |title=Jensen’s inequality for medians |journal=Statistics & Probability Letters |volume=71 |issue=3 |pages=277–281 |doi=10.1016/j.spl.2004.11.010 }}</ref> that if ''x'' is a real variable with a unique median ''m'' and ''f'' is a C function, then
 
: <math> f(m) \le \operatorname{Median}( f( x )) </math>
 
A C function is a real-valued function, defined on the set of real numbers ''R'', with the property that for any real ''t''
 
: <math> f^{-1}( (-\infty , t] ) = \{ x \in R | f(x) \le t \} </math>
 
is a closed [[interval (mathematics)|interval]], a [[singleton (mathematics)|singleton]] or an [[empty set]].
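A quick empirical sketch of the inequality (the convex function and the log-normal sample are arbitrary choices; any continuous convex function is a C function, since its sublevel sets are closed intervals):

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(3)
x = rng.lognormal(size=100_000)   # continuous distribution: unique median
f = lambda t: (t - 1.0) ** 2      # convex, therefore a C function

print(f(np.median(x)), np.median(f(x)))  # f(median) <= median of f(x)
</syntaxhighlight>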
 
==Medians for samples==
 
===The sample median===
 
====Efficient computation of the sample median====
Even though [[sorting algorithm|comparison-sorting]] ''n'' items requires [[Big O notation|Ω]](''n''&nbsp;log&nbsp;''n'') operations, [[selection algorithm]]s can compute the [[order statistic|''k''<sup>th</sup>-smallest of ''n'' items]] with only [[Big O notation|Θ]](''n'') operations. This includes the median: the ((''n''&nbsp;+&nbsp;1)/2)th order statistic when ''n'' is odd, or the average of the (''n''/2)th and (''n''/2&nbsp;+&nbsp;1)th order statistics when ''n'' is even.
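A sketch of one such selection algorithm, quickselect with a random pivot (expected linear time; the worst-case-linear [[median of medians]] pivot rule is omitted for brevity):

<syntaxhighlight lang="python">
import random

def quickselect(data, k):
    """Return the k-th smallest element of data (0-indexed), in expected O(n)."""
    data = list(data)
    while True:
        pivot = random.choice(data)
        lows = [v for v in data if v < pivot]
        pivots = [v for v in data if v == pivot]
        if k < len(lows):
            data = lows
        elif k < len(lows) + len(pivots):
            return pivot
        else:
            k -= len(lows) + len(pivots)
            data = [v for v in data if v > pivot]

def median(data):
    """Median via selection: middle item, or mean of the two middle items."""
    n = len(data)
    if n % 2:
        return quickselect(data, n // 2)
    return (quickselect(data, n // 2 - 1) + quickselect(data, n // 2)) / 2
</syntaxhighlight>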
 
====Easy explanation of the sample median====
For an individual series (when the number of observations is small), first arrange all the observations in ascending order. Let ''n'' be the total number of observations in the data.
 
If '''''n'' is odd''', then Median (''M'') = value of the ((''n''&nbsp;+&nbsp;1)/2)th item.
 
If '''''n'' is even''', then Median (''M'') = mean of the (''n''/2)th and ((''n''/2)&nbsp;+&nbsp;1)th items.
 
;For an odd number of values
 
As an example, we will calculate the sample median for the following set of observations: 1, 5, 2, 8, 7.
 
Start by sorting the values: 1, 2, 5, 7, 8.
 
In this case, the median is 5 since it is the middle observation in the ordered list.
 
The median is the ((''n''&nbsp;+&nbsp;1)/2)th item, where ''n'' is the number of values. For example, for the list {1,&nbsp;2,&nbsp;5,&nbsp;7,&nbsp;8}, we have ''n''&nbsp;=&nbsp;5, so the median is the ((5&nbsp;+&nbsp;1)/2)th item.
: median = (6/2)th item
: median = 3rd item
: median = 5
 
;For an even number of values
 
As an example, we will calculate the sample median for the following set of observations: 1, 6, 2, 8, 7, 2.
 
Start by sorting the values: 1, 2, 2, 6, 7, 8.
 
In this case, the median is the arithmetic mean of the two middle terms in the ordered list: (2 + 6)/2 = 4.
 
The formula Median = ((''n''&nbsp;+&nbsp;1)/2)th item can also be used here. With ''n''&nbsp;=&nbsp;6, the median is the (7/2)th = 3.5th item, that is, the value halfway between the 3rd item and the 4th item. For the sorted list 1, 2, 2, 6, 7, 8 this gives (2&nbsp;+&nbsp;6)/2, which is&nbsp;4.
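Both rules can be transcribed directly (a sketch; the standard library's <code>statistics.median</code> behaves the same way):

<syntaxhighlight lang="python">
def sample_median(values):
    """Textbook rule: sort, then take the middle item (odd n)
    or the mean of the two middle items (even n)."""
    s = sorted(values)
    n = len(s)
    mid = n // 2
    return s[mid] if n % 2 else (s[mid - 1] + s[mid]) / 2

print(sample_median([1, 5, 2, 8, 7]))     # 5
print(sample_median([1, 6, 2, 8, 7, 2]))  # 4.0
</syntaxhighlight>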
 
====Variance====
 
The distribution of both the sample mean and the sample median were determined by [[Pierre-Simon Laplace|Laplace]].<ref name=Stigler1973>{{cite journal
| last = Stigler
| first = Stephen
| authorlink = Stephen Stigler
| date =
| year = 1973
| month = December
| title = Studies in the History of Probability and Statistics. XXXII: Laplace, Fisher and the Discovery of the Concept of Sufficiency
| journal = Biometrika
| volume = 60
| issue = 3
| pages = 439–445
| doi = 10.1093/biomet/60.3.439
| mr = 0326872 | jstor = 2334992
}}</ref> The distribution of the sample median from a population with a density function <math>f(x)</math> is asymptotically normal with mean <math>m</math> and variance<ref name="Rider1960">{{cite journal |last=Rider |first=Paul R. |year=1960 |title=Variance of the median of small samples from several special populations |journal=[[Journal of the American Statistical Association|J. Amer. Statist. Assoc.]] |volume=55 |issue=289 |pages=148–150 |doi=10.1080/01621459.1960.10482056 }}</ref>
 
: <math> \frac{ 1 }{ 4n f( m )^2 }</math>
 
where <math>m</math> is the median of the distribution and <math>n</math> is the sample size. In practice this may be difficult to estimate, as the density function is usually unknown.
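A simulation sketch of this formula for a standard normal population, where <math>m = 0</math> and <math>f(m) = 1/\sqrt{2\pi}</math>:

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(1)
n, reps = 101, 20_000
medians = np.median(rng.standard_normal((reps, n)), axis=1)

f0 = 1 / np.sqrt(2 * np.pi)                  # density at the median
print(medians.var(), 1 / (4 * n * f0 ** 2))  # simulated vs. asymptotic variance
</syntaxhighlight>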
 
These results have also been extended.<ref name="Stuart1994">{{cite book |title=Kendall's Advanced Theory of Statistics |first=Alan |last=Stuart |first2=Keith |last2=Ord |location=London |publisher=Arnold |year=1994 |isbn=0340614307 }}</ref> It is now known that the sample <math>p</math>-th quantile is asymptotically normally distributed around the population <math>p</math>-th quantile with variance equal to
: <math> \frac{p( 1 - p )}{nf(x_p)^2}</math>
where <math>f(x_{p})</math> is the value of the density at the <math>p</math>-th quantile.
 
;Estimation of variance from sample data
 
The value of <math>(2 f(m))^{-2}</math>, the asymptotic variance of <math>n^{1/2}(\nu - m)</math> where <math>\nu</math> is the sample median and <math>m</math> the population median, has been studied by several authors. The standard 'delete one' [[Resampling (statistics)#Jackknife|jackknife]] method produces [[consistent estimator|inconsistent]] results.<ref name=Efron1982>{{cite book |last=Efron |first=B. |year=1982 |title=The Jackknife, the Bootstrap and other Resampling Plans |publisher=SIAM |location=Philadelphia |isbn=0898711797 }}</ref> An alternative, the 'delete k' method, in which <math>k</math> grows with the sample size, has been shown to be asymptotically consistent.<ref name=Shao1989>{{cite journal |last=Shao |first=J. |last2=Wu |first2=C. F. |year=1989 |title=A General Theory for Jackknife Variance Estimation |journal=[[Annals of Statistics|Ann Statist]] |volume=17 |issue=3 |pages=1176–1197 |jstor=2241717 }}</ref> This method may be computationally expensive for large data sets. A bootstrap estimate is known to be consistent,<ref name=Efron1979>{{cite journal |last=Efron |first=B. |year=1979 |title=Bootstrap Methods: Another Look at the Jackknife |journal=[[Annals of Statistics|Ann Statist]] |volume=7 |issue=1 |pages=1–26 |jstor=2958830 }}</ref> but converges very slowly ([[computational complexity theory|order]] of <math>n^{-\frac{1}{4}}</math>).<ref name=Hall1988>{{cite journal |last=Hall |first=P. |last2=Martin |first2=M. A. |year=1988 |title=Exact Convergence Rate of Bootstrap Quantile Variance Estimator |journal=Probab Theory Related Fields |volume=80 |issue=2 |pages=261–268 |doi=10.1007/BF00356105 }}</ref> Other methods have been proposed but their behavior may differ between large and small samples.<ref name=Jimenez-Gamero2004>{{cite journal |last=Jiménez-Gamero |first=M. D. |last2=Munoz-García |first2=J. |first3=R. |last3=Pino-Mejías |year=2004 |title=Reduced bootstrap for the median |journal=Statistica Sinica |volume=14 |issue=4 |pages=1179–1198 |doi= |url=http://www3.stat.sinica.edu.tw/statistica/password.asp?vol=14&num=4&art=11 }}</ref>
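A sketch of the bootstrap estimate mentioned above (resample with replacement, then take the variance of the resampled medians):

<syntaxhighlight lang="python">
import numpy as np

def bootstrap_median_var(x, n_boot=2000, seed=0):
    """Bootstrap estimate of the variance of the sample median."""
    rng = np.random.default_rng(seed)
    resamples = rng.choice(x, size=(n_boot, len(x)), replace=True)
    return np.median(resamples, axis=1).var(ddof=1)
</syntaxhighlight>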
 
;Efficiency
 
The [[Efficiency (statistics)|efficiency]] of the sample median, measured as the ratio of the variance of the mean to the variance of the median, depends on the sample size and on the underlying population distribution. For a sample of size <math>N = 2n + 1</math> from the [[normal distribution]], the ratio is<ref name=Kenney1962>{{cite book |last=Kenney |first=J. F. |last2=Keeping |first2=E. S. |year=1962 |chapter=The Median |pages=211–212 |title=Mathematics of Statistics, Pt. 1 |edition=3rd |location=Princeton, NJ |publisher=Van Nostrand }}</ref>
 
: <math> \frac{ 4n }{ \pi ( 2n + 1 ) } </math>
 
For large samples (as <math>n</math> tends to infinity) this ratio tends to <math> \frac{2}{\pi} .</math>
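For example, with <math>N = 21</math> (so <math>n = 10</math>) the ratio is <math>40/(21\pi) \approx 0.61</math>, already close to the limiting value <math>2/\pi \approx 0.64</math>.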
 
===Other estimators===
For univariate distributions that are ''symmetric'' about one median, the [[Hodges–Lehmann estimator]] is a [[robust statistics|robust]] and highly [[efficient estimator]] of the population median.<ref name="HM">{{cite book |last1=Hettmansperger |first1=Thomas P. |last2=McKean |first2=Joseph W. |title=Robust nonparametric statistical methods |series=Kendall's Library of Statistics |volume=5 |publisher=Edward Arnold |location=London |publisher=John Wiley and Sons |year=1998 |isbn=0-340-54937-8 |mr=1604954 |ref=harv }}</ref>
 
If data are represented by a [[statistical model]] specifying a particular family of [[probability distribution]]s, then estimates of the median can be obtained by fitting that family of probability distributions to the data and calculating the theoretical median of the fitted distribution.{{Citation needed|date=May 2012}} [[Pareto interpolation]] is an application of this when the population is assumed to have a [[Pareto distribution]].
 
===Coefficient of dispersion===
 
The coefficient of dispersion (CD) is defined as the ratio of the average absolute deviation from the median to the median of the data.<ref name=Bonett2006>Bonett DG, Seier E (2006) Confidence interval for a coefficient of dispersion in non-normal distributions. Biometrical Journal 48 (1) 144–148</ref> It is a statistical measure used by the states of [[Iowa]], [[New York]] and [[South Dakota]] in estimating property taxes.<ref>http://www.iowa.gov/tax/locgov/Statistical_Calculation_Definitions.pdf</ref><ref>http://www.tax.ny.gov/research/property/reports/cod/2010mvs/reporttext.htm</ref><ref>http://www.state.sd.us/drr2/publications/assess1199.pdf</ref> In symbols
 
: <math> CD = \frac{ 1 }{ n } \frac{ \sum| m - x | }{ m } </math>
 
where ''n'' is the sample size, ''m'' is the sample median and ''x'' is a variate. The sum is taken over the whole sample.
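A direct transcription of this definition (a sketch using NumPy):

<syntaxhighlight lang="python">
import numpy as np

def coefficient_of_dispersion(x):
    """Mean absolute deviation from the median, divided by the median."""
    x = np.asarray(x, dtype=float)
    m = np.median(x)
    return np.mean(np.abs(x - m)) / m
</syntaxhighlight>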
 
Confidence intervals for a two-sample test in which the sample sizes are large have been derived by Bonett and Seier.<ref name="Bonett2006"/> This test assumes that both samples have the same median but differ in the dispersion around it. The confidence interval (CI) is bounded below by
 
: <math> \exp \left[ \log \left( \frac{ t_a } { t_b } \right) - z_\alpha \left( var \left[ \log \left( \frac{ t_a } { t_b } \right) \right] \right)^{ 0.5 } \right] </math>
 
where ''t''<sub>j</sub> is the mean absolute deviation of the ''j''<sup>th</sup> sample, ''var''() is the variance and ''z<sub>α</sub>'' is the value from the normal distribution for the chosen value of ''α'': for ''α'' = 0.05, ''z<sub>α</sub>'' = 1.96. The following formulae are used in the derivation of these confidence intervals
 
: <math> var [ \log ( t_a ) ] = \frac{ \left( \frac{ s_a^2 } { t_a^2 } + \left( \frac{  x_a - \bar{ x } } { t_a } \right) ^2 - 1 \right) } { n }</math>
 
: <math> var[ \log( t_a / t_b ) ] = var[ \log( t_a ) ] + var[ \log( t_b )] - 2r ( var[ \log( t_a ) ] var[ \log( t_b ) ] )^{0.5} </math>
 
where ''r'' is the Pearson correlation coefficient between the deviation scores
 
: <math> d_{ia} = | x_{ia} - \bar{x}_{a} | </math> and <math> d_{ib} = |x_{ib} - \bar{x}_{b} | </math>
 
Here ''a'' and ''b'' are labels for the two samples, ''x'' is a variate and ''s'' is the standard deviation of the sample.
 
==Multivariate median==
Previously, this article discussed the univariate median for a one-dimensional object (population, sample). When the dimension is two or higher, there are several distinct concepts that extend the definition of the univariate median; each such multivariate median agrees with the univariate median when the dimension is exactly one.<ref name="HM" />
 
===Marginal median===
The marginal median is defined for vectors with respect to a fixed set of coordinates: it is the vector whose components are univariate medians. The marginal median is easy to compute, and its properties were studied by Puri and Sen.<ref name="HM" /><ref>Puri, Madan L.; Sen, Pranab K.; ''Nonparametric Methods in Multivariate Analysis'', John Wiley & Sons, New York, NY, 1971. (Reprinted by Krieger Publishing)</ref>
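Since the marginal median is just the vector of coordinatewise medians, it is a one-liner in NumPy (the sample points are arbitrary):

<syntaxhighlight lang="python">
import numpy as np

points = np.array([[1.0, 7.0], [2.0, 3.0], [9.0, 4.0]])
print(np.median(points, axis=0))  # componentwise median: [2.0, 4.0]
</syntaxhighlight>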
 
===Spatial median (L1 median)===
{{anchor|Spatial_median}}{{anchor|Spatial}}{{anchor|Spatial median}}
In a [[normed space|normed]] [[vector space]] of [[dimension (linear algebra)|dimension]] two or greater, the  "spatial median" <!-- ''[[spatial median]]'' redirects here --> minimizes the expected distance
:<math>a \mapsto \mathrm{E}(\left\|X-a\right\|), \,</math>
where ''X'' and ''a'' are vectors,  if this expectation has a finite minimum; another definition is better suited for general probability-distributions.<ref name="Kemperman" /><ref name="HM" /> The spatial median is unique when the data-set's dimension is two or more.<ref name="Kemperman" /><ref name="MilasevicDucharme" /><ref name="HM" /> It is a robust and highly efficient estimator of the population spatial-median (also called the "L1 median")<!-- This footnote modifies the parenthetical remark and not the sentence, and so this position is correct. -->.<ref>{{cite journal |last=Vardi |first=Yehuda |last2=Zhang |first2=Cun-Hui |title=The multivariate l1-median and associated data depth |journal=[[Proceedings of the National Academy of Sciences of the United States of America]] |volume=97 |year=2000 |issue=4 |pages=1423–1426 |doi=10.1073/pnas.97.4.1423 }}</ref><!-- This footnote modifies the sentence, and so comes after its punctuation. --><ref name="HM" />{{clarify|date=May 2012}}
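For a finite point set, this minimization can be approximated by the classical Weiszfeld fixed-point iteration; the sketch below assumes no iterate lands exactly on a data point:

<syntaxhighlight lang="python">
import numpy as np

def spatial_median(points, iters=200, eps=1e-9):
    """Weiszfeld iteration for the spatial (L1) median of a point set."""
    y = points.mean(axis=0)                  # start from the centroid
    for _ in range(iters):
        d = np.linalg.norm(points - y, axis=1)
        w = 1.0 / np.maximum(d, eps)         # guard against zero distances
        y = (points * w[:, None]).sum(axis=0) / w.sum()
    return y
</syntaxhighlight>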
 
===Other multivariate medians===
An alternative to the spatial median is defined in a similar way, but based on a different loss function, and is called the [[Geometric median]].{{Citation needed |date=May 2012 }} The [[Centerpoint (geometry)|centerpoint]] is another generalization to higher dimensions that does not relate to a particular metric.
 
==Other median-related concepts==
 
===Pseudo-median===
For univariate distributions that are ''symmetric'' about one median, the [[Hodges–Lehmann estimator]] is a robust and highly efficient estimator of the population median; for non-symmetric distributions, the Hodges–Lehmann estimator is a robust and highly efficient estimator of the population ''pseudo-median'', which is the median of a symmetrized distribution and which is close to the population median.{{Citation needed |date=May 2012 }} The  Hodges–Lehmann estimator has been generalized to multivariate distributions.<ref>{{cite book |last=Oja |first=Hannu |title=Multivariate nonparametric methods with&nbsp;''R'': An approach based on spatial signs and ranks |series=Lecture Notes in Statistics |volume=199 |publisher=Springer |location=New York, NY |year=2010 |pages=xiv+232 |isbn=978-1-4419-0467-6 |doi=10.1007/978-1-4419-0468-3 |mr=2598854 |ref=harv }}</ref>
 
===Variants of regression===
The [[Theil–Sen estimator]] is a method for [[robust statistics|robust]] [[linear regression]] based on finding medians of [[slope]]s.{{Citation needed |date=May 2012 }}
 
===Median filter===
{{Main|Median filter}}
In the context of [[image processing]] of [[monochrome]] [[raster image]]s there is a type of noise, known as [[salt and pepper noise]], in which each pixel independently becomes black (with some small probability) or white (with some small probability), and is unchanged otherwise (with probability close to 1). An image constructed of median values of neighbourhoods (such as a 3×3 square) can effectively [[noise reduction|reduce noise]] in this case.{{Citation needed |date=May 2012 }}
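A sketch of this construction, using SciPy's <code>ndimage.median_filter</code> on a synthetic salt-and-pepper image:

<syntaxhighlight lang="python">
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(2)
image = np.full((64, 64), 128, dtype=np.uint8)  # a flat grey test image
noise = rng.random(image.shape)
image[noise < 0.05] = 0      # "pepper": pixels flipped to black
image[noise > 0.95] = 255    # "salt": pixels flipped to white

denoised = ndimage.median_filter(image, size=3)  # 3x3 neighbourhood median
</syntaxhighlight>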
 
===Cluster analysis===
{{main|k-medians clustering}}
In [[cluster analysis]], the [[k-medians clustering]] algorithm provides a way of defining clusters in which the criterion used in [[k-means clustering]], minimising the sum of squared distances from each point to its cluster mean, is replaced by minimising the sum of distances from each point to its cluster median.
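A Lloyd-style sketch of one common variant (assignment by L1 distance and coordinatewise median updates; initialisation and convergence handling kept minimal):

<syntaxhighlight lang="python">
import numpy as np

def k_medians(points, k, iters=50, seed=0):
    """Assign points to the nearest centre in L1 distance, then move
    each centre to the coordinatewise median of its cluster."""
    rng = np.random.default_rng(seed)
    centres = points[rng.choice(len(points), k, replace=False)].astype(float)
    for _ in range(iters):
        d = np.abs(points[:, None, :] - centres[None, :, :]).sum(axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centres[j] = np.median(points[labels == j], axis=0)
    return centres, labels
</syntaxhighlight>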
 
===Median-Median Line===
 
This is a method of robust regression. The idea dates back to [[Abraham Wald|Wald]] in 1940, who suggested dividing a set of bivariate data into two halves depending on the value of the independent variable <math>x</math>: a left half with values less than the median and a right half with values greater than the median.<ref name=Wald1940>{{cite journal |last=Wald |first=A. |year=1940 |title=The Fitting of Straight Lines if Both Variables are Subject to Error |journal=[[Annals of Mathematical Statistics]] |volume=11 |issue=3 |pages=282–300 |jstor=2235677 }}</ref> He suggested taking the means of the dependent <math>y</math> and independent <math>x</math> variables of the left and the right halves and estimating the slope of the line joining these two points. The line could then be adjusted to fit the majority of the points in the data set.
 
Nair and Shrivastava in 1942 suggested a similar idea but instead advocated dividing the sample into three equal parts before calculating the means of the subsamples.<ref name=Nair1942>{{cite journal |title=On a Simple Method of Curve Fitting |first=K. R. |last=Nair |first2=M. P. |last2=Shrivastava |journal=Sankhyā: The Indian Journal of Statistics |volume=6 |issue=2 |year=1942 |pages=121–132 |jstor=25047749 }}</ref> Brown and Mood in 1951 proposed the idea of using the medians of two subsamples rather than the means.<ref name=Brown1951>{{cite book |last=Brown |first=G. W. |last2=Mood |first2=A. M. |year=1951 |chapter=On Median Tests for Linear Hypotheses |title=Proc Second Berkeley Symposium on Mathematical Statistics and Probability |location=Berkeley, CA |publisher=University of California Press |pages=159–166 |isbn= |zbl=0045.08606 }}</ref> Tukey combined these ideas and recommended dividing the sample into three equal-size subsamples and estimating the line based on the medians of the subsamples.<ref name=Tukey1971>{{cite book |last=Tukey |first=J. W. |year=1977 |title=Exploratory Data Analysis |location=Reading, MA |publisher=Addison-Wesley |isbn=0201076160 }}</ref>
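A sketch of the resulting three-group line in the Tukey form: the slope comes from the outer groups' median points, and the level is set using all three groups:

<syntaxhighlight lang="python">
import numpy as np

def median_median_line(x, y):
    """Resistant line: split the data into thirds by x, take the slope
    between the outer groups' (median x, median y) points, then average
    the intercepts implied by all three groups' median points."""
    order = np.argsort(x)
    x, y = np.asarray(x, float)[order], np.asarray(y, float)[order]
    thirds = np.array_split(np.arange(len(x)), 3)
    mx = [np.median(x[i]) for i in thirds]
    my = [np.median(y[i]) for i in thirds]
    slope = (my[2] - my[0]) / (mx[2] - mx[0])
    intercept = np.mean([my[j] - slope * mx[j] for j in range(3)])
    return slope, intercept
</syntaxhighlight>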
 
==Median-unbiased estimators==
{{main|Bias of an estimator#Median-unbiased estimators}}
Any [[Bias of an estimator|''mean''-unbiased estimator]] minimizes the [[risk]] ([[expected loss]]) with respect to the squared-error [[loss function]], as observed by [[Gauss]]. A [[Bias of an estimator#Median unbiased estimators.2C and bias with respect to other loss functions|''median''-unbiased estimator]] minimizes the risk with respect to the [[Absolute deviation|absolute-deviation]] loss function, as observed by [[Laplace]]. Other [[loss functions]] are used in [[statistical theory]], particularly in [[robust statistics]].
 
The theory of median-unbiased estimators was revived by [http://www.universityofcalifornia.edu/senate/inmemoriam/georgewbrown.htm George W. Brown] in 1947:<ref name="Brown" />
<blockquote>
An estimate of a one-dimensional parameter θ will be said to be median-unbiased if, for fixed θ, the median of the distribution of the estimate is at the value θ; i.e., the estimate underestimates just as often as it overestimates. This requirement seems for most purposes to accomplish as much as the mean-unbiased requirement and has the additional property that it is invariant under one-to-one transformation. [page 584]
</blockquote>
Further properties of median-unbiased estimators have been reported.<ref name="Lehmann" /><ref name="Birnbaum" /><ref name="vdW" /><ref name="Pf" /> In particular, median-unbiased estimators exist in cases where mean-unbiased and maximum-likelihood estimators do not exist. Median-unbiased estimators are invariant under [[Injective function|one-to-one transformations]].
 
==History==
 
The idea of the median originated{{citation needed|date=July 2012}} in [[Edward Wright (mathematician)|Edward Wright]]'s book on navigation (''Certaine Errors in Navigation'') in 1599 in a section concerning the determination of location with a [[compass]]. Wright felt that this value was the most likely to be the correct value in a series of observations.
 
In 1757, [[Roger Joseph Boscovich]] developed a regression method based on the L1 [[Lp space|norm]] and therefore implicitly on the median.<ref name=Stigler1986>{{cite book |last=Stigler |first=S. M. |year=1986 |title=The History of Statistics: The Measurement of Uncertainty Before 1900 |publisher=Harvard University Press |isbn=0674403401 }}</ref>
 
The distribution of both the sample mean and the sample median were determined by Laplace in the early 1800s.<ref name=Stigler1973 /><ref name=Laplace1818>Laplace PS de (1818) ''Deuxième supplément à la Théorie Analytique des Probabilités'', Paris, Courcier</ref>
 
[[Antoine Augustin Cournot]] in 1843 was the first{{citation needed|date=July 2012}} to use the term ''median'' (''valeur médiane'') for the value that divides a probability distribution into two equal halves. [[Gustav Theodor Fechner]] used the median (''Centralwerth'') in sociological and psychological phenomena; it had earlier been used only in astronomy and related fields. Although it had been used earlier by Laplace, Fechner popularized the median in the formal analysis of data.<ref name=Keynes1921>Keynes, J.M. (1921) ''[[A Treatise on Probability]]''. Pt II Ch XVII §5 (p 201) (2006 reprint, Cosimo Classics, ISBN 9781596055308 : multiple other reprints)</ref>
 
[[Francis Galton]] used the English term ''median'' in 1881,<ref name=Galton1881>Galton F (1881) "Report of the Anthropometric Committee" pp 245-260. [http://www.biodiversitylibrary.org/item/94448 ''Report of the 51st Meeting of the British Association for the Advancement of Science'']</ref> having earlier used the terms ''middle-most value'' in 1869 and the ''medium'' in 1880.{{citation needed|date=July 2012}}
 
==See also==
{{Portal|Statistics}}
* [[Absolute deviation]]
* [[Bias of an estimator]]
* [[Concentration of measure]] for [[Lipschitz functions]]
* [[Median graph]]
* [[Median search]]
* [[Median voter theory]]
* [[Weighted median]]
* [[Median slope]]
 
==References==
{{Reflist|30em|refs=
<ref name="Brown">{{cite journal |last=Brown |first=George W. |year=1947 |title=On Small-Sample Estimation |journal=[[Annals of Mathematical Statistics]] |volume=18 |issue=4 |pages=582–585 |jstor=2236236 }}</ref>
<ref name="Lehmann">{{cite journal |authorlink=Erich Leo Lehmann |last=Lehmann |first=Erich L. |year=1951 |title=A General Concept of Unbiasedness |journal=[[Annals of Mathematical Statistics]] |volume=22 |issue=4 |pages=587–592 |jstor=2236928 }}</ref>
<ref name="Birnbaum">{{cite journal |authorlink=Allan Birnbaum |last=Birnbaum |first=Allan |year=1961 |title=A Unified Theory of Estimation, I |journal=[[Annals of Mathematical Statistics]] |volume=32 |issue=1 |pages=112–135 |jstor=2237612 }}</ref>
<ref name="vdW">{{cite journal |last=van der Vaart |first=H. Robert |year=1961 |title=Some Extensions of the Idea of Bias |journal=[[Annals of Mathematical Statistics]] |volume=32 |issue=2 |pages=436–447 |jstor=2237754 |doi=10.1214/aoms/1177705051 |mr=125674 }}</ref>
<ref name="Pf">{{cite book|title=Parametric Statistical Theory | last1=Pfanzagl | first1=Johann |last2=with the assistance of R. Hamböker |year=1994 |publisher=Walter de Gruyter |isbn=3-11-013863-8 |mr=1291393 }}</ref>
}}
 
==External links==
* {{springer|title=Median (in statistics)|id=p/m063310}}
* [http://stats4students.com/measures-of-central-tendency-2.php A Guide to Understanding & Calculating the Median]
* [http://www.accessecon.com/pubs/EB/2004/Volume3/EB-04C10011A.pdf Median as a weighted arithmetic mean of all Sample Observations]
* [http://www.poorcity.richcity.org/cgi-bin/inequality.cgi On-line calculator]
* [http://www.statcan.gc.ca/edu/power-pouvoir/ch11/median-mediane/5214872-eng.htm Calculating the median]
* [http://mathschallenge.net/index.php?section=problems&show=true&titleid=average_problem A problem involving the mean, the median, and the mode.]
* {{MathWorld | urlname= StatisticalMedian | title= Statistical Median}}
* [http://www.poorcity.richcity.org/oei/#GiniHooverTheil Python script] for Median computations and [[income inequality metrics]]
 
{{PlanetMath attribution|id=5900|title=Median of a distribution}}
{{Statistics|descriptive}}
 
[[Category:Means]]
[[Category:Robust statistics]]
[[Category:Statistical terminology]]
