# Distance correlation

In statistics and in probability theory, distance correlation is a measure of statistical dependence between two random variables or two random vectors of arbitrary, not necessarily equal dimension. An important property is that this measure of dependence is zero if and only if the random variables are statistically independent. This measure is derived from a number of other quantities that are used in its specification, specifically: distance variance, distance standard deviation and distance covariance. These take the same roles as the ordinary moments with corresponding names in the specification of the Pearson product-moment correlation coefficient.

These distance-based measures can be put into an indirect relationship to the ordinary moments by an alternative formulation (described below) using ideas related to Brownian motion, and this has led to the use of names such as Brownian covariance and Brownian distance covariance.

*Figure: several sets of (x, y) points, with the distance correlation coefficient of x and y for each set; compare with the corresponding figure for the Pearson correlation.*

## Background

The classical measure of dependence, the Pearson correlation coefficient, is mainly sensitive to a linear relationship between two variables. Distance correlation was introduced in 2005 by Gábor J. Székely in several lectures to address this deficiency of Pearson's correlation, namely that it can easily be zero for dependent variables. Correlation = 0 (uncorrelatedness) does not imply independence, while distance correlation = 0 does imply independence. The first results on distance correlation were published in 2007 and 2009. It was proved that distance covariance is the same as the Brownian covariance. These measures are examples of energy distances.

## Definitions

### Distance covariance

Let us start with the definition of the sample distance covariance. Let $(X_{k},Y_{k})$, $k=1,2,\ldots ,n$ be a statistical sample from a pair of real-valued or vector-valued random variables $(X,Y)$. First, compute all pairwise distances

$\begin{aligned}a_{j,k}&=\|X_{j}-X_{k}\|,\qquad j,k=1,2,\ldots ,n,\\b_{j,k}&=\|Y_{j}-Y_{k}\|,\qquad j,k=1,2,\ldots ,n,\end{aligned}$

where $\|\cdot\|$ denotes the Euclidean norm. That is, compute the $n\times n$ distance matrices $(a_{j,k})$ and $(b_{j,k})$. Then take all doubly centered distances

$A_{j,k}:=a_{j,k}-{\overline {a}}_{j.}-{\overline {a}}_{.k}+{\overline {a}}_{..},\qquad B_{j,k}:=b_{j,k}-{\overline {b}}_{j.}-{\overline {b}}_{.k}+{\overline {b}}_{..},$

where $\textstyle {\overline {a}}_{j.}$ is the j-th row mean, $\textstyle {\overline {a}}_{.k}$ is the k-th column mean, and $\textstyle {\overline {a}}_{..}$ is the grand mean of the distance matrix of the X sample. The notation is similar for the b values. (In the matrices of centered distances $(A_{j,k})$ and $(B_{j,k})$ all rows and all columns sum to zero.) The squared sample distance covariance is simply the arithmetic average of the products $A_{j,k}B_{j,k}$:

$\operatorname {dCov} _{n}^{2}(X,Y):={\frac {1}{n^{2}}}\sum _{j,k=1}^{n}A_{j,k}\,B_{j,k}.$

The statistic $T_{n}=n\operatorname {dCov} _{n}^{2}(X,Y)$ determines a consistent multivariate test of independence of random vectors in arbitrary dimensions. For an implementation, see the dcov.test function in the energy package for R.
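The definition above can be sketched directly in NumPy. This is an illustrative implementation of the double-centering recipe, not the energy package's code; the function name `dcov2_sample` is our own:

```python
import numpy as np

def dcov2_sample(x, y):
    """Squared sample distance covariance dCov_n^2(X, Y).

    x, y: arrays with n observations each; rows are observations,
    1-D inputs are treated as n scalar observations.
    """
    x = np.asarray(x, dtype=float).reshape(len(x), -1)
    y = np.asarray(y, dtype=float).reshape(len(y), -1)
    # pairwise Euclidean distance matrices a_{jk} and b_{jk}
    a = np.linalg.norm(x[:, None, :] - x[None, :, :], axis=-1)
    b = np.linalg.norm(y[:, None, :] - y[None, :, :], axis=-1)
    # double centering: subtract row and column means, add the grand mean
    A = a - a.mean(axis=0) - a.mean(axis=1)[:, None] + a.mean()
    B = b - b.mean(axis=0) - b.mean(axis=1)[:, None] + b.mean()
    # (1/n^2) * sum_{j,k} A_{jk} B_{jk}
    return (A * B).mean()
```

Note that if either sample is constant, its distance matrix is zero, so the double-centered matrix and hence the statistic vanish.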

The population value of distance covariance can be defined along the same lines. Let X be a random variable that takes values in a p-dimensional Euclidean space with probability distribution μ and let Y be a random variable that takes values in a q-dimensional Euclidean space with probability distribution ν, and suppose that X and Y have finite expectations. Write

$a_{\mu }(x):=\operatorname {E} [\|X-x\|],\quad D(\mu ):=\operatorname {E} [a_{\mu }(X)],\quad d_{\mu }(x,x'):=\|x-x'\|-a_{\mu }(x)-a_{\mu }(x')+D(\mu ).$

Finally, define the population value of squared distance covariance of X and Y as

$\operatorname {dCov} ^{2}(X,Y):=\operatorname {E} {\big [}d_{\mu }(X,X')d_{\nu }(Y,Y'){\big ]}.$

One can show that this is equivalent to the following definition:

$\begin{aligned}\operatorname {dCov} ^{2}(X,Y)&:=\operatorname {E} [\|X-X'\|\,\|Y-Y'\|]+\operatorname {E} [\|X-X'\|]\,\operatorname {E} [\|Y-Y'\|]\\&\qquad -\operatorname {E} [\|X-X'\|\,\|Y-Y''\|]-\operatorname {E} [\|X-X''\|\,\|Y-Y'\|]\\&=\operatorname {E} [\|X-X'\|\,\|Y-Y'\|]+\operatorname {E} [\|X-X'\|]\,\operatorname {E} [\|Y-Y'\|]\\&\qquad -2\operatorname {E} [\|X-X'\|\,\|Y-Y''\|],\end{aligned}$

where E denotes expected value, and $\textstyle (X,Y),$ $\textstyle (X',Y'),$ and $\textstyle (X'',Y'')$ are independent and identically distributed. Distance covariance can be expressed in terms of Pearson's covariance, cov, as follows:

$\operatorname {dCov} ^{2}(X,Y)=\operatorname {cov} (\|X-X'\|,\|Y-Y'\|)-2\operatorname {cov} (\|X-X'\|,\|Y-Y''\|).$

This identity shows that the distance covariance is not the same as the covariance of distances, $\operatorname {cov} (\|X-X'\|,\|Y-Y'\|)$; the covariance of distances can be zero even if X and Y are not independent.

Alternately, the squared distance covariance can be defined as the weighted L2 norm of the distance between the joint characteristic function of the random variables and the product of their marginal characteristic functions:

$\operatorname {dCov} ^{2}(X,Y)={\frac {1}{c_{p}c_{q}}}\int _{\mathbb {R} ^{p+q}}{\frac {\left|\varphi _{X,Y}(s,t)-\varphi _{X}(s)\varphi _{Y}(t)\right|^{2}}{|s|_{p}^{1+p}\,|t|_{q}^{1+q}}}\,dt\,ds,$

where $\varphi _{X,Y}(s,t)$, $\varphi _{X}(s)$, and $\varphi _{Y}(t)$ are the characteristic functions of (X, Y), X, and Y, respectively, p and q denote the Euclidean dimensions of X and Y, and thus of s and t, and $c_{p}$, $c_{q}$ are constants. The weight function $({c_{p}c_{q}}{|s|_{p}^{1+p}|t|_{q}^{1+q}})^{-1}$ is chosen to produce a scale-equivariant and rotation-invariant measure that does not go to zero for dependent variables. One interpretation of the characteristic function definition is that the variables $e^{isX}$ and $e^{itY}$ are cyclic representations of X and Y with different periods given by s and t, and the expression $\varphi _{X,Y}(s,t)-\varphi _{X}(s)\varphi _{Y}(t)$ in the numerator of the characteristic function definition of distance covariance is simply the classical covariance of $e^{isX}$ and $e^{itY}$. The characteristic function definition clearly shows that $\operatorname {dCov} ^{2}(X,Y)=0$ if and only if X and Y are independent.

### Distance variance

The distance variance is a special case of distance covariance when the two variables are identical. The population value of distance variance is the square root of

$\operatorname {dVar} ^{2}(X):=\operatorname {E} [\|X-X'\|^{2}]+\operatorname {E} ^{2}[\|X-X'\|]-2\operatorname {E} [\|X-X'\|\,\|X-X''\|].$

The sample distance variance is the square root of

$\operatorname {dVar} _{n}^{2}(X):=\operatorname {dCov} _{n}^{2}(X,X)={\tfrac {1}{n^{2}}}\sum _{k,\ell }A_{k,\ell }^{2},$

which is a relative of Corrado Gini's mean difference introduced in 1912 (but Gini did not work with centered distances).
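A minimal NumPy sketch of the sample distance variance for a 1-D sample, following the formula above (the function name `dvar_sample` is ours, for illustration only):

```python
import numpy as np

def dvar_sample(x):
    """Sample distance variance dVar_n(X): the square root of the
    mean of the squared double-centered distances A_{k,l}."""
    x = np.asarray(x, dtype=float)
    d = np.abs(x[:, None] - x[None, :])                      # |x_k - x_l|
    A = d - d.mean(axis=0) - d.mean(axis=1)[:, None] + d.mean()
    return np.sqrt((A * A).mean())                           # dCov_n(X, X)
```

Because all pairwise distances scale with the sample, dVar_n is scale equivariant: doubling the sample doubles the statistic.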

### Distance standard deviation

The distance standard deviation is the square root of the distance variance.

### Distance correlation

The distance correlation of two random variables is obtained by dividing their distance covariance by the product of their distance standard deviations. The distance correlation is

$\operatorname {dCor} (X,Y)={\frac {\operatorname {dCov} (X,Y)}{\sqrt {\operatorname {dVar} (X)\,\operatorname {dVar} (Y)}}},$

and the sample distance correlation is defined by substituting the sample distance covariance and distance variances for the population coefficients above.
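Putting the pieces together, here is an illustrative NumPy sketch of the sample distance correlation for 1-D samples (the names `dcor` and `_dcenter` are ours, not the R energy package's). It also demonstrates the key contrast with Pearson's correlation: for Y = X² with X symmetric about zero, the Pearson correlation is near zero while the distance correlation is clearly positive:

```python
import numpy as np

def _dcenter(z):
    """Double-centered Euclidean distance matrix of a 1-D sample."""
    d = np.abs(z[:, None] - z[None, :])
    return d - d.mean(axis=0) - d.mean(axis=1)[:, None] + d.mean()

def dcor(x, y):
    """Sample distance correlation dCor_n(X, Y) for 1-D samples."""
    A = _dcenter(np.asarray(x, dtype=float))
    B = _dcenter(np.asarray(y, dtype=float))
    dcov2 = (A * B).mean()                    # squared sample dCov
    dvar2_prod = (A * A).mean() * (B * B).mean()
    if dvar2_prod <= 0.0:                     # a constant sample: define dCor = 0
        return 0.0
    return np.sqrt(max(dcov2, 0.0)) / dvar2_prod ** 0.25

rng = np.random.default_rng(0)
x = rng.uniform(-1.0, 1.0, 500)
y = x ** 2                                    # dependent, but nearly uncorrelated
print(np.corrcoef(x, y)[0, 1])                # small in magnitude
print(dcor(x, y))                             # substantially positive
```

The denominator uses $(\operatorname{dVar}_n^2(X)\,\operatorname{dVar}_n^2(Y))^{1/4}$, which equals $\sqrt{\operatorname{dVar}_n(X)\operatorname{dVar}_n(Y)}$ as in the formula above.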

For easy computation of sample distance correlation see the dcor function in the energy package for R.

## Properties

### Distance covariance

If the random vectors $(X_{1},Y_{1})$ and $(X_{2},Y_{2})$ are independent, then

$\operatorname {dCov} (X_{1}+X_{2},Y_{1}+Y_{2})\leq \operatorname {dCov} (X_{1},Y_{1})+\operatorname {dCov} (X_{2},Y_{2}).$

This last property is the most important effect of working with centered distances.

$\operatorname {E} [\operatorname {dCov} _{n}^{2}(X,Y)]={\frac {n-1}{n^{2}}}\left\{(n-2)\operatorname {dCov} ^{2}(X,Y)+\operatorname {E} [\|X-X'\|]\,\operatorname {E} [\|Y-Y'\|]\right\},$

which for independent X and Y (where $\operatorname {dCov} ^{2}(X,Y)=0$) reduces to

$\operatorname {E} [\operatorname {dCov} _{n}^{2}(X,Y)]={\frac {n-1}{n^{2}}}\operatorname {E} [\|X-X'\|]\,\operatorname {E} [\|Y-Y'\|].$

### Distance variance

$\operatorname {dVar} _{n}(X)=0$ if and only if every sample observation is identical.

## Generalization

Distance covariance can be generalized to include powers of Euclidean distance. Define

$\begin{aligned}\operatorname {dCov} ^{2}(X,Y;\alpha )&:=\operatorname {E} [\|X-X'\|^{\alpha }\,\|Y-Y'\|^{\alpha }]+\operatorname {E} [\|X-X'\|^{\alpha }]\,\operatorname {E} [\|Y-Y'\|^{\alpha }]\\&\qquad -2\operatorname {E} [\|X-X'\|^{\alpha }\,\|Y-Y''\|^{\alpha }].\end{aligned}$

The corresponding sample statistic is

$\operatorname {dCov} _{n}^{2}(X,Y;\alpha ):={\frac {1}{n^{2}}}\sum _{k,\ell }A_{k,\ell }\,B_{k,\ell },$

where $A_{k,\ell }$ and $B_{k,\ell }$ are now the doubly centered matrices of the $\alpha$-th powers of the pairwise distances.

## Alternative formulation: Brownian covariance

Brownian covariance is motivated by generalization of the notion of covariance to stochastic processes. The square of the covariance of random variables X and Y can be written in the following form:

$\operatorname {cov} (X,Y)^{2}=\operatorname {E} \left[{\big (}X-\operatorname {E} (X){\big )}{\big (}X'-\operatorname {E} (X'){\big )}{\big (}Y-\operatorname {E} (Y){\big )}{\big (}Y'-\operatorname {E} (Y'){\big )}\right],$

where E denotes the expected value and the prime denotes independent and identically distributed copies. We need the following generalization of this formula. If U(s) and V(t) are arbitrary random processes defined for all real s and t, then define the U-centered version of X by

$X_{U}:=U(X)-\operatorname {E} _{X}\left[U(X)\mid \left\{U(t)\right\}\right]$ whenever the subtracted conditional expected value exists and denote by YV the V-centered version of Y. The (U,V) covariance of (X,Y) is defined as the nonnegative number whose square is

$\operatorname {cov} _{U,V}^{2}(X,Y):=\operatorname {E} \left[X_{U}X'_{U}Y_{V}Y'_{V}\right]$

whenever the right-hand side is nonnegative and finite. The most important example is when U and V are two-sided independent Brownian motions/Wiener processes with expectation zero and covariance $|s|+|t|-|s-t|=2\min(s,t)$. (This is twice the covariance of the standard Wiener process; here the factor 2 simplifies the computations.) In this case the (U,V) covariance is called Brownian covariance and is denoted by

$\operatorname {cov} _{W}(X,Y).$

There is a surprising coincidence: the Brownian covariance is the same as the distance covariance:

$\operatorname {cov} _{\mathrm {W} }(X,Y)=\operatorname {dCov} (X,Y),$

and thus Brownian correlation is the same as distance correlation.

On the other hand, if we replace the Brownian motion with the deterministic identity function id, then $\operatorname {cov} _{\mathrm {id} }(X,Y)$ is simply the absolute value of the classical Pearson covariance,

$\operatorname {cov} _{\mathrm {id} }(X,Y)=\left\vert \operatorname {cov} (X,Y)\right\vert .$ 