A '''kernel smoother''' is a [[statistics|statistical]] technique for estimating a real-valued [[function (mathematics)|function]] <math>f(X)\,\,\left( X\in \mathbb{R}^{p} \right)</math> from its noisy observations, when [[non-parametric statistics|no parametric model]] for this function is known. The estimated function is smooth, and the level of smoothness is set by a single parameter.
This technique is most appropriate for low-dimensional (''p'' < 3) data visualization. In effect, the kernel smoother represents the set of irregular data points as a smooth line or surface.
==Definitions==
Let <math>K_{h_\lambda}(X_0 ,X)</math> be a kernel defined by
:<math>K_{h_\lambda}(X_0 ,X) = D\left( \frac{\left\| X-X_0 \right\|}{h_\lambda (X_0)} \right)</math>
where:
* <math>X,X_0 \in \mathbb{R}^p</math>
* <math>\left\| \cdot \right\|</math> is the [[Euclidean norm]]
* <math>h_\lambda (X_0)</math> is a parameter (kernel radius)
* ''D''(''t'') is typically a positive real-valued function whose value decreases (or at least does not increase) as the distance between ''X'' and ''X''<sub>0</sub> increases.
Popular [[Kernel (statistics)|kernels]] used for smoothing include
* [[V. A. Epanechnikov|Epanechnikov]]
* Tri-cube
* [[Gaussian function|Gaussian]]
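For illustration, the following Python sketch implements these three kernel profiles ''D''(''t''). The function names are chosen here for exposition only, and the profiles are written up to normalizing constants, which cancel in the weighted averages below:
<syntaxhighlight lang="python">
import numpy as np

def epanechnikov(t):
    """Epanechnikov profile: proportional to (1 - t^2) for |t| <= 1, zero outside."""
    t = np.asarray(t, dtype=float)
    return np.where(np.abs(t) <= 1, 0.75 * (1 - t**2), 0.0)

def tricube(t):
    """Tri-cube profile: proportional to (1 - |t|^3)^3 for |t| <= 1, zero outside."""
    t = np.asarray(t, dtype=float)
    return np.where(np.abs(t) <= 1, (1 - np.abs(t)**3)**3, 0.0)

def gaussian(t):
    """Gaussian profile: exp(-t^2 / 2), positive for all t."""
    t = np.asarray(t, dtype=float)
    return np.exp(-0.5 * t**2)
</syntaxhighlight>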
Let <math>\hat{Y}(X):\mathbb{R}^p \to \mathbb{R}</math> be a continuous function of ''X''. For each <math>X_0 \in \mathbb{R}^p</math>, the Nadaraya–Watson kernel-weighted average (the smooth estimate of ''Y''(''X'')) is defined by
:<math>\hat{Y}(X_{0})=\frac{\sum\limits_{i=1}^{N}{K_{h_{\lambda }}(X_{0},X_{i})Y(X_{i})}}{\sum\limits_{i=1}^{N}{K_{h_{\lambda }}(X_{0},X_{i})}}</math>
where:
* ''N'' is the number of observed points
* ''Y''(''X''<sub>''i''</sub>) are the observations at the points ''X''<sub>''i''</sub>.
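The following Python sketch computes this kernel-weighted average for one-dimensional data, assuming a Gaussian profile for ''D'' and a constant kernel radius ''h''; the data set and the value of ''h'' are illustrative only:
<syntaxhighlight lang="python">
import numpy as np

def nadaraya_watson(x0, X, Y, h):
    """Kernel-weighted average of the observations Y at the query point x0.

    X, Y are 1-D arrays of observed points and their noisy responses;
    h is the kernel radius h_lambda(x0), taken constant here.
    """
    t = np.abs(X - x0) / h            # scaled distances ||X_i - x0|| / h
    weights = np.exp(-0.5 * t**2)     # Gaussian profile D(t)
    return np.sum(weights * Y) / np.sum(weights)

# Example: smooth noisy samples of sin(x) over a grid of query points.
rng = np.random.default_rng(0)
X = np.linspace(0, 2 * np.pi, 100)
Y = np.sin(X) + 0.3 * rng.standard_normal(X.size)
Y_hat = np.array([nadaraya_watson(x0, X, Y, h=0.5) for x0 in X])
</syntaxhighlight>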
In the following sections, we describe some particular cases of kernel smoothers.
==Nearest neighbor smoother==
The idea of the [[k-nearest neighbor algorithm|nearest neighbor]] smoother is the following. For each point ''X''<sub>0</sub>, take the ''m'' nearest neighbors and estimate the value of ''Y''(''X''<sub>0</sub>) by averaging the values of these neighbors.
Formally, <math>h_m (X_0)=\left\| X_0 - X_{[m]} \right\|</math>, where <math>X_{[m]}</math> is the ''m''th closest neighbor to ''X''<sub>0</sub>, and
: <math>D(t)= \begin{cases}
1/m & \text{if } |t| \le 1 \\
0 & \text{otherwise}
\end{cases}
</math>
Example:
[[File:NNSmoother.svg]]
In this example, ''X'' is one-dimensional. For each ''X''<sub>0</sub>, <math>\hat{Y}(X_0)</math> is the average of the 16 points closest to ''X''<sub>0</sub> (shown in red). The resulting estimate is not particularly smooth.
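A minimal Python sketch of this smoother, under the same one-dimensional setup as above (the data and the choice ''m'' = 16 are illustrative only):
<syntaxhighlight lang="python">
import numpy as np

def nearest_neighbor_smoother(x0, X, Y, m):
    """Average Y over the m observations whose X values are closest to x0."""
    idx = np.argsort(np.abs(X - x0))[:m]   # indices of the m nearest neighbors
    return Y[idx].mean()

# Noisy one-dimensional data; the unweighted local average is typically not smooth.
rng = np.random.default_rng(0)
X = np.linspace(0, 2 * np.pi, 100)
Y = np.sin(X) + 0.3 * rng.standard_normal(X.size)
Y_hat = np.array([nearest_neighbor_smoother(x0, X, Y, m=16) for x0 in X])
</syntaxhighlight>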
==Kernel average smoother==
The idea of the kernel average smoother is the following. For each data point ''X''<sub>0</sub>, choose a constant distance ''λ'' (the kernel radius, or window width for ''p'' = 1), and compute a weighted average over all data points that are closer than <math>\lambda </math> to ''X''<sub>0</sub> (points closer to ''X''<sub>0</sub> receive higher weights).
Formally, <math>h_\lambda (X_0)=\lambda = \text{constant},</math> and ''D''(''t'') is one of the popular kernels.
Example:
[[File:KernelSmoother.svg]]
For each ''X''<sub>0</sub> the window width is constant, and the weight of each point in the window is schematically denoted by the yellow figure in the graph. It can be seen that the estimate is smooth, but the boundary points are biased. This happens because, when ''X''<sub>0</sub> is close enough to the boundary, the window contains unequal numbers of points to the left and to the right of ''X''<sub>0</sub>.
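A Python sketch of the kernel average smoother with an Epanechnikov kernel and a fixed radius ''λ'' (the data and the value of ''λ'' are illustrative only):
<syntaxhighlight lang="python">
import numpy as np

def kernel_average_smoother(x0, X, Y, lam):
    """Weighted average over the window |X - x0| <= lam with Epanechnikov weights."""
    t = np.abs(X - x0) / lam
    w = np.where(t <= 1, 0.75 * (1 - t**2), 0.0)   # closer points get larger weights
    if w.sum() == 0:                               # empty window: no estimate
        return np.nan
    return np.sum(w * Y) / np.sum(w)

rng = np.random.default_rng(0)
X = np.linspace(0, 2 * np.pi, 100)
Y = np.sin(X) + 0.3 * rng.standard_normal(X.size)
Y_hat = np.array([kernel_average_smoother(x0, X, Y, lam=0.5) for x0 in X])
</syntaxhighlight>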
==Local linear regression==
{{main|Local regression}}
In the two previous sections we assumed that the underlying ''Y''(''X'') function is locally constant, and therefore we were able to use the weighted average for the estimation. The idea of local linear regression is to fit a straight line (or a hyperplane in higher dimensions) locally, rather than a constant (horizontal line). After fitting the line, the estimate <math>\hat{Y}(X_{0})</math> is given by the value of this line at the point ''X''<sub>0</sub>. By repeating this procedure for each ''X''<sub>0</sub>, one obtains the estimated function <math>\hat{Y}(X)</math>.
As in the previous section, the window width is constant: <math>h_\lambda (X_0)=\lambda = \text{constant}.</math>
Formally, the local linear regression is computed by solving a weighted least squares problem.
For one dimension (''p'' = 1):
:<math>\min_{\alpha (X_0),\beta (X_0)} \sum\limits_{i=1}^N {K_{h_{\lambda }}(X_0,X_i)\left( Y(X_i)-\alpha (X_0)-\beta (X_{0})X_i \right)^2}</math>
and the estimate is
:<math>\hat{Y}(X_{0})=\alpha (X_{0})+\beta (X_{0})X_{0}.</math>
The closed-form solution is given by:
: <math>\hat{Y}(X_0)=\left( 1,X_0 \right)\left( B^{T}W(X_0)B \right)^{-1}B^{T}W(X_0)y</math>
where:
* <math>y=\left( Y(X_1),\dots,Y(X_N) \right)^T</math>
* <math>W(X_0)= \operatorname{diag} \left( K_{h_{\lambda }}(X_0,X_i) \right)_{N\times N}</math>
* <math>B^{T}=\left( \begin{matrix}
1 & 1 & \dots & 1 \\
X_{1} & X_{2} & \dots & X_{N} \\
\end{matrix} \right)</math>
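The closed form above can be evaluated directly. A Python sketch for ''p'' = 1, again with an Epanechnikov kernel and a fixed bandwidth (the data and parameter values are illustrative only):
<syntaxhighlight lang="python">
import numpy as np

def local_linear(x0, X, Y, lam):
    """Local linear estimate at x0 from the closed-form weighted least squares fit."""
    t = np.abs(X - x0) / lam
    w = np.where(t <= 1, 0.75 * (1 - t**2), 0.0)       # Epanechnikov weights K(x0, X_i)
    B = np.column_stack([np.ones_like(X), X])          # design matrix B with rows (1, X_i)
    W = np.diag(w)                                     # W(x0) = diag(K(x0, X_i))
    coef = np.linalg.solve(B.T @ W @ B, B.T @ W @ Y)   # (alpha(x0), beta(x0))
    return np.array([1.0, x0]) @ coef                  # value of the fitted line at x0

rng = np.random.default_rng(0)
X = np.linspace(0, 2 * np.pi, 100)
Y = np.sin(X) + 0.3 * rng.standard_normal(X.size)
Y_hat = np.array([local_linear(x0, X, Y, lam=0.8) for x0 in X])
</syntaxhighlight>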
Example:
[[File:Localregressionsmoother.svg]]
The resulting function is smooth, and the problem with the biased boundary points is solved.
Local linear regression can be applied to spaces of any dimension, though the question of what constitutes a local neighborhood becomes more complicated. It is common to fit the local linear regression using the ''k'' training points nearest to the test point. This can lead to high variance in the fitted function. To bound the variance, the set of training points should contain the test point in their convex hull (see the Gupta et al. reference).
==Local polynomial regression==
Instead of fitting locally linear functions, one can fit polynomial functions.
For ''p'' = 1, one should minimize:
<math>\underset{\alpha (X_{0}),\beta _{j}(X_{0}),j=1,...,d}{\mathop{\min }}\,\sum\limits_{i=1}^{N}{K_{h_{\lambda }}(X_{0},X_{i})\left( Y(X_{i})-\alpha (X_{0})-\sum\limits_{j=1}^{d}{\beta _{j}(X_{0})X_{i}^{j}} \right)^{2}}</math>
with <math>\hat{Y}(X_{0})=\alpha (X_{0})+\sum\limits_{j=1}^{d}{\beta _{j}(X_{0})X_{0}^{j}}</math>
In the general case (''p'' > 1), one solves:
<math>\begin{align}
& \hat{\beta }(X_{0})=\underset{\beta (X_{0})}{\mathop{\arg \min }}\,\sum\limits_{i=1}^{N}{K_{h_{\lambda }}(X_{0},X_{i})\left( Y(X_{i})-b(X_{i})^{T}\beta (X_{0}) \right)}^{2} \\
& b(X)=\left( 1,\ X_{1},\ X_{2},\dots,\ X_{1}^{2},\ X_{2}^{2},\dots,\ X_{1}X_{2},\dots \right) \\
& \hat{Y}(X_{0})=b(X_{0})^{T}\hat{\beta }(X_{0}) \\
\end{align}</math>
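For ''p'' = 1 the basis reduces to powers of ''X'', and the local polynomial fit can be sketched in Python as follows (the degree, data, and bandwidth are illustrative only):
<syntaxhighlight lang="python">
import numpy as np

def local_polynomial(x0, X, Y, lam, degree=2):
    """Local polynomial estimate at x0 from a weighted least squares fit of the given degree."""
    t = np.abs(X - x0) / lam
    w = np.where(t <= 1, 0.75 * (1 - t**2), 0.0)       # Epanechnikov weights
    B = np.vander(X, degree + 1, increasing=True)       # rows b(X_i) = (1, X_i, ..., X_i^d)
    W = np.diag(w)
    beta = np.linalg.solve(B.T @ W @ B, B.T @ W @ Y)    # beta_hat(x0)
    b0 = np.vander(np.array([x0]), degree + 1, increasing=True)[0]
    return b0 @ beta                                    # Y_hat(x0) = b(x0)^T beta_hat(x0)

rng = np.random.default_rng(0)
X = np.linspace(0, 2 * np.pi, 100)
Y = np.sin(X) + 0.3 * rng.standard_normal(X.size)
Y_hat = np.array([local_polynomial(x0, X, Y, lam=0.8, degree=2) for x0 in X])
</syntaxhighlight>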
==See also==
*[[Kernel (statistics)]]
*[[Kernel methods]]
*[[Kernel density estimation]]
*[[Kernel regression]]
*[[Local regression]]
==References==
* Li, Q. and J.S. Racine. ''Nonparametric Econometrics: Theory and Practice''. Princeton University Press, 2007, ISBN 0-691-12161-3.
* T. Hastie, R. Tibshirani and J. Friedman, ''The Elements of Statistical Learning'', Chapter 6, Springer, 2001. ISBN 0-387-95284-5 ([http://www-stat.stanford.edu/~tibs/ElemStatLearn/ companion book site]).
* M. Gupta, E. Garcia and E. Chin, [http://www.cs.berkeley.edu/~emc/papers/GuptaGarciaChinTIP2008.pdf "Adaptive Local Linear Regression with Application to Printer Color Management,"] IEEE Trans. Image Processing 2008.
[[Category:Non-parametric statistics]]