{{Merge to|Kernel density estimation|date=September 2010}}
[[Kernel density estimation]] is a [[nonparametric]] technique for [[density estimation]], i.e., the estimation of [[probability density function]]s, which is one of the fundamental problems in [[statistics]]. It can be viewed as a generalisation of [[histogram]] density estimation with improved statistical properties. Apart from histograms, other types of density estimators include [[parametric statistics|parametric]], [[spline interpolation|spline]], [[wavelet]] and [[Fourier series]]. Kernel density estimators were first introduced in the scientific literature for [[univariate]] data in the 1950s and 1960s<ref>{{Cite journal| doi=10.1214/aoms/1177728190 | last=Rosenblatt | first=M.| title=Remarks on some nonparametric estimates of a density function | journal=Annals of Mathematical Statistics | year=1956 | volume=27 | pages=832–837}}</ref><ref>{{Cite journal| doi=10.1214/aoms/1177704472| last=Parzen | first=E.| title=On estimation of a probability density function and mode | journal=Annals of Mathematical Statistics| year=1962 | volume=33 | pages=1065–1076}}</ref> and subsequently have been widely adopted. It was soon recognised that analogous estimators for multivariate data would be an important addition to [[multivariate statistics]]. Based on research carried out in the 1990s and 2000s, multivariate kernel density estimation has reached a level of maturity comparable to that of its univariate counterpart.<ref name="simonoff1996">{{Cite book| author=Simonoff, J.S. | title=Smoothing Methods in Statistics | publisher=Springer | year=1996 | isbn=0-387-94716-7}}</ref>
 
==Motivation==
We take a [[Synthetic data|synthetic]] [[bivariate data|bivariate]] data set of 50 points to illustrate the construction of histograms. This requires the choice of an anchor point (the lower left corner of the histogram grid). For the histogram on the left, we choose (−1.5,&nbsp;−1.5); for the one on the right, we shift the anchor point by 0.125 in both directions to (−1.625,&nbsp;−1.625). Both histograms have a binwidth of 0.5, so any differences are due to the change in the anchor point only. The colour-coding indicates the number of data points which fall into a bin: 0=white, 1=pale yellow, 2=bright yellow, 3=orange, 4=red. The left histogram appears to indicate that the upper half has a higher density than the lower half, whereas the reverse is the case for the right-hand histogram, confirming that histograms are highly sensitive to the placement of the anchor point.<ref>{{Cite book| author=Silverman, B.W. | title=Density Estimation for Statistics and Data Analysis | publisher=Chapman & Hall/CRC | year=1986 | isbn=0-412-24620-1 | pages=7–11}}</ref>
 
[[File:Synthetic data 2D histograms.png|thumb|center|500px|alt=Left. Histogram with anchor point at (−1.5,&nbsp;−1.5). Right. Histogram with anchor point at (−1.625,&nbsp;−1.625). Both histograms have a bin width of 0.5, so differences in appearances of the two histograms are due to the placement of the anchor point.|Comparison of 2D histograms. Left. Histogram with anchor point at (−1.5,&nbsp;−1.5). Right. Histogram with anchor point at (−1.625,&nbsp;−1.625). Both histograms have a bin width of 0.5, so differences in appearances of the two histograms are due to the placement of the anchor point.]]
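
The effect is easy to reproduce in R: the sketch below bins a simulated point set with the two anchor points and prints the bin counts. The original 50-point data set is not reproduced here, so freshly simulated points are used for illustration.

<source lang="rsplus" style="overflow:auto;">
set.seed(1)
x <- matrix(runif(100, -1, 2), ncol = 2)       # 50 synthetic bivariate points
bin2d <- function(x, anchor, width = 0.5) {
  table(cut(x[, 1], seq(anchor[1], anchor[1] + 4, by = width)),
        cut(x[, 2], seq(anchor[2], anchor[2] + 4, by = width)))
}
bin2d(x, c(-1.5, -1.5))                        # anchor of the left histogram
bin2d(x, c(-1.625, -1.625))                    # shifted anchor: different counts
</source>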
 
One possible solution to this anchor point placement problem is to remove the histogram binning grid completely. In the left figure below, a kernel (represented by the grey lines) is centred at each of the 50 data points above. The result of summing these kernels is given in the right figure, which is a kernel density estimate. The most striking difference between kernel density estimates and histograms is that the former are easier to interpret since they do not contain artifacts induced by a binning grid.
The coloured contours correspond to the smallest region which contains the respective probability mass: red = 25%, orange + red = 50%, yellow + orange + red = 75%, thus indicating that a single central region contains the highest density.
 
[[File:Synthetic data 2D KDE.png|thumb|center|500px|alt=Left. Individual kernels. Right. Kernel density estimate.|Construction of 2D kernel density estimate. Left. Individual kernels. Right. Kernel density estimate.]]
 
The goal of density estimation is to take a finite sample of data and to make inferences about the underlying probability density function everywhere, including where no data are observed. In kernel density estimation, the contribution of each data point is smoothed out from a single point into a region of space surrounding it. Aggregating the individually smoothed contributions gives an overall picture of the structure of the data and its density function. In the details to follow, we show that this approach leads to a reasonable estimate of the underlying density function.
 
==Definition==
The previous figure is a graphical representation of a kernel density estimate, which we now define in an exact manner; a short numerical sketch follows the definition. Let '''x'''<sub>1</sub>, '''x'''<sub>2</sub>, …, '''x'''<sub>''n''</sub> be a [[random sample|sample]] of ''d''-variate [[random vector]]s drawn from a common distribution described by the [[probability density function|density function]] ''ƒ''. The kernel density estimate is defined to be
: <math>
    \hat{f}_\bold{H}(\bold{x})= \frac1n \sum_{i=1}^n K_\bold{H} (\bold{x} - \bold{x}_i)
  </math>
where
* {{nowrap|'''x''' {{=}} (''x''<sub>1</sub>, ''x''<sub>2</sub>, …, ''x<sub>d</sub>'')<sup>''T''</sup>}}, {{nowrap|'''x'''<sub>''i''</sub> {{=}} (''x''<sub>''i''1</sub>, ''x''<sub>''i''2</sub>, …, ''x<sub>id</sub>'')<sup>''T''</sup>, ''i'' {{=}} 1, 2, …, ''n''}} are ''d''-vectors;
* '''H''' is the bandwidth (or smoothing) ''d×d'' matrix which is [[symmetric matrix|symmetric]] and [[positive definite matrix|positive definite]];
* ''K'' is the [[kernel (statistics)|kernel]] function which is a symmetric multivariate density;
* {{nowrap|''K''<sub>'''H'''</sub>('''x''') {{=}} {{!}}'''H'''{{!}}<sup>−1/2</sup>&thinsp;''K''('''H'''<sup>−1/2</sup>'''x''')}}.
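
As a concrete illustration of this definition, the following R sketch evaluates <math>\hat{f}_\bold{H}(\bold{x})</math> at a single point by direct summation with the normal kernel. The helper name <code>kde_eval</code> and the example bandwidth matrix are ours, for illustration only; dedicated packages such as <code>ks</code> (used in the examples below) should be preferred in practice.

<source lang="rsplus" style="overflow:auto;">
# (1/n) * sum_i K_H(x - x_i) with K the standard normal kernel, so that
# K_H(x - x_i) = (2*pi)^(-d/2) |H|^(-1/2) exp(-(x - x_i)' H^(-1) (x - x_i) / 2)
kde_eval <- function(x, data, H) {
  d <- ncol(data)
  diffs <- sweep(data, 2, x)                  # rows are x_i - x
  q <- rowSums((diffs %*% solve(H)) * diffs)  # quadratic forms (x_i - x)' H^(-1) (x_i - x)
  mean((2 * pi)^(-d / 2) * det(H)^(-1 / 2) * exp(-q / 2))
}

data(faithful)                                # bivariate Old Faithful data
kde_eval(c(3, 70), as.matrix(faithful), H = diag(c(0.1, 10)))
</source>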
 
The choice of the kernel function ''K'' is not crucial to the accuracy of kernel density estimators, so we use the standard [[multivariate normal distribution|multivariate normal]] kernel throughout: {{nowrap|''K''('''x''') {{=}} (2''π'')<sup>−''d''/2</sup>&thinsp;exp(−{{frac|2}}'''x'''<sup>''T''</sup>'''x''')}}. In contrast, the choice of the bandwidth matrix '''H''' is the single most important factor affecting accuracy, since it controls the amount and orientation of the smoothing induced.<ref name="WJ1995">{{Cite book| author1=Wand, M.P | author2=Jones, M.C. | title=Kernel Smoothing | publisher=Chapman & Hall/CRC | location=London | year=1995 | isbn = 0-412-55270-1}}</ref>{{rp|36–39}} That the bandwidth matrix also induces an orientation is a basic difference between multivariate kernel density estimation and its univariate analogue, since orientation is not defined for 1D kernels. This leads to the question of how to parametrise the bandwidth matrix. The three main parametrisation classes (in increasing order of complexity) are ''S'', the class of positive scalars times the identity matrix; ''D'', diagonal matrices with positive entries on the main diagonal; and ''F'', symmetric positive definite matrices; a concrete example of each class is sketched below. The ''S'' class kernels have the same amount of smoothing applied in all coordinate directions, ''D'' kernels allow different amounts of smoothing in each of the coordinates, and ''F'' kernels allow arbitrary amounts and orientation of the smoothing. Historically ''S'' and ''D'' kernels have been the most widespread for computational reasons, but research indicates that important gains in accuracy can be obtained using the more general ''F'' class kernels.<ref>{{cite journal | author1=Wand, M.P. | author2=Jones, M.C. | title=Comparison of smoothing parameterizations in bivariate kernel density estimation | journal=Journal of the American Statistical Association | year=1993 | volume=88 | pages=520–528 | url=http://www.jstor.org/stable/2290332}}</ref><ref name="DH2003">{{Cite journal| doi=10.1080/10485250306039 | author1=Duong, T. | author2=Hazelton, M.L. | title=Plug-in bandwidth matrices for bivariate kernel density estimation | journal=Journal of Nonparametric Statistics | year=2003 | volume=15 | pages=17–30}}</ref>
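
For concreteness, here is one hypothetical member of each class for ''d'' = 2; the numerical values are arbitrary illustrations, not recommended bandwidths.

<source lang="rsplus" style="overflow:auto;">
Hs <- 0.25 * diag(2)                       # class S: positive scalar times the identity
Hd <- diag(c(0.2, 0.6))                    # class D: diagonal with positive entries
Hf <- matrix(c(0.3, 0.1,
               0.1, 0.5), nrow = 2)        # class F: symmetric positive definite
</source>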
 
[[File:Kernel parametrisation class.png|thumb|center|500px|alt=Comparison of the three main bandwidth matrix parametrisation classes. Left. S positive scalar times the identity matrix. Centre. D diagonal matrix with positive entries on the main diagonal. Right. F symmetric positive definite matrix.|Comparison of the three main bandwidth matrix parametrisation classes. Left. ''S'' positive scalar times the identity matrix. Centre. ''D'' diagonal matrix with positive entries on the main diagonal. Right. ''F'' symmetric positive definite matrix.]]
 
==Optimal bandwidth matrix selection==
The most commonly used optimality criterion for selecting a bandwidth matrix is the MISE or [[mean integrated squared error]]
 
: <math>\operatorname{MISE} (\bold{H}) = \operatorname{E}\!\left[\, \int (\hat{f}_\bold{H} (\bold{x}) - f(\bold{x}))^2 \, d\bold{x} \;\right].</math>
 
This in general does not possess a [[closed-form expression]], so it is usual to use its asymptotic approximation (AMISE) as a proxy
 
: <math>\operatorname{AMISE} (\bold{H}) = n^{-1} |\bold{H}|^{-1/2} R(K) +  \tfrac{1}{4} m_2(K)^2
(\operatorname{vec}^T \bold{H}) \bold{\Psi}_4 (\operatorname{vec} \, \bold{H})</math>
 
where
* <math>R(K) = \int K(\bold{x})^2 \, d\bold{x}</math>, with {{nowrap|''R''(''K'') {{=}} (4''π'')<sup>''−d''/2</sup>}} when ''K'' is a normal kernel;
* <math>\int \bold{x} \bold{x}^T K(\bold{x}) \, d\bold{x} = m_2(K) \bold{I}_d</math>, where '''I'''<sub>''d''</sub> is the ''d × d'' [[identity matrix]] and ''m''<sub>2</sub>(''K'') = 1 for the normal kernel;
* D<sup>2</sup>''ƒ'' is the ''d × d'' Hessian matrix of second order partial derivatives of ''ƒ'';
* <math>\bold{\Psi}_4 = \int (\operatorname{vec} \, \operatorname{D}^2 f(\bold{x})) (\operatorname{vec}^T \operatorname{D}^2 f(\bold{x})) \, d\bold{x}</math> is a ''d''<sup>2</sup> × ''d''<sup>2</sup> matrix of integrated fourth order partial derivatives of ''ƒ'';
* vec is the vector operator which stacks the columns of a matrix into a single vector, e.g. <math>\operatorname{vec}\begin{bmatrix}a & c \\ b & d\end{bmatrix} = \begin{bmatrix}a & b & c & d\end{bmatrix}^T.</math>
The quality of the AMISE approximation to the MISE<ref name="WJ1995"/>{{rp|97}} is given by
 
: <math>\operatorname{MISE} (\bold{H}) = \operatorname{AMISE} (\bold{H}) + o(n^{-1} |\bold{H}|^{-1/2} + \operatorname{tr} \, \bold{H}^2)</math>
 
where ''o'' indicates the usual [[big O notation|small o notation]]. Heuristically this statement implies that the AMISE is a 'good' approximation of the MISE as the sample size ''n'' → ∞.
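
As a small numerical illustration, the AMISE expression can be evaluated directly for a given bandwidth matrix. The sketch below uses the normal-kernel constants {{nowrap|''R''(''K'') {{=}} (4''π'')<sup>''−d''/2</sup>}} and ''m''<sub>2</sub>(''K'') = 1; the '''Ψ'''<sub>4</sub> supplied in the example is an arbitrary placeholder, not an estimate.

<source lang="rsplus" style="overflow:auto;">
amise <- function(H, Psi4, n) {
  d <- nrow(H)
  RK <- (4 * pi)^(-d / 2)                       # R(K) for the normal kernel
  vecH <- as.vector(H)                          # vec stacks the columns of H
  n^(-1) * det(H)^(-1 / 2) * RK +
    (1 / 4) * drop(t(vecH) %*% Psi4 %*% vecH)   # m2(K)^2 = 1 is omitted
}

amise(H = 0.1 * diag(2), Psi4 = diag(4), n = 100)
</source>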
 
It can be shown that any reasonable bandwidth selector '''H''' has '''H''' = ''O(n<sup>−2/(d+4)</sup>)'', where the [[big O notation]] is applied elementwise. Substituting this into the MISE formula yields that the optimal MISE is ''O(n<sup>−4/(d+4)</sup>)''.<ref name="WJ1995"/>{{rp|99–100}} Thus as ''n'' → ∞, the MISE → 0, i.e. the kernel density estimate [[convergence in mean|converges in mean square]] and thus also in probability to the true density ''f''. These modes of convergence are confirmation of the statement in the motivation section that kernel methods lead to reasonable density estimators. An ideal optimal bandwidth selector is
 
: <math>\bold{H}_{\operatorname{AMISE}} = \operatorname{argmin}_{\bold{H} \in F} \, \operatorname{AMISE} (\bold{H}).</math>
 
Since this ideal selector contains the unknown density function ''ƒ'', it cannot be used directly. The many different varieties of data-based bandwidth selectors arise from the different estimators of the AMISE. We concentrate on two classes of selectors which have been shown to be the most widely applicable in practice: smoothed cross validation and plug-in selectors.
 
===Plug-in===
The plug-in (PI) estimate of the AMISE is formed by replacing '''Ψ'''<sub>4</sub> by its estimator <math>\hat{\bold{\Psi}}_4</math>
 
: <math>\operatorname{PI}(\bold{H}) = n^{-1} |\bold{H}|^{-1/2} R(K) + \tfrac{1}{4} m_2(K)^2
(\operatorname{vec}^T \bold{H}) \hat{\bold{\Psi}}_4 (\bold{G}) (\operatorname{vec} \, \bold{H})</math>
 
where <math>\hat{\bold{\Psi}}_4 (\bold{G}) = n^{-2} \sum_{i=1}^n
\sum_{j=1}^n [(\operatorname{vec} \, \operatorname{D}^2) (\operatorname{vec}^T \operatorname{D}^2)] K_\bold{G} (\bold{X}_i - \bold{X}_j)</math>. Thus <math>\hat{\bold{H}}_{\operatorname{PI}} = \operatorname{argmin}_{\bold{H} \in F} \, \operatorname{PI} (\bold{H})</math> is the plug-in selector.<ref>{{Cite journal| author1=Wand, M.P. | author2=Jones, M.C. | title=Multivariate plug-in bandwidth selection | journal=Computational Statistics | year=1994 | volume=9 | pages=97–177}}</ref><ref name="DH2005" /> These references also contain algorithms for optimal estimation of the pilot bandwidth matrix '''G''' and establish that <math>\hat{\bold{H}}_{\operatorname{PI}}</math> [[convergence in probability|converges in probability]] to '''H'''<sub>AMISE</sub>.
 
===Smoothed cross validation===
Smoothed cross validation (SCV) belongs to a larger class of [[cross-validation (statistics)|cross validation]] techniques. The SCV estimator differs from the plug-in estimator in its second term
 
: <math>\operatorname{SCV}(\bold{H}) = n^{-1} |\bold{H}|^{-1/2} R(K) +
n^{-2} \sum_{i=1}^n \sum_{j=1}^n (K_{2\bold{H} +2\bold{G}} - 2K_{\bold{H} +2\bold{G}}
+ K_{2\bold{G}}) (\bold{X}_i - \bold{X}_j)</math>
 
Thus <math>\hat{\bold{H}}_{\operatorname{SCV}} = \operatorname{argmin}_{\bold{H} \in F} \, \operatorname{SCV} (\bold{H})</math> is the SCV selector.<ref name="DH2005">{{Cite journal| doi=10.1111/j.1467-9469.2005.00445.x | author1=Duong, T. | author2=Hazelton, M.L. | title=Cross validation bandwidth matrices for multivariate kernel density estimation | journal=Scandinavian Journal of Statistics | year=2005 | volume=32 | pages=485–506}}</ref><ref>{{Cite journal| doi=10.1007/BF01205233 | author1=Hall, P. | author2=Marron, J. | author3=Park, B. | title=Smoothed cross-validation | journal=Probability Theory and Related Fields | year=1992 | volume=92 | pages=1–20}}</ref>
These references also contain algorithms for optimal estimation of the pilot bandwidth matrix '''G''' and establish that <math>\hat{\bold{H}}_{\operatorname{SCV}}</math> converges in probability to '''H'''<sub>AMISE</sub>.
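
Both selectors are implemented in the R package <tt>ks</tt> used in the examples below; a minimal side-by-side call on the Old Faithful data (<code>Hpi</code> and <code>Hscv</code> are the package's plug-in and SCV selectors) is

<source lang="rsplus" style="overflow:auto;">
library(ks)
data(faithful)
Hpi(x=faithful)    # plug-in bandwidth matrix
Hscv(x=faithful)   # smoothed cross validation bandwidth matrix
</source>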
 
=== Rule of thumb ===
 
Silverman's rule of thumb suggests using <math>\sqrt{\mathbf{H}_{ii}} = \left(\frac{4}{d+2}\right)^{\frac{1}{d+4}} n^{\frac{-1}{d+4}} \sigma_i</math>, where <math>\sigma_i</math> is the standard deviation of the ''i''th variable and <math>\mathbf{H}_{ij} = 0, i\neq j</math>; this yields a diagonal bandwidth matrix.
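
A minimal R sketch of this rule follows; the helper name <code>H_rot</code> is ours, not a standard library function.

<source lang="rsplus" style="overflow:auto;">
H_rot <- function(data) {
  n <- nrow(data); d <- ncol(data)
  h <- (4 / (d + 2))^(1 / (d + 4)) * n^(-1 / (d + 4)) * apply(data, 2, sd)
  diag(h^2)                        # sqrt(H_ii) = h_i; off-diagonal entries zero
}

data(faithful)
H_rot(as.matrix(faithful))
</source>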
 
==Asymptotic analysis==
In the optimal bandwidth selection section, we introduced the MISE. Its construction relies on the [[expected value]] and the [[variance]] of the density estimator<ref name="WJ1995"/>{{rp|97}}
 
:<math>\operatorname{E} \hat{f}(\bold{x};\bold{H}) = K_\bold{H} * f (\bold{x}) = f(\bold{x}) + \frac{1}{2} m_2(K) \operatorname{tr} (\bold{H} \operatorname{D}^2 f(\bold{x})) + O(\operatorname{tr} \, \bold{H}^2)</math>
 
where * is the [[convolution]] operator between two functions, and
 
:<math>\operatorname{Var} \hat{f}(\bold{x};\bold{H}) = n^{-1} |\bold{H}|^{-1/2} R(K) + o(n^{-1} |\bold{H}|^{-1/2}).</math>
 
For these two expressions to be well-defined, we require that all elements of '''H''' tend to 0 and that ''n''<sup>−1</sup> |'''H'''|<sup>−1/2</sup> tends to 0 as ''n'' tends to infinity. Assuming these two conditions, we see that the expected value tends to the true density ''f'', i.e. the kernel density estimator is asymptotically [[Bias of an estimator|unbiased]], and that the variance tends to zero. Using the standard mean squared error decomposition
 
:<math>\operatorname{MSE} \, \hat{f}(\bold{x};\bold{H}) = \operatorname{Var} \hat{f}(\bold{x};\bold{H}) + [\operatorname{E} \hat{f}(\bold{x};\bold{H}) - f(\bold{x})]^2</math>
 
we have that the MSE tends to 0, implying that the kernel density estimator is (mean square) consistent and hence converges in probability to the true density ''f''. The rate of convergence of the MSE to 0 is necessarily the same as the MISE rate noted previously, ''O(n<sup>−4/(d+4)</sup>)'', hence the convergence rate of the density estimator to ''f'' is ''O<sub>p</sub>(n<sup>−2/(d+4)</sup>)'' where ''O<sub>p</sub>'' denotes [[Big O in probability notation|order in probability]]. This establishes pointwise convergence. The functional convergence is established similarly by considering the behaviour of the MISE, and noting that under sufficient regularity, integration does not affect the convergence rates.
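
This decomposition can be checked by simulation. The sketch below repeatedly evaluates a fixed-bandwidth normal-kernel estimate at the single point '''x'''<sub>0</sub> = (0, 0) for standard bivariate normal data; the empirical MSE and the sum of the empirical variance and squared bias agree up to simulation noise. All numerical choices here are arbitrary illustrations.

<source lang="rsplus" style="overflow:auto;">
set.seed(2)
x0 <- c(0, 0); H <- diag(0.2, 2)
f0 <- (2 * pi)^(-1)                           # true standard normal density at x0
Hinv <- solve(H); cH <- (2 * pi)^(-1) * det(H)^(-1 / 2)
fhat0 <- replicate(1000, {
  x <- matrix(rnorm(400), ncol = 2)           # sample of n = 200 bivariate points
  q <- rowSums((sweep(x, 2, x0) %*% Hinv) * sweep(x, 2, x0))
  mean(cH * exp(-q / 2))                      # normal-kernel estimate at x0
})
c(mse = mean((fhat0 - f0)^2),
  var_plus_bias2 = var(fhat0) + (mean(fhat0) - f0)^2)
</source>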
 
For the data-based bandwidth selectors considered, the target is the AMISE bandwidth matrix. We say that a data-based selector converges to the AMISE selector at relative rate ''O<sub>p</sub>(n<sup>−α</sup>)'', ''α'' > 0, if
 
:<math>\operatorname{vec} (\hat{\bold{H}} - \bold{H}_{\operatorname{AMISE}}) = O_p(n^{-\alpha}) \operatorname{vec} \bold{H}_{\operatorname{AMISE}}.</math>
 
It has been established that the plug-in and smoothed cross validation selectors (given a single pilot bandwidth '''G''') both converge at a relative rate of ''O<sub>p</sub>(n<sup>−2/(d+6)</sup>)'',<ref name="DH2005"/><ref>{{Cite journal| doi=10.1016/j.jmva.2004.04.004 | author1=Duong, T. | author2=Hazelton, M.L. | title=Convergence rates for unconstrained bandwidth matrix selectors in multivariate kernel density estimation | journal=Journal of Multivariate Analysis | year=2005 | volume=93 | pages=417–433}}</ref> i.e., both these data-based selectors are consistent estimators.
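
While the relative rate of convergence to '''H'''<sub>AMISE</sub> is difficult to visualise directly, the elementwise shrinkage '''H''' = ''O(n<sup>−2/(d+4)</sup>)'' noted above is easy to observe empirically; a sketch (assuming the <tt>ks</tt> package, with standard bivariate normal data) is

<source lang="rsplus" style="overflow:auto;">
library(ks)
set.seed(3)
for (n in c(100, 400, 1600))
  print(Hpi(x = matrix(rnorm(2 * n), ncol = 2)))   # entries shrink as n grows
</source>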
 
==Density estimation in R with a full bandwidth matrix==
[[File:Old Faithful Geyser KDE with plugin bandwidth.png|thumb|250px|alt=Old Faithful Geyser data kernel density estimate with plug-in bandwidth matrix.|Old Faithful Geyser data kernel density estimate with plug-in bandwidth matrix.]]
 
The [http://cran.r-project.org/web/packages/ks/index.html ks package]<ref>{{Cite journal| author1=Duong, T. | title=ks: Kernel density estimation and kernel discriminant analysis in R | journal=Journal of Statistical Software | year=2007 | volume=21(7) | url=http://www.jstatsoft.org/v21/i07}}</ref> in [[R programming language|R]] implements the plug-in and smoothed cross validation selectors (amongst others). We illustrate them on the Old Faithful dataset (included in the base distribution of R), which contains 272 records with two measurements each: the duration time of an eruption (minutes) and the waiting time until the next eruption (minutes) of the [[Old Faithful Geyser]] in Yellowstone National Park, USA.
 
The code fragment below computes the kernel density estimate with the plug-in bandwidth matrix <math>\hat{\bold{H}}_\operatorname{PI} = \begin{bmatrix}0.052 & 0.510 \\ 0.510 & 8.882\end{bmatrix}.</math> Again, the coloured contours correspond to the smallest region which contains the respective probability mass: red = 25%, orange + red = 50%, yellow + orange + red = 75%. To compute the SCV selector, <code>Hpi</code> is replaced with <code>Hscv</code>. The result is not displayed here since it is very similar to the plug-in estimate for this example.
 
<source lang="rsplus" style="overflow:auto;">
library(ks)
data(faithful)
H <- Hpi(x=faithful)
fhat <- kde(x=faithful, H=H)
plot(fhat, display="filled.contour2")
points(faithful, cex=0.5, pch=16)
</source>
 
==Density estimation in R with a diagonal bandwidth matrix==
 
[[File:Old faithful pdf.png|thumb|250px|alt=Old Faithful Geyser data kernel density estimate with diagonal bandwidth matrix.|Old Faithful Geyser data kernel density estimate with diagonal bandwidth matrix.]]
 
This example is again based on the Old Faithful Geyser data, but this time we use the [http://cran.r-project.org/web/packages/np/index.html R np package], which employs automatic (data-driven) bandwidth selection for a diagonal bandwidth matrix; see the [http://cran.r-project.org/web/packages/np/vignettes/np.pdf np vignette] for an introduction to the np package. The figure shows the joint density estimate using a second order Gaussian kernel.
 
'''R script for the example'''
 
The following commands of the R programming language use the
<tt>npudens()</tt> function to deliver optimal smoothing and to create
the figure given above. They can be entered at the R command prompt
via copy and paste.
 
<source lang="rsplus" style="overflow:auto;">
library(np)
library(datasets)
data(faithful)
f <- npudens(~eruptions+waiting,data=faithful)
plot(f,view="fixed",neval=100,phi=30,main="",xtrim=-0.2)
</source>
 
Kernel density estimates with diagonal bandwidth selectors can also be computed with the <tt>ks</tt> library, using the <tt>Hpi.diag()</tt> function. To produce a 3D plot similar to that from <tt>npudens()</tt>, the option <tt>display="persp"</tt> is added.
 
<source lang="rsplus" style="overflow:auto;">
library(ks)
data(faithful)
H <- Hpi.diag(x=faithful)
fhat <- kde(x=faithful, H=H)
plot(fhat, display="persp")
</source>
 
==Density estimation in Matlab with a diagonal bandwidth matrix==
 
[[File:Bivariate example.png|thumb|250px|alt=Kernel density estimate with diagonal bandwidth for synthetic normal mixture data. |Kernel density estimate with diagonal bandwidth for synthetic normal mixture data.]]
 
We consider estimating the density of the Gaussian mixture
{{nowrap|(4''π'')<sup>−1</sup>&thinsp;exp(−{{frac|2}} (''x''<sub>1</sub><sup>2</sup> + ''x''<sub>2</sub><sup>2</sup>))
+ (4''π'')<sup>−1</sup>&thinsp;exp(−{{frac|2}} ((''x''<sub>1</sub> − 3.5)<sup>2</sup> + ''x''<sub>2</sub><sup>2</sup>))}},
from 1,000 randomly generated points (500 from each component). We employ the Matlab routine for
[http://www.mathworks.com/matlabcentral/fileexchange/17204 2-dimensional data].
The routine is an automatic bandwidth selection method specifically designed
for a second order Gaussian kernel.<ref>{{Cite journal
| author1 = Botev, Z.I.
| author2 = Grotowski, J.F.
| author3 = Kroese, D.P.
| title = Kernel density estimation via diffusion
| journal = [[Annals of Statistics]]
| volume =  38
| issue = 5
| pages = 2916–2957
| year = 2010
| doi = 10.1214/10-AOS799
}}
</ref>
The figure shows the joint density estimate that results from using the automatically selected bandwidth.
 
'''Matlab script for the example'''
 
Type the following commands in Matlab after
[http://www.mathworks.com/matlabcentral/fileexchange/17204 downloading]
and saving the function kde2d.m
in the current directory.
 
<source lang="matlab" style="overflow:auto;">
clear all 
% generate synthetic data
data=[randn(500,2);
      randn(500,1)+3.5, randn(500,1);];
  % call the routine, which has been saved in the current directory
    [bandwidth,density,X,Y]=kde2d(data);
  % plot the data and the density estimate
    contour3(X,Y,density,50), hold on
    plot(data(:,1),data(:,2),'r.','MarkerSize',5)
</source>
 
===Alternative optimality criteria===
The MISE is the expected integrated ''L<sub>2</sub>'' distance between the density estimate and the true density function ''f''. It is the most widely used criterion, mostly due to its tractability; most software implements MISE-based bandwidth selectors.
There are alternative optimality criteria, which attempt to cover cases where MISE is not an appropriate measure.<ref name="simonoff1996"/>{{rp|34-37,78}} The equivalent ''L<sub>1</sub>'' measure, Mean Integrated Absolute Error, is
 
: <math>\operatorname{MIAE} (\bold{H}) = \operatorname{E}\, \int |\hat{f}_\bold{H} (\bold{x}) - f(\bold{x})| \, d\bold{x}.</math>
 
Its mathematical analysis is considerably more difficult than that of the MISE. In practice, the gain appears not to be significant.<ref>{{cite journal | author1=Hall, P. | author2=Wand, M.P. | title=Minimizing L<sub>1</sub> distance in nonparametric density estimation | journal = Journal of Multivariate Analysis | year=1988 | volume=26 | pages=59–88 | doi=10.1016/0047-259X(88)90073-5}}</ref> The ''L<sub>∞</sub>'' norm is the Mean Uniform Absolute Error
 
: <math>\operatorname{MUAE} (\bold{H}) = \operatorname{E}\, \operatorname{sup}_{\bold{x}} |\hat{f}_\bold{H} (\bold{x}) - f(\bold{x})|</math>
 
which has been investigated only briefly.<ref>{{cite journal | author1=Cao, R. | author2=Cuevas, A. | author3=Manteiga, W.G.| title=A comparative study of several smoothing methods in density estimation | journal = Computational Statistics and Data Analysis | year=1994 | volume=17 | pages=153–176 | doi=10.1016/0167-9473(92)00066-Z}}</ref> Likelihood error criteria include those based on the Mean [[Kullback-Leibler distance]]
 
:  <math>\operatorname{MKL} (\bold{H}) = \int f(\bold{x}) \, \operatorname{log} [f(\bold{x})] \, d\bold{x} - \operatorname{E} \int f(\bold{x}) \, \operatorname{log} [\hat{f}(\bold{x};\bold{H})] \, d\bold{x}</math>
 
and the Mean [[Hellinger distance]]
 
: <math>\operatorname{MH} (\bold{H}) = \operatorname{E}  \int (\hat{f}_\bold{H} (\bold{x})^{1/2} - f(\bold{x})^{1/2})^2 \, d\bold{x} .</math>
 
The KL can be estimated using a cross-validation method, although KL cross-validation selectors can be sub-optimal even if they remain [[Consistent estimator|consistent]] for bounded density functions.<ref>{{cite journal | author=Hall, P. | title=On Kullback-Leibler loss and density estimation | journal=Annals of Statistics | volume=15 | year=1989 | pages=589–605 | doi=10.1214/aos/1176350606}}</ref> MH selectors have been briefly examined in the literature.<ref>{{cite journal | author1=Ahmad, I.A. | author2=Mugdadi, A.R. | title=Weighted Hellinger distance as an error criterion for bandwidth selection in kernel estimation | journal=Journal of Nonparametric Statistics | volume=18 | year=2006 | pages=215–226 | doi=10.1080/10485250600712008}}</ref>
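
For a single sample and a known target density, grid-based approximations of these criteria (without the expectation) are straightforward to compute; a sketch assuming the <tt>ks</tt> package, with a standard bivariate normal target, is

<source lang="rsplus" style="overflow:auto;">
library(ks)
set.seed(4)
x <- matrix(rnorm(400), ncol = 2)                # n = 200 points from the target
g <- as.matrix(expand.grid(seq(-3, 3, length = 51),
                           seq(-3, 3, length = 51)))
fhat <- kde(x = x, H = Hpi(x), eval.points = g)  # estimate on the grid
est <- fhat$estimate
tru <- dnorm(g[, 1]) * dnorm(g[, 2])             # true density on the grid
cell <- (6 / 50)^2                               # grid cell area
c(IAE  = sum(abs(est - tru)) * cell,             # L1 error, cf. MIAE
  ISE  = sum((est - tru)^2) * cell,              # L2 error, cf. MISE
  UAE  = max(abs(est - tru)),                    # sup error, cf. MUAE
  Hell = sum((sqrt(est) - sqrt(tru))^2) * cell)  # squared Hellinger distance
</source>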
 
All these optimality criteria are distance-based measures, and do not always correspond to more intuitive notions of closeness, so more visual criteria have been developed in response to this concern.<ref>{{cite journal | author1=Marron, J.S. | author2=Tsybakov, A. | title=Visual error criteria for qualitative smoothing | journal = Journal of the American Statistical Association | year=1996 | volume=90 | pages=499–507 | url=http://www.jstor.org/stable/2291060}}</ref>
 
==References==
{{Reflist}}
 
==External links==
* [http://www.mvstat.net/tduong/research mvstat.net] A collection of peer-reviewed articles covering the mathematical details of multivariate kernel density estimation and their bandwidth selectors.
* [http://libagf.sf.net libagf] A [[C++]] library for multivariate, [[variable bandwidth kernel density estimation]].
 
==See also==
* [[Kernel density estimation]]&nbsp;&ndash; univariate kernel density estimation.
* [[Variable kernel density estimation]]&nbsp;&ndash; estimation of multivariate densities using kernels with variable bandwidth.
 
[[Category:Estimation of densities]]
[[Category:Non-parametric statistics]]
[[Category:Computational statistics]]
[[Category:Multivariate statistics]]
