In [[data mining]], '''hierarchical clustering''' is a method of [[cluster analysis]] which seeks to build a [[hierarchy]] of clusters. Strategies for hierarchical clustering generally fall into two types: {{Cn|date=December 2013}}
*'''Agglomerative''': This is a "bottom up" approach: each observation starts in its own cluster, and pairs of clusters are merged as one moves up the hierarchy.
*'''Divisive''': This is a "top down" approach: all observations start in one cluster, and splits are performed recursively as one moves down the hierarchy.
 
In general, the merges and splits are determined in a [[greedy algorithm|greedy]] manner. The results of hierarchical clustering are usually presented in a [[dendrogram]].
 
In the general case, the complexity of agglomerative clustering is <math>\mathcal{O}(n^3)</math>, which makes it too slow for large data sets. Divisive clustering with an exhaustive search is <math>\mathcal{O}(2^n)</math>, which is even worse. However, for some special cases, optimally efficient agglomerative methods (of complexity <math>\mathcal{O}(n^2)</math>) are known: SLINK<ref name="SLINK">{{cite journal | author=R. Sibson | title=SLINK: an optimally efficient algorithm for the single-link cluster method | journal=The Computer Journal | volume=16 | issue=1 | pages=30–34 | year=1973 | publisher=British Computer Society | url=http://www.cs.gsu.edu/~wkim/index_files/papers/sibson.pdf | doi=10.1093/comjnl/16.1.30}}</ref> for single-linkage and CLINK<ref>{{cite journal | author=D. Defays | title=An efficient algorithm for a complete link method | journal=The Computer Journal | volume=20 | issue=4 | pages=364–366 | year=1977 | publisher=British Computer Society | url=http://comjnl.oxfordjournals.org/content/20/4/364.abstract | doi=10.1093/comjnl/20.4.364}}</ref> for complete-linkage clustering.
 
== Cluster dissimilarity ==
In order to decide which clusters should be combined (for agglomerative), or where a cluster should be split (for divisive), a measure of dissimilarity between sets of observations is required. In most methods of hierarchical clustering, this is achieved by use of an appropriate [[metric (mathematics)|metric]] (a measure of [[distance]] between pairs of observations), and a linkage criterion which specifies the dissimilarity of sets as a function of the pairwise distances of observations in the sets.
 
=== Metric ===
{{See|metric (mathematics)}}
 
The choice of an appropriate metric will influence the shape of the clusters, as some elements may be close to one another according to one distance and farther away according to another. For example, in a 2-dimensional space, the distance between the point (1,0) and the origin (0,0) is always 1 according to the usual norms, but the distance between the point (1,1) and the origin (0,0) can be 2, <math>\scriptstyle\sqrt{2}</math> or 1 under Manhattan distance, Euclidean distance or maximum distance respectively.
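
These figures can be checked with a few lines of [[Python (programming language)|Python]]; the following minimal sketch assumes [[NumPy]] is available and uses its standard vector-norm routine.

<syntaxhighlight lang="python">
import numpy as np

a = np.array([1.0, 1.0])   # the point (1, 1)
o = np.array([0.0, 0.0])   # the origin

print(np.linalg.norm(a - o, ord=1))       # Manhattan distance: 2.0
print(np.linalg.norm(a - o, ord=2))       # Euclidean distance: 1.414... = sqrt(2)
print(np.linalg.norm(a - o, ord=np.inf))  # maximum distance: 1.0
</syntaxhighlight>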
 
Some commonly used metrics for hierarchical clustering are:<ref>{{cite web | title=The DISTANCE Procedure: Proximity Measures | url=http://support.sas.com/documentation/cdl/en/statug/63033/HTML/default/statug_distance_sect016.htm | work=SAS/STAT 9.2 Users Guide | publisher= [[SAS Institute]] | date= | accessdate=2009-04-26}}</ref>
{|class="wikitable"
! Names
! Formula
|-
| [[Euclidean distance]]
| <math> \|a-b \|_2 = \sqrt{\sum_i (a_i-b_i)^2} </math>
|-
| squared Euclidean distance
| <math> \|a-b \|_2^2 = \sum_i (a_i-b_i)^2 </math>
|-
| [[Manhattan distance]]
| <math> \|a-b \|_1 = \sum_i |a_i-b_i| </math>
|-
| [[Uniform norm|maximum distance]]
| <math> \|a-b \|_\infty = \max_i |a_i-b_i| </math>
|-
| [[Mahalanobis distance]]
| <math> \sqrt{(a-b)^{\top}S^{-1}(a-b)} </math> where ''S'' is the [[covariance matrix]]
|-
| [[cosine similarity]]
| <math> \frac{a \cdot b}{\|a\| \|b\|} </math>
|-
|}
For text or other non-numeric data, metrics such as the [[Hamming distance]] or [[Levenshtein distance]] are often used.
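
Libraries such as [[SciPy]] expose many of these metrics by name. The following minimal sketch (illustrative data, assuming NumPy and SciPy are installed) builds the full pairwise distance matrix of a small data set under several of the metrics listed above; note that SciPy's <code>cosine</code> metric is the cosine ''distance'', i.e. one minus the cosine similarity.

<syntaxhighlight lang="python">
import numpy as np
from scipy.spatial.distance import pdist, squareform

# Four illustrative observations in two dimensions
X = np.array([[1.0, 0.0],
              [2.0, 0.0],
              [2.0, 1.0],
              [5.0, 4.0]])

for metric in ("euclidean", "sqeuclidean", "cityblock", "chebyshev", "cosine"):
    # pdist returns a condensed (upper-triangular) vector of pairwise distances;
    # squareform expands it into the full symmetric distance matrix
    D = squareform(pdist(X, metric=metric))
    print(metric)
    print(D.round(3))
</syntaxhighlight>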
 
A review of cluster analysis in health psychology research found that the most common distance measure in published studies in that research area is the Euclidean distance or the squared Euclidean distance.{{Citation needed|date=April 2009}}
 
=== Linkage criteria ===
The linkage criterion determines the distance between sets of observations as a function of the pairwise distances between observations.
 
Some commonly used linkage criteria between two sets of observations ''A'' and ''B'' are:<ref>{{cite web | title=The CLUSTER Procedure: Clustering Methods | url=http://support.sas.com/documentation/cdl/en/statug/63033/HTML/default/statug_cluster_sect012.htm | work=SAS/STAT 9.2 Users Guide | publisher= [[SAS Institute]] | date= | accessdate=2009-04-26}}</ref><ref>Székely, G. J. and Rizzo, M. L. (2005) Hierarchical clustering via Joint Between-Within Distances: Extending Ward's Minimum Variance Method, Journal of Classification 22, 151-183.</ref>
{|class="wikitable"
! Names
! Formula
|-
| Maximum or [[complete linkage clustering]]
| <math> \max \, \{\, d(a,b) : a \in A,\, b \in B \,\}. </math>
|-
| Minimum or [[single-linkage clustering]]
| <math> \min \, \{\, d(a,b) : a \in A,\, b \in B \,\}. </math>
|-
| Mean or average linkage clustering, or [[UPGMA]]
| <math> \frac{1}{|A| |B|} \sum_{a \in A }\sum_{ b \in B} d(a,b). </math>
|-
| [[Energy distance|Minimum energy clustering]]
| <math>  \frac {2}{nm}\sum_{i,j=1}^{n,m} \|a_i- b_j\|_2 - \frac {1}{n^2}\sum_{i,j=1}^{n} \|a_i-a_j\|_2 - \frac{1}{m^2}\sum_{i,j=1}^{m} \|b_i-b_j\|_2 </math>
|}
where ''d'' is the chosen metric.  Other linkage criteria include:
 
* The sum of all intra-cluster variance.
* The increase in variance for the cluster being merged ([[Ward's method|Ward's criterion]]).<ref name="wards method">{{cite journal
|doi=10.2307/2282967
|last=Ward |first=Joe H.
|title=Hierarchical Grouping to Optimize an Objective Function
|journal=Journal of the American Statistical Association
|volume=58 |issue=301 |year=1963 |pages=236&ndash;244
|mr=0148188
|jstor=2282967
}}</ref>
* The probability that candidate clusters spawn from the same distribution function (V-linkage).
* The product of in-degree and out-degree on a k-nearest-neighbor graph (graph degree linkage).<ref>Zhang, et al. "Graph degree linkage: Agglomerative clustering on a directed graph." 12th European Conference on Computer Vision, Florence, Italy, October 7-13, 2012. http://arxiv.org/abs/1208.5092</ref>
* The increment of some cluster descriptor (i.e., a quantity defined for measuring the quality of a cluster) after merging two clusters.<ref>Zhang, et al. "Agglomerative clustering via maximum incremental path integral." Pattern Recognition (2013).</ref><ref>Zhao, and Tang. "Cyclizing clusters via zeta function of a graph." Advances in Neural Information Processing Systems. 2008.</ref><ref>Ma, et al. "Segmentation of multivariate mixed data via lossy data coding and compression." IEEE Transactions on Pattern Analysis and Machine Intelligence, 29(9) (2007): 1546-1562.</ref>
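
As a rough illustration of the first three criteria in the table, the following sketch (made-up points, assuming NumPy and SciPy are available) evaluates the complete-, single- and average-linkage distances between two small clusters ''A'' and ''B'' directly from their pairwise Euclidean distances.

<syntaxhighlight lang="python">
import numpy as np
from scipy.spatial.distance import cdist

# Two small clusters of 2-D points (illustrative values only)
A = np.array([[0.0, 0.0], [0.0, 1.0], [1.0, 0.0]])
B = np.array([[4.0, 4.0], [5.0, 4.0]])

# all pairwise Euclidean distances d(a, b) with a in A, b in B
D = cdist(A, B, metric="euclidean")

print("complete linkage:", D.max())    # maximum pairwise distance
print("single linkage:  ", D.min())    # minimum pairwise distance
print("average linkage: ", D.mean())   # mean pairwise distance (UPGMA)
</syntaxhighlight>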
 
== Discussion ==
Hierarchical clustering has the distinct advantage that any valid measure of distance can be used. In fact, the observations themselves are not required: all that is used is a matrix of distances.
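
For example, SciPy's hierarchical clustering routine can be driven entirely by a precomputed condensed distance matrix; in the minimal sketch below (illustrative data, assuming SciPy is installed) the clustering step never sees the raw observations, only their pairwise distances.

<syntaxhighlight lang="python">
import numpy as np
from scipy.spatial.distance import pdist
from scipy.cluster.hierarchy import linkage

# Illustrative observations; any objects would do, as long as pairwise
# dissimilarities between them can be computed
X = np.array([[0.0, 0.0], [0.2, 0.1], [4.0, 4.0],
              [4.1, 3.9], [9.0, 0.0]])

# Condensed distance matrix (the upper triangle as a flat vector);
# the clustering step below sees only these distances, not X itself
distances = pdist(X, metric="euclidean")

Z = linkage(distances, method="average")
print(Z)   # each row records one merge: cluster ids, distance, new cluster size
</syntaxhighlight>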
 
== Example for Agglomerative Clustering ==
For example, suppose this data is to be clustered, and the [[Euclidean distance]] is the [[Metric (mathematics)|distance metric]].
 
[[Image:Clusters.svg|frame|none|Raw data]]

The hierarchical clustering [[dendrogram]] would be as such:

[[Image:Hierarchical clustering simple diagram.svg|frame|none|Traditional representation]]

Cutting the tree at a given height will give a partitioning clustering at a selected precision. In this example, cutting after the second row of the dendrogram will yield clusters {a} {b c} {d e} {f}. Cutting after the third row will yield clusters {a} {b c} {d e f}, which is a coarser clustering, with a smaller number of larger clusters.
 
This method builds the hierarchy from the individual elements by progressively merging clusters. In our example, we have six elements {a} {b} {c} {d} {e} and {f}. The first step is to determine which elements to merge in a cluster. Usually, we want to take the two closest elements, according to the chosen distance.
 
Optionally, one can also construct a [[distance matrix]] at this stage, where the number in the ''i''-th row ''j''-th column is the distance between the ''i''-th and ''j''-th elements. Then, as clustering progresses, rows and columns are merged as the clusters are merged and the distances updated. This is a common way to implement this type of clustering, and has the benefit of caching distances between clusters. A simple agglomerative clustering algorithm is described in the [[single-linkage clustering]] page; it can easily be adapted to different types of linkage (see below).
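
A very small sketch of this distance-matrix scheme is given below: a naïve <math>\mathcal{O}(n^3)</math> single-linkage implementation in Python (illustrative only, assuming NumPy and SciPy for the initial distance matrix); production code would normally use an optimised routine such as SLINK or SciPy's <code>linkage</code>.

<syntaxhighlight lang="python">
import numpy as np
from scipy.spatial.distance import pdist, squareform

def naive_single_linkage(X):
    """Repeatedly merge the two closest clusters, updating a distance
    matrix with the single-linkage (minimum distance) rule."""
    n = len(X)
    D = squareform(pdist(X))                 # n x n matrix of pairwise distances
    clusters = {i: [i] for i in range(n)}    # cluster id -> member indices
    merges = []                              # records (members_a, members_b, distance)

    while len(clusters) > 1:
        ids = list(clusters)
        # the pair of distinct clusters at minimum distance
        i, j = min(((p, q) for p in ids for q in ids if p < q),
                   key=lambda pq: D[pq[0], pq[1]])
        merges.append((clusters[i], clusters[j], D[i, j]))

        # merge j into i; under single linkage the new distance to any other
        # cluster k is the minimum of the two old distances
        for k in ids:
            if k not in (i, j):
                D[i, k] = D[k, i] = min(D[i, k], D[j, k])
        clusters[i] = clusters[i] + clusters[j]
        del clusters[j]
    return merges

# Illustrative data: six 2-D points standing in for the elements a..f
X = np.array([[0.0, 0.0], [0.3, 0.1], [1.0, 1.2],
              [4.0, 4.0], [4.2, 4.1], [9.0, 9.0]])
for members_a, members_b, dist in naive_single_linkage(X):
    print(members_a, "+", members_b, "at distance", round(dist, 3))
</syntaxhighlight>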
 
Suppose we have merged the two closest elements ''b'' and ''c''. We now have the clusters {''a''}, {''b'', ''c''}, {''d''}, {''e''} and {''f''}, and want to merge them further. To do that, we need to know the distance between {''a''} and {''b'', ''c''}, and therefore need to define the distance between two clusters.
Usually the distance between two clusters <math>\mathcal{A}</math> and <math>\mathcal{B}</math> is one of the following:
* The maximum distance between elements of each cluster (also called [[complete-linkage clustering]]):
::<math> \max \{\, d(x,y) : x \in \mathcal{A},\, y \in \mathcal{B}\,\}. </math>
* The minimum distance between elements of each cluster (also called [[single-linkage clustering]]):
::<math> \min \{\, d(x,y) : x \in \mathcal{A},\, y \in \mathcal{B} \,\}. </math>
* The mean distance between elements of each cluster (also called average linkage clustering, used e.g. in [[UPGMA]]):
::<math> {1 \over {|\mathcal{A}|\cdot|\mathcal{B}|}}\sum_{x \in \mathcal{A}}\sum_{ y \in \mathcal{B}} d(x,y). </math>
* The sum of all intra-cluster variance.
* The increase in variance for the cluster being merged ([[Ward's method]]<ref name="wards method"/>).
* The probability that candidate clusters spawn from the same distribution function (V-linkage).
 
Each agglomeration occurs at a greater distance between clusters than the previous agglomeration, and one can decide to stop clustering either when the clusters are too far apart to be merged (distance criterion) or when there is a sufficiently small number of clusters (number criterion).
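
Both stopping rules amount to cutting the dendrogram at some level, and libraries typically expose them directly; the sketch below (illustrative data, assuming SciPy is available) extracts flat clusters from a complete-linkage hierarchy once by a distance threshold and once by requesting a fixed number of clusters.

<syntaxhighlight lang="python">
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

# Six illustrative 2-D points, loosely playing the role of a..f
X = np.array([[0.0, 0.0], [0.3, 0.1], [1.0, 1.0],
              [4.0, 4.0], [4.2, 4.1], [9.0, 9.0]])

Z = linkage(X, method="complete")            # complete-linkage hierarchy

# Distance criterion: stop merging once clusters are farther apart than 2.0
labels_by_distance = fcluster(Z, t=2.0, criterion="distance")

# Number criterion: keep merging until only 3 clusters remain
labels_by_count = fcluster(Z, t=3, criterion="maxclust")

print(labels_by_distance, labels_by_count)
</syntaxhighlight>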
 
== Software ==
=== Open Source Frameworks ===
* [http://bonsai.hgc.jp/~mdehoon/software/cluster/ Cluster 3.0] provides a [[Graphical User Interface]] for accessing different clustering routines and is available for Windows, Mac OS X, Linux and Unix.
* [[Environment for DeveLoping KDD-Applications Supported by Index-Structures|ELKI]] includes multiple hierarchical clustering algorithms, various linkage strategies and also includes the efficient SLINK<ref name="SLINK" /> algorithm, flexible cluster extraction from dendrograms and various other [[cluster analysis]] algorithms.
* [[GNU Octave|Octave]], the [[GNU]] analog to [[MATLAB]], implements hierarchical clustering in its [http://octave.sourceforge.net/statistics/function/linkage.html linkage] function.
* [[Orange (software)|Orange]], a free data mining software suite, module [http://www.ailab.si/orange/doc/modules/orngClustering.htm orngClustering] for scripting in [[Python (programming language)|Python]], or cluster analysis through visual programming.
* [[R (programming language)|R]] has several functions for hierarchical clustering: see [http://cran.r-project.org/web/views/Cluster.html CRAN Task View: Cluster Analysis & Finite Mixture Models] for more information.
* [[scikit-learn]] implements hierarchical clustering based on the [[Ward's method|Ward algorithm]] only.
* [[Weka (machine learning)|Weka]] includes hierarchical cluster analysis.
 
=== Standalone implementations ===
* [[CrimeStat]] implements two hierarchical clustering routines, a nearest neighbor (Nnh) and a risk-adjusted (Rnnh).
* [http://code.google.com/p/figue/ figue] is a [[JavaScript]] package that implements some agglomerative clustering functions (single-linkage, complete-linkage, average-linkage) and functions to visualize clustering output (e.g. dendrograms).
* [http://code.google.com/p/scipy-cluster/ hcluster] is a [[Python (programming language)|Python]] implementation, based on [[NumPy]], which supports hierarchical clustering and plotting.
* [http://www.semanticsearchart.com/researchHAC.html Hierarchical Agglomerative Clustering] is implemented as a C# Visual Studio project; it includes real text file processing and construction of a document-term matrix with stop-word filtering and stemming.
* [http://deim.urv.cat/~sgomez/multidendrograms.php MultiDendrograms] An [[open source]] [[Java (programming language)|Java]] application for variable-group agglomerative hierarchical clustering, with a [[graphical user interface]].
* [http://www.mathworks.com/matlabcentral/fileexchange/38018-graph-agglomerative-clustering-gac-toolbox Graph Agglomerative Clustering (GAC) toolbox] implements several graph-based agglomerative clustering algorithms.
 
=== Commercial ===
* [[MathWorks|MATLAB]] includes hierarchical cluster analysis.
* [[SAS System|SAS]] includes hierarchical cluster analysis.
* [[Mathematica]] includes a Hierarchical Clustering Package.
 
== See also ==
* [[Statistical distance]]
* [[Cluster analysis]]
* [[CURE data clustering algorithm]]
* [[Dendrogram]]
* [[Determining the number of clusters in a data set]]
* [[Hierarchical clustering of networks]]
* [[Nearest-neighbor chain algorithm]]
* [[Numerical taxonomy]]
* [[OPTICS algorithm]]
* [[Nearest neighbor search]]
* [[Locality-sensitive hashing]]
 
== Notes ==
{{reflist}}
 
== References and further reading ==
*{{cite book
|last1=Kaufman|first1=L.|last2=Rousseeuw|first2=P.J.
|year=1990
|title=Finding Groups in Data: An Introduction to Cluster Analysis |edition=1
|isbn= 0-471-87876-6
|publisher=John Wiley |location= New York
}}
*{{cite book
|last1=Hastie|first1=Trevor|authorlink1=Trevor Hastie|last2=Tibshirani|first2=Robert|authorlink2=Robert Tibshirani|last3=Friedman|first3=Jerome
|year=2009
|title=The Elements of Statistical Learning |edition=2nd
|isbn=0-387-84857-6
|url=http://www-stat.stanford.edu/~tibs/ElemStatLearn/ |format=PDF |accessdate=2009-10-20
|publisher=Springer |location=New York
|chapter=14.3.12 Hierarchical clustering |pages=520&ndash;528
}}
*{{Cite book | last1=Press | first1=WH | last2=Teukolsky | first2=SA | last3=Vetterling | first3=WT | last4=Flannery | first4=BP | year=2007 | title=Numerical Recipes: The Art of Scientific Computing | edition=3rd | publisher=Cambridge University Press |  publication-place=New York | isbn=978-0-521-88068-8 | chapter=Section 16.4. Hierarchical Clustering by Phylogenetic Trees | chapter-url=http://apps.nrbook.com/empanel/index.html#pg=868}}
 
{{DEFAULTSORT:Hierarchical Clustering}}
[[Category:Network analysis]]
[[Category:Data clustering algorithms]]
