k-medoids
{{Lowercase}}
The '''{{math|<var>k</var>}}-medoids algorithm''' is a [[data clustering|clustering]] [[algorithm]] related to the [[k-means|{{math|<var>k</var>}}-means]] algorithm and the medoidshift algorithm. Both the {{math|<var>k</var>}}-means and {{math|<var>k</var>}}-medoids algorithms are partitional (they break the dataset up into groups) and both attempt to minimize the distance between points labeled to be in a cluster and a point designated as the center of that cluster. In contrast to the {{math|<var>k</var>}}-means algorithm, {{math|<var>k</var>}}-medoids chooses datapoints as centers ([[medoids]] or exemplars) and works with an arbitrary matrix of distances between datapoints instead of the <math>l_2</math> norm. This method was proposed in 1987<ref>Kaufman, L. and Rousseeuw, P.J. (1987), Clustering by means of Medoids, in Statistical Data Analysis Based on the <math>L_1</math>–Norm and Related Methods, edited by Y. Dodge, North-Holland, 405–416.</ref> for the <math>l_1</math> norm and other distances.
 
{{math|<var>k</var>}}-medoid is a classical partitioning technique of clustering that partitions the data set of {{math|<var>n</var>}} objects into {{math|<var>k</var>}} clusters, with {{math|<var>k</var>}} known ''a priori''. A useful tool for determining {{math|<var>k</var>}} is the [[silhouette (clustering)|silhouette]].
 
It is more robust to noise and outliers than [[k-means|{{math|<var>k</var>}}-means]] because it minimizes a sum of pairwise dissimilarities instead of a sum of squared Euclidean distances.
 
A [[medoid]] can be defined as the object of a cluster whose average dissimilarity to all the objects in the cluster is minimal, i.e. it is the most centrally located point in the cluster.
 
The most common realisation of {{math|<var>k</var>}}-medoid clustering is the '''Partitioning Around Medoids (PAM)''' algorithm, which proceeds as follows (a minimal sketch in Python is given after the steps):<ref>{{cite book |author=Sergios Theodoridis & Konstantinos Koutroumbas |title=Pattern Recognition 3rd ed. |pages=635 |year=2006}}</ref>
 
# Initialize: randomly select {{math|<var>k</var>}} of the {{math|<var>n</var>}} data points as the medoids
# Associate each data point to the closest medoid. ("closest" here is defined using any valid [[Metric space|distance metric]], most commonly [[Euclidean distance]], [[Manhattan distance]] or [[Minkowski distance]])
# For each medoid {{math|<var>m</var>}}
## For each non-medoid data point {{math|<var>o</var>}}
### Swap {{math|<var>m</var>}} and {{math|<var>o</var>}} and compute the total cost of the configuration
# Select the configuration with the lowest cost.
# Repeat steps 2 to 4 until there is no change in the medoids.
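
The following is a minimal, illustrative sketch of PAM in Python. The helper names (<code>manhattan</code>, <code>total_cost</code>, <code>pam</code>) and the default use of the Manhattan distance are assumptions made for this article's example, not part of any particular library.

<syntaxhighlight lang="python">
import random

def manhattan(a, b):
    # Manhattan (l1) distance between two points of equal dimension
    return sum(abs(x - y) for x, y in zip(a, b))

def total_cost(points, medoids, dist=manhattan):
    # Sum over all points of the distance to the nearest medoid
    return sum(min(dist(p, m) for m in medoids) for p in points)

def pam(points, k, dist=manhattan, seed=None):
    # points: a list of tuples; k: the number of clusters (known a priori)
    rng = random.Random(seed)
    # Step 1: randomly select k of the n data points as the initial medoids
    medoids = rng.sample(points, k)
    best = total_cost(points, medoids, dist)
    improved = True
    while improved:                      # Step 5: repeat until no change
        improved = False
        # Steps 3-4: for every (medoid, non-medoid) pair, try the swap and
        # keep the configuration with the lowest total cost
        for i in range(k):
            for o in points:
                if o in medoids:
                    continue
                candidate = medoids[:i] + [o] + medoids[i + 1:]
                cost = total_cost(points, candidate, dist)
                if cost < best:
                    best, medoids, improved = cost, candidate, True
    # Step 2: associate each data point with its closest medoid
    clusters = {m: [p for p in points
                    if min(medoids, key=lambda c: dist(p, c)) == m]
                for m in medoids}
    return medoids, clusters, best
</syntaxhighlight>

Applied to the ten-object data set below, this sketch carries out the same kind of swap search that is worked through by hand in the demonstration.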
 
==Demonstration of PAM==
Cluster the following data set of ten objects into two clusters, i.e. {{math|<var>k</var> {{=}} 2}}.
 
Consider a data set of ten objects as follows:
 
[[Image:kmedoid1.jpg|thumb|right|350 px|Figure 1.1 – distribution of the data]]
 
{| class="wikitable" width=20%
| width=33% | X<sub>1</sub>
| width=33% | 2
| width=33% | 6
|-
| X<sub>2</sub> || 3 || 4
|-
| X<sub>3</sub> || 3 || 8
|-
| X<sub>4</sub> || 4 || 7
|-
| X<sub>5</sub> || 6 || 2
|-
| X<sub>6</sub> || 6 || 4
|-
| X<sub>7</sub> || 7 || 3
|-
| X<sub>8</sub> || 7 || 4
|-
| X<sub>9</sub> || 8 || 5
|-
| X<sub>10</sub> || 7 || 6
|}
 
{{-}}
 
===Step 1===
[[Image:kmedoid2.jpg|thumb|right|350 px|Figure 1.2 – clusters after step 1]]
 
Initialize {{math|<var>k</var>}} centers. 
 
Let us assume {{math|c<sub>1</sub> {{=}} (3,4)}} and {{math|c<sub>2</sub> {{=}} (7,4)}}.

So here {{math|c<sub>1</sub>}} and {{math|c<sub>2</sub>}} are selected as medoids.

Calculate the distances so as to associate each data object with its nearest medoid. Cost is calculated using the [[Manhattan distance]] ([[Minkowski distance]] metric with {{math|<var>r</var> {{=}} 1}}). Costs to the nearest medoid are shown in bold in the table.
 
{| class="wikitable" width=40%
| colspan=1 align="center" | {{math|''i''}}
| colspan=2 align="center" | {{math|c<sub>1</sub>}}
| colspan=2 | Data objects ({{math|X<sub>''i''</sub>}})
| Cost (distance)
|-
| width=20% |  1
| width=20% |  3
| width=20% |  4
| width=20% |  2
| width=20% |  6
| width=20% |  '''3'''
|-
| 3 || 3 || 4 || 3 || 8 || '''4'''
|-
| 4 || 3 || 4 || 4 || 7 || '''4'''
|-
| 5 || 3 || 4 || 6 || 2 || 5
|-
| 6 || 3 || 4 || 6 || 4 || 3
|-
| 7 || 3 || 4 || 7 || 3 || 5
|-
| 9 || 3 || 4 || 8 || 5 || 6
|-
| 10 || 3 || 4 || 7 || 6 || 6
|}
 
{| class="wikitable" width=40%
| colspan=1 align="center" | {{math|''i''}}
| colspan=2 align="center" | {{math|c<sub>2</sub>}}
| colspan=2 | Data objects ({{math|X<sub>''i''</sub>}})
| Cost (distance)
|-
| width=20% |  1
| width=20% |  7
| width=20% |  4
| width=20% |  2
| width=20% |  6
| width=20% |  7
|-
| 3 || 7 || 4 || 3 || 8 || 8
|-
| 4 || 7 || 4 || 4 || 7 || 6
|-
| 5 || 7 || 4 || 6 || 2 || '''3'''
|-
| 6 || 7 || 4 || 6 || 4 || '''1'''
|-
| 7 || 7 || 4 || 7 || 3 || '''1'''
|-
| 9 || 7 || 4 || 8 || 5 || '''2'''
|-
| 10 || 7 || 4 || 7 || 6 || '''2'''
|}
 
Then the clusters become:
 
{{math|Cluster<sub>1</sub> {{=}} {(3,4), (2,6), (3,8), (4,7)} }}

{{math|Cluster<sub>2</sub> {{=}} {(7,4), (6,2), (6,4), (7,3), (8,5), (7,6)} }}
 
The points {{math|(2,6)}}, {{math|(3,8)}} and {{math|(4,7)}} are closer to {{math|c<sub>1</sub>}}, hence they form one cluster, whilst the remaining points form the other cluster.
 
So the total cost involved is {{math|20}}.
 
The cost between any two points is found using the formula
 
<math>
\mbox{cost}(x,c) = \sum_{i=1}^d | x_{i} - c_{i} |
</math>
 
where {{math|<var>x</var>}} is any data object, {{math|<var>c</var>}} is the medoid, and {{math|<var>d</var>}} is the dimension of the object, which in this case is {{math|2}}.
 
The total cost is the sum of the costs of the data objects from the medoids of their clusters, so here:
 
{{-}}
 
<math>
\begin{align}
\mbox{total cost} & = \{\mbox{cost}((3,4),(2,6)) + \mbox{cost}((3,4),(3,8))+ \mbox{cost}((3,4),(4,7))\} \\
& ~+ \{\mbox{cost}((7,4),(6,2)) + \mbox{cost}((7,4),(6,4)) + \mbox{cost}((7,4),(7,3)) \\
& ~+ \mbox{cost}((7,4),(8,5)) + \mbox{cost}((7,4),(7,6)) \} \\
& = (3 + 4 + 4) + (3 + 1 + 1 + 2 + 2) \\
& = 20 \\
\end{align}
</math>
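
This step-1 total can be reproduced with a few lines of Python; the <code>cost</code> helper below is only an illustrative implementation of the formula above.

<syntaxhighlight lang="python">
# Data set and the step-1 medoids from the example above
data = [(2, 6), (3, 4), (3, 8), (4, 7), (6, 2),
        (6, 4), (7, 3), (7, 4), (8, 5), (7, 6)]
c1, c2 = (3, 4), (7, 4)

def cost(x, c):
    # Manhattan (l1) distance between a data object x and a medoid c
    return sum(abs(xi - ci) for xi, ci in zip(x, c))

# Each object is charged the distance to its nearest medoid
# (the medoids themselves contribute 0)
total = sum(min(cost(x, c1), cost(x, c2)) for x in data)
print(total)  # 20
</syntaxhighlight>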
 
===Step 2===
[[Image:kmedoid3.jpg|thumb|right|350 px|Figure 1.3 – clusters after step 2]]
Select one of the non-medoids, {{math|O′}}.

Let us assume {{math|O′ {{=}} (7,3)}}.

So now the medoids are {{math|c<sub>1</sub>(3,4)}} and {{math|O′(7,3)}}.

If {{math|c<sub>1</sub>}} and {{math|O′}} are the new medoids, calculate the total cost involved by using the formula from step 1.
 
{| class="wikitable" width=40%
| colspan=1 align="center" | {{math|''i''}}
| colspan=2 align="center" | {{math|c<sub>1</sub>}}
| colspan=2 | Data objects ({{math|X<sub>''i''</sub>}})
| Cost (distance)
|-
| width=20% |  1
| width=20% |  3
| width=20% |  4
| width=20% |  2
| width=20% |  6
| width=20% |  '''3'''
|-
| 3 || 3 || 4 || 3 || 8 || '''4'''
|-
| 4 || 3 || 4 || 4 || 7 || '''4'''
|-
| 5 || 3 || 4 || 6 || 2 || 5
|-
| 6 || 3 || 4 || 6 || 4 || 3
|-
| 8 || 3 || 4 || 7 || 4 || 4
|-
| 9 || 3 || 4 || 8 || 5 || 6
|-
| 10 || 3 || 4 || 7 || 6 || 6
|}
 
{| class="wikitable" width=40%
| colspan=1 align="center" | {{math|''i''}}
| colspan=2 align="center" | {{math|O′}}
| colspan=2 | Data objects ({{math|X<sub>''i''</sub>}})
| Cost (distance)
|-
| width=20% |  1
| width=20% |  7
| width=20% |  3
| width=20% |  2
| width=20% |  6
| width=20% |  8
|-
| 3 || 7 || 3 || 3 || 8 || 9
|-
| 4 || 7 || 3 || 4 || 7 || 7
|-
| 5 || 7 || 3 || 6 || 2 || '''2'''
|-
| 6 || 7 || 3 || 6 || 4 || '''2'''
|-
| 8 || 7 || 3 || 7 || 4 || '''1'''
|-
| 9 || 7 || 3 || 8 || 5 || '''3'''
|-
| 10 || 7 || 3 || 7 || 6 || '''3'''
|}
 
{{-}}
[[File:K-means versus k-medoids.png|thumb|550 px|Figure 2. K-medoids versus k-means. Figs 2.1a–2.1f present a typical example of k-means converging to a local minimum. This result of k-means clustering contradicts the obvious cluster structure of the data set. In this example, the k-medoids algorithm (Figs 2.2a–2.2h), started from the same initial position of the medoids (Fig. 2.2a), converges to the obvious cluster structure. The small circles are data points, the four-ray stars are centroids (means), and the nine-ray stars are medoids.<ref>The illustration was prepared with the Java applet, E.M. Mirkes, [http://www.math.le.ac.uk/people/ag153/homepage/KmeansKmedoids/Kmeans_Kmedoids.html K-means and K-medoids: applet]. University of Leicester, 2011.</ref>]]
 
<math>
\begin{align}
\mbox{total cost} & = 3 + 4 + 4 + 2 + 2 + 1 + 3 + 3 \\
& = 22 \\
\end{align}
</math>
 
So the cost of swapping the medoid from {{math|c<sub>2</sub>}} to {{math|O′}} is
 
<math>
\begin{align}
S & = \mbox{current total cost} - \mbox{past total cost} \\
& = 22 - 20 \\
& = 2 > 0.
\end{align}
</math>
Since the swap would increase the total cost, moving to {{math|O′}} would be a bad idea, so the previous choice was good. Trying the other non-medoids shows that the first choice was indeed the best, so the configuration does not change and the algorithm terminates here (i.e. there is no change in the medoids).
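
The swap considered above can be checked numerically by reusing the illustrative <code>cost</code> helper and <code>data</code> list from the step-1 snippet:

<syntaxhighlight lang="python">
# Candidate configuration: replace c2 = (7, 4) by O' = (7, 3)
new_medoids = [(3, 4), (7, 3)]
new_total = sum(min(cost(x, m) for m in new_medoids) for x in data)
print(new_total)       # 22
print(new_total - 20)  # S = 2 > 0, so the swap is rejected
</syntaxhighlight>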
 
During this process some data points may shift from one cluster to another, depending upon their closeness to the medoids.
 
In some standard situations, k-medoids demonstrates better performance than k-means. An example is presented in Fig. 2.
The most time-consuming part of the k-medoids algorithm is the calculation of the distances between objects. If quadratic preprocessing time and storage are acceptable, the distance matrix can be precomputed to achieve a subsequent speed-up. See, for example,<ref>H.S. Park, C.H. Jun, A simple and fast algorithm for K-medoids clustering, Expert Systems with Applications, 36, (2) (2009), 3336–3341.</ref> where the authors also introduce a heuristic to choose the initial {{math|<var>k</var>}} medoids. A comparative study of the k-means and k-medoids algorithms was performed for normal and for uniform distributions of data points.<ref>T. Velmurugan and T. Santhanam, Computational Complexity between K-Means and K-Medoids Clustering Algorithms for Normal and Uniform Distributions of Data Points, Journal of Computer Science 6 (3) (2010), 363–368.</ref> It was demonstrated that, asymptotically for large data sets, the k-medoids algorithm takes less time.
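
As a rough sketch of this precomputation (assuming NumPy and SciPy are available; <code>pdist</code> and <code>squareform</code> are standard SciPy routines, while <code>config_cost</code> is an illustrative helper):

<syntaxhighlight lang="python">
import numpy as np
from scipy.spatial.distance import pdist, squareform

points = np.array([(2, 6), (3, 4), (3, 8), (4, 7), (6, 2),
                   (6, 4), (7, 3), (7, 4), (8, 5), (7, 6)])

# Precompute the n-by-n Manhattan distance matrix once (O(n^2) storage);
# every subsequent cost evaluation is then a table lookup.
D = squareform(pdist(points, metric='cityblock'))

def config_cost(D, medoid_idx):
    # Total cost of a configuration given the indices of the medoids
    return D[:, medoid_idx].min(axis=1).sum()

print(config_cost(D, [1, 7]))  # medoids X2 = (3,4) and X8 = (7,4): 20.0
</syntaxhighlight>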
 
==See also==
*[[cluster analysis]]
*[[k-means]]
*[[k-medians clustering|k-medians]]
*[[medoid]]
*[[silhouette (clustering)|silhouette]]
 
==Software==
* [[ELKI]] includes several k-means variants, including K-medoids and PAM.
* [[GNU R]] includes variants of k-means in the "flexclust" package; PAM is implemented in the "cluster" package.
* [https://github.com/eracle/Gap Gap], an embryonic open source library for distance-based clustering.
 
==External links==
 
* E.M. Mirkes, [http://www.math.le.ac.uk/people/ag153/homepage/KmeansKmedoids/Kmeans_Kmedoids.html K-means and K-medoids] (Applet), [[University of Leicester]], 2011.
 
==References==
{{Reflist}}
 
{{DEFAULTSORT:K-Medoids}}
[[Category:Statistical algorithms]]
[[Category:Data clustering algorithms]]
