Data stream clustering

In [[computer science]], '''data stream clustering''' is the [[cluster analysis|clustering]] of data that arrive continuously, such as telephone records, multimedia data, and financial transactions. Data stream clustering is usually studied as a [[streaming algorithm]]: the objective is, given a sequence of points, to construct a good clustering of the stream using a small amount of memory and time.
 
== History ==
Data stream clustering has recently attracted attention for emerging applications that involve large amounts of streaming data. For clustering, [[k-means clustering|k-means]] is a widely used heuristic, but alternative algorithms have also been developed, such as [[k-medoids]], [[CURE data clustering algorithm|CURE]] and the popular [[BIRCH (data clustering)|BIRCH]]. For data streams, one of the first results appeared in 1980<ref>J. Munro and M. Paterson. [http://ieeexplore.ieee.org/xpl/freeabs_all.jsp?arnumber=4567985 Selection and Sorting with Limited Storage]. ''Theoretical Computer Science'', pages 315-323, 1980.</ref> but the model was formalized in 1998.<ref>M. Henzinger, P. Raghavan, and S. Rajagopalan. [http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.19.9554 Computing on Data Streams]. ''Digital Equipment Corporation, TR-1998-011'', August 1998.</ref>
 
== Definition ==
The problem of data stream clustering is defined as:
 
'''Input:''' a sequence of ''n'' points in metric space and an integer ''k''.<br />
'''Output:''' ''k'' centers in the set of the ''n'' points so as to minimize the sum of distances from data points to their closest cluster centers.
 
This is the streaming version of the k-median problem.
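As a concrete illustration of the objective, the sketch below evaluates a candidate set of centers on one-dimensional points; the helper <code>kmedian_cost</code> and the sample data are invented for illustration and are not part of any cited algorithm:

```python
def kmedian_cost(points, centers):
    """Sum of distances from each point to its closest center
    (the quantity the k-median objective minimizes)."""
    return sum(min(abs(p - c) for c in centers) for p in points)

# Five 1-D points and k = 2 candidate centers drawn from the point set
points = [1.0, 2.0, 9.0, 10.0, 11.0]
print(kmedian_cost(points, [2.0, 10.0]))  # 1.0 + 0 + 1.0 + 0 + 1.0 = 3.0
```

A streaming algorithm must pick centers that keep this cost low while seeing each point only once and storing far fewer than ''n'' points.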
 
== Algorithms ==
<!--  Unlike online algorithms, algorithms for data stream clustering  have only a bounded amount of memory available  and they may be able to take action after a group of points arrives while online algorithms are required to take action after each point arrives. -->
 
=== STREAM ===
 
STREAM is an algorithm for clustering data streams described by Guha, Mishra, Motwani and O'Callaghan<ref name=cds >S. Guha, N. Mishra, R. Motwani, L. O'Callaghan. [http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.32.1927 Clustering Data Streams]. Proceedings of the Annual Symposium on Foundations of Computer Science, 2000</ref>  which achieves a [[approximation algorithm|constant factor approximation]] for the k-Median problem in a single pass and using small space.
 
'''Theorem:''' STREAM can solve the ''k''-Median problem on a data stream in a single pass, with time ''O''(''n''<sup>1+ε</sup>) and space ''θ''(''n''<sup>ε</sup>) up to a factor 2<sup>O(1/ε)</sup>, where ''n'' is the number of points and ε < 1/2.
 
To understand STREAM, the first step is to show that clustering can take place in small space (ignoring for now the number of passes). Small-Space is a [[divide-and-conquer algorithm]] that divides the data ''S'' into <math>\ell</math> pieces, clusters each of them (using ''k''-means), and then clusters the centers obtained.
 
[[File:Small-Space.jpg|thumb | 440x140px | right | Small-Space Algorithm representation]]
 
'''Algorithm Small-Space(S)'''
 
{{ordered list
|1 = Divide ''S'' into <math>\ell</math> disjoint pieces ''X''<sub>1</sub>, ..., ''X''<sub><math>\ell</math></sub>.
|2 = For each ''i'', find ''O''(''k'') centers in ''X''<sub>''i''</sub>. Assign each point in ''X''<sub>''i''</sub> to its closest center.
|3 = Let ''X''&prime; be the ''O''(<math>\ell</math>''k'') centers obtained in (2), where each center ''c'' is weighted by the number of points assigned to it.
|4 = Cluster ''X''&prime; to find ''k'' centers.
}}
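The four steps above can be sketched in Python. This is a toy one-dimensional version under an assumed Euclidean distance; <code>weighted_kmeans</code> is a minimal stand-in for the ''O''(''k'')-center subroutine of Step 2, and all names are illustrative:

```python
import random

def weighted_kmeans(points, weights, k, iters=20, seed=0):
    """Tiny weighted Lloyd's k-means on 1-D points; returns k centers."""
    centers = random.Random(seed).sample(points, k)
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for p, w in zip(points, weights):   # assign each point to nearest center
            groups[min(range(k), key=lambda j: abs(p - centers[j]))].append((p, w))
        for j, grp in enumerate(groups):    # recompute centers as weighted means
            total = sum(w for _, w in grp)
            if total > 0:
                centers[j] = sum(p * w for p, w in grp) / total
    return centers

def small_space(S, k, ell):
    centers, weights = [], []
    for i in range(ell):
        X = S[i::ell]                               # Step 1: disjoint pieces
        cs = weighted_kmeans(X, [1.0] * len(X), k)  # Step 2: O(k) centers per piece
        counts = [0] * k
        for p in X:                                 # weight = #points assigned
            counts[min(range(k), key=lambda j: abs(p - cs[j]))] += 1
        centers += cs                               # Step 3: weighted set X'
        weights += counts
    return weighted_kmeans(centers, weights, k)     # Step 4: cluster X'
```

For example, <code>small_space(data, k=2, ell=2)</code> returns two centers lying within the range of <code>data</code>; the key property is that only one piece and the weighted intermediate centers ever need to be in memory at once.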
 
If in Step 2 we run a bicriteria (''a'',''b'')-[[approximation algorithm]] that outputs at most ''ak'' medians with cost at most ''b'' times the optimum ''k''-Median cost, and in Step 4 we run a ''c''-approximation algorithm, then the approximation factor of the Small-Space algorithm is 2''c''(1+2''b'')+2''b''. Small-Space can also be generalized to call itself recursively ''i'' times on a successively smaller set of weighted centers, still achieving a constant-factor approximation to the ''k''-median problem.
 
The problem with Small-Space is that the number of subsets <math>\ell</math> into which we can partition ''S'' is limited, since the intermediate medians ''X''&prime; must be stored in memory. If ''M'' is the size of memory, we need each subset to fit in memory (''n''/<math>\ell</math> ≤ ''M'') and the weighted <math>\ell</math>''k'' intermediate centers to fit in memory as well (<math>\ell</math>''k'' < ''M''). Such an <math>\ell</math> exists only when roughly ''nk'' ≤ ''M''<sup>2</sup>, so it may not always exist.
 
The STREAM algorithm solves the problem of storing intermediate medians and achieves better running time and space requirements. The algorithm works as follows:<ref name=cds />
{{ordered list
|1 = Input the first ''m'' points; using the randomized algorithm presented in the STREAM paper,<ref name=cds /> reduce these to ''O''(''k'') (say 2''k'') points.
|2 = Repeat the above until we have seen ''m''<sup>2</sup>/(2''k'') of the original data points. We now have ''m'' intermediate medians.
|3 = Using a [[Local search (optimization)|local search]] algorithm, cluster these ''m'' first-level medians into 2''k'' second-level medians and proceed.
|4 = In general, maintain at most ''m'' level-''i'' medians and, on seeing ''m'' of them, generate 2''k'' level-(''i''+1) medians, with the weight of a new median being the sum of the weights of the intermediate medians assigned to it.
|5 = When we have seen all the original data points, cluster all the intermediate medians into ''k'' final medians, using the primal-dual algorithm.<ref>K. Jain and V. Vazirani. [http://portal.acm.org/citation.cfm?id=796509 Primal-dual approximation algorithms for metric facility location and k-median problems.] Proc. FOCS, 1999.</ref>
}}
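The level structure of these steps can be sketched as follows. This is a heavily simplified illustration, not the actual algorithm: <code>reduce_to</code> is a crude chunked-median stand-in for both the randomized ''O''(''k'') reduction and the final primal-dual step, and all names are invented:

```python
def reduce_to(points, weights, t):
    """Collapse weighted 1-D points to t weighted representatives by taking
    the median of t contiguous sorted chunks (a crude stand-in for the
    randomized O(k) reduction used by STREAM)."""
    order = sorted(range(len(points)), key=lambda i: points[i])
    n, reps, wts = len(order), [], []
    for j in range(t):
        chunk = order[j * n // t:(j + 1) * n // t]
        if chunk:
            reps.append(points[chunk[len(chunk) // 2]])
            wts.append(sum(weights[i] for i in chunk))
    return reps, wts

def stream_kmedian(stream, k, m):
    levels = [[]]                      # levels[i]: (median, weight) pairs of level i
    for x in stream:
        levels[0].append((x, 1))       # raw points act as level-0 "medians"
        i = 0
        while len(levels[i]) >= m:     # Step 4: m level-i medians trigger a reduce
            pts = [p for p, _ in levels[i]]
            wts = [w for _, w in levels[i]]
            reps, rw = reduce_to(pts, wts, 2 * k)   # 2k level-(i+1) medians
            levels[i] = []
            if i + 1 == len(levels):
                levels.append([])
            levels[i + 1].extend(zip(reps, rw))
            i += 1
    # Step 5: cluster all surviving intermediate medians into k final medians
    pts = [p for level in levels for p, _ in level]
    wts = [w for level in levels for _, w in level]
    return reduce_to(pts, wts, k)[0]
```

At any moment each level holds fewer than ''m'' weighted medians, so memory stays small regardless of the stream length, which is the point of the construction.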
 
=== Other Algorithms ===
 
Other well-known algorithms used for data stream clustering are:
* [[BIRCH (data clustering)|BIRCH]]:<ref>T. Zhang, R. Ramakrishnan, M. Livny. [http://doi.acm.org/10.1145/235968.233324 BIRCH: An Efficient Data Clustering Method for Very Large Databases], Proceedings of the ACM SIGMOD Conference on Management of Data, 1996.</ref> builds a hierarchical data structure to incrementally cluster the incoming points using the available memory while minimizing the amount of I/O required. The complexity of the algorithm is ''O''(''N'') since one pass suffices to get a good clustering (though results can be improved by allowing several passes).
* [[Cobweb (clustering)|COBWEB]]:<ref>D.H. Fisher. [http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.6.9914 Iterative Optimization and Simplification of Hierarchical Clusterings]. Journal of AI Research, Vol 4, 1996.</ref> is an incremental clustering technique that keeps a hierarchical clustering model in the form of a [[Decision tree learning|classification tree]]. For each new point, COBWEB descends the tree, updates the nodes along the way and looks for the best node to place the point in (using a [[Category utility|category utility function]]).
* [[C2ICM(incremental clustering)|C2ICM]]:<ref>F. Can. [http://dl.acm.org/citation.cfm?doid=130226.134466 Incremental Clustering for Dynamic Information Processing], ACM Transactions on Information Systems, Vol. 11, No. 2, 1993, pages 143-164.</ref> builds a flat partitioning clustering structure by selecting some objects as cluster seeds/initiators; each non-seed object is assigned to the seed that provides the highest coverage. The addition of new objects can introduce new seeds and falsify some existing seeds; during incremental clustering, new objects and the members of the falsified clusters are assigned to one of the existing new or old seeds.
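The seed-and-assign pattern shared by these incremental methods can be sketched as follows. This is a generic illustration with an invented fixed distance threshold; C2ICM itself selects seeds via cover coefficients, not a radius, and the function name is hypothetical:

```python
def incremental_assign(stream, seeds, radius):
    """Assign each arriving 1-D object to the closest existing seed,
    or promote it to a new seed when no seed is close enough
    (illustrative only; not C2ICM's actual seed-selection rule)."""
    clusters = {s: [s] for s in seeds}
    for x in stream:
        best = min(clusters, key=lambda s: abs(x - s))
        if abs(x - best) <= radius:
            clusters[best].append(x)
        else:
            clusters[x] = [x]           # new object becomes a new seed
    return clusters
```

Each object is processed once as it arrives, so the clustering is maintained online without revisiting the stream.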
 
== References ==
{{reflist}}
 
[[Category:Data clustering algorithms]]

Revision as of 21:01, 7 March 2013
