Cover's Theorem is a statement in [[computational learning theory]] and is one of the primary theoretical motivations for the use of non-linear [[kernel methods]] in [[machine learning]] applications. The theorem states that given a set of training data that is not [[linearly separable]], one can with high probability transform it into a training set that is linearly separable by projecting it into a higher-dimensional space via some non-linear transformation.

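As an illustration (a minimal sketch; the data, feature map, and separating hyperplane below are chosen for the example and are not part of the theorem's statement), the XOR-labelled points are not linearly separable in the plane, but become separable after a simple non-linear lift:

```python
import numpy as np

# XOR-labelled points: no single line in the plane separates the classes.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([-1, 1, 1, -1])

# Non-linear lift into 3-D: append the product feature x1*x2.
Z = np.column_stack([X, X[:, 0] * X[:, 1]])

# In the lifted space, the hyperplane w.z + b = 0 with
# w = (1, 1, -2) and b = -0.5 separates the two classes.
w, b = np.array([1.0, 1.0, -2.0]), -0.5
predictions = np.sign(Z @ w + b)
print(predictions)  # [-1.  1.  1. -1.], matching y
```
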
The [[mathematical proof|proof]] is straightforward: a [[map (mathematics)|deterministic mapping]] suffices. Given <math>n</math> samples, map them onto the vertices of the [[simplex]] in <math>(n-1)</math>-dimensional real space. Every [[partition of a set|partition]] of the samples into two sets is then separable by a [[linear separability|linear separator]].

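The construction in the proof can be checked numerically (a minimal sketch; the sample count and the labels below are arbitrary choices, not part of the proof):

```python
import numpy as np

n = 6  # number of samples (arbitrary)

# Lift the i-th sample onto the i-th vertex of the simplex:
# the vertices are the standard basis vectors of R^n, which
# span an (n-1)-dimensional simplex.
Z = np.eye(n)

# Pick any partition of the samples into two classes (arbitrary labels).
labels = np.array([1, -1, 1, 1, -1, -1])

# The linear functional z -> w.z with w = labels separates the
# partition, since w . e_i = labels[i] at every vertex e_i.
w = labels.astype(float)
print(np.sign(Z @ w))  # recovers the labels exactly
```
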
{{Quotation|A complex pattern-classification problem, cast in a high-dimensional space nonlinearly, is more likely to be linearly separable than in a low-dimensional space, provided that the space is not densely populated.|Cover, T.M.|Geometrical and Statistical properties of systems of linear inequalities with applications in pattern recognition|1965}}

==References==

{{Reflist}}

*{{cite book |title=Neural Networks and Learning Machines |edition=Third |last=Haykin |first=Simon |year=2009 |publisher=Pearson Education |location=Upper Saddle River, New Jersey |isbn=978-0-13-147139-9 |pages=232–236}}

*{{cite journal |last=Cover |first=T.M. |year=1965 |title=Geometrical and Statistical properties of systems of linear inequalities with applications in pattern recognition |journal=IEEE Transactions on Electronic Computers |volume=EC-14 |pages=326–334}}

* Mehrotra, K., Mohan, C.K., Ranka, S. (1997) ''Elements of artificial neural networks'', 2nd edition. MIT Press. (Section 3.5) ISBN 0-262-13328-8 [http://books.google.co.uk/books?id=6d68Y4Wq_R4C&pg=PA88&lpg=PA88&dq=Cover's+theorem&source=bl&ots=6pEdU0CYz4&sig=V2FqwVwkYaDQgmUNk22XEjnQFpw&hl=en&ei=mYdFTNqTMpHQjAfJw5j1Bg&sa=X&oi=book_result&ct=result&resnum=7&ved=0CDAQ6AEwBg#v=onepage&q=Cover's%20theorem&f=false Google books]

{{DEFAULTSORT:Cover's Theorem}}

[[Category:Computational learning theory]]
[[Category:Statistical classification]]
[[Category:Neural networks]]

{{statistics-stub}}