<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en">
	<id>https://en.formulasearchengine.com/index.php?action=history&amp;feed=atom&amp;title=Affine_term_structure_model</id>
	<title>Affine term structure model - Revision history</title>
	<link rel="self" type="application/atom+xml" href="https://en.formulasearchengine.com/index.php?action=history&amp;feed=atom&amp;title=Affine_term_structure_model"/>
	<link rel="alternate" type="text/html" href="https://en.formulasearchengine.com/index.php?title=Affine_term_structure_model&amp;action=history"/>
	<updated>2026-04-18T04:54:02Z</updated>
	<subtitle>Revision history for this page on the wiki</subtitle>
	<generator>MediaWiki 1.43.0-wmf.28</generator>
	<entry>
		<id>https://en.formulasearchengine.com/index.php?title=Affine_term_structure_model&amp;diff=28346&amp;oldid=prev</id>
		<title>24.131.80.19: fix link</title>
		<link rel="alternate" type="text/html" href="https://en.formulasearchengine.com/index.php?title=Affine_term_structure_model&amp;diff=28346&amp;oldid=prev"/>
		<updated>2013-02-13T21:03:23Z</updated>

		<summary type="html">&lt;p&gt;fix link&lt;/p&gt;
&lt;p&gt;&lt;b&gt;New page&lt;/b&gt;&lt;/p&gt;&lt;div&gt;&amp;#039;&amp;#039;&amp;#039;Similarity learning&amp;#039;&amp;#039;&amp;#039; is a supervised [[machine learning]] task in [[artificial intelligence]]. It is closely related to [[regression (machine learning)|regression]] and [[classification in machine learning|classification]], but the goal is to learn from examples a function that measures how similar or related two objects are. It has applications in [[ranking]] and in [[recommendation systems]].&lt;br /&gt;
&lt;br /&gt;
== Learning setup ==&lt;br /&gt;
&lt;br /&gt;
There are three common setups for similarity and metric distance learning.&lt;br /&gt;
&lt;br /&gt;
* &amp;#039;&amp;#039;[[Regression (machine learning)|Regression]] similarity learning&amp;#039;&amp;#039;. In this setup, pairs of objects &amp;lt;math&amp;gt; (x_i^1, x_i^2) &amp;lt;/math&amp;gt; are given together with a measure of their similarity &amp;lt;math&amp;gt; y_i \in R &amp;lt;/math&amp;gt;. The goal is to learn a function &amp;lt;math&amp;gt;f&amp;lt;/math&amp;gt; that satisfies &amp;lt;math&amp;gt; f(x_i^1, x_i^2) \approx y_i &amp;lt;/math&amp;gt; for every new labeled example &amp;lt;math&amp;gt;(x_i^1, x_i^2, y_i)&amp;lt;/math&amp;gt;. This is typically achieved by minimizing a regularized loss &amp;lt;math&amp;gt; \min_W \sum_i \text{loss}(W; x_i^1, x_i^2, y_i) + \text{reg}(W)&amp;lt;/math&amp;gt;.&lt;br /&gt;
* &amp;#039;&amp;#039;[[Classification in machine learning|Classification]] similarity learning&amp;#039;&amp;#039;. Given are pairs of similar objects &amp;lt;math&amp;gt;(x_i, x_i^+) &amp;lt;/math&amp;gt; and dissimilar objects &amp;lt;math&amp;gt;(x_i, x_i^-)&amp;lt;/math&amp;gt;. An equivalent formulation is that every pair &amp;lt;math&amp;gt;(x_i^1, x_i^2)&amp;lt;/math&amp;gt; is given together with a binary label &amp;lt;math&amp;gt;y_i \in \{0,1\}&amp;lt;/math&amp;gt; that indicates whether the two objects are similar. The goal is again to learn a classifier that can decide whether a new pair of objects is similar.&lt;br /&gt;
* &amp;#039;&amp;#039;Ranking similarity learning&amp;#039;&amp;#039;. Given are triplets of objects &amp;lt;math&amp;gt;(x_i, x_i^+, x_i^-)&amp;lt;/math&amp;gt; whose relative similarity obeys a predefined order: &amp;lt;math&amp;gt;x_i&amp;lt;/math&amp;gt; is known to be more similar to &amp;lt;math&amp;gt;x_i^+&amp;lt;/math&amp;gt; than to &amp;lt;math&amp;gt;x_i^-&amp;lt;/math&amp;gt;. The goal is to learn a function &amp;lt;math&amp;gt;f&amp;lt;/math&amp;gt; such that for any new triplet of objects &amp;lt;math&amp;gt;(x, x^+, x^-)&amp;lt;/math&amp;gt;, it obeys &amp;lt;math&amp;gt;f(x, x^+) &amp;gt; f(x, x^-)&amp;lt;/math&amp;gt;. This setup assumes a weaker form of supervision than regression, because instead of providing an exact measure of similarity, one only has to provide the relative order of similarity. For this reason, ranking-based similarity learning is easier to apply in large-scale real-world applications.&amp;lt;ref&amp;gt;{{cite journal|last=Chechik|first=Gal|coauthors=V. Sharma, U. Shalit, S. Bengio|title=Large Scale Online Learning of Image Similarity Through Ranking|journal=Journal of Machine Learning Research|year=2010|volume=11|pages=1109–1135|url=http://www.jmlr.org/papers/volume11/chechik10a/chechik10a.pdf}}&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
A common approach for learning similarity is to model the similarity function as a [[bilinear form]]. For example, in the case of ranking similarity learning, one aims to learn a matrix &amp;lt;math&amp;gt;W&amp;lt;/math&amp;gt; that parametrizes the similarity function &amp;lt;math&amp;gt; f_W(x, z)  = x^T W z &amp;lt;/math&amp;gt;.&lt;br /&gt;
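As a concrete sketch of the ranking setup with a bilinear similarity (the toy triplet, step size, and margin below are all hypothetical, and numpy is assumed), one subgradient step on a triplet hinge loss pushes f_W(x, x_pos) above f_W(x, x_neg):

```python
import numpy as np

def f(x, z, W):
    # bilinear similarity f_W(x, z) = x^T W z
    return float(x @ W @ z)

def triplet_hinge(x, x_pos, x_neg, W, margin=1.0):
    # hinge loss on the ranking constraint: f(x, x_pos) should exceed f(x, x_neg)
    return max(0.0, margin - f(x, x_pos, W) + f(x, x_neg, W))

# hypothetical toy triplet: x should be scored closer to x_pos than to x_neg
x = np.array([1.0, 0.0, 0.0])
x_pos = np.array([0.0, 1.0, 0.0])
x_neg = np.array([1.0, 0.0, 0.0])

W = np.eye(3)                      # start from plain dot-product similarity
loss_before = triplet_hinge(x, x_pos, x_neg, W)

if loss_before > 0.0:
    # subgradient of the active hinge w.r.t. W is outer(x, x_neg) - outer(x, x_pos);
    # step in the opposite direction to reduce the loss
    eta = 0.5
    W = W + eta * (np.outer(x, x_pos) - np.outer(x, x_neg))

loss_after = triplet_hinge(x, x_pos, x_neg, W)
```

Since the hinge loss is piecewise linear in W, a small step along the negative subgradient is guaranteed to decrease an active loss; online methods in this spirit repeat such updates over a stream of triplets.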
&lt;br /&gt;
== Metric learning ==&lt;br /&gt;
&lt;br /&gt;
Similarity learning is closely related to &amp;#039;&amp;#039;distance metric learning&amp;#039;&amp;#039;. Metric learning is the task of learning a distance function over objects. A [[Metric (mathematics)|metric]] or [[distance function]] must satisfy three axioms: [[non-negative|non-negativity]], [[symmetry]], and [[subadditivity]] (the triangle inequality).&lt;br /&gt;
&lt;br /&gt;
When the objects &amp;lt;math&amp;gt;x_i&amp;lt;/math&amp;gt; are vectors in &amp;lt;math&amp;gt;R^d&amp;lt;/math&amp;gt;, any positive definite matrix &amp;lt;math&amp;gt;W&amp;lt;/math&amp;gt; defines a distance on the space of inputs through the form &amp;lt;math&amp;gt;d_W(x_1, x_2) = \sqrt{(x_1 - x_2)^T W (x_1 - x_2)}&amp;lt;/math&amp;gt;. Some well-known approaches to metric learning include [[Large margin nearest neighbor]]&amp;lt;ref name=LMNN&amp;gt;{{cite journal|last=Weinberger|first=K.Q.|coauthors=Blitzer J. C., Saul L. K.|title=Distance Metric Learning for Large Margin Nearest Neighbor Classification|journal=Advances in Neural Information Processing Systems 18 (NIPS)|year=2006|pages=1473–1480|url=http://books.nips.cc/papers/files/nips18/NIPS2005_0265.pdf}}&amp;lt;/ref&amp;gt; and information-theoretic metric learning (ITML).&amp;lt;ref name=ITML&amp;gt;{{cite journal | last=Davis | first=J.V. | coauthors = Kulis B., Jain P., Sra S., Dhillon I.S. | title=Information-theoretic metric learning | journal=International conference in machine learning | year=2007 | pages=209–216 | url=http://www.cs.utexas.edu/users/pjain/itml/}}&amp;lt;/ref&amp;gt;&lt;br /&gt;
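A standard way to guarantee the metric axioms (a minimal sketch; numpy is assumed and the factor L below is a hypothetical stand-in for a learned parameter) is to factor the matrix as W = L^T L, which keeps W positive semi-definite; the induced distance is then just the Euclidean distance between the mapped points L x:

```python
import numpy as np

rng = np.random.default_rng(0)
L = rng.standard_normal((3, 3))    # hypothetical learned factor
W = L.T @ L                        # positive semi-definite by construction

def d_W(x1, x2):
    # distance induced by W: sqrt of (x1 - x2)^T W (x1 - x2),
    # equal to the Euclidean distance between L @ x1 and L @ x2
    diff = x1 - x2
    return float(np.sqrt(diff @ W @ diff))

a, b, c = rng.standard_normal((3, 3))
# non-negativity, symmetry, and the triangle inequality all follow
# because d_W(x1, x2) equals np.linalg.norm(L @ (x1 - x2))
```

Many metric learning methods exploit exactly this factorization, optimizing over L so that positive semi-definiteness never has to be enforced explicitly.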
&lt;br /&gt;
In [[statistics]], the [[covariance]] matrix of the data is sometimes used to define a distance metric called [[Mahalanobis distance]].&lt;br /&gt;
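For instance (a sketch with hypothetical toy data; numpy is assumed), the Mahalanobis distance weights differences by the inverse covariance of the data, so that directions of high variance count for less; with identity covariance it reduces to the Euclidean distance:

```python
import numpy as np

def mahalanobis(u, v, cov_inv):
    # Mahalanobis distance: sqrt of (u - v)^T Sigma^{-1} (u - v)
    diff = u - v
    return float(np.sqrt(diff @ cov_inv @ diff))

rng = np.random.default_rng(0)
# hypothetical correlated 2-D data
X = rng.standard_normal((500, 2)) @ np.array([[2.0, 0.0], [1.5, 0.5]])
cov_inv = np.linalg.inv(np.cov(X, rowvar=False))

u = np.array([1.0, 0.0])
v = np.array([0.0, 1.0])
d = mahalanobis(u, v, cov_inv)
```

The same computation is available as scipy.spatial.distance.mahalanobis, which likewise takes the inverse covariance matrix as its third argument.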
&lt;br /&gt;
== Applications ==&lt;br /&gt;
Similarity learning is used in information retrieval for learning to rank, and in [[recommendation systems]]. Many other machine learning approaches also rely on a metric. These include unsupervised methods such as [[clustering (machine learning)|clustering]], which groups together close or similar objects, and supervised approaches like the [[K-nearest neighbor algorithm]], which relies on the labels of nearby objects to decide on the label of a new object. Metric learning has been proposed as a preprocessing step for many of these approaches.&amp;lt;ref name=XING&amp;gt;{{cite journal|last=Xing|first=E.P.|title=Distance Metric Learning, with Application to Clustering with Side-information|coauthors=Ng A.Y., Jordan M.I., Russell S.|journal=Advances in Neural Information Processing Systems 15|year=2002|pages=505–512|publisher=MIT Press}}&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Further reading ==&lt;br /&gt;
For further information on this topic, see the survey on metric and similarity learning by Bellet et al. &amp;lt;ref name=survey&amp;gt;{{cite arXiv |author=A. Bellet, A. Habrard, M. Sebban |eprint=1306.6709 |class=cs.LG |title=A Survey on Metric Learning for Feature Vectors and Structured Data |year=2013}}&amp;lt;/ref&amp;gt; &lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
{{reflist}}&lt;br /&gt;
&lt;br /&gt;
[[Category:Machine learning]]&lt;/div&gt;</summary>
		<author><name>24.131.80.19</name></author>
	</entry>
</feed>