In [[statistics]] and [[machine learning]], a '''Bayesian interpretation of regularization''' for [[kernel methods]] is often useful. Kernel methods are central to both the [[Regularization (mathematics)|regularization]] and the [[Bayesian probability|Bayesian]] points of view in machine learning.  In regularization they are a natural choice for the [[Statistical learning theory#Formal Description|hypothesis space]] and the regularization functional through the notion of [[reproducing kernel Hilbert space]]s. In Bayesian probability they are a key component of [[Gaussian processes]], where the kernel function is known as the covariance function.  Kernel methods have traditionally been used in [[supervised learning]] problems where the ''input space'' is usually a ''space of vectors'' and the ''output space'' is a ''space of scalars''. More recently these methods have been extended to problems that deal with [[Kernel methods for vector output|multiple outputs]], such as in [[multi-task learning]].<ref name=AlvRosLaw11>{{cite journal|last=Álvarez|first=Mauricio A.|author2=Rosasco, Lorenzo |author3=Lawrence, Neil D. |title=Kernels for Vector-Valued Functions: A Review|journal=ArXiv e-prints|date=June 2011}}</ref>


In this article we analyze the connections between the regularization and the Bayesian points of view for kernel methods in the case of scalar outputs.  A mathematical equivalence between the two is easily proved when the reproducing kernel Hilbert space is ''finite-dimensional''; the infinite-dimensional case raises subtle mathematical issues, so only the finite-dimensional case is considered here. We start with a brief review of the main ideas underlying kernel methods for scalar learning, briefly introduce the concepts of regularization and Gaussian processes, and then show how both points of view arrive at essentially equivalent estimators, together with the connection that ties them together.


==The Supervised Learning Problem==


The classical [[supervised learning]] problem requires estimating the output for some new input point <math>\mathbf{x}'</math> by learning a scalar-valued estimator <math>\hat{f}(\mathbf{x}')</math> on the basis of a training set <math>S</math> consisting of <math>n</math> input-output pairs, <math>S = (\mathbf{X},\mathbf{Y}) = (\mathbf{x}_1,y_1),\ldots,(\mathbf{x}_n,y_n)</math>.<ref name=Vap98>{{cite book|last=Vapnik|first=Vladimir|title=Statistical learning theory|year=1998|publisher=Wiley|isbn=9780471030034|url=http://books.google.com/books?id=GowoAQAAMAAJ&q=statistical+learning+theory&dq=statistical+learning+theory&hl=en&sa=X&ei=HruyT66kOoKhgwf3reSXCQ&ved=0CDsQ6AEwAA}}</ref>  Given a symmetric, positive-definite bivariate function <math>k(\cdot,\cdot)</math> called a ''kernel'', one of the most popular estimators in machine learning is given by


{{NumBlk|:|<math>
\hat{f}(\mathbf{x}') = \mathbf{k}^\top(\mathbf{K} + \lambda n \mathbf{I})^{-1} \mathbf{Y},
</math>|{{EquationRef|1}}}}


where <math>\mathbf{K} \equiv k(\mathbf{X},\mathbf{X})</math> is the [[Gramian matrix|kernel matrix]] with entries <math>\mathbf{K}_{ij} = k(\mathbf{x}_i,\mathbf{x}_j)</math>, <math> \mathbf{k} = [k(\mathbf{x}_1,\mathbf{x}'),\ldots,k(\mathbf{x}_n,\mathbf{x}')]^\top</math>, and <math>\mathbf{Y} = [y_1,\ldots,y_n]^\top</math>.  We will see how this estimator can be derived both from a regularization and a Bayesian perspective.
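
As a concrete illustration, the following is a minimal NumPy sketch of the estimator in equation ({{EquationNote|1}}). The RBF kernel and the helper names (<code>rbf_kernel</code>, <code>fit_predict</code>) are illustrative assumptions, not prescribed by the text.

<syntaxhighlight lang="python">
import numpy as np

def rbf_kernel(A, B, length_scale=1.0):
    """Assumed kernel choice: k(a, b) = exp(-||a - b||^2 / (2 * length_scale^2))."""
    sq_dists = (np.sum(A**2, axis=1)[:, None]
                + np.sum(B**2, axis=1)[None, :]
                - 2.0 * A @ B.T)
    return np.exp(-sq_dists / (2.0 * length_scale**2))

def fit_predict(X, Y, X_new, lam=0.1):
    """Evaluate equation (1): f_hat(x') = k^T (K + lambda * n * I)^{-1} Y."""
    n = X.shape[0]
    K = rbf_kernel(X, X)             # kernel matrix, K_ij = k(x_i, x_j)
    k_new = rbf_kernel(X, X_new)     # n x m matrix with columns k(x_i, x')
    c = np.linalg.solve(K + lam * n * np.eye(n), Y)
    return k_new.T @ c               # predictions at the rows of X_new
</syntaxhighlight>

For example, with <code>X</code> of shape <code>(n, d)</code> and <code>Y</code> of shape <code>(n,)</code>, <code>fit_predict(X, Y, X_new)</code> returns the estimator evaluated at the rows of <code>X_new</code>.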


==A Regularization Perspective==
The main assumption of the regularization perspective is that the set of functions <math>\mathcal{F}</math> belongs to a reproducing kernel Hilbert space <math>\mathcal{H}_k</math>.<ref name=Vap98 /><ref name=Wah90 /><ref name=SchSmo02>{{cite book|last=Schölkopf|first=Bernhard|title=Learning with Kernels: Support Vector Machines, Regularization, Optimization, and Beyond|year=2002|publisher=MIT Press|isbn=9780262194754|author2=Smola, Alexander J.}}</ref><ref name=GirPog90>{{cite journal|last=Girosi|first=F.|author2=Poggio, T.|title=Networks and the best approximation property|journal=Biological Cybernetics|year=1990|volume=63|issue=3|pages=169–176|publisher=Springer|doi=10.1007/bf00195855}}</ref>
===Reproducing Kernel Hilbert Space===
 
A [[reproducing kernel Hilbert space]] (RKHS) <math>\mathcal{H}_k</math> is a [[Hilbert space]] of functions defined by a [[Symmetry in mathematics|symmetric]], [[positive-definite function]] <math>k : \mathcal{X} \times \mathcal{X} \rightarrow \mathbb{R}</math> called the ''reproducing kernel'' such that the function <math>k(\mathbf{x},\cdot)</math> belongs to <math>\mathcal{H}_k</math> for all <math>\mathbf{x} \in \mathcal{X}</math>.<ref name=Aro50>{{cite journal|last=Aronszajn|first=N|title=Theory of Reproducing Kernels|journal=Transactions of the American Mathematical Society|date=May 1950|volume=68|issue=3|pages=337–404|doi=10.2307/1990404}}</ref><ref name=Sch64>{{cite journal|last=Schwartz|first=Laurent|title=Sous-espaces hilbertiens d’espaces vectoriels topologiques et noyaux associés (noyaux reproduisants)|journal=Journal d'analyse mathématique|year=1964|volume=13|issue=1|pages=115–256|publisher=Springer|doi=10.1007/bf02786620}}</ref><ref name=CucSma01>{{cite journal|last=Cucker|first=Felipe|author2=Smale, Steve|title=On the mathematical foundations of learning|journal=Bulletin of the American Mathematical Society|date=October 5, 2001|volume=39|issue=1|pages=1–49|doi=10.1090/s0273-0979-01-00923-5}}</ref> Three main properties make an RKHS appealing:
 
1. The ''reproducing property'', which gives the space its name,
 
<math>
f(\mathbf{x}) = \langle f,k(\mathbf{x},\cdot) \rangle_k, \quad \forall \ f \in \mathcal{H}_k,
</math>
 
where <math>\langle \cdot,\cdot \rangle_k</math> is the inner product in <math>\mathcal{H}_k</math>.
 
2. Functions in an RKHS lie in the closure of linear combinations of the kernel evaluated at given points,
 
<math>
f(\mathbf{x}) = \sum_i k(\mathbf{x}_i,\mathbf{x})c_i
</math>. 
 
This allows both linear and generalized linear models to be constructed within a unified framework.
 
3. The squared norm in an RKHS can be written as
 
<math>\|f\|_k^2 = \sum_{i,j} k(\mathbf{x}_i,\mathbf{x}_j) c_i c_j
</math>
 
and is a natural measure of how ''complex'' the function is.
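
A short numerical sketch of properties 2 and 3, with illustrative names; the Gaussian kernel on the real line is an assumed choice.

<syntaxhighlight lang="python">
import numpy as np

def expansion(c, centers, kernel):
    """Return the kernel expansion f(x) = sum_i c_i k(x_i, x)."""
    return lambda x: sum(ci * kernel(xi, x) for ci, xi in zip(c, centers))

def squared_norm(c, centers, kernel):
    """||f||_k^2 = sum_{i,j} c_i c_j k(x_i, x_j) = c^T K c."""
    K = np.array([[kernel(xi, xj) for xj in centers] for xi in centers])
    c = np.asarray(c)
    return c @ K @ c

k = lambda x, y: np.exp(-0.5 * (x - y)**2)   # assumed Gaussian kernel
f = expansion([1.0, -0.5], [0.0, 2.0], k)
print(f(1.0), squared_norm([1.0, -0.5], [0.0, 2.0], k))
</syntaxhighlight>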
 
===The Regularized Functional===
 
The estimator is derived as the minimizer of the regularized functional
 
{{NumBlk|:|<math>
\frac{1}{n} \sum_{i=1}^{n}(f(\mathbf{x}_i)-y_i)^2 + \lambda \|f\|_k^2,
</math>|{{EquationRef|2}}}}
 
where <math>f \in \mathcal{H}_k</math> and <math>\|\cdot\|_k</math> is the norm in <math>\mathcal{H}_k</math>.  The first term in this functional, which measures the average of the squares of the errors between the <math>f(\mathbf{x}_i)</math> and the <math>y_i</math>, is called the ''empirical risk'' and represents the cost we pay by predicting <math>f(\mathbf{x}_i)</math> in place of the true value <math>y_i</math>.  The second term in the functional is the squared norm in an RKHS multiplied by a weight <math>\lambda</math>; it serves to stabilize the problem<ref name=Wah90 /><ref name=GirPog90 /> as well as to introduce a trade-off between fitting and complexity of the estimator.<ref name=Vap98 />  The weight <math>\lambda</math>, called the ''regularizer'', determines the degree to which instability and complexity of the estimator should be penalized (a higher penalty for an increasing value of <math>\lambda</math>).
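
Restricted to a candidate function whose training values and RKHS norm are known, the functional ({{EquationNote|2}}) is straightforward to evaluate; this sketch (with illustrative names) makes the two competing terms explicit.

<syntaxhighlight lang="python">
import numpy as np

def regularized_functional(f_values, Y, f_norm_sq, lam):
    """Empirical risk plus weighted squared RKHS norm, as in (2).

    f_values  -- array of f(x_i) at the n training inputs
    f_norm_sq -- ||f||_k^2 for the candidate function
    """
    n = len(Y)
    empirical_risk = np.sum((np.asarray(f_values) - np.asarray(Y))**2) / n
    return empirical_risk + lam * f_norm_sq
</syntaxhighlight>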
 
===Derivation of the Estimator===
 
The explicit form of the estimator in equation ({{EquationNote|1}}) is derived in two steps.  First, the representer theorem<ref name=KimWha70>{{cite journal|last=Kimeldorf|first=George S.|author2=Wahba, Grace|title=A correspondence between Bayesian estimation on stochastic processes and smoothing by splines|journal=The Annals of Mathematical Statistics|year=1970|volume=41|issue=2|pages=495–502|doi=10.1214/aoms/1177697089}}</ref><ref name=SchHerSmo01>{{cite journal|last=Schölkopf|first=Bernhard|author2=Herbrich, Ralf |author3=Smola, Alex J. |title=A Generalized Representer Theorem|journal=COLT/EuroCOLT 2001, LNCS|year=2001|volume=2111/2001|pages=416–426|doi=10.1007/3-540-44581-1_27}}</ref><ref name=DevEtal04>{{cite journal|last=De Vito|first=Ernesto|author2=Rosasco, Lorenzo |author3=Caponnetto, Andrea |author4=Piana, Michele |author5= Verri, Alessandro |title=Some Properties of Regularized Kernel Methods|journal=Journal of Machine Learning Research|date=October 2004|volume=5|pages=1363–1390}}</ref> states that the minimizer of the functional ({{EquationNote|2}}) can always be written as a linear combination of the kernels centered at the training-set points,
 
{{NumBlk|:|<math>
\hat{f}(\mathbf{x}') = \sum_{i=1}^n c_i k(\mathbf{x}_i,\mathbf{x}') = \mathbf{k}^\top \mathbf{c},
</math>|{{EquationRef|3}}}}
 
for some <math>\mathbf{c} \in \mathbb{R}^n</math>. The explicit form of the coefficients <math>\mathbf{c} = [c_1,\ldots,c_n]^\top</math> can be found by substituting for <math>f(\cdot)</math> in the functional ({{EquationNote|2}}).  For a function of the form in equation ({{EquationNote|3}}), we have that
 
<math>\begin{align}
\|f\|_k^2 & = \langle f,f \rangle_k, \\
& = \left\langle \sum_{i=1}^n c_i k(\mathbf{x}_i,\cdot), \sum_{j=1}^n c_j k(\mathbf{x}_j,\cdot) \right\rangle_k, \\
& = \sum_{i=1}^n \sum_{j=1}^n c_i c_j \langle k(\mathbf{x}_i,\cdot), k(\mathbf{x}_j,\cdot) \rangle_k, \\
& = \sum_{i=1}^n \sum_{j=1}^n c_i c_j k(\mathbf{x}_i,\mathbf{x}_j), \\
& = \mathbf{c}^\top \mathbf{K} \mathbf{c}.
\end{align}</math>
 
We can rewrite the functional ({{EquationNote|2}}) as
 
<math>
\frac{1}{n} \| \mathbf{Y} - \mathbf{K} \mathbf{c} \|^2 + \lambda \mathbf{c}^\top \mathbf{K} \mathbf{c}.
</math>
 
This functional is convex in <math>\mathbf{c}</math> and therefore we can find its minimum by setting the gradient with respect to <math>\mathbf{c}</math> to zero,
 
<math>\begin{align}
-\frac{1}{n} \mathbf{K} (\mathbf{Y} - \mathbf{K} \mathbf{c}) + \lambda \mathbf{K} \mathbf{c} & = 0, \\
(\mathbf{K} + \lambda n \mathbf{I}) \mathbf{c} & = \mathbf{Y}, \\
\mathbf{c} & = (\mathbf{K} + \lambda n \mathbf{I})^{-1} \mathbf{Y}.
\end{align}</math>
 
Substituting this expression for the coefficients in equation ({{EquationNote|3}}), we obtain the estimator stated previously in equation ({{EquationNote|1}}),
 
<math>
\hat{f}(\mathbf{x}') = \mathbf{k}^\top(\mathbf{K} + \lambda n \mathbf{I})^{-1} \mathbf{Y}.
</math>
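
On synthetic data (an assumed toy setup), one can verify numerically that the closed-form coefficients make the gradient of the rewritten functional vanish; the factor of 2 that the stationarity condition above absorbs is restored here.

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(0)
n = 20
x = rng.uniform(-3.0, 3.0, size=n)
Y = np.sin(x) + 0.1 * rng.standard_normal(n)
lam = 0.1

K = np.exp(-0.5 * (x[:, None] - x[None, :])**2)   # assumed RBF kernel matrix
c = np.linalg.solve(K + lam * n * np.eye(n), Y)   # closed-form coefficients

# Gradient of (1/n) ||Y - K c||^2 + lam * c^T K c with respect to c:
grad = -(2.0 / n) * K @ (Y - K @ c) + 2.0 * lam * K @ c
print(np.allclose(grad, 0.0))                     # True: c is the minimizer
</syntaxhighlight>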
 
==A Bayesian Perspective==
 
The notion of a kernel plays a crucial role in Bayesian probability as the covariance function of a stochastic process called the ''[[Gaussian process]]''.
 
===A Review of Bayesian Probability===
 
As part of the Bayesian framework, the Gaussian process specifies the [[Prior probability|''prior distribution'']] that describes the prior beliefs about the properties of the function being modeled.  These beliefs are updated after taking into account observational data by means of a [[Likelihood function|''likelihood function'']] that relates the prior beliefs to the observations.  Taken together, the prior and likelihood lead to an updated distribution called the [[Posterior probability|''posterior distribution'']] that is customarily used for predicting test cases.
 
===The Gaussian Process===
 
A [[Gaussian process]] (GP) is a stochastic process in which any finite number of sampled random variables follow a joint [[Multivariate normal distribution|normal distribution]].<ref name=RasWil06 />  The mean vector and covariance matrix of the Gaussian distribution completely specify the GP.  GPs are usually used as prior distributions over functions, and as such the mean vector and covariance matrix can be viewed as functions, where the covariance function is also called the ''kernel'' of the GP.  Let a function <math>f</math> follow a Gaussian process with mean function <math>m</math> and kernel function <math>k</math>,
 
<math>
f \sim \mathcal{GP}(m,k).
</math>
 
In terms of the underlying Gaussian distribution, for any finite set <math>\mathbf{X} = \{\mathbf{x}_i\}_{i=1}^{n}</math>, if we let <math>f(\mathbf{X}) = [f(\mathbf{x}_1),\ldots,f(\mathbf{x}_n)]^\top</math>, then
 
<math>
f(\mathbf{X}) \sim \mathcal{N}(\mathbf{m},\mathbf{K}),
</math>
 
where <math>\mathbf{m} = m(\mathbf{X}) = [m(\mathbf{x}_1),\ldots,m(\mathbf{x}_n)]^\top</math> is the mean vector and <math>\mathbf{K} = k(\mathbf{X},\mathbf{X})</math> is the covariance matrix of the multivariate Gaussian distribution.
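
To make this concrete, the following sketch draws sample functions from a zero-mean GP prior with an RBF covariance (both assumed choices); the small jitter keeps the covariance matrix numerically positive-definite.

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(1)
x = np.linspace(-3.0, 3.0, 100)
m = np.zeros_like(x)                              # zero mean function
K = np.exp(-0.5 * (x[:, None] - x[None, :])**2)   # assumed RBF covariance
K += 1e-10 * np.eye(len(x))                       # jitter for positive-definiteness

samples = rng.multivariate_normal(m, K, size=3)   # three draws of f(X) ~ N(m, K)
</syntaxhighlight>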
 
===Derivation of the Estimator===
{{see|Minimum mean square error#Linear MMSE estimator for linear observation process}}
In a regression context, the likelihood function is usually assumed to be a Gaussian distribution and the observations to be independent and identically distributed (iid),
 
<math>
p(y|f,\mathbf{x},\sigma^2) = \mathcal{N}(f(\mathbf{x}),\sigma^2).
</math>
 
This assumption corresponds to the observations being corrupted with zero-mean Gaussian noise with variance <math>\sigma^2</math>. The iid assumption makes it possible to factorize the likelihood function over the data points given the set of inputs <math>\mathbf{X}</math> and the variance of the noise <math>\sigma^2</math>, and thus the posterior distribution can be computed analytically. For a test input vector <math>\mathbf{x}'</math>, given the training data <math>S = \{\mathbf{X},\mathbf{Y}\}</math>, the posterior distribution is given by
 
<math>
p(f(\mathbf{x}')|S,\mathbf{x}',\boldsymbol{\phi}) = \mathcal{N}(m(\mathbf{x}'),\sigma^2(\mathbf{x}')),
</math>
 
where <math>\boldsymbol{\phi}</math> denotes the set of parameters which include the variance of the noise <math>\sigma^2</math> and any parameters from the covariance function <math>k</math> and where
 
<math>\begin{align}
m(\mathbf{x}') & = \mathbf{k}^\top (\mathbf{K} + \sigma^2 \mathbf{I})^{-1} \mathbf{Y}, \\
\sigma^2(\mathbf{x}') & = k(\mathbf{x}',\mathbf{x}') - \mathbf{k}^\top (\mathbf{K} + \sigma^2 \mathbf{I})^{-1} \mathbf{k}.
\end{align}</math>
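
The formulas above transcribe directly into code; the function signature below is an illustrative assumption.

<syntaxhighlight lang="python">
import numpy as np

def gp_posterior(K, k_star, k_star_star_diag, Y, sigma2):
    """Posterior mean m(x') and variance sigma^2(x') at m test points.

    K                -- n x n kernel matrix on the training inputs
    k_star           -- n x m matrix with columns k(x_i, x')
    k_star_star_diag -- length-m vector of k(x', x')
    """
    A = K + sigma2 * np.eye(len(Y))
    mean = k_star.T @ np.linalg.solve(A, Y)
    var = k_star_star_diag - np.einsum('ij,ij->j', k_star,
                                       np.linalg.solve(A, k_star))
    return mean, var
</syntaxhighlight>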
 
==The Connection Between Regularization and Bayes==
 
A connection between regularization theory and Bayesian theory can only be achieved in the case of a ''finite-dimensional RKHS''. Under this assumption, regularization theory and Bayesian theory are connected through Gaussian process prediction.<ref name=Wah90>{{cite book|last=Wahba|first=Grace|title=Spline models for observational data|year=1990|publisher=SIAM}}</ref><ref name=RasWil06>{{cite book|last=Rasmussen|first=Carl Edward|title=Gaussian Processes for Machine Learning|year=2006|publisher=The MIT Press|isbn=0-262-18253-X|url=http://www.gaussianprocess.org/gpml/|author2=Williams, Christopher K. I.}}</ref>
 
In the finite-dimensional case, every RKHS can be described in terms of a feature map <math>\Phi : \mathcal{X} \rightarrow \mathbb{R}^p</math> such that<ref name=Vap98 />
 
<math>
k(\mathbf{x},\mathbf{x}') = \sum_{i=1}^p \Phi^i(\mathbf{x})\Phi^i(\mathbf{x}').
</math>
 
Functions in the RKHS with kernel <math>k</math> can then be written as
 
<math>
f_{\mathbf{w}}(\mathbf{x}) = \sum_{i=1}^p \mathbf{w}^i \Phi^i(\mathbf{x}) = \langle \mathbf{w},\Phi(\mathbf{x}) \rangle,
</math>
 
and we also have that
 
<math>
\|f_{\mathbf{w}} \|_k = \|\mathbf{w}\|.
</math>
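
For a hypothetical explicit feature map <math>\Phi(x) = (1, x, x^2)</math>, chosen purely for illustration, these identities can be checked directly:

<syntaxhighlight lang="python">
import numpy as np

Phi = lambda x: np.array([1.0, x, x**2])   # hypothetical feature map, p = 3

def k(x, xp):
    return Phi(x) @ Phi(xp)                # k(x, x') = sum_i Phi^i(x) Phi^i(x')

w = np.array([0.5, -1.0, 2.0])
f_w = lambda x: w @ Phi(x)                 # f_w(x) = <w, Phi(x)>

# The RKHS norm of f_w equals the Euclidean norm of w: ||f_w||_k = ||w||.
print(np.linalg.norm(w))
</syntaxhighlight>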


We can now build a Gaussian process by assuming <math> \mathbf{w} = [w^1,\ldots,w^p]^\top </math> to be distributed according to a multivariate Gaussian distribution with zero mean and identity covariance matrix,

<math>
\mathbf{w} \sim \mathcal{N}(0,\mathbf{I}) \propto \exp(-\|\mathbf{w}\|^2).
</math>
 
If we assume a Gaussian likelihood, we have
 
<math>
P(\mathbf{Y}|\mathbf{X},f) = \mathcal{N}(f(\mathbf{X}),\sigma^2 \mathbf{I}) \propto \exp\left(-\frac{1}{\sigma^2} \| f_{\mathbf{w}}(\mathbf{X}) - \mathbf{Y} \|^2\right),
</math>
 
where <math> f_{\mathbf{w}}(\mathbf{X}) = (\langle\mathbf{w},\Phi(\mathbf{x}_1)\rangle,\ldots,\langle\mathbf{w},\Phi(\mathbf{x}_n)\rangle) </math>. The resulting posterior distribution is then given by
 
<math>
P(f|\mathbf{X},\mathbf{Y}) \propto \exp\left(-\frac{1}{\sigma^2} \|f_{\mathbf{w}}(\mathbf{X}) - \mathbf{Y}\|^2 - \|\mathbf{w}\|^2\right).
</math>
 
We can see that a ''maximum a posteriori (MAP)'' estimate is equivalent to the minimization problem defining [[Tikhonov regularization]], where in the Bayesian case the regularization parameter is related to the noise variance.
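
The equivalence can be checked numerically: with <math>\lambda = \sigma^2/n</math> we have <math>\mathbf{K} + \lambda n \mathbf{I} = \mathbf{K} + \sigma^2 \mathbf{I}</math>, so the estimator ({{EquationNote|1}}) and the GP posterior mean coincide. The toy data and kernel below are assumed for illustration.

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(2)
n = 30
x = rng.uniform(-3.0, 3.0, size=n)
Y = np.sin(x) + 0.1 * rng.standard_normal(n)
x_new = np.linspace(-3.0, 3.0, 50)

def rbf(a, b):
    return np.exp(-0.5 * (a[:, None] - b[None, :])**2)

sigma2 = 0.05
lam = sigma2 / n                     # the correspondence lambda * n = sigma^2

K, k_star = rbf(x, x), rbf(x, x_new)
f_reg = k_star.T @ np.linalg.solve(K + lam * n * np.eye(n), Y)   # equation (1)
f_gp  = k_star.T @ np.linalg.solve(K + sigma2 * np.eye(n), Y)    # GP posterior mean
print(np.allclose(f_reg, f_gp))      # True: the two estimators coincide
</syntaxhighlight>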
 
From a philosophical perspective, the loss function in a regularization setting plays a different role than the likelihood function in the Bayesian setting. Whereas the loss function measures the error that is incurred when predicting <math>f(\mathbf{x})</math> in place of <math>y</math>, the likelihood function measures how likely the observations are under the model assumed to be true in the generative process. From a mathematical perspective, however, the formulations of the regularization and Bayesian frameworks give the loss function and the likelihood function the same mathematical role of promoting the inference of functions <math>f</math> that approximate the labels <math>y</math> as closely as possible.
 
==References==
{{Reflist}}


[[Category:Bayesian statistics]]
[[Category:Machine learning]]
[[Category:Probability theory]]
