Bellman equation
A '''Bellman equation''', also known as a '''dynamic programming equation''', named after its discoverer, [[Richard Bellman]], is a [[necessary condition]] for optimality associated with the mathematical [[Optimization (mathematics)|optimization]] method known as [[dynamic programming]]. It writes the value of a decision problem at a certain point in time in terms of the payoff from some initial choices and the value of the remaining decision problem that results from those initial choices. This breaks a dynamic optimization problem into simpler subproblems, as Bellman's '''Principle of Optimality''' prescribes.
The Bellman equation was first applied to engineering [[control theory]] and to other topics in applied mathematics, and subsequently became an important tool in [[economic theory]].
Almost any problem which can be solved using [[optimal control theory]] can also be solved by analyzing the appropriate Bellman equation. However, the term 'Bellman equation' usually refers to the dynamic programming equation associated with [[discrete-time]] optimization problems. In continuous-time optimization problems, the analogous equation is a [[partial differential equation]] which is usually called the [[Hamilton–Jacobi–Bellman equation]].
== Analytical concepts in dynamic programming ==
To understand the Bellman equation, several underlying concepts must be understood. First, any optimization problem has some objective – minimizing travel time, minimizing cost, maximizing profits, maximizing utility, et cetera. The mathematical function that describes this objective is called the '''[[Optimization (mathematics)#Optimization problems|objective function]]'''.
Dynamic programming breaks a multi-period planning problem into simpler steps at different points in time. Therefore, it requires keeping track of how the decision situation is evolving over time. The information about the current situation which is needed to make a correct decision is called the '''state''' (See Bellman, 1957, Ch. III.2).<ref name=BellmanDP>Bellman, R.E. 1957. ''Dynamic Programming''. Princeton University Press, Princeton, NJ. Republished 2003: Dover, ISBN 0-486-42809-5.</ref><ref name=dreyfus>S. Dreyfus (2002), [http://www.wu-wien.ac.at/usr/h99c/h9951826/bellman_dynprog.pdf 'Richard Bellman on the birth of dynamic programming'] ''Operations Research'' 50 (1), pp. 48–51.</ref> For example, to decide how much to consume and spend at each point in time, people would need to know (among other things) their initial wealth. Therefore, wealth would be one of their '''[[state variable]]s''', but there would probably be others.
The variables chosen at any given point in time are often called the '''[[control variable]]s'''. For example, given their current wealth, people might decide how much to consume now. Choosing the control variables now may be equivalent to choosing the next state; more generally, the next state is affected by other factors in addition to the current control. For example, in the simplest case, today's wealth (the state) and consumption (the control) might exactly determine tomorrow's wealth (the new state), though typically other factors will affect tomorrow's wealth too.
The dynamic programming approach describes the optimal plan by finding a rule that tells what the controls should be, given any possible value of the state. For example, if consumption (''c'') depends ''only'' on wealth (''W''), we would seek a rule <math>c(W)</math> that gives consumption as a function of wealth. Such a rule, determining the controls as a function of the states, is called a '''policy function''' (See Bellman, 1957, Ch. III.2).<ref name=BellmanDP />
Finally, by definition, the optimal decision rule is the one that achieves the best possible value of the objective. For example, if someone chooses consumption, given wealth, in order to maximize happiness (assuming happiness ''H'' can be represented by a mathematical function, such as a [[utility]] function), then each level of wealth will be associated with some highest possible level of happiness, <math>H(W)</math>. The best possible value of the objective, written as a function of the state, is called the '''value function'''.
[[Richard Bellman]] showed that a dynamic [[Optimization (mathematics)|optimization]] problem in [[discrete time]] can be stated in a [[recursion|recursive]], step-by-step form by writing down the relationship between the value function in one period and the value function in the next period. The relationship between these two value functions is called the '''Bellman equation'''.
== Deriving the Bellman equation ==
=== A dynamic decision problem ===
Let the state at time <math>t</math> be <math>x_t</math>. For a decision that begins at time 0, we take as given the initial state <math>x_0</math>. At any time, the set of possible actions depends on the current state; we can write this as <math> a_{t} \in \Gamma (x_t)</math>, where the action <math>a_t</math> represents one or more control variables. We also assume that the state changes from <math>x</math> to a new state <math>T(x,a)</math> when action <math>a</math> is taken, and that the current payoff from taking action <math>a</math> in state <math>x</math> is <math>F(x,a)</math>. Finally, we assume impatience, represented by a [[discount factor]] <math>0<\beta<1</math>.
Under these assumptions, an infinite-horizon decision problem takes the following form:
:<math> V(x_0) \; = \; \max_{ \left \{ a_{t} \right \}_{t=0}^{\infty} } \sum_{t=0}^{\infty} \beta^t F(x_t,a_{t}), </math>
subject to the constraints
:<math> a_{t} \in \Gamma (x_t), \; x_{t+1}=T(x_t,a_t), \; \forall t = 0, 1, 2, \dots </math>
Notice that we have defined the notation <math>V(x_0)</math> to represent the optimal value that can be obtained by maximizing this objective function subject to the assumed constraints. This function is the ''value function''. It is a function of the initial state variable <math>x_0 </math>, since the best value obtainable depends on the initial situation.
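For a concrete instance of these objects, consider the textbook ''cake-eating'' problem (used here purely as an illustration): the state <math>x_t</math> is the amount of cake remaining at time <math>t</math>, the action <math>a_t \in \Gamma(x_t) = (0, x_t]</math> is the amount eaten, the transition is <math>T(x,a) = x - a</math>, and the payoff is <math>F(x,a) = \ln(a)</math>. The sequence problem above then specializes to
:<math> V(x_0) = \max_{ \left \{ a_{t} \right \}_{t=0}^{\infty} } \sum_{t=0}^{\infty} \beta^t \ln(a_t), \qquad \text{subject to } x_{t+1} = x_t - a_t, \; 0 < a_t \leq x_t. </math>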
=== Bellman's Principle of Optimality ===
The dynamic programming method breaks this decision problem into smaller subproblems. Richard Bellman's '''Principle of Optimality''' describes how to do this:<blockquote>Principle of Optimality: An optimal policy has the property that whatever the initial state and initial decision are, the remaining decisions must constitute an optimal policy with regard to the state resulting from the first decision. (See Bellman, 1957, Chap. III.3.)<ref name=BellmanDP /><ref name=dreyfus /><ref name=BellmanTheory>R Bellman, ''On the Theory of Dynamic Programming'', Proceedings of the National Academy of Sciences, 1952</ref></blockquote>
In computer science, a problem that can be broken apart like this is said to have [[optimal substructure]]. In the context of dynamic [[game theory]], this principle is analogous to the concept of [[subgame perfect equilibrium]], although what constitutes an optimal policy in this case is conditioned on the decision-maker's opponents choosing similarly optimal policies from their points of view.
As suggested by the Principle of Optimality, we will consider the first decision separately, setting aside all future decisions (we will start afresh from time 1 with the new state <math>x_1 </math>). Collecting the future decisions in brackets on the right, the previous problem is equivalent to:
:<math> \max_{ a_0 } \left \{ F(x_0,a_0) + \beta \left[ \max_{ \left \{ a_{t} \right \}_{t=1}^{\infty} } \sum_{t=1}^{\infty} \beta^{t-1} F(x_t,a_{t}) : a_{t} \in \Gamma (x_t), \; x_{t+1}=T(x_t,a_t), \; \forall t \geq 1 \right] \right \} </math>
subject to the constraints
:<math> a_0 \in \Gamma (x_0), \; x_1=T(x_0,a_0). </math>
Here we are choosing <math>a_0</math>, knowing that our choice will cause the time 1 state to be <math>x_1=T(x_0,a_0)</math>. That new state will then affect the decision problem from time 1 on. The whole future decision problem appears inside the square brackets on the right.
=== The Bellman equation ===
So far it seems we have only made the problem uglier by separating today's decision from future decisions. But we can simplify by noticing that what is inside the square brackets on the right is ''the value'' of the time 1 decision problem, starting from state <math>x_1=T(x_0,a_0)</math>.
Therefore we can rewrite the problem as a [[Recursion|recursive]] definition of the value function:
:<math>V(x_0) = \max_{ a_0 } \{ F(x_0,a_0) + \beta V(x_1) \} </math>, subject to the constraints: <math> a_0 \in \Gamma (x_0), \; x_1=T(x_0,a_0). </math>
This is the Bellman equation. It can be simplified even further if we drop time subscripts and plug in the value of the next state:
:<math>V(x) = \max_{a \in \Gamma (x) } \{ F(x,a) + \beta V(T(x,a)) \}.</math>
The Bellman equation is classified as a [[functional equation]], because solving it means finding the unknown function ''V'', which is the ''value function''. Recall that the value function describes the best possible value of the objective, as a function of the state ''x''. By calculating the value function, we will also find the function ''a''(''x'') that describes the optimal action as a function of the state; this is called the ''policy function''.
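In practice, the functional equation is often solved numerically by ''value iteration'': start from a guess for ''V'', apply the right-hand side of the Bellman equation to obtain an updated guess, and repeat until the guess stops changing. The following is a minimal sketch, assuming the hypothetical cake-eating primitives introduced above (<math>F(x,a)=\ln(a)</math>, <math>T(x,a)=x-a</math>) and a discretized state grid; the names and numbers are illustrative only.
<syntaxhighlight lang="python">
import numpy as np

# Primitives of the hypothetical cake-eating example: F(x,a) = ln(a), T(x,a) = x - a.
beta = 0.95
grid = np.linspace(1e-3, 1.0, 200)        # discretized state space (amounts of cake)

def bellman_update(V):
    """Apply the right-hand side of V(x) = max_a { F(x,a) + beta * V(T(x,a)) } on the grid."""
    V_new = np.empty_like(V)
    policy = np.empty_like(V)
    for i, x in enumerate(grid):
        a = grid[grid <= x]               # feasible actions: eat an amount a with 0 < a <= x
        values = np.log(a) + beta * np.interp(x - a, grid, V)
        j = np.argmax(values)
        V_new[i], policy[i] = values[j], a[j]
    return V_new, policy

V = np.zeros_like(grid)                   # initial guess for the value function
for _ in range(1000):
    V_next, policy = bellman_update(V)
    converged = np.max(np.abs(V_next - V)) < 1e-8
    V = V_next
    if converged:
        break
</syntaxhighlight>
After convergence, the array <code>policy</code> approximates the policy function <math>a(x)</math> on the grid.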
=== The Bellman equation in a stochastic problem ===
{{See also|Markov Decision Process}}
In the deterministic setting, techniques other than dynamic programming can be used to tackle the above [[optimal control]] problem. When the environment is stochastic, however, the agent must take the uncertainty about future states into account, and the Bellman-equation approach is often the most convenient one for such problems.
For a specific example from economics, consider an infinitely lived consumer with initial wealth endowment ''a<sub>0</sub>'' at period 0. He has an instantaneous [[utility function]] ''u''(''c''), where ''c'' denotes consumption, and discounts the next period's utility by a factor 0 < ''β'' < 1. Assume that what is not consumed in period ''t'' carries over to the next period with interest rate ''r''. Then the consumer's utility maximization problem is to choose a consumption plan {''c<sub>t</sub>''} that solves
:<math>\max \sum_{t=0}^{\infty} \beta^t u (c_t)</math>
subject to
:<math>a_{t+1} = (1 + r) (a_t - c_t), \; c_t \geq 0,</math>
and
:<math>\lim_{t \rightarrow \infty} a_t \geq 0.</math>
The first constraint is the capital accumulation/law of motion specified by the problem, while the second constraint is a [[transversality condition]] requiring that the consumer does not carry debt at the end of his life. The Bellman equation is
:<math>V(a) = \max_{ 0 \leq c \leq a } \{ u(c) + \beta V((1+r) (a - c)) \}.</math>
Alternatively, one can treat the sequence problem directly using, for example, the [[Hamiltonian]] equations.
Now, if the interest rate varies from period to period, the consumer is faced with a stochastic optimization problem. Let the interest rate ''r'' follow a [[Markov process]] with probability transition function ''Q''(''r'', ''dμ<sub>r</sub>''), where ''dμ<sub>r</sub>'' denotes the [[probability measure]] governing the distribution of next period's interest rate if the current interest rate is ''r''. The timing of the model is that the consumer decides his current-period consumption after the current-period interest rate is announced.
Rather than simply choosing a single sequence {''c<sub>t</sub>''}, the consumer now must choose a sequence {''c<sub>t</sub>''} for each possible realization of {''r<sub>t</sub>''} in such a way that his lifetime expected utility is maximized:
:<math>\max E \left( \sum_{t=0}^{\infty} \beta^t u (c_t) \right).</math>
The expectation ''E'' is taken with respect to the appropriate probability measure given by ''Q'' on the sequences of ''r'''s. Because ''r'' is governed by a Markov process, dynamic programming simplifies the problem significantly. The Bellman equation is then simply
:<math>V(a, r) = \max_{ 0 \leq c \leq a } \{ u(c) + \beta \int V((1+r) (a - c), r') \, d\mu_r(r') \} .</math>
Under some reasonable assumptions, the resulting optimal policy function ''g''(''a'',''r'') is [[measurable]].
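A minimal numerical sketch of this stochastic Bellman equation, assuming logarithmic utility and a hypothetical two-state Markov chain for the interest rate (all numbers are illustrative placeholders, not part of the original example):
<syntaxhighlight lang="python">
import numpy as np

beta = 0.95
r_states = np.array([0.02, 0.06])          # hypothetical low/high interest-rate states
P = np.array([[0.9, 0.1],                  # P[i, j] = probability of moving from rate state i to state j
              [0.2, 0.8]])
a_grid = np.linspace(1e-3, 10.0, 300)      # discretized wealth grid

V = np.zeros((len(a_grid), len(r_states))) # V[i, j] approximates V(a_i, r_j)
for _ in range(2000):
    EV = V @ P.T                           # EV[i, j] = E[ V(a_i, r') | r = r_j ], the expected continuation value
    V_new = np.empty_like(V)
    for j, r in enumerate(r_states):
        for i, a in enumerate(a_grid):
            c = a_grid[a_grid <= a]        # feasible consumption levels, 0 < c <= a
            a_next = (1 + r) * (a - c)     # wealth carried into next period (np.interp clamps at the grid edge)
            values = np.log(c) + beta * np.interp(a_next, a_grid, EV[:, j])
            V_new[i, j] = values.max()
    converged = np.max(np.abs(V_new - V)) < 1e-8
    V = V_new
    if converged:
        break
</syntaxhighlight>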
For a general stochastic sequential optimization problem with Markovian shocks, in which the agent makes his decision after observing the current shock (''[[ex-post]]''), the Bellman equation takes a very similar form:
:<math>V(x, z) = \max_{c \in \Gamma(x,z)} F(x, c, z) + \beta \int V( T(x,c), z') \, d\mu_z(z'). </math>
==Solution methods==
*The [[method of undetermined coefficients]], also known as 'guess and verify', can be used to solve some infinite-horizon, [[Autonomous system (mathematics)|autonomous]] Bellman equations (a worked example follows this list).
*The Bellman equation can be solved by [[backwards induction]], either [[Closed-form expression|analytically]] in a few special cases, or [[numerical analysis|numerically]] on a computer. Numerical backwards induction is applicable to a wide variety of problems, but may be infeasible when there are many state variables, due to the [[curse of dimensionality]]. Approximate dynamic programming was introduced by [[Dimitri Bertsekas|D. P. Bertsekas]] and J. N. Tsitsiklis, who used [[artificial neural network]]s ([[multilayer perceptron]]s) to approximate the Bellman function.<ref name="NeuroDynProg">Bertsekas, D. P., Tsitsiklis, J. N., ''Neuro-dynamic programming''. Athena Scientific, 1996</ref> This mitigates the curse of dimensionality by replacing the memorization of the complete value function over the whole state space with the storage of a comparatively small set of neural-network parameters.
*By calculating the first-order conditions associated with the Bellman equation, and then using the [[envelope theorem]] to eliminate the derivatives of the value function, it is possible to obtain a system of [[difference equation]]s or [[differential equation]]s called the '[[Euler–Lagrange equation|Euler equation]]s'. Standard techniques for the solution of difference or differential equations can then be used to calculate the dynamics of the state variables and the control variables of the optimization problem.
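As a worked illustration of 'guess and verify', consider the deterministic consumption problem above with logarithmic utility, <math>u(c) = \ln c</math> (a standard textbook special case, used here only as an example). Guess that the value function has the form <math>V(a) = A + B \ln a</math> for constants <math>A</math> and <math>B</math>. Substituting the guess into the Bellman equation and taking the first-order condition in <math>c</math> gives
:<math> \frac{1}{c} = \frac{\beta B}{a - c} \quad\Longrightarrow\quad c = \frac{a}{1 + \beta B}. </math>
Matching the coefficient on <math>\ln a</math> on the two sides of the Bellman equation then requires <math>B = 1 + \beta B</math>, i.e. <math>B = 1/(1-\beta)</math>, so the optimal policy function is <math>c(a) = (1-\beta)\,a</math>; the constant <math>A</math> can be recovered from the remaining terms if needed.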
==Applications in economics==
The first known application of a Bellman equation in economics is due to Martin Beckmann and Richard Muth.<ref>Martin Beckmann and Richard Muth, 1954, "On the solution to the fundamental equation of inventory theory," ''Cowles Commission Discussion Paper'' 2116.</ref> Martin Beckmann also wrote extensively on consumption theory using the Bellman equation in 1959. His work influenced [[Edmund S. Phelps]], among others.
A celebrated economic application of a Bellman equation is Merton's seminal 1973 article on the [[ICAPM|intertemporal capital asset pricing model]].<ref>[[Robert C. Merton]], 1973, "An Intertemporal Capital Asset Pricing Model," ''Econometrica 41'': 867–887.</ref> (See also [[Merton's portfolio problem]].) The solution to Merton's theoretical model, one in which investors choose between income today and future income or capital gains, is a form of Bellman's equation. Because economic applications of dynamic programming usually result in a Bellman equation that is a [[difference equation]], economists refer to dynamic programming as a "recursive method", and a subfield of [[recursive economics]] is now recognized within economics.
Stokey, Lucas & Prescott describe stochastic and nonstochastic dynamic programming in considerable detail, and develop theorems for the existence of solutions to problems meeting certain conditions. They also describe many examples of modeling theoretical problems in economics using recursive methods.<ref>[[Nancy Stokey]], and [[Robert E. Lucas]], with [[Edward Prescott]], 1989. ''Recursive Methods in Economic Dynamics''. Harvard Univ. Press.</ref> This book led to dynamic programming being employed to solve a wide range of theoretical problems in economics, including optimal [[economic growth]], [[resource extraction]], [[principal–agent problem]]s, [[public finance]], business [[investment]], [[asset pricing]], [[factor of production|factor]] supply, and [[industrial organization]]. Ljungqvist & Sargent apply dynamic programming to study a variety of theoretical questions in [[monetary policy]], [[fiscal policy]], [[taxation]], [[economic growth]], [[search theory]], and [[labor economics]].<ref>[[Lars Ljungqvist]] & [[Thomas Sargent]], 2004. ''Recursive Macroeconomic Theory''. MIT Press.</ref> Dixit & Pindyck showed the value of the method for thinking about [[capital budgeting]].<ref>[[Avinash Dixit]] & Robert Pindyck, 1994. ''Investment Under Uncertainty''. Princeton Univ. Press.</ref> Anderson adapted the technique to business valuation, including privately held businesses.<ref>Anderson, Patrick L., Business Economics & Finance, CRC Press, 2004 (chapter 10), ISBN 1-58488-348-0; The Value of Private Businesses in the United States, ''Business Economics'' (2009) 44, 87–108. {{doi|10.1057/be.2009.4}}. ''Economics of Business Valuation'', Stanford University Press (2013); ISBN 9780804758307. [http://www.sup.org/book.cgi?id=11400 Stanford Press]</ref>
Using dynamic programming to solve concrete problems is complicated by informational difficulties, such as choosing the unobservable discount rate. There are also computational issues, the main one being the [[curse of dimensionality]] arising from the vast number of possible actions and potential state variables that must be considered before an optimal strategy can be selected. For an extensive discussion of computational issues, see Miranda & Fackler,<ref>Miranda, M., & Fackler, P., 2002. ''Applied Computational Economics and Finance''. MIT Press</ref> and Meyn 2007.<ref>S. P. Meyn, 2007. [http://decision.csl.uiuc.edu/~meyn/pages/CTCN/CTCN.html Control Techniques for Complex Networks], Cambridge University Press, 2007. Appendix contains abridged [http://decision.csl.uiuc.edu/~meyn/pages/book.html Meyn & Tweedie].</ref>
== Example ==
In a [[Markov decision process]] (MDP), a Bellman equation refers to a [[recursion]] for expected rewards. For example, the expected reward for being in a particular state ''s'' and following some fixed policy <math>\pi</math> satisfies the Bellman equation:
:<math> V^\pi(s)= R(s) + \gamma \sum_{s'} P(s'|s,\pi(s)) V^\pi(s').\ </math>
This equation describes the expected reward for taking the action prescribed by some policy <math>\pi</math>.
The equation for the optimal policy is referred to as the ''Bellman optimality equation'':
:<math> V^*(s)= R(s) + \max_a \gamma \sum_{s'} P(s'|s,a) V^*(s').\ </math>
It describes the reward for taking the action giving the highest expected return.
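The optimality equation can be solved by value iteration. Below is a minimal sketch for a hypothetical MDP with three states and two actions; the reward vector and transition probabilities are illustrative placeholders only.
<syntaxhighlight lang="python">
import numpy as np

gamma = 0.9

# Hypothetical reward R(s) and transition probabilities P[a, s, s2] = P(s2 | s, a).
R = np.array([0.0, 1.0, 10.0])
P = np.array([
    [[0.8, 0.2, 0.0], [0.1, 0.8, 0.1], [0.0, 0.2, 0.8]],   # action 0
    [[0.5, 0.5, 0.0], [0.0, 0.5, 0.5], [0.0, 0.0, 1.0]],   # action 1
])

V = np.zeros(len(R))
for _ in range(1000):
    Q = R + gamma * (P @ V)                # Q[a, s] = R(s) + gamma * sum_{s'} P(s'|s,a) V(s')
    V_new = Q.max(axis=0)                  # Bellman optimality: take the best action in each state
    if np.max(np.abs(V_new - V)) < 1e-10:
        V = V_new
        break
    V = V_new

optimal_policy = Q.argmax(axis=0)          # greedy policy with respect to the converged value function
</syntaxhighlight>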
==See also==
* [[Dynamic programming]]
* [[Hamilton–Jacobi–Bellman equation]]
* [[Markov decision process]]
* [[Optimal control theory]]
* [[Optimal substructure]]
* [[Recursive competitive equilibrium]]
* [[Bellman pseudospectral method]]
== References ==
{{Reflist}}
{{DEFAULTSORT:Bellman Equation}}
[[Category:Mathematical optimization]]
[[Category:Equations]]
[[Category:Dynamic programming]]
[[Category:Control theory]]