Temporal difference learning

'''Temporal difference (TD) learning''' is a prediction method. It has been mostly used for solving the [[reinforcement learning]] problem. "TD learning is a combination of [[Monte Carlo method|Monte Carlo]] ideas and [[dynamic programming]] (DP) ideas."<ref name="RSutton-1998">{{cite book |author=Richard Sutton and Andrew Barto |title=Reinforcement Learning |publisher=MIT Press |year=1998 |url=http://www.cs.ualberta.ca/~sutton/book/the-book.html |isbn=0-585-02445-6}}</ref> TD resembles a [[Monte Carlo method]] because it learns by [[Sampling (statistics)|sampling]] the environment according to some ''policy''. TD is related to [[dynamic programming]] techniques because it approximates its current estimate based on previously learned estimates (a process known as [[Bootstrapping (machine learning)|bootstrapping]]). The TD learning algorithm is related to the temporal difference model of animal learning{{Verify source|date=April 2009}}.
 
As a prediction method, TD learning takes into account the fact that subsequent predictions are often correlated in some sense. In standard supervised predictive learning, one learns only from actually observed values: a prediction is made, and when the observation becomes available, the prediction is adjusted to better match it. As elucidated by Sutton,<ref name="RSutton-1988">{{cite journal |author=Richard Sutton |title=Learning to predict by the methods of temporal differences |journal=Machine Learning |volume=3 |issue=1 |pages=9–44 |year=1988 |doi=10.1007/BF00115009}} (A revised version is available on [http://www.cs.ualberta.ca/~sutton/publications.html Richard Sutton's publication page])</ref> the core idea of TD learning is that we adjust predictions to match other, more accurate, predictions about the future. This procedure is a form of bootstrapping, as illustrated with the following example:
 
: Suppose you wish to predict the weather for Saturday, and you have some model that predicts Saturday's weather, given the weather of each day in the week. In the standard case, you would wait until Saturday and then adjust all your models. However, when it is, for example, Friday, you should have a pretty good idea of what the weather would be on Saturday - and thus be able to change, say, Monday's model before Saturday arrives.<ref name="RSutton-1988"/>
 
Mathematically speaking, both in a standard and a TD approach, we would try to optimize some cost function, related to the error in our predictions of the expectation of some random variable, E[z]. However, while in the standard approach we in some sense assume E[z] = z (the actual observed value), in the TD approach we use a model. For the particular case of reinforcement learning, which is the major application of TD methods, z is the total return and E[z] is given by the [[Bellman equation]] of the return.
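For reference (this standard form is stated here for orientation and is not quoted from the cited sources), the Bellman equation for the expected return under a policy <math>\pi</math> can be written
:<math> V^{\pi}(s) = E_{\pi}\left[\, r_t + \gamma V^{\pi}(s_{t+1}) \mid s_t = s \,\right], </math>
so the TD approach replaces the observed return z, which is only available once an episode has finished, with the bootstrapped target <math> r_t + \gamma V(s_{t+1}) </math> built from the current estimate <math>V</math>.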
 
== TD algorithm in neuroscience ==
The TD [[algorithm]] has also received attention in the field of [[neuroscience]]. Researchers discovered that the firing rate of [[dopamine]] [[neurons]] in the [[ventral tegmental area]] (VTA) and [[substantia nigra]] (SNc) appears to mimic the error function in the algorithm.<ref name="WSchultz-1997">{{cite journal |author=Schultz, W, Dayan, P & Montague, PR. |year=1997 |title=A neural substrate of prediction and reward |journal=Science |volume=275 |issue=5306 |pages=1593–1599 |doi=10.1126/science.275.5306.1593 |pmid=9054347}}</ref> The error function reports the difference between the estimated reward at any given state or time step and the actual reward received. The larger the error function, the larger the difference between the expected and actual reward. When this error is paired with a stimulus that accurately reflects a future reward, it can be used to associate the stimulus with the future [[reward system|reward]].
 
[[Dopamine]] cells appear to behave in a similar manner. In one experiment, measurements of dopamine cells were made while training a monkey to associate a stimulus with the reward of juice.<ref name="WSchultz-1998">{{cite journal |author=Schultz, W. |year=1998 |title=Predictive reward signal of dopamine neurons |journal=J Neurophysiology |volume=80 |issue=1 |pages=1–27}}</ref> Initially the dopamine cells increased their firing rates when the monkey received juice, indicating a difference between the expected and actual reward. Over time, this increase in firing propagated back to the earliest reliable stimulus for the reward. Once the monkey was fully trained, there was no increase in firing rate upon presentation of the predicted reward. Conversely, the firing rate of the dopamine cells decreased below normal activation when the expected reward was not produced. This closely mimics how the error function in TD is used for [[reinforcement learning]].
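The qualitative pattern described above can be reproduced with a very small simulation. The following sketch is an illustrative toy model, not code from the cited studies; the trial length, learning rate and discount factor are arbitrary assumptions. It runs tabular TD learning on a fixed trial in which a stimulus is presented at the first step and a reward is delivered at the last step, and prints the TD error at each step: the error initially appears at the reward time, propagates back toward earlier steps with training, shrinks once the reward is well predicted, and turns negative when a predicted reward is omitted.

<syntaxhighlight lang="python">
# Toy model of the experiment described above: a fixed 5-step trial with a
# stimulus at step 0 and a juice reward at the final step. The printed TD
# errors play the role of the dopamine-like prediction-error signal.
# All parameter values are illustrative assumptions.

N_STEPS, ALPHA, GAMMA = 5, 0.3, 1.0
V = [0.0] * (N_STEPS + 1)          # value estimate for each step; terminal value is 0

def run_trial(reward_given=True):
    deltas = []
    for t in range(N_STEPS):
        r = 1.0 if (reward_given and t == N_STEPS - 1) else 0.0
        delta = r + GAMMA * V[t + 1] - V[t]     # TD error at step t
        V[t] += ALPHA * delta                   # move the estimate toward the target
        deltas.append(round(delta, 2))
    return deltas

for trial in range(60):
    deltas = run_trial()
    if trial in (0, 5, 59):
        print("trial", trial + 1, "TD errors per step:", deltas)

# After training, omit the reward: the error at the expected reward time is
# negative, analogous to dopamine firing dropping below baseline.
print("omitted reward:", run_trial(reward_given=False))
</syntaxhighlight>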
 
The relationship between the model and potential neurological function has produced research attempting to use TD to explain many aspects of behavioral research.<ref name="PDayan-2001">{{cite journal |author=Dayan, P. |year=2001 |title=Motivated reinforcement learning |journal=Advances in Neural Information Processing Systems |volume=14 |pages=11–18 |publisher=MIT Press |url=http://books.nips.cc/papers/files/nips14/CS01.pdf}}</ref> It has also been used to study conditions such as [[schizophrenia]] or the consequences of pharmacological manipulations of dopamine on learning.<ref name="ASmith-2006">{{cite journal |author=Smith, A., Li, M., Becker, S. and Kapur, S. |year=2006 |title=Dopamine, prediction error, and associative learning: a model-based account |journal=Network: Computation in Neural Systems |volume=17 |issue=1 |pages=61–84 |doi=10.1080/09548980500361624 |pmid=16613795}}</ref>
 
== Mathematical formulation ==
Let <math> r_t </math> be the reinforcement on time step ''t''. Let <math> \bar V_t </math> be the correct prediction, equal to the discounted sum of all future reinforcement. The discounting is done by powers of a factor <math> \gamma </math>, so that reinforcement at more distant time steps is less important.
:<math> \bar V_t = \sum_{i=0}^{\infty} \gamma^i r_{t+i} </math>
where <math> 0 \le \gamma < 1 </math>.
This formula can be expanded by separating out the first term:
:<math> \bar V_t = r_{t} + \sum_{i=1}^{\infty} \gamma^i r_{t+i} </math>
Re-indexing the sum so that ''i'' starts from 0 gives
:<math> \bar V_t = r_{t} + \sum_{i=0}^{\infty} \gamma^{i+1} r_{t+i+1} </math>
:<math> \bar V_t = r_{t} + \gamma \sum_{i=0}^{\infty} \gamma^{i} r_{t+i+1} </math>
:<math> \bar V_t = r_{t} + \gamma \bar V_{t+1} </math>
 
Thus, the reinforcement at each time step is the difference between the correct prediction at that step and the discounted correct prediction at the next time step:
:<math> r_{t} = \bar V_{t} - \gamma \bar V_{t+1} </math>
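This identity is what the one-step TD method exploits in practice. Stated here for completeness (it is the standard update from the reinforcement-learning literature rather than part of the derivation above), the current estimate <math> V_t </math> of <math> \bar V_t </math> is moved toward the bootstrapped target <math> r_t + \gamma V_{t+1} </math> by a step-size parameter <math> \alpha </math>:
:<math> V_{t} \leftarrow V_{t} + \alpha \left( r_{t} + \gamma V_{t+1} - V_{t} \right) </math>
The quantity in parentheses is the TD error, the same signal discussed in the neuroscience section above.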
 
'''TD-Lambda''' is a learning algorithm invented by [[Richard S. Sutton]] based on earlier work on temporal difference learning by [[Arthur Samuel]].<ref name="RSutton-1998"/> This algorithm was famously applied by [[Gerald Tesauro]] to create [[TD-Gammon]], a program that learned to play the game of [[backgammon]] at the level of expert human players.<ref name='CACM'>{{cite journal|title=Temporal Difference Learning and TD-Gammon|journal=Communications of the ACM|date=March 1995|first=Gerald|last=Tesauro|volume=38|issue=3|url=http://www.research.ibm.com/massive/tdl.html|accessdate=2010-02-08}}</ref> The lambda (<math>\lambda</math>) parameter refers to the trace decay parameter, with <math>0 \le \lambda \le 1</math>. Higher settings lead to longer-lasting traces; that is, a larger proportion of credit from a reward can be assigned to more distant states and actions when <math>\lambda</math> is higher, with <math>\lambda = 1</math> producing learning that parallels Monte Carlo RL algorithms.
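The following is a minimal sketch of tabular TD(<math>\lambda</math>) prediction with accumulating eligibility traces, intended only to illustrate the role of the trace-decay parameter; the episode, the state names and all parameter values are hypothetical and not taken from the sources above.

<syntaxhighlight lang="python">
# Minimal sketch of tabular TD(lambda) prediction with accumulating
# eligibility traces. Everything here (states, rewards, parameters) is an
# illustrative assumption, not an implementation from the cited sources.

from collections import defaultdict

def td_lambda_episode(episode, V, alpha=0.1, gamma=0.9, lam=0.8):
    """Update value estimates V in place from one episode of (state, reward, next_state) steps."""
    e = defaultdict(float)                      # eligibility trace per state
    for state, reward, next_state in episode:
        delta = reward + gamma * V[next_state] - V[state]   # TD error for this step
        e[state] += 1.0                         # accumulate trace for the visited state
        for s in e:                             # credit every recently visited state
            V[s] += alpha * delta * e[s]
            e[s] *= gamma * lam                 # traces decay by gamma * lambda
    return V

# Hypothetical three-state episode ending in a terminal state "T" (value 0),
# with a single reward of 1 on the final transition.
V = defaultdict(float)
episode = [("A", 0.0, "B"), ("B", 0.0, "C"), ("C", 1.0, "T")]
for _ in range(100):
    td_lambda_episode(episode, V)
print({s: round(v, 2) for s, v in V.items()})
</syntaxhighlight>

With <math>\lambda</math> close to 0 only the most recently visited state receives an appreciable update on each step, while <math>\lambda = 1</math> spreads the full credit back along the episode, which is what makes its behaviour parallel a Monte Carlo method.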
 
== See also ==
* [[Reinforcement learning]]
* [[Q-learning]]
* [[SARSA]]
* [[Rescorla-Wagner model]]
* [[PVLV]]
 
==Notes==
<references/>
 
==Bibliography==
* {{cite journal |author=Sutton, R.S., Barto A.G. |year=1990 |title=Time Derivative Models of Pavlovian Reinforcement |journal=Learning and Computational Neuroscience: Foundations of Adaptive Networks |pages=497–537 |url=http://www.cs.ualberta.ca/~sutton/papers/sutton-barto-90.pdf}}
 
* {{cite journal |author=Gerald Tesauro |title=Temporal Difference Learning and TD-Gammon |journal=Communications of the ACM |date=March 1995 |volume=38 |issue=3 |url=http://www.research.ibm.com/massive/tdl.html}}
 
* Imran Ghory. [http://www.cs.bris.ac.uk/Publications/Papers/2000100.pdf Reinforcement Learning in Board Games].
 
* S. P. Meyn. [https://netfiles.uiuc.edu/meyn/www/spm_files/CTCN/CTCN.html Control Techniques for Complex Networks]. Cambridge University Press, 2007. See final chapter, and appendix with abridged [https://netfiles.uiuc.edu/meyn/www/spm_files/book.html Meyn & Tweedie].
 
==External links==
* [http://scholarpedia.org/article/Temporal_Difference_Learning Scholarpedia Temporal difference Learning]
* [http://webdocs.cs.ualberta.ca/~sutton/book/11/node2.html TD-Gammon]
* [http://rlai.cs.ualberta.ca/TDNets/index.html TD-Networks Research Group]
* [http://pitoko.net/tdgravity Connect Four TDGravity Applet] (+ mobile phone version) - self-learned using TD-Leaf method (combination of TD-Lambda with shallow tree search)
 
{{DEFAULTSORT:Temporal Difference Learning}}
[[Category:Computational neuroscience]]
[[Category:Machine learning algorithms]]
