{{Sys rating|class=start|importance=mid|field=Control theory}}

Um, I don't think the Hamilton–Jacobi–Bellman equation is the Hamilton–Jacobi equation any more than, say, Shannon information is thermodynamic entropy. [[User:Phys|Phys]] 02:57, 15 Aug 2004 (UTC)

Phys is right. There is some mixing together here of Hamilton–Jacobi–Bellman and Hamilton–Jacobi, of optimal control and physics. The result is confusing. I will rewrite from the point of view of optimal control only. Someone else can add the relation to physics and to the pre-Bellman work. [[User:Encyclops|Encyclops]] July 2005

The historical reason for the name is that Bellman got the idea from [[Carathéodory]]'s book on the calculus of variations, which used Hamilton–Jacobi theory. [[User:JFB80|JFB80]] ([[User talk:JFB80|talk]]) 21:23, 13 November 2010 (UTC)
<br />
== Bracket notation ==

Is there a reason for using the notation <math>\langle a,b \rangle</math> to denote the inner product here? I'd prefer ordinary matrix notation: <math>a^Tb</math>. The latter is less confusing, since it can't be mistaken for other variants of the scalar product, such as <math>\int a^\prime b \, \mathrm{d} x</math>. --[[User:PeR|PeR]] 12:08, 14 June 2006 (UTC)

<math>a^Tb</math> is very clear. But when a and b are somewhat messy expressions it becomes less readable. In our case we would have <math>\left(\frac{\partial}{\partial x}V(x,t)\right)^T F(x, u)</math>? I don't know if I like it or not. [[User:Encyclops|Encyclops]] 00:23, 15 June 2006 (UTC)

:I think the notation used should really be <math>\nabla</math> and <math>\cdot</math>. The notation <math>\frac{\partial}{\partial x}</math>, where x is a vector, is not particularly intuitive. - ([[User:Wolfkeeper|User]]) '''Wolfkeeper''' ([[User_talk:Wolfkeeper|Talk]]) 18:28, 20 May 2009 (UTC)

:Yes, <math>\nabla</math> or <math>\nabla_x</math> seems good to me, FWIW. And a dot for the inner product, I like also. [[User:Encyclops|Encyclops]] ([[User talk:Encyclops|talk]]) 19:08, 20 May 2009 (UTC)
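:For side-by-side comparison, here is the gradient term of the HJB equation under each of the notations proposed above (a sketch only; <math>F</math> and <math>V</math> as used in this discussion — all three denote the same scalar when x is a column vector):

```latex
% Bracket (inner-product) notation, as in the current article:
\left\langle \frac{\partial V}{\partial x}(x,t),\; F(x,u) \right\rangle
% Matrix/transpose notation:
\left( \frac{\partial V}{\partial x}(x,t) \right)^{T} F(x,u)
% Gradient/dot notation:
\nabla_x V(x,t) \cdot F(x,u)
```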
<br />
== Sufficient condition? ==
The current article claims that the HJB equation is a sufficient condition. That sounds wrong to me, because first of all the equation ''itself'' is not a sufficient condition: I assume what is meant is that "if V solves HJB, this suffices to conclude that it optimizes the objective". But is this true in general? I know that in discrete-time, infinite-horizon cases, a solution of the Bellman equation only serves to identify a candidate solution for the original sequence problem; that is, solving the Bellman equation is necessary but not sufficient for optimality. (See Stokey–Lucas–Prescott, ''Recursive Methods in Economic Dynamics''. Theorem 4.3 shows that for an infinite-horizon problem, satisfying the Bellman equation ''and'' an appropriate 'transversality condition' suffices for optimality; but the Bellman equation ''alone'' is not sufficient.)

Is the sufficiency claim in this article based on the fact that the example given has a finite horizon T? If so, this should be clarified, and it would be helpful to add more general cases too.
--[[User:Rinconsoleao|Rinconsoleao]] ([[User talk:Rinconsoleao|talk]]) 08:07, 30 May 2008 (UTC)

:The continuous-time/continuous-state case we are looking at here is more complex than the discrete-time case you mention; there are some delicate technical issues that do not arise in discrete-time control. A number of "verification theorems" have been proven, using various assumptions. The simplest theorems, one of which goes back to Bellman, say that if a control satisfies HJB and the terminal condition, then that control is optimal. HJB => optimality. In this sense HJB is a sufficient condition. However, there could exist optimal solutions that are not smooth (not continuous or not differentiable) and do not satisfy HJB, but are nevertheless optimal. There are also other verification theorems that establish HJB as necessary and sufficient, but those require additional assumptions, so they are more restrictive. We also have to ask what kind of "solutions" of HJB we are talking about: the "classical" PDE solutions that Bellman used, or the modern [[viscosity solution]]s. Frankly, my knowledge of this area is not sufficient ;-) to give an overview of all these theorems. [[User:Encyclops|Encyclops]] ([[User talk:Encyclops|talk]]) 23:49, 30 May 2008 (UTC)
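:For concreteness, here is a rough statement of the simplest finite-horizon verification theorem mentioned above. This is a loose paraphrase, not a quotation from any one reference: the running cost <math>C</math>, terminal cost <math>D</math>, and cost-to-go <math>J(x,t;u)</math> are assumed notation, and the precise hypotheses vary from book to book.

```latex
% Suppose W(x,t) is continuously differentiable and satisfies the HJB equation
-\frac{\partial W}{\partial t}(x,t)
   = \min_{u}\left\{ C(x,u) + \nabla_x W(x,t)\cdot F(x,u) \right\},
\qquad W(x,T) = D(x).
% Then W(x,t) \le J(x,t;u) for every admissible control u, and any feedback
% u^*(x,t) attaining the minimum above is optimal, with W = V.
% Note the direction of the implication: smoothness of W is *hypothesised*,
% so this gives HJB => optimality; it does not say that every optimal value
% function solves HJB in the classical sense.
```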
<br />
== Sufficient condition? A confirmation ==
I do agree with this remark. For me, HJB is a necessary condition, i.e. an optimal control should necessarily satisfy HJB. I refer to Oksendal (''Stochastic Differential Equations'', Theorem 11.2.1).

The important question is whether finding a solution of the HJB equation is sufficient. No. Once found, the solution of the HJB PDE has to satisfy some criteria (a verification theorem, Oksendal 11.2.2).

HJB is necessary but not sufficient.

[[Special:Contributions/130.104.59.97|130.104.59.97]] ([[User talk:130.104.59.97|talk]]) 09:31, 3 December 2009 (UTC) Devil may cry.

I am quite confused by this article. In my references the HJB equation is a sufficient, not necessary, condition. In the book of Bertsekas, ''Dynamic Programming and Optimal Control'' (Athena Scientific), on p. 93 it is stated that the theorem about HJB is a sufficient condition. And in all my courses on optimal control, every professor always remarks on the difference between the Pontryagin minimum principle (necessary) and HJB (sufficient). Checking the book of Oksendal, I saw that the formulation presented is for stochastic processes, from a stochastic/mathematical point of view. Now, I do not have the knowledge to understand where the discrepancy lies; however, I am quite sure that the HJB equation for the optimal control problem, as formulated in this article and proved in Bertsekas, is a sufficient condition, not a necessary one. Could someone investigate this more deeply? [[User:Pivs|Pivs]] ([[User talk:Pivs|talk]]) 20:43, 4 May 2012 (UTC)
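:The sufficiency direction can at least be illustrated concretely. The sketch below is a hypothetical scalar linear-quadratic example (constants chosen arbitrarily, not taken from the article): it builds a candidate value function <math>V(x) = p x^2</math> from the algebraic Riccati equation and then checks numerically that it satisfies the stationary HJB equation, in the spirit of the verification argument discussed above.

```python
# Hypothetical scalar LQR check: dynamics dx/dt = a*x + b*u,
# cost = integral of (q x^2 + r u^2) dt over an infinite horizon.
# The candidate value function V(x) = p x^2 solves the stationary HJB
#   0 = min_u [ q x^2 + r u^2 + V'(x) (a x + b u) ]
# iff p solves the algebraic Riccati equation (b^2/r) p^2 - 2 a p - q = 0.
import math

a, b, q, r = -0.5, 1.0, 2.0, 1.0   # illustrative constants (assumptions)

# Positive root of the algebraic Riccati equation
p = (a + math.sqrt(a * a + q * b * b / r)) * r / (b * b)

def hjb_rhs(x, u):
    """Right-hand side of the HJB minimisation at state x, control u."""
    return q * x * x + r * u * u + 2 * p * x * (a * x + b * u)

def u_star(x):
    """Minimiser of hjb_rhs over u (quadratic in u): u* = -(b p / r) x."""
    return -(b * p / r) * x

# Verification: at the minimising control the HJB residual vanishes,
# and any perturbed control gives a strictly larger right-hand side.
x = 1.7
print(abs(hjb_rhs(x, u_star(x))) < 1e-9)            # True: HJB holds at u*
print(hjb_rhs(x, u_star(x) + 0.3) > hjb_rhs(x, u_star(x)))  # True: u* attains the min
```

:Of course this only exercises the "HJB solution => optimal candidate" direction for a smooth quadratic value function; it says nothing about the non-smooth cases where the necessity/sufficiency distinction actually bites.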
<br />
== Multiply by dt? ==

I wonder if the last two terms (before the big O) of the last equation should not be multiplied by dt. <span style="font-size: smaller;" class="autosigned">—Preceding [[Wikipedia:Signatures|unsigned]] comment added by [[Special:Contributions/61.26.5.133|61.26.5.133]] ([[User talk:61.26.5.133|talk]]) 03:18, 13 March 2009 (UTC)</span><!-- Template:UnsignedIP --> <!--Autosigned by SineBot-->

:I think you may be right. Any other opinions? [[User:Encyclops|Encyclops]] ([[User talk:Encyclops|talk]]) 01:42, 16 March 2009 (UTC)

:Agreed. Done. --[[User:Rinconsoleao|Rinconsoleao]] ([[User talk:Rinconsoleao|talk]]) 09:44, 8 June 2009 (UTC)
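:For the record, the expansion at issue can be sketched as follows (a loose sketch; the running cost <math>C</math> and dynamics <math>F</math> are assumed to match the article's notation):

```latex
% Dynamic programming over a short interval dt:
V(x(t),t) = \min_{u}\Big\{ C(x(t),u(t))\,dt + V\big(x(t+dt),\,t+dt\big) \Big\}
% Taylor expansion of the last term:
V\big(x(t+dt),\,t+dt\big)
  = V(x(t),t)
  + \frac{\partial V}{\partial t}(x(t),t)\,dt
  + \nabla_x V(x(t),t)\cdot\dot{x}(t)\,dt
  + o(dt)
% Both first-order terms carry a factor of dt, consistent with the fix above.
```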
<br />
== Terminal condition ==

In section ''[[Hamilton–Jacobi–Bellman equation#The partial differential equation|The partial differential equation]]'' I see a terminal condition which does not quite look like a meaningful terminal condition. Is it actually meant to be
:<math>
V(x(T),T) = D(x(T))
</math>
and if so, may I correct that?
Thank you
--[[User:Andylong|Andylong]] ([[User talk:Andylong|talk]]) 11:32, 3 July 2012 (UTC)