| {{Thermodynamics|cTopic=[[Laws of thermodynamics|Laws]]}}
| The '''second law of thermodynamics''' states that the [[entropy]] of an [[isolated system]] never decreases, because isolated systems spontaneously evolve toward [[thermodynamic equilibrium]]—the state of maximum entropy. Equivalently, [[perpetual motion machines]] of the [[Perpetual motion#Classification|second kind]] are impossible.
| |
| | |
| The second law is an [[Empirical evidence|empirically validated]] postulate of [[thermodynamics]], but it can be understood and explained using the underlying quantum [[statistical mechanics]]. In the language of statistical mechanics, entropy is a measure of the number of microscopic configurations corresponding to a macroscopic state. Because thermodynamic equilibrium corresponds to a vastly greater number of microscopic configurations than any non-equilibrium state, it has the maximum entropy, and the second law follows because random chance alone practically guarantees that the system will evolve towards such thermodynamic equilibrium.
| |
| | |
| It is an expression of the fact that over time, differences in temperature, [[pressure]], and [[chemical potential]] decrease in an isolated non-gravitational [[physical system]], leading eventually to a state of thermodynamic equilibrium.
| |
| | |
| The second law may be expressed in many specific ways, but the first formulation is credited to the French scientist [[Nicolas Léonard Sadi Carnot|Sadi Carnot]] in 1824 (see [[Timeline of thermodynamics]]). Strictly speaking, the early statements of the Second Law are only correct in a horizontal plane in a gravitational field.
| |
| | |
| The second law has been shown to be equivalent to the [[internal energy]] ''U'' being a weakly [[convex function]], when written as a function of extensive properties (mass, volume, entropy, ...).<ref>{{cite book |last1=van Gool |first1=W. |last2=Bruggink |first2=J.J.C. (Eds) |url= |title=Energy and time in the economic and physical sciences |publisher=North-Holland |year=1985 |pages=41–56 |quote= |isbn=0444877487}}</ref><ref>{{Cite journal | last1 = Grubbström | first1 = Robert W. | doi = 10.1016/j.apenergy.2007.01.003 | title = An Attempt to Introduce Dynamics Into Generalised Exergy Considerations| journal = Applied Energy| volume = 84| pages = 701–718 | year = 2007}}</ref>
| |
| | |
| {{TOC limit}}
| |
| | |
| ==Description==
| |
| The [[first law of thermodynamics]] provides the basic definition of thermodynamic energy, also called [[internal energy]], associated with all [[thermodynamic system]]s, but unknown in classical mechanics, and states the rule of conservation of energy in nature.
| |
| | |
| The concept of energy in the first law does not, however, account for the observation that natural processes have a preferred direction of progress. For example, heat always flows spontaneously from regions of higher temperature to regions of lower temperature, and never the reverse, unless external work is performed on the system. The first law is completely symmetrical with respect to the initial and final states of an evolving system. The key concept for the explanation of this phenomenon through the second law of thermodynamics is the definition of a new physical property, the [[entropy]].
| |
| | |
| In a reversible process, an infinitesimal increment in the entropy ({{math|''dS''}}) of a system results from an infinitesimal transfer of heat ({{math|δ''Q''}}) to a [[closed system]] divided by the common temperature ({{math|''T''}}) of the system and the surroundings which supply the heat.<ref>Bailyn, M. (1994), p. 120.</ref>
| |
| : <math>dS = \frac{\delta Q}{T} \!</math>
| |
| | |
| The entropy of an [[isolated system]] in its own internal thermodynamic equilibrium does not change with time. An isolated system may consist initially of several subsystems, separated from one another by partitions, but still each in its own internal thermodynamic equilibrium. If the partitions are removed, the former subsystems will in general interact and produce a new common final system in its own internal thermodynamic equilibrium. The sum of the entropies of the initial subsystems is in general less than the entropy of the final common system. If all of the initial subsystems have all the same values of their [[Intensive and extensive properties|intensive variables]], then the sum of the initial entropies will be equal to the final common entropy, and the final common system will have the same values of its intensive variables.
| |
| | |
For a body in thermal equilibrium with another, there are indefinitely many empirical temperature scales, each in general depending on the properties of a particular reference thermometric body. Thermal equilibrium between two bodies entails that they have equal temperatures. The [[zeroth law of thermodynamics]] in its usual short statement allows recognition that two bodies have the same temperature, especially that a test body has the same temperature as a reference thermometric body.<ref name=dugdale>{{cite book|author=J. S. Dugdale|title=Entropy and its Physical Meaning|publisher=Taylor & Francis|year=1996, 1998 |isbn=0-7484-0569-0|page=13|quote=This law is the basis of temperature.}}</ref> The second law allows a distinguished temperature scale, which defines an absolute, [[thermodynamic temperature]], independent of the properties of any particular thermometric body.<ref>[[Mark Zemansky|Zemansky, M.W.]] (1968), pp. 207–209.</ref><ref>Quinn, T.J. (1983), p. 8.</ref>
| |
| | |
| The second law of thermodynamics may be expressed in many specific ways,<ref name=MIT>{{cite web|title=Concept and Statements of the Second Law|url=http://web.mit.edu/16.unified/www/FALL/thermodynamics/notes/node37.html|accessdate=2010-10-07 |publisher=web.mit.edu}}</ref> the most prominent classical statements<ref name=lieb>Lieb, E.H., Yngvason, J. (1999).</ref> being the statement by [[Rudolf Clausius]] (1854), the statement by [[William Thomson, 1st Baron Kelvin|Lord Kelvin]] (1851), and the statement in axiomatic thermodynamics by [[Constantin Carathéodory]] (1909). These statements cast the law in general physical terms citing the impossibility of certain processes. The Clausius and the Kelvin statements have been shown to be equivalent.
| |
| | |
| ===Carnot's principle===
| |
| | |
| The historical origin of the second law of thermodynamics was in Carnot's principle. It refers to a cycle of a [[Carnot engine]], fictively operated in the limiting mode of extreme slowness known as quasi-static, so that the heat and work transfers are between subsystems that are always in their own internal states of thermodynamic equilibrium. The Carnot engine is an idealized device of special interest to engineers who are concerned with the efficiency of heat engines. Carnot's principle was recognized by Carnot at a time when the [[caloric theory]] of heat was seriously considered, before the recognition of the first law of thermodynamics, and before the mathematical expression of the concept of entropy. Interpreted in the light of the first law, it is physically equivalent to the second law of thermodynamics, and remains valid today. It states
| |
| | |
| <blockquote>The efficiency of a quasi-static or reversible Carnot cycle depends only on the temperatures of the two heat reservoirs, and is independent of the working substance. A Carnot engine operated in this way is the most efficient possible heat engine using those two temperatures.<ref>[[Nicolas Léonard Sadi Carnot|Carnot, S.]] (1824/1986).</ref><ref>[[Clifford Truesdell|Truesdell, C.]] (1980), Chapter 5.</ref><ref>Adkins, C.J. (1968/1983), pp. 56–58.</ref><ref>Münster, A. (1970), p. 11.</ref><ref>Kondepudi, D., [[Ilya Prigogine|Prigogine, I.]] (1998), pp.67–75.</ref><ref>Lebon, G., Jou, D., Casas-Vázquez, J. (2008), p. 10.</ref><ref>Eu, B.C. (2002), pp. 32–35.</ref></blockquote>
| |
| | |
| ===Clausius statement===
| |
| The German scientist [[Rudolf Clausius]] laid the foundation for the second law of thermodynamics in 1850 by examining the relation between heat transfer and work.<ref>[[Rudolf Clausius|Clausius, R.]] (1850).</ref> His formulation of the second law, which was published in German in 1854, is known as the ''Clausius statement'':
| |
| <blockquote>Heat can never pass from a colder to a warmer body without some other change, connected therewith, occurring at the same time.<ref name="Clausius"/></blockquote>
| |
| | |
| Heat cannot spontaneously flow from cold regions to hot regions without external work being performed on the system, which is evident from ordinary experience of refrigeration, for example. In a refrigerator, heat flows from cold to hot, but only when forced by an external agent, the refrigeration system.
| |
| | |
| ===Kelvin statement===
| |
| [[William Thomson, 1st Baron Kelvin|Lord Kelvin]] expressed the second law as
| |
| <blockquote>It is impossible, by means of inanimate material agency, to derive mechanical effect from any portion of matter by cooling it below the temperature of the coldest of the surrounding objects.<ref>[[William Thomson, 1st Baron Kelvin|Thomson, W.]] (1851).</ref></blockquote>
| |
| | |
| ===Planck's Principle===
| |
In 1926, [[Max Planck]] wrote an important paper on the basics of thermodynamics, in which he stated the principle
| |
| | |
| ::The internal energy of a closed system is increased by an isochoric adiabatic process.<ref name="Munster 45"/><ref name="L&Y"/>
| |
| | |
This formulation does not mention heat, temperature, or even entropy, and does not necessarily implicitly rely on those concepts, but it implies the content of the second law. A closely related statement is that "Frictional pressure never does positive work."<ref>[[Clifford Truesdell|Truesdell, C.]], Muncaster, R.G. (1980). ''Fundamentals of Maxwell's Kinetic Theory of a Simple Monatomic Gas, Treated as a Branch of Rational Mechanics'', Academic Press, New York, ISBN 0-12-701350-4, p. 15.</ref> Using a now obsolete form of words, Planck himself wrote: "The production of heat by friction is irreversible."<ref>[[Max Planck|Planck, M.]] (1926), p. 457, Wikipedia editor's translation.</ref>
| |
| | |
| ===Principle of Carathéodory===
| |
| <!-- [[Caratheodory's principle]] redirects here -->
| |
| [[Constantin Carathéodory]] formulated thermodynamics on a purely mathematical axiomatic foundation. His statement of the second law is known as the Principle of Carathéodory, which may be formulated as follows:<ref>[[Constantin Carathéodory|Carathéodory, C.]] (1909).</ref>
| |
<blockquote>In every neighborhood of any state S of an adiabatically isolated system there are states inaccessible from S.<ref>Buchdahl, H.A. (1966), p. 68.</ref></blockquote>
| |
With this formulation he described the concept of [[adiabatic accessibility]] for the first time and provided the foundation for a new subfield of classical thermodynamics, often called [[Ruppeiner geometry|geometrical thermodynamics]]. It follows from Carathéodory's principle that the quantity of energy quasi-statically transferred as heat is a holonomic [[process function]], in other words, <math>\delta Q=TdS</math>.<ref name="Sychev1991">{{cite book |last=Sychev |first=V. V. |title=The Differential Equations of Thermodynamics |url=http://www.amazon.com/The-Differential-Equations-Of-Thermodynamics/dp/1560321210/ref=sr_1_4?ie=UTF8&qid=1353986248&sr=8-4&keywords=Sychev |accessdate=2012-11-26|year=1991 |publisher=Taylor & Francis |isbn=978-1560321217}}</ref>
| |
| | |
| Though it is almost customary in textbooks to say that Carathéodory's principle expresses the second law and to treat it as equivalent to the Clausius or to the Kelvin-Planck statements, such is not the case. To get all the content of the second law, Carathéodory's principle needs to be supplemented by Planck's principle, that isochoric work always increases the internal energy of a closed system that was initially in its own internal thermodynamic equilibrium.<ref name="Munster 45">Münster, A. (1970), p. 45.</ref><ref name="L&Y">Lieb, E.H., Yngvason, J. (1999), p. 49.</ref><ref>[[Max Planck|Planck, M.]] (1926).</ref><ref>Buchdahl, H.A. (1966), p. 69.</ref>
| |
| | |
| ===Equivalence of the Clausius and the Kelvin statements===
| |
[[Image:Deriving Kelvin Statement from Clausius Statement.svg|thumb|300px|Deriving the Kelvin statement from the Clausius statement]]
| |
| | |
Suppose there is an engine violating the Kelvin statement: i.e., one that drains heat from a reservoir and converts it completely into work in a cyclic fashion without any other result. Now pair it with a reversed [[Carnot engine]] as shown in the figure. The net and sole effect of this newly created engine consisting of the two engines mentioned is transferring heat <math>\Delta Q=Q\left(\frac{1}{\eta}-1\right)</math> from the cooler reservoir to the hotter one, which violates the Clausius statement. Thus a violation of the Kelvin statement implies a violation of the Clausius statement, i.e. the Clausius statement implies the Kelvin statement. We can prove in a similar manner that the Kelvin statement implies the Clausius statement, and hence the two are equivalent.
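
The bookkeeping in this argument can be made concrete with a short numerical sketch (the reservoir temperatures and the heat ''Q'' below are arbitrary assumed values, not taken from any source):

<syntaxhighlight lang="python">
# Illustrative bookkeeping for the argument above: a hypothetical engine that
# violates the Kelvin statement is paired with a reversed Carnot engine.
# All numbers are arbitrary assumptions (temperatures in kelvin, heat in joules).

T_hot, T_cold = 600.0, 300.0
eta = 1.0 - T_cold / T_hot        # efficiency of the (reversed) Carnot engine

Q = 100.0                         # heat the Kelvin-violating engine takes from
W = Q                             # the hot reservoir; all of it becomes work

# The work W drives the reversed Carnot engine (a refrigerator):
Q_to_hot = W / eta                # heat it delivers to the hot reservoir
Q_from_cold = Q_to_hot - W        # heat it extracts from the cold reservoir

# Net effect of the composite device on each reservoir:
net_into_hot = Q_to_hot - Q       # heat gained by the hot reservoir
print(net_into_hot, Q_from_cold)  # equal: heat has moved cold -> hot, nothing else
print(Q * (1.0 / eta - 1.0))      # matches the Delta Q quoted in the text
</syntaxhighlight>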
| |
| | |
| ===Gravitational systems===
| |
| In non-gravitational systems, objects always have positive [[heat capacity]], meaning that the temperature rises with energy. Therefore, when energy flows from a high-temperature object to a low-temperature object, the source temperature is decreased while the sink temperature is increased; hence temperature differences tend to diminish over time.
| |
| | |
However, this is not always the case for systems in which the gravitational force is important. The most striking examples are black holes, which, [[Black hole thermodynamics|according to theory]], have negative heat capacity. The larger the black hole, the more energy it contains, but the lower its temperature. Thus, the [[supermassive black hole]] in the center of the [[Milky Way]] is supposed to have a temperature of 10<sup>−14</sup> K, much lower than the [[cosmic microwave background]] temperature of 2.7 K; but as it absorbs photons from the cosmic microwave background its mass increases, so its already low temperature decreases further with time.
| |
| | |
| For this reason, gravitational systems tend towards non-even distribution of mass and energy.
| |
| | |
| ==Corollaries==
| |
| | |
| ===Perpetual motion of the second kind===
| |
| {{main|Perpetual motion}}
| |
Before the establishment of the second law, many inventors interested in perpetual motion tried to circumvent the restrictions of the [[First Law of Thermodynamics]] by extracting the massive internal energy of the environment to power the machine. Such a machine is called a "perpetual motion machine of the second kind". The second law declares such machines impossible.
| |
| | |
| ===Carnot theorem===
| |
[[Carnot theorem (thermodynamics)|Carnot's theorem]] (1824) is a principle that limits the maximum efficiency for any possible engine. The efficiency depends only on the temperatures of the hot and cold thermal reservoirs. Carnot's theorem states:
| |
| *All irreversible heat engines between two heat reservoirs are less efficient than a [[Carnot engine]] operating between the same reservoirs.
| |
*All reversible heat engines between two heat reservoirs are exactly as efficient as a [[Carnot engine]] operating between the same reservoirs.
| |
| | |
In his ideal model, the caloric converted into work could be reinstated by reversing the motion of the cycle, a concept subsequently known as [[thermodynamic reversibility]]. Carnot, however, further postulated that some caloric is lost, not being converted to mechanical work. Hence no real heat engine could realise the [[Carnot cycle]]'s reversibility and was condemned to be less efficient.
| |
| | |
| Though formulated in terms of caloric (see the obsolete [[caloric theory]]), rather than [[entropy]], this was an early insight into the second law.
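
As an illustrative sketch of Carnot's theorem (the reservoir temperatures below are assumed values), the Carnot limit can be computed directly from the two reservoir temperatures:

<syntaxhighlight lang="python">
# Carnot efficiency for two assumed reservoir temperatures (illustrative values only).

def carnot_efficiency(t_hot, t_cold):
    """Maximum efficiency of any heat engine operating between two reservoirs (kelvin)."""
    return 1.0 - t_cold / t_hot

# A steam-turbine-like example: 800 K heat source, 300 K environment.
eta_max = carnot_efficiency(800.0, 300.0)
print(f"Carnot limit: {eta_max:.3f}")   # 0.625 -- no engine between these
                                        # reservoirs can exceed this value
</syntaxhighlight>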
| |
| | |
| ===Clausius Inequality===
| |
| The [[Clausius Theorem]] (1854) states that in a cyclic process
| |
| | |
| : <math>\oint \frac{\delta Q}{T} \leq 0.</math>
| |
| | |
The equality holds in the reversible case<ref>[http://scienceworld.wolfram.com/physics/ClausiusTheorem.html '''Clausius theorem'''] at [[Wolfram Research]]</ref> and the strict inequality holds in the irreversible case. The reversible case is used to introduce the state function [[entropy]]. This is because the change of a state function around any cyclic process is zero.
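
A minimal numerical check of the inequality, assuming a cycle that exchanges heat with only two reservoirs (all values below are illustrative assumptions):

<syntaxhighlight lang="python">
# Sum of Q/T around a cycle exchanging heat with two reservoirs only
# (heat absorbed by the system counted positive; numbers are arbitrary).

T_hot, T_cold = 500.0, 250.0
Q_hot = 1000.0                                # heat absorbed from the hot reservoir

# Reversible (Carnot) engine: Q_cold / Q_hot = T_cold / T_hot
Q_cold_rev = Q_hot * T_cold / T_hot
print(Q_hot / T_hot - Q_cold_rev / T_cold)    # 0.0 -> equality holds

# An irreversible engine with lower efficiency rejects more heat:
Q_cold_irrev = Q_cold_rev + 100.0
print(Q_hot / T_hot - Q_cold_irrev / T_cold)  # negative -> strict inequality
</syntaxhighlight>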
| |
| | |
| ===Thermodynamic temperature===
| |
| {{main|Thermodynamic temperature}}
| |
| For an arbitrary heat engine, the efficiency is:
| |
| | |
| : <math>\eta = \frac {A}{q_H} = \frac{q_H-q_C}{q_H} = 1 - \frac{q_C}{q_H} \qquad (1)</math>
| |
| | |
where ''A'' is the work done per cycle. Thus the efficiency depends only on q<sub>C</sub>/q<sub>H</sub>.
| |
| | |
| [[Carnot theorem (thermodynamics)|Carnot's theorem]] states that all reversible engines operating between the same heat reservoirs are equally efficient.
| |
Thus, any reversible heat engine operating between temperatures ''T''<sub>1</sub> and ''T''<sub>2</sub> must have the same efficiency; that is to say, the efficiency is a function of the two temperatures only:

: <math>\frac{q_C}{q_H} = \frac{q_2}{q_1} = f(T_1,T_2)\qquad (2).</math>
| |
| | |
In addition, a reversible heat engine operating between temperatures ''T''<sub>1</sub> and ''T''<sub>3</sub> must have the same efficiency as one consisting of two cycles, one between ''T''<sub>1</sub> and another (intermediate) temperature ''T''<sub>2</sub>, and the second between ''T''<sub>2</sub> and ''T''<sub>3</sub>. This can only be the case if
| |
| | |
: <math>f(T_1,T_3) = \frac{q_3}{q_1} = \frac{q_2 q_3}{q_1 q_2} = f(T_1,T_2)f(T_2,T_3).</math>
| |
| | |
| Now consider the case where <math>T_1</math> is a fixed reference temperature: the temperature of the [[triple point]] of water. Then for any ''T''<sub>2</sub> and ''T''<sub>3</sub>,
| |
| | |
| : <math>f(T_2,T_3) = \frac{f(T_1,T_3)}{f(T_1,T_2)} = \frac{273.16 \cdot f(T_1,T_3)}{273.16 \cdot f(T_1,T_2)}.</math>
| |
| | |
Therefore, if the thermodynamic temperature is defined by
| |
| | |
| : <math>T = 273.16 \cdot f(T_1,T) \,</math>
| |
| | |
| then the function ''f'', viewed as a function of thermodynamic temperature, is simply
| |
| | |
| : <math>f(T_2,T_3) = \frac{T_3}{T_2},</math>
| |
| | |
| and the reference temperature ''T''<sub>1</sub> will have the value 273.16. (Of course any reference temperature and any positive numerical value could be used—the choice here corresponds to the [[Kelvin]] scale.)
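
A small sketch of how this definition is used in practice (the heat ratio below is an assumed measurement, and the helper function is hypothetical):

<syntaxhighlight lang="python">
# Thermodynamic temperature from the heat ratio of a reversible engine run
# between an unknown temperature and the triple point of water.

T_TRIPLE = 273.16   # reference value assigned to the triple point of water

def thermodynamic_temperature(heat_ratio_vs_triple_point):
    """T = 273.16 * f(T1, T), where f is the measured heat ratio q/q1 of a
    reversible engine operated between T and the triple-point reservoir."""
    return T_TRIPLE * heat_ratio_vs_triple_point

# Suppose the engine exchanges 1.25 units of heat at the unknown temperature
# for every unit exchanged at the triple point (assumed measurement):
print(thermodynamic_temperature(1.25))   # 341.45 K
</syntaxhighlight>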
| |
| | |
| ===Entropy===
| |
| {{main|entropy (classical thermodynamics)}}
| |
| According to the [[Clausius theorem|Clausius equality]], for a reversible process
| |
| | |
| : <math>\oint \frac{\delta Q}{T}=0</math>
| |
| | |
| That means the line integral <math>\int_L \frac{\delta Q}{T}</math> is path independent.
| |
| | |
| So we can define a state function S called entropy, which satisfies
| |
| | |
| : <math>dS = \frac{\delta Q}{T} \!</math>
| |
| | |
Integrating the above formula along a reversible path gives only the difference in entropy between two states. To obtain the absolute value, we need the [[Third Law of Thermodynamics]], which states that ''S'' = 0 at [[absolute zero]] for perfect crystals.
| |
| | |
For any irreversible process, since entropy is a state function, we can always connect the initial and final states with an imaginary reversible process and integrate along that path to calculate the difference in entropy.
| |
| | |
Now reverse the reversible process and combine it with the said irreversible process. Applying the [[Clausius inequality]] to this loop gives

: <math>-\Delta S+\int\frac{\delta Q}{T}=\oint\frac{\delta Q}{T}\le 0</math>
| |
| | |
| Thus,
| |
| | |
| : <math>\Delta S \ge \int \frac{\delta Q}{T} \,\!</math>
| |
| | |
| where the equality holds if the transformation is reversible.
| |
| | |
| Notice that if the process is an [[adiabatic process]], then <math>\delta Q=0</math>, so <math>\Delta S\ge 0</math>.
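
As a worked illustration of these formulas (the heat capacity and temperatures below are assumed values), the entropy change for the irreversible equilibration of two identical bodies can be computed by integrating {{math|δ''Q''/''T''}} along imaginary reversible paths for each body:

<syntaxhighlight lang="python">
import math

# Entropy change when two identical bodies of constant heat capacity C are put
# in thermal contact inside an isolated enclosure and allowed to equilibrate.

C = 1000.0             # J/K, assumed heat capacity of each body
T1, T2 = 400.0, 300.0  # assumed initial temperatures in kelvin

T_final = 0.5 * (T1 + T2)              # common final temperature
dS_hot = C * math.log(T_final / T1)    # negative: the hot body cools
dS_cold = C * math.log(T_final / T2)   # positive: the cold body warms

dS_total = dS_hot + dS_cold
print(dS_hot, dS_cold, dS_total)       # dS_total > 0, as the second law requires
</syntaxhighlight>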
| |
| | |
| ===Exergy, available useful work===
| |
| {{See also|Exergy}}
| |
| An important and revealing idealized special case is to consider applying the Second Law to the scenario of an isolated system (called the total system or universe), made up of two parts: a sub-system of interest, and the sub-system's surroundings. These surroundings are imagined to be so large that they can be considered as an ''unlimited'' heat reservoir at temperature ''T<sub>R</sub>'' and pressure ''P<sub>R</sub>'' — so that no matter how much heat is transferred to (or from) the sub-system, the temperature of the surroundings will remain ''T<sub>R</sub>''; and no matter how much the volume of the sub-system expands (or contracts), the pressure of the surroundings will remain ''P<sub>R</sub>''.
| |
| | |
| Whatever changes to ''dS'' and ''dS<sub>R</sub>'' occur in the entropies of the sub-system and the surroundings individually, according to the Second Law the entropy ''S<sub>tot</sub>'' of the isolated total system must not decrease:
| |
| | |
| : <math> dS_{\mathrm{tot}}= dS + dS_R \ge 0 </math>
| |
| | |
| According to the [[First Law of Thermodynamics]], the change ''dU'' in the internal energy of the sub-system is the sum of the heat ''δq'' added to the sub-system, ''less'' any work ''δw'' done ''by'' the sub-system, ''plus'' any net chemical energy entering the sub-system ''d ∑μ<sub>iR</sub>N<sub>i</sub>'', so that:
| |
| | |
| : <math> dU = \delta q - \delta w + d(\sum \mu_{iR}N_i) \,</math>
| |
| | |
| where μ<sub>iR</sub> are the [[chemical potential]]s of chemical species in the external surroundings.
| |
| | |
| Now the heat leaving the reservoir and entering the sub-system is
| |
| | |
| : <math> \delta q = T_R (-dS_R) \le T_R dS </math>
| |
| | |
| where we have first used the definition of entropy in classical thermodynamics (alternatively, in statistical thermodynamics, the relation between entropy change, temperature and absorbed heat can be derived); and then the Second Law inequality from above.
| |
| | |
| It therefore follows that any net work ''δw'' done by the sub-system must obey
| |
| | |
| : <math> \delta w \le - dU + T_R dS + \sum \mu_{iR} dN_i \,</math>
| |
| | |
| It is useful to separate the work ''δw'' done by the subsystem into the ''useful'' work ''δw<sub>u</sub>'' that can be done ''by'' the sub-system, over and beyond the work ''p<sub>R</sub> dV'' done merely by the sub-system expanding against the surrounding external pressure, giving the following relation for the useful work (exergy) that can be done:
| |
| | |
| : <math> \delta w_u \le -d (U - T_R S + p_R V - \sum \mu_{iR} N_i )\,</math>
| |
| | |
| It is convenient to define the right-hand-side as the exact derivative of a thermodynamic potential, called the ''availability'' or ''[[exergy]]'' ''E'' of the subsystem,
| |
| | |
| : <math> E = U - T_R S + p_R V - \sum \mu_{iR} N_i </math>
| |
| | |
| The Second Law therefore implies that for any process which can be considered as divided simply into a subsystem, and an unlimited temperature and pressure reservoir with which it is in contact,
| |
| | |
| : <math> dE + \delta w_u \le 0 \, </math>
| |
| | |
| i.e. the change in the subsystem's exergy plus the useful work done ''by'' the subsystem (or, the change in the subsystem's exergy less any work, additional to that done by the pressure reservoir, done ''on'' the system) must be less than or equal to zero.
| |
| | |
| In sum, if a proper ''infinite-reservoir-like'' reference state is chosen as the system surroundings in the real world, then the Second Law predicts a decrease in ''E'' for an irreversible process and no change for a reversible process.
| |
| | |
: <math>dS_{tot} \ge 0 </math> is equivalent to <math> dE + \delta w_u \le 0 </math>
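
A minimal sketch of this bookkeeping, assuming illustrative numbers and omitting the chemical-potential terms:

<syntaxhighlight lang="python">
# Exergy E = U - T_R*S + p_R*V relative to an unlimited reservoir; for a change
# of state, the useful work obeys delta_w_u <= -dE. All values are assumed.

T_R = 298.15           # K,  assumed reservoir (environment) temperature
p_R = 101_325.0        # Pa, assumed reservoir pressure

def exergy(U, S, V):
    """Availability of the sub-system relative to the T_R, p_R reservoir
    (chemical-potential terms omitted in this sketch)."""
    return U - T_R * S + p_R * V

# Assumed initial and final states (U in J, S in J/K, V in m^3):
E_initial = exergy(U=5.0e5, S=1.2e3, V=0.50)
E_final = exergy(U=4.0e5, S=1.1e3, V=0.45)

max_useful_work = E_initial - E_final   # upper bound on the useful work extracted
print(max_useful_work)
</syntaxhighlight>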
| |
| | |
| This expression together with the associated reference state permits a [[design engineer]] working at the macroscopic scale (above the [[thermodynamic limit]]) to utilize the Second Law without directly measuring or considering entropy change in a total isolated system. (''Also, see [[process engineer]]''). Those changes have already been considered by the assumption that the system under consideration can reach equilibrium with the reference state without altering the reference state. An efficiency for a process or collection of processes that compares it to the reversible ideal may also be found (''See [[Exergy efficiency|second law efficiency]]''.)
| |
| | |
| This approach to the Second Law is widely utilized in [[engineering]] practice, [[environmental accounting]], [[systems ecology]], and other disciplines.
| |
| | |
| ==History==
| |
| {{See also|History of entropy}}
| |
| [[File:Sadi Carnot.jpeg|thumb|150px|Nicolas Léonard Sadi Carnot in the traditional uniform of a student of the [[École Polytechnique]].]]
| |
| The first theory of the conversion of heat into mechanical work is due to [[Nicolas Léonard Sadi Carnot]] in 1824. He was the first to realize correctly that the efficiency of this conversion depends on the difference of temperature between an engine and its environment.
| |
| | |
| Recognizing the significance of [[James Prescott Joule]]'s work on the conservation of energy, [[Rudolf Clausius]] was the first to formulate the second law during 1850, in this form: heat does not flow ''spontaneously'' from cold to hot bodies. While common knowledge now, this was contrary to the [[caloric theory]] of heat popular at the time, which considered heat as a fluid. From there he was able to infer the principle of Sadi Carnot and the definition of entropy (1865).
| |
| | |
| Established during the 19th century, the [[Kelvin-Planck statement|Kelvin-Planck statement of the Second Law]] says, "It is impossible for any device that operates on a [[cyclic process|cycle]] to receive heat from a single [[heat reservoir|reservoir]] and produce a net amount of work." This was shown to be equivalent to the statement of Clausius.
| |
| | |
| The [[ergodic hypothesis]] is also important for the [[Boltzmann]] approach. It says that, over long periods of time, the time spent in some region of the phase space of microstates with the same energy is proportional to the volume of this region, i.e. that all accessible microstates are equally probable over a long period of time. Equivalently, it says that time average and average over the statistical ensemble are the same.
| |
| | |
It has been shown that not only classical systems but also [[quantum mechanics|quantum mechanical]] ones tend to maximize their entropy over time. Thus the second law follows, given initial conditions with low entropy. More precisely, it has been shown that the local [[von Neumann entropy]] is at its maximum value with a very high probability.<ref>{{Cite journal| last1 = Gemmer | first1 = Jochen| last2 = Otte | first2 = Alexander| last3 = Mahler | first3 = Günter| title = Quantum Approach to a Derivation of the Second Law of Thermodynamics| year = 2001| journal = [[Physical Review Letters]]| volume = 86| issue = 10| pages = 1927–1930| doi = 10.1103/PhysRevLett.86.1927| pmid = 11289822| postscript = | bibcode=2001PhRvL..86.1927G|arxiv = quant-ph/0101140 }}</ref> The result is valid for a large class of isolated quantum systems (e.g. a gas in a container). While the full system is pure and therefore does not have any entropy, the [[quantum entanglement|entanglement]] between gas and container gives rise to an increase of the local entropy of the gas. This result is one of the most important achievements of [[quantum thermodynamics]].{{Dubious|date=March 2009}}
| |
| | |
Today, much effort in the field is attempting to understand why the initial conditions early in the universe were those of low entropy,<ref>{{cite journal |last1=Carroll |first1=Sean M. |last2=Chen |first2=Jennifer |title=Does Inflation Provide Natural Initial Conditions for the Universe? |doi=10.1007/s10714-005-0148-2 |year=2005 |volume=37 |pages=1671–1674 |journal=General Relativity and Gravitation |arxiv=gr-qc/0505037 |bibcode=2005GReGr..37.1671C}}</ref><ref>{{cite journal |doi=10.1016/j.shpsb.2006.03.005 |title=The arrow of time and the initial conditions of the universe |year=2006 |last1=Wald |first1=R |journal=Studies in History and Philosophy of Science Part B: Studies in History and Philosophy of Modern Physics |volume=37 |issue=3 |pages=394–398}}</ref> as this is seen as the origin of the second law (see below).
| |
| | |
| ===Informal descriptions===
| |
| The second law can be stated in various succinct ways, including:
| |
| | |
| *It is impossible to produce work in the surroundings using a cyclic process connected to a single heat reservoir ([[William Thomson, 1st Baron Kelvin|Kelvin]], 1851).
| |
| *It is impossible to carry out a cyclic process using an engine connected to two heat reservoirs that will have as its only effect the transfer of a quantity of heat from the low-temperature reservoir to the high-temperature reservoir ([[Rudolf Clausius|Clausius]], 1854).
| |
| *If thermodynamic [[work (thermodynamics)|work]] is to be done at a finite rate, [[thermodynamic free energy|free energy]] must be expended. (Stoner, 2000)<ref>{{cite journal|author1=Stoner|title=Inquiries into the Nature of Free Energy and Entropy in Respect to Biochemical Thermodynamics|year=2000|pages=106–141|volume=2|journal=Entropy|arxiv=physics/0004055 |doi=10.3390/e2030106|bibcode = 2000Entrp...2..106S|issue=3 }}</ref>
| |
| | |
| ===Mathematical descriptions===
| |
| [[File:Clausius-1.jpg|thumb|150px|Rudolf Clausius]]
| |
| In 1856, the German physicist [[Rudolf Clausius]] stated what he called the "second fundamental theorem in the [[mechanical theory of heat]]" in the following form:<ref name="Clausius">[[Rudolf Clausius|Clausius, R.]] (1867).</ref>
| |
| : <math>\int \frac{\delta Q}{T} = -N</math>
| |
| | |
where ''Q'' is heat, ''T'' is temperature and ''N'' is the "equivalence-value" of all uncompensated transformations involved in a cyclical process. Later, in 1865, Clausius would come to define "equivalence-value" as entropy. On the heels of this definition, that same year, the most famous version of the second law was read in a presentation at the Philosophical Society of Zurich on April 24, at the end of which Clausius concludes:
| |
| | |
| <blockquote style="font-size: 125%">The entropy of the universe tends to a maximum.</blockquote>
| |
| | |
| This statement is the best-known phrasing of the second law. Moreover, owing to the general broadness of the terminology used here, e.g. [[universe]], as well as lack of specific conditions, e.g. open, closed, or isolated, to which this statement applies, many people take this simple statement to mean that the second law of thermodynamics applies virtually to every subject imaginable. This, of course, is not true; this statement is only a simplified version of a more complex description.
| |
| | |
| In terms of time variation, the mathematical statement of the second law for an [[isolated system]] undergoing an arbitrary transformation is:
| |
| | |
| : <math>\frac{dS}{dt} \ge 0</math>
| |
| | |
| where
| |
| | |
| : ''S'' is the entropy of the system and
| |
| : ''t'' is [[time]].
| |
| | |
The equality sign holds in the case that only reversible processes take place inside the system. If irreversible processes take place (which is the case in real systems in operation) the '>' sign holds. An alternative way of formulating the second law for isolated systems is:
| |
| | |
| : <math>\frac{dS}{dt} = \dot S_{i}</math> with <math> \dot S_{i} \ge 0</math>
| |
| | |
with <math> \dot S_{i}</math> the sum of the rates of [[entropy production]] by all processes inside the system. The advantage of this formulation is that it shows the effect of the entropy production. The rate of entropy production is a very important concept since it determines (limits) the efficiency of thermal machines. Multiplied by the ambient temperature <math>T_{a}</math>, it gives the so-called dissipated power <math> P_{diss}=T_{a}\dot S_{i}</math>.
| |
| | |
| The expression of the second law for closed systems (so, allowing heat exchange and moving boundaries, but not exchange of matter) is:
| |
| | |
| : <math>\frac{dS}{dt} = \frac{\dot Q}{T}+\dot S_{i}</math> with <math> \dot S_{i} \ge 0</math>
| |
| | |
| Here
| |
| | |
| : <math>\dot Q</math> is the heat flow into the system
| |
| : <math>T</math> is the temperature at the point where the heat enters the system.
| |
| | |
| If heat is supplied to the system at several places we have to take the algebraic sum of the corresponding terms.
| |
| | |
| For open systems (also allowing exchange of matter):
| |
| | |
| : <math>\frac{dS}{dt} = \frac{\dot Q}{T}+\dot S+\dot S_{i}</math> with <math> \dot S_{i} \ge 0</math>
| |
| | |
| Here <math>\dot S</math> is the flow of entropy into the system associated with the flow of matter entering the system. It should not be confused with the time derivative of the entropy. If matter is supplied at several places we have to take the algebraic sum of these contributions.
| |
| | |
| [[Statistical mechanics]] gives an explanation for the second law by postulating that a material is composed of atoms and molecules which are in constant motion. A particular set of positions and velocities for each particle in the system is called a [[microstate (statistical mechanics)|microstate]] of the system and because of the constant motion, the system is constantly changing its microstate. Statistical mechanics postulates that, in equilibrium, each microstate that the system might be in is equally likely to occur, and when this assumption is made, it leads directly to the conclusion that the second law must hold in a statistical sense. That is, the second law will hold on average, with a statistical variation on the order of 1/√N where ''N'' is the number of particles in the system. For everyday (macroscopic) situations, the probability that the second law will be violated is practically zero. However, for systems with a small number of particles, thermodynamic parameters, including the entropy, may show significant statistical deviations from that predicted by the second law. Classical thermodynamic theory does not deal with these statistical variations.
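
The 1/√''N'' scaling of these fluctuations can be illustrated with a simple sampling sketch (the two-state model and the sample sizes below are arbitrary choices):

<syntaxhighlight lang="python">
import math
import random

# Relative spread of the "up" fraction of N independent two-state particles:
# the relative fluctuation shrinks roughly as 1/sqrt(N).

def relative_fluctuation(n_particles, n_samples=1000):
    fractions = []
    for _ in range(n_samples):
        ups = sum(random.random() < 0.5 for _ in range(n_particles))
        fractions.append(ups / n_particles)
    mean = sum(fractions) / n_samples
    var = sum((f - mean) ** 2 for f in fractions) / n_samples
    return math.sqrt(var) / mean

for n in (100, 10_000):
    print(n, relative_fluctuation(n))   # roughly 10x smaller for 100x more particles
</syntaxhighlight>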
| |
| | |
| ==Derivation from statistical mechanics==
| |
| {{Further|H-theorem}}
| |
| Due to [[Loschmidt's paradox]], derivations of the Second Law have to make an assumption regarding the past, namely that the system is [[Correlation and dependence|uncorrelated]] at some time in the past; this allows for simple probabilistic treatment. This assumption is usually thought as a [[boundary condition]], and thus the second Law is ultimately a consequence of the initial conditions somewhere in the past, probably at the beginning of the universe (the [[Big Bang]]), though [[Boltzmann brain|other scenarios]] have also been suggested.<ref name="Hawking AOT">{{cite journal|last=Hawking|first=SW|title=Arrow of time in cosmology|journal=Phys. Rev. D|year=1985|volume=32|issue=10|pages=2489–2495|doi=10.1103/PhysRevD.32.2489|url=http://prd.aps.org/abstract/PRD/v32/i10/p2489_1|accessdate=2013-02-15|bibcode = 1985PhRvD..32.2489H }}</ref><ref>{{cite book | last = Greene | first = Brian | authorlink = Brian Greene | title = The Fabric of the Cosmos | publisher = Alfred A. Knopf | year = 2004 | pages = 171 | isbn = 0-375-41288-3}}</ref><ref name=Lebowitz>{{cite journal|last=Lebowitz|first=Joel L.|title= Boltzmann's Entropy and Time's Arrow|journal=Physics Today|date=September 1993|volume=46|issue=9|pages=32–38|url=http://users.df.uba.ar/ariel/materias/FT3_2008_1C/papers_pdf/lebowitz_370.pdf|accessdate=2013-02-22}}</ref>
| |
| | |
Given these assumptions, in statistical mechanics the Second Law is not a postulate; rather, it is a consequence of the [[Statistical mechanics#Fundamental postulate|fundamental postulate]], also known as the equal prior probability postulate, so long as one is clear that simple probability arguments are applied only to the future, while for the past there are auxiliary sources of information which tell us that it was low entropy.{{citation needed|date=August 2012}} The first part of the second law, which states that the entropy of a thermally isolated system can only increase, is a trivial consequence of the equal prior probability postulate, if we restrict the notion of the entropy to systems in thermal equilibrium. The entropy of an isolated system in thermal equilibrium containing an amount of energy <math>E</math> is:
| |
| | |
| : <math>S = k_{\mathrm B} \ln\left[\Omega\left(E\right)\right]\,</math>
| |
| | |
| where <math>\Omega\left(E\right)</math> is the number of quantum states in a small interval between <math>E</math> and <math>E +\delta E</math>. Here <math>\delta E</math> is a macroscopically small energy interval that is kept fixed. Strictly speaking this means that the entropy depends on the choice of <math>\delta E</math>. However, in the thermodynamic limit (i.e. in the limit of infinitely large system size), the specific entropy (entropy per unit volume or per unit mass) does not depend on <math>\delta E</math>.
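
As an illustrative sketch of this definition (the spin model below is an assumed example, not drawn from the sources cited above), <math>\Omega</math> can be counted explicitly for ''N'' two-level spins:

<syntaxhighlight lang="python">
import math

# N two-level spins with single-spin energies 0 or eps: a macrostate with n
# excited spins has Omega = C(N, n) microstates, and S = k_B ln Omega.

K_B = 1.380649e-23   # J/K

def boltzmann_entropy(n_spins, n_excited):
    omega = math.comb(n_spins, n_excited)   # number of microstates
    return K_B * math.log(omega)

# The entropy is largest when half of the spins are excited: the equilibrium
# macrostate is the one with the largest number of microstates.
for n_excited in (10, 250, 500):
    print(n_excited, boltzmann_entropy(1000, n_excited))
</syntaxhighlight>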
| |
| | |
| Suppose we have an isolated system whose macroscopic state is specified by a number of variables. These macroscopic variables can, e.g., refer to the total volume, the positions of pistons in the system, etc. Then <math>\Omega</math> will depend on the values of these variables. If a variable is not fixed, (e.g. we do not clamp a piston in a certain position), then because all the accessible states are equally likely in equilibrium, the free variable in equilibrium will be such that <math>\Omega</math> is maximized as that is the most probable situation in equilibrium.
| |
| | |
If the variable was initially fixed to some value then, upon release, once the new equilibrium has been reached, the fact that the variable adjusts itself so that <math>\Omega</math> is maximized implies that the entropy will have increased or stayed the same (the latter if the value at which the variable was fixed happened to be the equilibrium value).
| |
| | |
| The entropy of a system that is not in equilibrium can be defined as:
| |
| | |
| : <math>S = -k_{\mathrm B}\sum_{j}P_{j}\ln\left(P_{j}\right)</math>
| |
| | |
(see [[Entropy|entropy]]). Here <math>P_{j}</math> are the probabilities for the system to be found in the states labeled by the subscript ''j''. In thermal equilibrium, the probabilities for states inside the energy interval <math>\delta E</math> are all equal to <math>1/\Omega</math>, and in that case the general definition coincides with the previous definition of ''S'' that applies to the case of thermal equilibrium.
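
A short sketch illustrating that the general definition reduces to <math>k_{\mathrm B}\ln\Omega</math> for the equilibrium (uniform) distribution, with assumed numbers:

<syntaxhighlight lang="python">
import math

# S = -k_B * sum_j P_j ln P_j, compared against the equilibrium value k_B ln Omega.

K_B = 1.380649e-23   # J/K

def gibbs_entropy(probabilities):
    return -K_B * sum(p * math.log(p) for p in probabilities if p > 0.0)

omega = 10_000
uniform = [1.0 / omega] * omega
print(gibbs_entropy(uniform))    # equals k_B ln(omega) ...
print(K_B * math.log(omega))     # ... the equilibrium (Boltzmann) value

# Any non-uniform distribution over the same states has a lower entropy:
peaked = [0.5] + [0.5 / (omega - 1)] * (omega - 1)
print(gibbs_entropy(peaked))
</syntaxhighlight>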
| |
| | |
| Suppose we start from an equilibrium situation and we suddenly remove a constraint on a variable. Then right after we do this, there are a number <math>\Omega</math> of accessible microstates, but equilibrium has not yet been reached, so the actual probabilities of the system being in some accessible state are not yet equal to the prior probability of <math>1/\Omega</math>. We have already seen that in the final equilibrium state, the entropy will have increased or have stayed the same relative to the previous equilibrium state. Boltzmann's [[H-theorem]], however, proves that the entropy will increase continuously as a function of time during the intermediate out of equilibrium state.
| |
| | |
| ===Derivation of the entropy change for reversible processes===
| |
| The second part of the Second Law states that the entropy change of a system undergoing a reversible process is given by:
| |
| | |
| : <math>dS =\frac{\delta Q}{T}</math>
| |
| | |
| where the temperature is defined as:
| |
| | |
| :<math>\frac{1}{k_{\mathrm B} T}\equiv\beta\equiv\frac{d\ln\left[\Omega\left(E\right)\right]}{dE}</math>
| |
| | |
| See [[Microcanonical ensemble|here]] for the justification for this definition. Suppose that the system has some external parameter, x, that can be changed. In general, the energy eigenstates of the system will depend on x. According to the [[adiabatic theorem]] of quantum mechanics, in the limit of an infinitely slow change of the system's Hamiltonian, the system will stay in the same energy eigenstate and thus change its energy according to the change in energy of the energy eigenstate it is in.
| |
| | |
| The generalized force, X, corresponding to the external variable x is defined such that <math>X dx</math> is the work performed by the system if x is increased by an amount dx. E.g., if x is the volume, then X is the pressure. The generalized force for a system known to be in energy eigenstate <math>E_{r}</math> is given by:
| |
| | |
| : <math>X = -\frac{dE_{r}}{dx}</math>
| |
| | |
| Since the system can be in any energy eigenstate within an interval of <math>\delta E</math>, we define the generalized force for the system as the expectation value of the above expression:
| |
| | |
| : <math>X = -\left\langle\frac{dE_{r}}{dx}\right\rangle\,</math>
| |
| | |
| To evaluate the average, we partition the <math>\Omega\left(E\right)</math> energy eigenstates by counting how many of them have a value for <math>\frac{dE_{r}}{dx}</math> within a range between <math>Y</math> and <math>Y + \delta Y</math>. Calling this number <math>\Omega_{Y}\left(E\right)</math>, we have:
| |
| | |
| : <math>\Omega\left(E\right)=\sum_{Y}\Omega_{Y}\left(E\right)\,</math>
| |
| | |
| The average defining the generalized force can now be written:
| |
| | |
| : <math>X = -\frac{1}{\Omega\left(E\right)}\sum_{Y} Y\Omega_{Y}\left(E\right)\,</math>
| |
| | |
| We can relate this to the derivative of the entropy w.r.t. x at constant energy E as follows. Suppose we change x to x + dx. Then <math>\Omega\left(E\right)</math> will change because the energy eigenstates depend on x, causing energy eigenstates to move into or out of the range between <math>E</math> and <math>E+\delta E</math>. Let's focus again on the energy eigenstates for which <math>\frac{dE_{r}}{dx}</math> lies within the range between <math>Y</math> and <math>Y + \delta Y</math>. Since these energy eigenstates increase in energy by Y dx, all such energy eigenstates that are in the interval ranging from E – Y dx to E move from below E to above E. There are
| |
| | |
| : <math>N_{Y}\left(E\right)=\frac{\Omega_{Y}\left(E\right)}{\delta E} Y dx\,</math>
| |
| | |
| such energy eigenstates. If <math>Y dx\leq\delta E</math>, all these energy eigenstates will move into the range between <math>E</math> and <math>E+\delta E</math> and contribute to an increase in <math>\Omega</math>. The number of energy eigenstates that move from below <math>E+\delta E</math> to above <math>E+\delta E</math> is, of course, given by <math>N_{Y}\left(E+\delta E\right)</math>. The difference
| |
| | |
| : <math>N_{Y}\left(E\right) - N_{Y}\left(E+\delta E\right)\,</math>
| |
| | |
is thus the net contribution to the increase in <math>\Omega</math>. Note that if Y dx is larger than <math>\delta E</math> there will be energy eigenstates that move from below <math>E</math> to above <math>E+\delta E</math>. They are counted in both <math>N_{Y}\left(E\right)</math> and <math>N_{Y}\left(E+\delta E\right)</math>, therefore the above expression is also valid in that case.
| |
| | |
| Expressing the above expression as a derivative w.r.t. E and summing over Y yields the expression:
| |
| | |
| : <math>\left(\frac{\partial\Omega}{\partial x}\right)_{E} = -\sum_{Y}Y\left(\frac{\partial\Omega_{Y}}{\partial E}\right)_{x}= \left(\frac{\partial\left(\Omega X\right)}{\partial E}\right)_{x}\,</math>
| |
| | |
| The logarithmic derivative of <math>\Omega</math> w.r.t. x is thus given by:
| |
| | |
| : <math>\left(\frac{\partial\ln\left(\Omega\right)}{\partial x}\right)_{E} = \beta X +\left(\frac{\partial X}{\partial E}\right)_{x}\,</math>
| |
| | |
The first term is intensive, i.e. it does not scale with system size. In contrast, the last term scales as the inverse system size and will thus vanish in the thermodynamic limit. We have thus found that:
| |
| | |
| : <math>\left(\frac{\partial S}{\partial x}\right)_{E} = \frac{X}{T}\,</math>
| |
| | |
| Combining this with
| |
| | |
| : <math>\left(\frac{\partial S}{\partial E}\right)_{x} = \frac{1}{T}\,</math>
| |
| | |
gives:
| |
| | |
| : <math>dS = \left(\frac{\partial S}{\partial E}\right)_{x}dE+\left(\frac{\partial S}{\partial x}\right)_{E}dx = \frac{dE}{T} + \frac{X}{T} dx=\frac{\delta Q}{T}\,</math>
| |
| | |
| ===Derivation for systems described by the canonical ensemble===
| |
If a system is in thermal contact with a heat bath at some temperature T then, in equilibrium, the probability distribution over the energy eigenvalues is given by the [[canonical ensemble]]:
| |
| | |
| : <math>P_{j}=\frac{\exp\left(-\frac{E_{j}}{k_{\mathrm B} T}\right)}{Z}</math>
| |
| | |
Here ''Z'' is a factor that normalizes the sum of all the probabilities to 1; this normalization factor is known as the [[Partition function (statistical mechanics)|partition function]]. We now consider an infinitesimal reversible change in the temperature and in the external parameters on which the energy levels depend. It follows from the general formula for the entropy:
| |
| | |
| : <math>S = -k_{\mathrm B}\sum_{j}P_{j}\ln\left(P_{j}\right)</math>
| |
| | |
| that
| |
| | |
| : <math>dS = -k_{\mathrm B}\sum_{j}\ln\left(P_{j}\right)dP_{j}</math>
| |
| | |
| Inserting the formula for <math>P_{j}</math> for the canonical ensemble in here gives:
| |
| | |
| : <math>dS = \frac{1}{T}\sum_{j}E_{j}dP_{j}=\frac{1}{T}\sum_{j}d\left(E_{j}P_{j}\right) - \frac{1}{T}\sum_{j}P_{j}dE_{j}= \frac{dE + \delta W}{T}=\frac{\delta Q}{T}</math>
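
This identity can be checked numerically for a simple assumed system: a two-level system with fixed energy levels, so that no work is done (<math>\delta W = 0</math>) and <math>\delta Q = dE</math>:

<syntaxhighlight lang="python">
import math

# Canonical-ensemble check of dS = delta Q / T for a two-level system with
# fixed levels (assumed units with k_B = 1 and energy gap EPS = 1).

K_B = 1.0
EPS = 1.0

def canonical_S_and_E(T):
    weights = [math.exp(-0.0 / (K_B * T)), math.exp(-EPS / (K_B * T))]
    Z = sum(weights)                                    # partition function
    probs = [w / Z for w in weights]
    S = -K_B * sum(p * math.log(p) for p in probs)      # entropy
    E = sum(p * e for p, e in zip(probs, [0.0, EPS]))   # mean energy
    return S, E

T, dT = 2.0, 1e-6
S1, E1 = canonical_S_and_E(T)
S2, E2 = canonical_S_and_E(T + dT)

print(S2 - S1)        # dS
print((E2 - E1) / T)  # delta Q / T -- agrees with dS to high accuracy
</syntaxhighlight>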
| |
| | |
| ===General derivation from unitarity of quantum mechanics===
| |
| The time development operator in [[quantum mechanics|quantum theory]] is [[Unitarity (physics)|unitary]], because the [[Hamiltonian (quantum mechanics)|Hamiltonian]] is [[Hermitian matrix|hermitian]]. Consequently, the [[transition probability]] matrix is [[doubly stochastic matrix|doubly stochastic]], which implies the Second Law of Thermodynamics.<ref name="everett56">[[Hugh Everett]], [http://www.pbs.org/wgbh/nova/manyworlds/pdf/dissertation.pdf "Theory of the Universal Wavefunction"], Thesis, Princeton University, (1956, 1973), Appendix I, pp 121 ff, in particular equation (4.4) at the top of page 127, and the statement on page 29 that "it is known that the [Shannon] entropy [...] is a monotone increasing function of the time."</ref><ref name="dewitt73">[[Bryce Seligman DeWitt]], [[R. Neill Graham]], eds, ''The Many-Worlds Interpretation of Quantum Mechanics'', Princeton Series in Physics, [[Princeton University Press]] (1973), ISBN 0-691-08131-X Contains Everett's thesis: The Theory of the Universal Wavefunction, pp 3–140.</ref> This derivation is quite general, based on the [[Shannon entropy]], and does not require any assumptions beyond unitarity, which is universally accepted. It is a ''consequence'' of the irreversibility or [[Invertible matrix|singular nature]] of the general transition matrix.
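
The key step, namely that a doubly stochastic map never decreases the Shannon entropy of a probability distribution, can be illustrated with a small sketch (the matrix and the initial distribution below are arbitrary choices):

<syntaxhighlight lang="python">
import math

# Applying a doubly stochastic transition matrix (all rows and all columns sum
# to 1) to a probability distribution never decreases its Shannon entropy.

def shannon_entropy(p):
    return -sum(x * math.log(x) for x in p if x > 0.0)

# A 3x3 doubly stochastic matrix; row i gives the probabilities of moving j -> i.
M = [[0.7, 0.2, 0.1],
     [0.2, 0.5, 0.3],
     [0.1, 0.3, 0.6]]

p = [0.9, 0.05, 0.05]                 # low-entropy initial distribution

for step in range(5):
    print(step, shannon_entropy(p))   # monotonically non-decreasing
    p = [sum(M[i][j] * p[j] for j in range(3)) for i in range(3)]
</syntaxhighlight>

Repeated application drives the distribution toward the uniform one, whose entropy is the maximum possible for the given number of states.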
| |
| | |
| ==Non-equilibrium states==
| |
| It is only by convention, for the purposes of thermodynamic analysis, that any arbitrary occasion of space-time is said to be in thermodynamic equilibrium. In general, an occasion of space-time found in nature is not in thermodynamic equilibrium, read in the most stringent terms. In looser terms, nothing in the entire universe is or has ever been truly in exact thermodynamic equilibrium.<ref>Grandy, W.T., Jr (2008), p. 151.</ref><ref>Callen, H.B. (1960/1985), p. 15.</ref> If it is assumed, for the purposes of physical analysis, that one is dealing with a system in thermodynamic equilibrium, then statistically it is possible for that system to achieve moments of non-equilibrium. In some statistically unlikely events, hot particles "steal" the energy of cold particles, enough that the cold side gets colder and the hot side gets hotter, for a very brief time.
| |
| | |
The physics involved in such events is beyond the scope of classical equilibrium thermodynamics, and is the topic of the [[fluctuation theorem]] (not to be confused with the [[fluctuation-dissipation theorem]]). This was first proved by Bochov and Kuzovlev,<ref>Bochov, G.N., Kuzovlev, Y.E. (1981). Nonlinear fluctuation-dissipation relations and stochastic models in nonequilibrium thermodynamics: I. Generalized fluctuation-dissipation theorem, ''Physica'', '''106A''': 443-479. See also [http://arxiv.org/pdf/1106.0589.pdf]</ref> and later by [[Denis Evans|Evans]] and [[Debra Searles|Searles]].<ref name="Evans FT">{{cite journal|last=Evans|first=D.J.|author2=Searles, D.J.|title=The Fluctuation Theorem|journal=Advances in Physics|year=2002|volume=51|issue=7| pages=1529–1585|url=http://rsc.anu.edu.au/~evans/papers/Review_37_with_figs.pdf|accessdate=2013-12-15|bibcode = 2002AdPhy..51.1529E |doi = 10.1080/00018730210155133 }}</ref><ref>Attard, P. (2012). ''Non-Equilibrium Thermodynamics and Statistical Mechanics. Foundations and Applications'', Oxford University Press, Oxford UK, ISBN 978-0-19-966276-0, p. 288.</ref> It gives a numerical estimate of the probability that a system away from equilibrium will have a certain change in entropy over a certain amount of time. The theorem is proved with the exact time reversible dynamical equations of motion but assumes the [[Axiom of Causality]], which is equivalent to assuming uncorrelated initial conditions (namely, uncorrelated past). Such events have been observed at small enough scales for which the likelihood of their occurrence is significant. Quantitative predictions of this theorem have been confirmed in laboratory experiments by use of [[optical tweezers]] apparatus.<ref name="WSME 2002">{{cite journal |doi=10.1103/PhysRevLett.89.050601 |title=Experimental Demonstration of Violations of the Second Law of Thermodynamics for Small Systems and Short Time Scales |year=2002 |last1=Wang |first1=G. |last2=Sevick |first2=E. |last3=Mittag |first3=Emil |last4=Searles |first4=Debra |last5=Evans |first5=Denis |journal=Physical Review Letters |volume=89 |issue=5 |bibcode=2002PhRvL..89e0601W}}</ref>
| |
| | |
| ==Arrow of time==
| |
| {{Further|Entropy (arrow of time)}}
| |
| {{See also|Arrow of time}}
| |
| The second law of thermodynamics is a physical law that is not symmetric to reversal of the time direction.
| |
| | |
| The second law has been proposed to supply an explanation of the difference between moving forward and backwards in time, such as why the cause precedes the effect ([[Arrow of time#The causal arrow of time|the causal arrow of time]]).<ref>{{cite book | first = J.J. et al. | last = Halliwell | title = Physical Origins of Time Asymmetry| publisher = Cambridge | year = 1994| isbn = 0-521-56837-4}} chapter 6</ref>
| |
| | |
| ==Controversies==
| |
| | |
| ===Maxwell's demon===
| |
| {{main|Maxwell's demon}}
| |
| [[File:James-clerk-maxwell3.jpg|thumb|150px|James Clerk Maxwell]]
| |
| [[James Clerk Maxwell]] imagined one container divided into two parts, ''A'' and ''B''. Both parts are filled with the same [[gas]] at equal temperatures and placed next to each other. Observing the [[molecule]]s on both sides, an imaginary [[demon]] guards a trapdoor between the two parts. When a faster-than-average molecule from ''A'' flies towards the trapdoor, the demon opens it, and the molecule will fly from ''A'' to ''B''. The average [[speed]] of the molecules in ''B'' will have increased while in ''A'' they will have slowed down on average. Since average molecular speed corresponds to temperature, the temperature decreases in ''A'' and increases in ''B'', contrary to the second law of thermodynamics.
| |
| | |
| One of the most famous responses to this question was suggested in 1929 by [[Leó Szilárd]] and later by [[Léon Brillouin]]. Szilárd pointed out that a real-life Maxwell's demon would need to have some means of measuring molecular speed, and that the act of acquiring information would require an expenditure of energy.
| |
| | |
| ===Loschmidt's paradox===
| |
| {{main|Loschmidt's paradox}}
| |
| [[Loschmidt's paradox]], also known as the reversibility paradox, is the objection that it should not be possible to deduce an irreversible process from time-symmetric dynamics. This puts the [[time reversal symmetry]] of nearly all known low-level fundamental physical processes at odds with any attempt to infer from them the second law of thermodynamics which describes the behavior of macroscopic systems. Both of these are well-accepted principles in physics, with sound observational and theoretical support, yet they seem to be in conflict; hence the [[paradox]].
| |
| | |
Due to this paradox, derivations of the second law have to make an assumption regarding the past, namely that the system is [[Correlation and dependence|uncorrelated]] at some time in the past, or, equivalently, that the entropy in the past was lower than in the future. This assumption is usually thought of as a [[boundary condition]], and thus the second law is ultimately derived from the initial conditions of the [[Big Bang]].<ref name="Hawking AOT"/><ref>{{cite book | last = Greene | first = Brian | authorlink = Brian Greene | title = The Fabric of the Cosmos | publisher = Alfred A. Knopf | year = 2004 | pages = 161 | isbn = 0-375-41288-3}}</ref>
| |
| | |
| ===Gibbs paradox===
| |
| {{main|Gibbs paradox}}
| |
| In [[statistical mechanics]], a simple derivation of the [[entropy]] of an ideal gas of distinguishable particles based on the [[canonical ensemble]] yields an expression for the entropy which is not [[extensive variable|extensive]] (is not proportional to the amount of gas in question). This leads to an apparent [[physical paradox|paradox]] known as the Gibbs paradox, allowing, for instance, the entropy of closed systems to decrease, violating the second law of thermodynamics.
| |
| | |
| The paradox is averted by recognizing that the identity of the particles does not influence the entropy. In the conventional explanation, this is associated with an indistinguishability of the particles associated with quantum mechanics. However, a growing number of papers now take the perspective that it is merely the definition of entropy that is changed to ignore particle permutation (and thereby avert the paradox). The resulting equation for the entropy (of a classical ideal gas) is extensive, and is known as the [[Sackur-Tetrode equation]].
| |
| | |
| ===Poincaré recurrence theorem===
| |
The [[Poincaré recurrence theorem]] states that certain systems will, after a sufficiently long time, return to a state very close to the initial state. The Poincaré recurrence time is the length of time elapsed until the recurrence, which is of the order of <math>\sim \exp\left(S/k\right)</math>.<ref>L. Dyson, J. Lindesay and L. Susskind, ''Is There Really a de Sitter/CFT Duality'', [[JHEP]] '''0208''', 45 (2002)</ref> The result applies to physical systems in which energy is conserved. The recurrence theorem apparently contradicts the second law of thermodynamics, which says that large dynamical systems evolve irreversibly towards the state with higher entropy, so that if one starts with a low-entropy state, the system will never return to it. There are many possible ways to resolve this paradox, but none of them is universally accepted.{{Citation needed|date=October 2010}} The most reasonable argument is that for typical thermodynamical systems the recurrence time is so large (many times longer than the lifetime of the universe) that, for all practical purposes, one cannot observe the recurrence.
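
An order-of-magnitude sketch of why the recurrence time is unobservable in practice (the entropy value below is an assumed round number for roughly a mole of gas):

<syntaxhighlight lang="python">
import math

# Poincare recurrence time t ~ exp(S/k) for an assumed macroscopic entropy.

S_over_k = 1.0e24                         # dimensionless entropy, assumed value

# exp(10^24) overflows any float, so work with the base-10 logarithm instead:
log10_recurrence = S_over_k / math.log(10.0)
print(f"recurrence time ~ 10^(10^{math.log10(log10_recurrence):.1f}) (in any time unit)")

# For comparison, the age of the universe is only about 4.3e17 seconds:
print(f"age of universe ~ 10^{math.log10(4.3e17):.1f} s")
</syntaxhighlight>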
| |
| | |
| ===Future of the universe===
| |
| {{main|Future of an expanding universe}}
| |
It has been suggested in the past that since the entropy in the universe is continuously rising, the amount of free energy diminishes and the universe will arrive at a [[Heat death of the universe|heat death]], in which no [[Work (thermodynamics)|work]] can be done and life cannot exist. An expanding universe, however, is not in thermodynamic equilibrium, and the simple considerations leading to the heat death scenario are not valid.
| |
| | |
| Taking the current view of the universe into account, it has been proposed that the universe will probably exhibit a future in which all known energy sources (such as stars) will decay. Nevertheless, it may be the case that work in smaller and smaller energy scales will still be possible, so that "interesting things can continue to happen at ... increasingly low levels of energy".<ref>F.C. Adams and G. Laughlin, A DYING UNIVERSE: The Long Term Fate and Evolution of Astrophysical Objects. Rev.Mod.Phys.69:337-372,1997. astro-ph/9701131v1 [http://arxiv.org/pdf/astro-ph/9701131v1.pdf]</ref>
| |
| | |

==Quotations==
{{Wikiquote}}
{{quote|The law that entropy always increases holds, I think, the supreme position among the [[laws of Nature]]. If someone points out to you that your pet theory of the [[universe]] is in disagreement with [[Maxwell's equations]] — then so much the worse for Maxwell's equations. If it is found to be contradicted by observation — well, these experimentalists do bungle things sometimes. But if your theory is found to be against the second law of thermodynamics I can give you no hope; there is nothing for it but to collapse in deepest humiliation.|Sir [[Arthur Stanley Eddington]], ''The Nature of the Physical World'' (1927)}}

{{quote|The tendency for entropy to increase in isolated systems is expressed in the second law of thermodynamics — perhaps the most pessimistic and amoral formulation in all human thought.|[[Gregory Hill (writer)|Gregory Hill]] and [[Kerry Thornley]], ''[[Principia Discordia]]'' (1965)}}

{{quote|There have been nearly as many formulations of the second law as there have been discussions of it.|Philosopher/physicist [[Percy Williams Bridgman|P.W. Bridgman]] (1941)}}

{{quote|Clausius is the author of the sybillic utterance, "The energy of the universe is constant; the entropy of the universe tends to a maximum." The objectives of continuum thermomechanics stop far short of explaining the "universe", but within that theory we may easily derive an explicit statement in some ways reminiscent of Clausius, but referring only to a modest object: an isolated body of finite size.|[[Clifford Truesdell|Truesdell, C.]], Muncaster, R.G. (1980). ''Fundamentals of Maxwell's Kinetic Theory of a Simple Monatomic Gas, Treated as a Branch of Rational Mechanics'', Academic Press, New York, ISBN 0-12-701350-4, p. 17.}}

{{quote|The [historically] early appearance of life is certainly an argument in favour of the idea that life is the result of spontaneous self-organization that occurs whenever conditions for it permit. However, we must admit that we remain far from any quantitative theory.|[[Ilya Prigogine|Prigogine, I.]], [[Isabelle Stengers|Stengers, I.]] (1984). ''Order Out of Chaos. Man's New Dialogue with Nature'', Bantam Books, Toronto, ISBN 0-553-34082-4, p. 176.}}

==See also==
{{colbegin}}
*[[Clausius–Duhem inequality]]
*''[[Entropy: A New World View]]''
*[[History of thermodynamics]]
*[[Jarzynski equality]]
*[[Laws of thermodynamics]]
*[[Maximum entropy thermodynamics]]
*''[[Reflections on the Motive Power of Fire]]''
*[[Relativistic heat conduction]]
*[[Thermal diode]]
{{colend}}

==References==
{{reflist|35em}}

===Bibliography of citations===
{{refbegin}}
*Adkins, C.J. (1968/1983). ''Equilibrium Thermodynamics'', (first edition 1968), third edition 1983, Cambridge University Press, Cambridge UK, ISBN 0-521-25445-0.
*Bailyn, M. (1994). ''A Survey of Thermodynamics'', American Institute of Physics, New York, ISBN 0-88318-797-3.
*Buchdahl, H.A. (1966). ''The Concepts of Classical Thermodynamics'', Cambridge University Press, Cambridge UK.
*[[Herbert Callen|Callen, H.B.]] (1960/1985). ''Thermodynamics and an Introduction to Thermostatistics'', (first edition 1960), second edition 1985, Wiley, New York, ISBN 0-471-86256-8.
*{{cite journal|author=C. Carathéodory|author1-link=Constantin Carathéodory|title=Untersuchungen über die Grundlagen der Thermodynamik|year=1909|journal=Mathematische Annalen|volume=67|pages=355–386|url=http://gdz.sub.uni-goettingen.de/index.php?id=11&PPN=PPN235181684_0067&DMDID=DMDLOG_0033&L=1|quote=Axiom II: In jeder beliebigen Umgebung eines willkürlich vorgeschriebenen Anfangszustandes gibt es Zustände, die durch adiabatische Zustandsänderungen nicht beliebig approximiert werden können. (p. 363)}} A translation may be found [http://neo-classical-physics.info/uploads/3/0/6/5/3065888/caratheodory_-_thermodynamics.pdf here]. A mostly reliable translation is also [http://books.google.com.au/books?id=xwBRAAAAMAAJ&q=Investigation+into+the+foundations to be found] in Kestin, J. (1976). ''The Second Law of Thermodynamics'', Dowden, Hutchinson & Ross, Stroudsburg PA.
*[[Nicolas Léonard Sadi Carnot|Carnot, S.]] (1824/1986). [http://www.worldcat.org/title/reflections-on-the-motive-power-of-fire-a-critical-edition-with-the-surviving-scientific-manuscripts-translated-and-edited-by-fox-robert/oclc/812944517&referer=brief_results ''Reflections on the motive power of fire''], Manchester University Press, Manchester UK, ISBN 0719017416. [http://www.archive.org/stream/reflectionsonmot00carnrich#page/n7/mode/2up Also here.]
*{{cite journal|last=Clausius|first=R.|author1-link=Rudolf Clausius|title=Ueber die bewegende Kraft der Wärme und die Gesetze, welche sich daraus für die Wärmelehre selbst ableiten lassen|journal=Annalen der Physik|year=1850|volume=79|pages=368–397, 500–524|url=http://gallica.bnf.fr/ark:/12148/bpt6k15164w/f518.image|accessdate=26 June 2012}} Translated into English: {{cite journal|last=Clausius|first=R.|title=On the Moving Force of Heat, and the Laws regarding the Nature of Heat itself which are deducible therefrom|journal=London, Edinburgh and Dublin Philosophical Magazine and Journal of Science|date=July 1851| volume=2|series=4th|issue=VIII|pages=1–21; 102–119|url=http://archive.org/details/londonedinburghd02lond|accessdate=26 June 2012}}
*{{cite book|last=Clausius|first=R.|author1-link=Rudolf Clausius|title=The Mechanical Theory of Heat – with its Applications to the Steam Engine and to Physical Properties of Bodies|year=1867|publisher=John van Voorst|location=London|url=http://books.google.com/books?id=8LIEAAAAYAAJ&printsec=frontcover&dq=editions:PwR_Sbkwa8IC&hl=en&sa=X&ei=h6DgT5WnF46e8gSVvbynDQ&ved=0CDYQuwUwAA#v=onepage&q&f=false|accessdate=19 June 2012}}
*Eu, B.C. (2002). ''Generalized Thermodynamics. The Thermodynamics of Irreversible Processes and Generalized Hydrodynamics'', Kluwer Academic Publishers, Dordrecht, ISBN 1-4020-0788-4.
*Grandy, W.T., Jr (2008). [http://global.oup.com/academic/product/entropy-and-the-time-evolution-of-macroscopic-systems-9780199546176?cc=au&lang=en& ''Entropy and the Time Evolution of Macroscopic Systems'']. Oxford University Press. ISBN 978-0-19-954617-6.
*Kondepudi, D., [[Ilya Prigogine|Prigogine, I.]] (1998). ''Modern Thermodynamics: From Heat Engines to Dissipative Structures'', John Wiley & Sons, Chichester, ISBN 0-471-97393-9.
*Lebon, G., Jou, D., Casas-Vázquez, J. (2008). ''Understanding Non-equilibrium Thermodynamics: Foundations, Applications, Frontiers'', Springer-Verlag, Berlin, e-ISBN 978-3-540-74252-4.
*{{cite journal|title=The Physics and Mathematics of the Second Law of Thermodynamics|last1=Lieb |first1=E. H. |last2=Yngvason |first2=J.|journal=Physics Reports|volume=310|pages=1–96 |year=1999|doi=10.1016/S0370-1573(98)00082-9|arxiv = cond-mat/9708200 |bibcode = 1999PhR...310....1L }}
*Münster, A. (1970). ''Classical Thermodynamics'', translated by E.S. Halberstadt, Wiley–Interscience, London, ISBN 0-471-62430-6.
*[[Max Planck|Planck, M.]] (1926). Über die Begründung des zweiten Hauptsatzes der Thermodynamik, ''S.B. Preuß. Akad. Wiss. phys. math. Kl.'': 453–463.
*Quinn, T.J. (1983). ''Temperature'', Academic Press, London, ISBN 0-12-569680-9.
*{{cite journal|last=Thomson|first=W.|author-link=William Thomson, 1st Baron Kelvin|title=On the Dynamical Theory of Heat, with numerical results deduced from Mr Joule’s equivalent of a Thermal Unit, and M. Regnault’s Observations on Steam|journal=Transactions of the Royal Society of Edinburgh|date=March 1851|volume=XX|issue=part II|pages=261–268; 289–298}} Also published in {{cite journal|last=Thomson|first=W.|title=On the Dynamical Theory of Heat, with numerical results deduced from Mr Joule’s equivalent of a Thermal Unit, and M. Regnault’s Observations on Steam|journal=Philos. Mag |date=December 1852 |volume=IV |series=4 |issue=22 |pages=8–21 |url=http://archive.org/details/londonedinburghp04maga |accessdate=25 June 2012}}
*[[Clifford Truesdell|Truesdell, C.]] (1980). ''The Tragicomical History of Thermodynamics 1822–1854'', Springer, New York, ISBN 0-387-90403-4.
*[[Mark Zemansky|Zemansky, M.W.]] (1968). ''Heat and Thermodynamics. An Intermediate Textbook'', fifth edition, McGraw-Hill Book Company, New York.
{{refend}}

==Further reading==
*Goldstein, Martin, and Inge F. Goldstein (1993). ''The Refrigerator and the Universe''. Harvard University Press. Chapters 4–9 contain an introduction to the second law, somewhat less technical than this entry. ISBN 978-0-674-75324-2
*Leff, Harvey S., and Rex, Andrew F. (eds.) (2003). ''Maxwell's Demon 2: Entropy, Classical and Quantum Information, Computing''. Bristol UK; Philadelphia PA: [[Institute of Physics]]. ISBN 978-0-585-49237-7
*{{Cite book | first = J.J. | last = Halliwell | title = Physical Origins of Time Asymmetry| publisher = Cambridge | year = 1994| isbn = 0-521-56837-4}} (technical).
*{{cite book |title=Reflections on the Motive Power of Heat and on Machines Fitted to Develop That Power |last=Carnot |first=Sadi |authorlink= |coauthors= [[Robert Henry Thurston|Thurston, Robert Henry]] (editor and translator) |year=1890 |publisher=J. Wiley & Sons |location=New York |isbn= |pages= }} ([http://books.google.com/books?id=tgdJAAAAIAAJ full text of 1897 ed.]) ([http://www.history.rochester.edu/steam/carnot/1943/ html])
*Stephen Jay Kline (1999). ''The Low-Down on Entropy and Interpretive Thermodynamics'', La Cañada, CA: DCW Industries. ISBN 1928729010.
*Kostic, M. (2011). ''Revisiting The Second Law of Energy Degradation and Entropy Generation: From Sadi Carnot's Ingenious Reasoning to Holistic Generalization''. AIP Conf. Proc. 1411, pp. 327–350; doi: http://dx.doi.org/10.1063/1.3665247. American Institute of Physics. ISBN 978-0-7354-0985-9. Abstract at [http://adsabs.harvard.edu/abs/2011AIPC.1411..327K]. Full article (24 pages) at [http://scitation.aip.org/getpdf/servlet/GetPDFServlet?filetype=pdf&id=APCPCS001411000001000327000001&idtype=cvips&doi=10.1063/1.3665247&prog=normal&bypassSSO=1], also at [http://www.kostic.niu.edu/2ndLaw/Revisiting%20The%20Second%20Law%20of%20Energy%20Degradation%20and%20Entropy%20Generation%20-%20From%20Carnot%20to%20Holistic%20Generalization-4.pdf].

==External links==
*[[Stanford Encyclopedia of Philosophy]]: "[http://plato.stanford.edu/entries/statphys-statmech/ Philosophy of Statistical Mechanics]" – by Lawrence Sklar.
*[http://web.mit.edu/16.unified/www/FALL/thermodynamics/notes/node30.html ''Second law of thermodynamics''] in the MIT course [http://web.mit.edu/16.unified/www/FALL/thermodynamics/notes/notes.html ''Unified Thermodynamics and Propulsion''] by Prof. Z. S. Spakovszky
*[[E.T. Jaynes]], 1988, "[http://bayes.wustl.edu/etj/articles/ccarnot.pdf The evolution of Carnot's principle]," in G. J. Erickson and C. R. Smith (eds.), ''Maximum-Entropy and Bayesian Methods in Science and Engineering, Vol. 1'', p. 267.
*[http://neo-classical-physics.info/uploads/3/0/6/5/3065888/caratheodory_-_thermodynamics.pdf Caratheodory, C., "Examination of the foundations of thermodynamics," translated by D. H. Delphenich]

{{DEFAULTSORT:Second Law Of Thermodynamics}}
[[Category:Concepts in physics]]
[[Category:Laws of thermodynamics|2]]
[[Category:Non-equilibrium thermodynamics]]
[[Category:Philosophy of thermal and statistical physics]]