| {{two other uses|the concept of definite integrals in [[calculus]]|the indefinite integral|antiderivative|the set of numbers|integer}}
| {{Refimprove|date=April 2012}}
| |
| [[File:Integral example.svg|thumb|300px|A definite integral of a function can be represented as the signed area of the region bounded by its graph.|alt=Definite integral example]]
| |
| {{Calculus|Integral}}
| |
| | |
'''Integration''' is an important concept in [[mathematics]] and, together with its inverse, [[derivative|differentiation]], is one of the two main operations in [[calculus]]. Given a [[function (mathematics)|function]] ''f'' of a [[Real number|real]] [[variable (mathematics)|variable]] ''x'' and an [[interval (mathematics)|interval]] {{nowrap|<nowiki>[</nowiki>''a'', ''b''<nowiki>]</nowiki>}} of the [[real line]], the '''definite integral'''
| | |
| : <math>\int_a^b \! f(x)\,dx</math>
| |
| | |
is defined informally to be the signed [[area (geometry)|area]] of the region in the ''xy''-plane bounded by the [[Graph of a function|graph]] of ''f'', the ''x''-axis, and the vertical lines {{nowrap|''x'' {{=}} ''a''}} and {{nowrap|''x'' {{=}} ''b''}}, such that area above the ''x''-axis adds to the total, and that below the ''x''-axis subtracts from the total.
| | |
| The term ''integral'' may also refer to the related notion of the [[antiderivative]], a function ''F'' whose [[derivative]] is the given function ''f''. In this case, it is called an ''indefinite integral'' and is written:
| |
| :<math>F(x) = \int f(x)\,dx.</math>
| |
| However, the integrals discussed in this article are termed ''definite integrals''.
| |
| | |
| The principles of integration were formulated independently by [[Isaac Newton]] and [[Gottfried Leibniz]] in the late 17th century. Through the [[fundamental theorem of calculus]], which they independently developed, integration is connected with differentiation: if ''f'' is a continuous real-valued function defined on a [[closed interval]] {{nowrap|[''a'', ''b'']}}, then, once an antiderivative ''F'' of ''f'' is known, the definite integral of ''f'' over that interval is given by
| |
| | |
| :<math>\int_a^b \! f(x)\,dx = F(b) - F(a).</math>
| |
| | |
Integrals and derivatives became the basic tools of calculus, with numerous applications in science and [[engineering]]. The founders of the calculus thought of the integral as an infinite sum of rectangles of [[infinitesimal]] width. A rigorous mathematical definition of the integral was given by [[Bernhard Riemann]]. It is based on a limiting procedure which approximates the area of a [[curvilinear]] region by breaking the region into thin vertical slabs. Beginning in the nineteenth century, more sophisticated notions of integrals began to appear, where the type of the function as well as the domain over which the integration is performed has been generalised. A [[line integral]] is defined for functions of two or three variables, and the interval of integration {{nowrap|<nowiki>[</nowiki>''a'', ''b''<nowiki>]</nowiki>}} is replaced by a certain [[curve]] connecting two points in the plane or in space. In a [[surface integral]], the curve is replaced by a piece of a [[surface]] in three-dimensional space.
| |
| Integrals of [[differential form]]s play a fundamental role in modern [[differential geometry]]. These generalizations of integrals first arose from the needs of [[physics]], and they play an important role in the formulation of many physical laws, notably those of [[Classical electromagnetism|electrodynamics]]. There are many modern concepts of integration, among these, the most common is based on the abstract mathematical theory known as [[Lebesgue integration]], developed by [[Henri Lebesgue]].
| |
| | |
| {{TOC limit|2}}
| |
| | |
| ==History==
| |
| {{See also|History of calculus}}
| |
| | |
| ===Pre-calculus integration===
| |
| The first documented systematic technique capable of determining integrals is the [[method of exhaustion]] of the [[ancient Greek]] astronomer [[Eudoxus of Cnidus|Eudoxus]] (''ca.'' 370 BC), which sought to find areas and volumes by breaking them up into an infinite number of shapes for which the area or volume was known. This method was further developed and employed by [[Archimedes]] in the 3rd century BC and used to calculate areas for [[parabola]]s and an approximation to the area of a circle. Similar methods were independently developed in China around the 3rd century AD by [[Liu Hui]], who used it to find the area of the circle. This method was later used in the 5th century by Chinese father-and-son mathematicians [[Zu Chongzhi]] and [[Zu Geng (mathematician)|Zu Geng]] to find the volume of a sphere ({{harvnb|Shea | 2007}}; {{harvnb|Katz|2004|pp=125–126}}).
| |
| | |
The next significant advances in integral calculus did not begin to appear until the 17th century. At this time the work of [[Bonaventura Cavalieri|Cavalieri]] with his [[Cavalieri's principle|''method of indivisibles'']], and work by [[Pierre de Fermat|Fermat]], began to lay the foundations of modern calculus, with Cavalieri computing the integrals of ''x''<sup>''n''</sup> up to degree {{nowrap|''n'' {{=}} 9}} in [[Cavalieri's quadrature formula]]. Further steps were made by [[Isaac Barrow|Barrow]] and [[Evangelista Torricelli|Torricelli]], who provided the first hints of a connection between integration and [[Differential calculus|differentiation]]. Barrow provided the first proof of the [[fundamental theorem of calculus]]. [[John Wallis|Wallis]] generalized Cavalieri's method, computing integrals of ''x'' to a general power, including negative powers and fractional powers.
| |
| | |
| ===Newton and Leibniz===
| |
| The major advance in integration came in the 17th century with the independent discovery of the [[fundamental theorem of calculus]] by [[Isaac Newton|Newton]] and [[Gottfried Leibniz|Leibniz]]. The theorem demonstrates a connection between integration and differentiation. This connection, combined with the comparative ease of differentiation, can be exploited to calculate integrals. In particular, the fundamental theorem of calculus allows one to solve a much broader class of problems. Equal in importance is the comprehensive mathematical framework that both Newton and Leibniz developed. Given the name infinitesimal calculus, it allowed for precise analysis of functions within continuous domains. This framework eventually became modern [[calculus]], whose notation for integrals is drawn directly from the work of Leibniz.
| |
| <!--- Please, do not remove: helpful for verification
| |
| The last sentence originally said 'work of Newton and Leibniz', but for integrals, only Leibniz's notation is used. --->
| |
| | |
| ===Formalizing integrals===
| |
| While Newton and Leibniz provided a systematic approach to integration, their work lacked a degree of [[Rigor#Mathematical rigour|rigour]]. [[George Berkeley|Bishop Berkeley]] memorably attacked the vanishing increments used by Newton, calling them "[[The Analyst#Content|ghosts of departed quantities]]". Calculus acquired a firmer footing with the development of [[Limit (mathematics)|limits]]. Integration was first rigorously formalized, using limits, by [[Bernhard Riemann|Riemann]]. Although all bounded piecewise continuous functions are Riemann integrable on a bounded interval, subsequently more general functions were considered—particularly in the context of [[Fourier analysis]]—to which Riemann's definition does not apply, and [[Henri Lebesgue|Lebesgue]] formulated a different definition of integral, founded in [[Measure (mathematics)|measure theory]] (a subfield of [[real analysis]]). Other definitions of integral, extending Riemann's and Lebesgue's approaches, were proposed. These approaches based on the real number system are the ones most common today, but alternative approaches exist, such as a definition of integral as the [[standard part]] of an infinite Riemann sum, based on the [[hyperreal number]] system.
| |
| | |
| ===Historical notation===
| |
| [[Isaac Newton]] used a small vertical bar above a variable to indicate integration, or placed the variable inside a box. The vertical bar was easily confused with <math>\dot{x}</math> or <math>x'\,\!</math>, which Newton used to indicate differentiation, and the box notation was difficult for printers to reproduce, so these notations were not widely adopted.
| |
| | |
| The modern notation for the indefinite integral was introduced by [[Gottfried Leibniz]] in 1675 ({{Harvnb|Burton|1988|loc=p. 359}}; {{Harvnb|Leibniz|1899|loc=p. 154}}). He adapted the [[integral symbol]], '''∫''', from the letter ''ſ'' ([[long s]]), standing for ''summa'' (written as ''ſumma''; Latin for "sum" or "total"). The modern notation for the definite integral, with limits above and below the integral sign, was first used by [[Joseph Fourier]] in ''Mémoires'' of the French Academy around 1819–20, reprinted in his book of 1822 ({{Harvnb|Cajori|1929|loc=pp. 249–250}}; {{Harvnb|Fourier|1822|loc=§231}}).
| |
| | |
| ==Terminology and notation==
| |
| The simplest case, the integral over ''x'' of a real-valued function ''f''(''x''), is written as
| |
| | |
| :<math> \int f(x)\,dx . </math>
| |
| | |
The integral sign ∫ represents integration. The ''dx'' indicates that we are integrating over ''x''; ''x'' is called the variable of integration. Inside the ∫...''dx'' is the expression to be integrated, called the ''integrand''; in this case the integrand is the function ''f''(''x''). In correct mathematical typography, the ''dx'' is separated from the integrand by a space (as shown), and some authors use an upright ''d'' (that is, d''x'' instead of ''dx''). Because there is no domain specified, the integral is called an ''indefinite integral''.
| |
| | |
| When integrating over a specified domain, we speak of a ''definite integral''. Integrating over a domain ''D'' is written as
| |
| :<math>\int_D f(x)\,dx ,</math> or <math>\displaystyle \int_a^b f(x)\,dx </math> if the domain is an interval [''a'', ''b''] of ''x''.
| |
| | |
| The domain ''D'' or the interval [''a'', ''b''] is called the ''domain of integration''.
| |
| | |
| If a function has an integral, it is said to be ''integrable''. In general, the integrand may be a function of more than one variable, and the domain of integration may be an area, volume, a higher dimensional region, or even an abstract space that does not have a geometric structure in any usual sense (such as a [[sample space]] in probability theory).
| |
| | |
| In the [[modern Arabic mathematical notation]], which aims at pre-university levels of education in the Arab world and is written from right to left, a reflected integral symbol [[File:ArabicIntegralSign.svg|22px]] is used {{Harv|W3C|2006}}.
| |
| | |
| The variable of integration ''dx'' has different interpretations depending on the theory being used. It can be seen as strictly a notation indicating that ''x'' is a [[bound variable|dummy variable]] of integration; if the integral is seen as a [[Riemann sum]], ''dx'' is a reflection of the weights or widths ''d'' of the intervals of ''x''; in [[Lebesgue integration]] and its extensions, ''dx'' is a [[measure (mathematics)|measure]]; in [[non-standard analysis]], it is an [[infinitesimal]]; or it can be seen as an independent mathematical quantity, a [[differential form]]. More complicated cases may vary the notation slightly. In Leibniz's notation, ''dx'' is interpreted as an infinitesimal change in ''x''. Although Leibniz's interpretation [[The Analyst#Content|lacks rigour]], his integration notation is the most common one in use today.
| |
| | |
| ==Introduction==
| |
| Integrals appear in many practical situations. If a swimming pool is rectangular with a flat bottom, then from its length, width, and depth we can easily determine the volume of water it can contain (to fill it), the area of its surface (to cover it), and the length of its edge (to rope it). But if it is oval with a rounded bottom, all of these quantities call for integrals. Practical approximations may suffice for such trivial examples, but [[precision engineering]] (of any discipline) requires exact and rigorous values for these elements.
| |
| | |
| [[File:Integral approximations.svg|thumb|right|Approximations to integral of √''x'' from 0 to 1, with 5 <span style="color:#fec200">■</span> (yellow) right endpoint partitions and 12 <span style="color:#009246">■</span> (green) left endpoint partitions|alt=Integral approximation example]]
| |
| To start off, consider the curve {{nowrap|''y'' {{=}} ''f''(''x'')}} between {{nowrap|''x'' {{=}} 0}} and {{nowrap|''x'' {{=}} 1}} with {{nowrap|''f''(''x'') {{=}} √''x''}}. We ask:
| |
| :What is the area under the function ''f'', in the interval from 0 to 1?
| |
| and call this (yet unknown) area the '''integral''' of ''f''. The notation for this integral will be
| |
| :<math> \int_0^1 \sqrt x \, dx \,\!.</math>
| |
| | |
As a first approximation, look at the unit square given by the sides {{nowrap|''x'' {{=}} 0}} to {{nowrap|''x'' {{=}} 1}} and {{nowrap|''y'' {{=}} ''f''(0) {{=}} 0}} and {{nowrap|''y'' {{=}} ''f''(1) {{=}} 1}}. Its area is exactly 1. Since ''f''(''x'') < 1 for all ''x'' < 1, the true value of the integral must be somewhat less than 1. Decreasing the width of the approximation rectangles gives a better result; so partition the interval into five steps, using the approximation points 0, 1/5, 2/5, and so on to 1. Fit a box for each step using the right end height of each curve piece, thus √(1⁄5), √(2⁄5), and so on to {{nowrap|√1 {{=}} 1}}. Summing the areas of these rectangles, we get a better approximation to the sought integral, namely
| |
| :<math>\textstyle \sqrt {\frac {1} {5}} \left ( \frac {1} {5} - 0 \right ) + \sqrt {\frac {2} {5}} \left ( \frac {2} {5} - \frac {1} {5} \right ) + \cdots + \sqrt {\frac {5} {5}} \left ( \frac {5} {5} - \frac {4} {5} \right ) \approx 0.7497.\,\!</math>
| |
| | |
We are taking a sum of finitely many function values of ''f'', multiplied by the differences between consecutive approximation points. We can easily see that the approximation is still too large. Using more steps produces a closer approximation, but it will never be exact: replacing the five subintervals by twelve in the same way, but with the left end height of each piece, we get an approximate value for the area of 0.6203, which is too small. The key idea is the transition from adding ''finitely many'' differences of approximation points multiplied by their respective function values to using infinitely many fine, or ''[[infinitesimal]]'', steps.
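
The successive approximations described above are easy to reproduce numerically. The following is a minimal sketch in Python (the helper names are illustrative, not part of any standard library): it computes the five-rectangle right-endpoint sum, the twelve-rectangle left-endpoint sum, and a much finer sum that approaches the exact value 2/3 obtained below.

<syntaxhighlight lang="python">
from math import sqrt

def right_endpoint_sum(f, a, b, n):
    """Riemann sum using the right endpoint of each of n equal subintervals."""
    width = (b - a) / n
    return sum(f(a + i * width) * width for i in range(1, n + 1))

def left_endpoint_sum(f, a, b, n):
    """Riemann sum using the left endpoint of each of n equal subintervals."""
    width = (b - a) / n
    return sum(f(a + i * width) * width for i in range(n))

f = sqrt
print(right_endpoint_sum(f, 0.0, 1.0, 5))        # about 0.7497, an overestimate
print(left_endpoint_sum(f, 0.0, 1.0, 12))        # about 0.6203, an underestimate
print(right_endpoint_sum(f, 0.0, 1.0, 10**6))    # approaches the exact value 2/3
</syntaxhighlight>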
| |
| | |
| As for the ''actual calculation of integrals'', the [[fundamental theorem of calculus]], due to Newton and Leibniz, is the fundamental link between the operations of [[Derivative|differentiating]] and integrating. Applied to the square root curve, ''f''(''x'') = ''x''<sup>1/2</sup>, it says to look at the [[antiderivative]] {{nowrap|''F''(''x'') {{=}} (2/3)''x''<sup>3/2</sup>}}, and simply take ''F''(1) − ''F''(0), where 0 and 1 are the boundaries of the [[interval (mathematics)|interval]] [0,1]. So the ''exact'' value of the area under the curve is computed formally as
| |
| :<math> \int_0^1 \sqrt x \,dx = \int_0^1 x^{1/2} \,dx = F(1)- F(0) = \frac{2}{3}.</math>
| |
| | |
| (This is a case of a general rule, that for {{nowrap|''f''(''x'') {{=}} ''x''<sup>''q''</sup>}}, with {{nowrap|''q'' ≠ −1}}, the related function, the so-called antiderivative is {{nowrap|''F''(''x'') {{=}} ''x''<sup>''q'' + 1</sup>/(''q'' + 1).}})
| |
| | |
| The notation
| |
| :<math> \int f(x) \, dx \,\! </math>
| |
| conceives the integral as a weighted sum, denoted by the elongated ''s'', of function values, ''f''(''x''), multiplied by infinitesimal step widths, the so-called ''differentials'', denoted by ''dx''. The multiplication sign is usually omitted.
| |
| | |
| <!-- Note: Today's limits of integration were not part of the notation until much later, due to Fourier. -->Historically, after the failure of early efforts to rigorously interpret infinitesimals, Riemann formally defined integrals as a [[limit (mathematics)|limit]] of weighted sums, so that the ''dx'' suggested the limit of a difference (namely, the interval width). Shortcomings of Riemann's dependence on intervals and continuity motivated newer definitions, especially the [[Lebesgue integration|Lebesgue integral]], which is founded on an ability to extend the idea of "measure" in much more flexible ways. Thus the notation
| |
| :<math> \int_A f(x) \, d\mu \,\!</math>
| |
| refers to a weighted sum in which the function values are partitioned, with μ measuring the weight to be assigned to each value. Here ''A'' denotes the region of integration.
| |
| | |
| [[Differential geometry]], with its "calculus on [[manifold]]s", gives the familiar notation yet another interpretation. Now ''f''(''x'') and ''dx'' become a [[differential form]], {{nowrap|ω {{=}} ''f''(''x'') ''dx''}}, a new [[differential operator]] ''d'', known as the [[exterior derivative]] is introduced, and the fundamental theorem becomes the more general [[Stokes' theorem]],
| |
| :<math> \int_{A} d\omega = \int_{\part A} \omega , \,\!</math>
| |
| from which [[Green's theorem]], the [[divergence theorem]], and the [[fundamental theorem of calculus]] follow.
| |
| | |
| More recently, infinitesimals have reappeared with rigor, through modern innovations such as [[non-standard analysis]]. Not only do these methods vindicate the intuitions of the pioneers; they also lead to new mathematics.
| |
| | |
| Although there are differences between these conceptions of integral, there is considerable overlap. Thus, the area of the surface of the oval swimming pool can be handled as a geometric ellipse, a sum of infinitesimals, a Riemann integral, a Lebesgue integral, or as a manifold with a differential form. The calculated result will be the same for all.
| |
| {{multiple image
| |
| <!-- Essential parameters -->
| |
| | align = center
| |
| | direction = horizontal
| |
| | width = 300
| |
| <!-- Extra parameters -->
| |
| | header = Darboux sums
| |
| | header_align = center
| |
| | header_background =
| |
| | footer =
| |
| | footer_align =
| |
| | footer_background =
| |
| | background color =
| |
| | |
| |image1=Riemann Integration and Darboux Upper Sums.gif
| |
| |width1=300
| |
| |caption1=<div class="center" style="width:auto; margin-left:auto; margin-right:auto;">Darboux upper sums of the function y = x<sup>2</sup></div>
| |
| |alt1=Upper Darboux sum example
| |
| | |
| |image2=Riemann Integration and Darboux Lower Sums.gif
| |
| |width2=300
| |
| |caption2=<div class="center" style="width:auto; margin-left:auto; margin-right:auto;">Darboux lower sums of the function y = x<sup>2</sup></div>
| |
| |alt2=Lower Darboux sum example
| |
| }}
| |
| | |
| ==Formal definitions==
| |
| {{multiple image
| |
| | align = right
| |
| | direction = vertical
| |
| | width = 200
| |
| | |
| | image1 = Integral Riemann sum.png
| |
| | alt1 = Riemann integral approximation example
| |
| | caption1 = <div class="center" style="width:auto; margin-left:auto; margin-right:auto;">Integral example with irregular partitions (largest marked in red)</div>
| |
| | |
| | image2 = Riemann sum convergence.png
| |
| | alt2 = Riemann sum convergence
| |
| | caption2 = <div class="center" style="width:auto; margin-left:auto; margin-right:auto;">Riemann sums converging</div>
| |
| }}
| |
| | |
| There are many ways of formally defining an integral, not all of which are equivalent. The differences exist mostly to deal with differing special cases which may not be integrable under other definitions, but also occasionally for pedagogical reasons. The most commonly used definitions of integral are Riemann integrals and Lebesgue integrals.
| |
| | |
| ===Riemann integral===
| |
| {{Main|Riemann integral}}
| |
| | |
| The Riemann integral is defined in terms of [[Riemann sum]]s of functions with respect to ''tagged partitions'' of an interval. Let [''a'',''b''] be a [[Interval (mathematics)|closed interval]] of the real line; then a ''tagged partition'' of [''a'',''b''] is a finite sequence
| |
| | |
| :<math> a = x_0 \le t_1 \le x_1 \le t_2 \le x_2 \le \cdots \le x_{n-1} \le t_n \le x_n = b . \,\!</math>
| |
| | |
| This partitions the interval [''a'',''b''] into ''n'' sub-intervals {{nowrap|[''x''<sub>''i''−1</sub>, ''x''<sub>''i''</sub>]}} indexed by ''i'', each of which is "tagged" with a distinguished point {{nowrap|''t''<sub>''i''</sub> ∈ [''x''<sub>''i''−1</sub>, ''x''<sub>''i''</sub>]}}. A ''Riemann sum'' of a function ''f'' with respect to such a tagged partition is defined as
| |
| :<math>\sum_{i=1}^{n} f(t_i) \Delta_i ; </math>
| |
| thus each term of the sum is the area of a rectangle with height equal to the function value at the distinguished point of the given sub-interval, and width the same as the sub-interval width. Let {{nowrap|Δ<sub>''i''</sub> {{=}} ''x''<sub>''i''</sub>−''x''<sub>''i''−1</sub>}} be the width of sub-interval ''i''; then the ''mesh'' of such a tagged partition is the width of the largest sub-interval formed by the partition, {{nowrap|max<sub>''i''{{=}}1…''n''</sub> Δ<sub>''i''</sub>}}. The ''Riemann integral'' of a function ''f'' over the interval [''a'',''b''] is equal to ''S'' if:
| |
:For all {{nowrap|ε > 0}} there exists {{nowrap|δ > 0}} such that, for any tagged partition of [''a'',''b''] with mesh less than δ, we have
| |
| ::<math>\left| S - \sum_{i=1}^{n} f(t_i)\Delta_i \right| < \varepsilon.</math>
| |
| When the chosen tags give the maximum (respectively, minimum) value of each interval, the Riemann sum becomes an upper (respectively, lower) [[Darboux integral|Darboux sum]], suggesting the close connection between the Riemann integral and the [[Darboux integral]].
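
As a concrete illustration of this definition, the following sketch (plain Python; the helper names are illustrative) builds tagged partitions of [0, 1] with randomly chosen tags, evaluates the corresponding Riemann sums for ''f''(''x'') = ''x''<sup>2</sup>, and shows the sums settling near the value 1/3 as the mesh shrinks, as the definition requires.

<syntaxhighlight lang="python">
import random

def riemann_sum(f, partition, tags):
    """Sum of f(t_i) * (x_i - x_{i-1}) over a tagged partition."""
    return sum(f(t) * (x1 - x0)
               for (x0, x1), t in zip(zip(partition, partition[1:]), tags))

def random_tagged_partition(a, b, n):
    """n equal subintervals of [a, b], each tagged with a random point inside it."""
    xs = [a + (b - a) * i / n for i in range(n + 1)]
    tags = [random.uniform(x0, x1) for x0, x1 in zip(xs, xs[1:])]
    return xs, tags

f = lambda x: x ** 2
for n in (10, 100, 1000, 10000):
    xs, tags = random_tagged_partition(0.0, 1.0, n)
    mesh = max(x1 - x0 for x0, x1 in zip(xs, xs[1:]))
    print(n, mesh, riemann_sum(f, xs, tags))   # tends to 1/3 as the mesh goes to 0
</syntaxhighlight>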
| |
| | |
| ===Lebesgue integral===
| |
| {{Main|Lebesgue integration}}
| |
| [[Image:RandLintegrals.png|thumb|250px|Riemann–Darboux's integration (top) and Lebesgue integration (bottom)|alt=Comparison of Riemann and Lebesgue integrals]]
| |
| | |
| It is often of interest, both in theory and applications, to be able to pass to the limit under the integral. For instance, a sequence of functions can frequently be constructed that approximate, in a suitable sense, the solution to a problem. Then the integral of the solution function should be the limit of the integrals of the approximations. However, many functions that can be obtained as limits are not Riemann integrable, and so such limit theorems do not hold with the Riemann integral. Therefore it is of great importance to have a definition of the integral that allows a wider class of functions to be integrated {{Harv|Rudin|1987}}.
| |
| | |
Such an integral is the Lebesgue integral, which exploits the following fact to enlarge the class of integrable functions: if the values of a function are rearranged over the domain, the integral of the function should remain the same. Thus [[Henri Lebesgue]] introduced the integral bearing his name, explaining it in a letter to [[Paul Montel]]:
| |
| {{quote|I have to pay a certain sum, which I have collected in my pocket. I take the bills and coins out of my pocket and give them to the creditor in the order I find them until I have reached the total sum. This is the Riemann integral. But I can proceed differently. After I have taken all the money out of my pocket I order the bills and coins according to identical values and then I pay the several heaps one after the other to the creditor. This is my integral.}}
| |
| :<small>''Source'': {{harv|Siegmund-Schultze|2008}}</small>
| |
| | |
| As {{Harvtxt|Folland|1984|loc=p. 56}} puts it, "To compute the Riemann integral of ''f'', one partitions the domain [''a'',''b''] into subintervals", while in the Lebesgue integral, "one is in effect partitioning the range of ''f''". The definition of the Lebesgue integral thus begins with a [[measure (mathematics)|measure]], μ. In the simplest case, the [[Lebesgue measure]] μ(''A'') of an interval {{nowrap|''A'' {{=}} [''a'',''b'']}} is its width, ''b'' − ''a'', so that the Lebesgue integral agrees with the (proper) Riemann integral when both exist. In more complicated cases, the sets being measured can be highly fragmented, with no continuity and no resemblance to intervals.
| |
| | |
| Using the "partitioning the range of ''f''" philosophy, the integral of a non-negative function {{nowrap|''f'' : '''R''' → '''R'''}} should be the sum over ''t'' of the areas between a thin horizontal strip between {{nowrap|1=''y'' = ''t''}} and {{nowrap|1=''y'' = ''t'' + ''dt''}}. This area is just {{nowrap|μ{ ''x'' : ''f''(''x'') > ''t''} ''dt''}}. Let {{nowrap|1=''f''<sup>∗</sup>(''t'') = μ{ ''x'' : ''f''(''x'') > ''t''}}}. The Lebesgue integral of ''f'' is then defined by {{harv|Lieb|Loss|2001}}
| |
| :<math>\int f = \int_0^\infty f^*(t)\,dt</math>
| |
where the integral on the right is an ordinary improper Riemann integral (''f''<sup>∗</sup> is a monotonically decreasing, non-negative function, and therefore has a well-defined improper Riemann integral). For a suitable class of functions (the [[measurable function]]s) this defines the Lebesgue integral.
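
This "layer cake" description lends itself to a rough numerical check. The sketch below (plain Python; it crudely approximates the Lebesgue measure of a set by the fraction of a fine uniform grid of sample points that it contains, which is only adequate for well-behaved functions such as this one) applies the formula to ''f''(''x'') = √''x'' on [0, 1] and recovers a value close to 2/3, agreeing with the Riemann integral computed earlier.

<syntaxhighlight lang="python">
from bisect import bisect_right
from math import sqrt

def lebesgue_layer_cake(f, a, b, n_x=10000, n_t=10000):
    """Approximate the Lebesgue integral of a non-negative, bounded f on [a, b]
    via the formula: integral over t >= 0 of mu{ x : f(x) > t } dt.
    The measure mu{ x : f(x) > t } is estimated by the fraction of a uniform
    grid of sample points at which f exceeds t, scaled by the length b - a."""
    xs = [a + (b - a) * (i + 0.5) / n_x for i in range(n_x)]
    values = sorted(f(x) for x in xs)
    top = values[-1]                    # f is bounded, so t only runs up to max f
    dt = top / n_t
    total = 0.0
    for j in range(n_t):
        t = (j + 0.5) * dt
        count_above = n_x - bisect_right(values, t)   # samples with f(x) > t
        total += (b - a) * count_above / n_x * dt
    return total

print(lebesgue_layer_cake(sqrt, 0.0, 1.0))   # close to 2/3
</syntaxhighlight>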
| |
| | |
| A general measurable function ''f'' is Lebesgue integrable if the area between the graph of ''f'' and the ''x''-axis is finite:
| |
| :<math>\int_E |f|\,d\mu < + \infty.</math>
| |
| In that case, the integral is, as in the Riemannian case, the difference between the area above the ''x''-axis and the area below the ''x''-axis:
| |
| :<math>\int_E f \,d\mu = \int_E f^+ \,d\mu - \int_E f^- \,d\mu</math>
| |
| where
| |
| :<math>\begin{align}
| |
| f^+(x)&=\max(\{f(x),0\}) &=&\begin{cases}
| |
| f(x), & \text{if } f(x) > 0, \\
| |
| 0, & \text{otherwise,}
| |
| \end{cases}\\
| |
| f^-(x) &=\max(\{-f(x),0\})&=& \begin{cases}
| |
| -f(x), & \text{if } f(x) < 0, \\
| |
| 0, & \text{otherwise.}
| |
| \end{cases}
| |
| \end{align}</math>
| |
| | |
| ===Other integrals===
| |
| Although the Riemann and Lebesgue integrals are the most widely used definitions of the integral, a number of others exist, including:
| |
| * The [[Darboux integral]] which is equivalent to a [[Riemann integral]], meaning that a function is Darboux-integrable if and only if it is Riemann-integrable, and the values of the two integrals, if they exist, are equal. Darboux integrals have the advantage of being simpler to define than Riemann integrals.
| |
| * The [[Riemann–Stieltjes integral]], an extension of the Riemann integral.
| |
| * The [[Lebesgue–Stieltjes integration|Lebesgue–Stieltjes integral]], further developed by [[Johann Radon]], which generalizes the [[Riemann–Stieltjes integral|Riemann–Stieltjes]] and [[Lebesgue integration|Lebesgue integrals]].
| |
| * The [[Daniell integral]], which subsumes the [[Lebesgue integration|Lebesgue integral]] and [[Lebesgue–Stieltjes integration|Lebesgue–Stieltjes integral]] without the dependence on [[measure (mathematics)|measure]]s.
| |
| * The [[Haar integral]], used for integration on locally compact topological groups, introduced by [[Alfréd Haar]] in 1933.
| |
| * The [[Henstock–Kurzweil integral]], variously defined by [[Arnaud Denjoy]], [[Oskar Perron]], and (most elegantly, as the gauge integral) [[Jaroslav Kurzweil]], and developed by [[Ralph Henstock]].
| |
| * The [[Itō calculus|Itō integral]] and [[Stratonovich integral]], which define integration with respect to [[semimartingale]]s such as [[Wiener process|Brownian motion]].
| |
| <!--* The [[Darboux integral]], equivalent to the Riemann integral.-->
| |
| <!--* The [[Haar integral]], which is the Lebesgue integral with [[Haar measure]].-->
| |
| * The [[Young integral]], which is a kind of Riemann–Stieltjes integral with respect to certain functions of [[Bounded variation|unbounded variation]].
| |
| * The [[rough path]] integral defined for functions equipped with some additional "rough path" structure, generalizing stochastic integration against both [[semimartingale]]s and processes such as the [[fractional Brownian motion]].
| |
| | |
| ==Properties==
| |
| | |
| ===Linearity===
| |
| *The collection of Riemann integrable functions on a closed interval [''a'', ''b''] forms a [[vector space]] under the operations of [[pointwise addition]] and multiplication by a scalar, and the operation of integration
| |
| ::<math> f \mapsto \int_a^b f(x) \; dx</math>
| |
| <!--- redundant
| |
| for an integrable [[function (mathematics)|function]] ''f'' on [''a'', ''b'']
| |
| --->
| |
| :is a [[linear functional]] on this vector space. Thus, firstly, the collection of integrable functions is closed under taking [[linear combination]]s; and, secondly, the integral of a linear combination is the linear combination of the integrals,
| |
| | |
| <!--- leftover from the past text; redundant
| |
| :For example, in Riemann integration, if ''f'' and ''g'' are [[real number|real-valued]] integrable functions on a [[closed set|closed]] and [[bounded set|bounded]] [[interval (mathematics)|interval]] [''a'', ''b''], and ''α'' and ''β'' are real numbers, then the function ''αf'' + ''βg'' defined by (''αf'' + ''βg'')(''x'') = ''αf''(''x'') + ''βg''(''x'') for all ''x'' in [''a'', ''b''] is integrable, with
| |
| --->
| |
| ::<math> \int_a^b (\alpha f + \beta g)(x) \, dx = \alpha \int_a^b f(x) \,dx + \beta \int_a^b g(x) \, dx. \,</math>
| |
| | |
*Similarly, the set of [[real number|real]]-valued Lebesgue integrable functions on a given [[Measure (mathematics)|measure space]] ''E'' with measure ''μ'' is closed under taking linear combinations and hence forms a vector space, and the Lebesgue integral
| |
| | |
| :: <math> f\mapsto \int_E f \, d\mu </math>
| |
| | |
| :is a linear functional on this vector space, so that
| |
| | |
| ::<math> \int_E (\alpha f + \beta g) \, d\mu = \alpha \int_E f \, d\mu + \beta \int_E g \, d\mu. </math>
| |
| | |
| *More generally, consider the vector space of all [[measurable function]]s on a measure space (''E'',''μ''), taking values in a [[Locally compact space|locally compact]] [[complete metric space|complete]] [[topological vector space]] ''V'' over a locally compact [[Topological ring|topological field]] ''K'', ''f'' : ''E'' → ''V''. Then one may define an abstract integration map assigning to each function ''f'' an element of ''V'' or the symbol ''∞'',
| |
| ::<math> f\mapsto\int_E f \,d\mu, \,</math>
| |
| :that is compatible with linear combinations. In this situation the linearity holds for the subspace of functions whose integral is an element of ''V'' (i.e. "finite"). The most important special cases arise when ''K'' is '''R''', '''C''', or a finite extension of the field '''Q'''<sub>''p''</sub> of [[p-adic number]]s, and ''V'' is a finite-dimensional vector space over ''K'', and when ''K''='''C''' and ''V'' is a complex [[Hilbert space]].
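
The first of these linearity properties is easy to illustrate numerically for particular functions. The following minimal sketch (plain Python; the quadrature helper and the chosen functions are illustrative) approximates both sides of the identity with midpoint sums and finds agreement up to the approximation error.

<syntaxhighlight lang="python">
from math import sin, exp

def riemann(f, a, b, n=100000):
    """Midpoint-rule approximation of the integral of f over [a, b]."""
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) for i in range(n)) * h

f, g = sin, exp
alpha, beta, a, b = 2.0, -3.0, 0.0, 1.0

lhs = riemann(lambda x: alpha * f(x) + beta * g(x), a, b)
rhs = alpha * riemann(f, a, b) + beta * riemann(g, a, b)
print(lhs, rhs)          # the two values agree up to the approximation error
</syntaxhighlight>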
| |
| | |
| Linearity, together with some natural continuity properties and normalisation for a certain class of "simple" functions, may be used to give an alternative definition of the integral. This is the approach of [[Daniell integral|Daniell]] for the case of real-valued functions on a set ''X'', generalized by [[Nicolas Bourbaki]] to functions with values in a locally compact topological vector space. See {{Harv|Hildebrandt|1953}} for an axiomatic characterisation of the integral.
| |
| | |
| ===Inequalities for integrals===
| |
| A number of general inequalities hold for Riemann-integrable [[function (mathematics)|functions]] defined on a [[closed set|closed]] and [[bounded set|bounded]] [[interval (mathematics)|interval]] [''a'', ''b''] and can be generalized to other notions of integral (Lebesgue and Daniell).
| |
| | |
* ''Upper and lower bounds.'' An integrable function ''f'' on [''a'', ''b''] is necessarily [[bounded function|bounded]] on that interval. Thus there are [[real number]]s ''m'' and ''M'' so that ''m'' ≤ ''f''(''x'') ≤ ''M'' for all ''x'' in [''a'', ''b'']. Since the lower and upper sums of ''f'' over [''a'', ''b''] are therefore bounded by, respectively, ''m''(''b'' − ''a'') and ''M''(''b'' − ''a''), it follows that
| |
| :: <math> m(b - a) \leq \int_a^b f(x) \, dx \leq M(b - a). </math>
| |
| | |
| * ''Inequalities between functions.'' If ''f''(''x'') ≤ ''g''(''x'') for each ''x'' in [''a'', ''b''] then each of the upper and lower sums of ''f'' is bounded above by the upper and lower sums, respectively, of ''g''. Thus
| |
| :: <math> \int_a^b f(x) \, dx \leq \int_a^b g(x) \, dx. </math>
| |
| :This is a generalization of the above inequalities, as ''M''(''b'' − ''a'') is the integral of the constant function with value ''M'' over [''a'', ''b''].
| |
| :In addition, if the inequality between functions is strict, then the inequality between integrals is also strict. That is, if ''f''(''x'') < ''g''(''x'') for each ''x'' in [''a'', ''b''], then
| |
| :: <math> \int_a^b f(x) \, dx < \int_a^b g(x) \, dx. </math>
| |
| | |
| * ''Subintervals.'' If [''c'', ''d''] is a subinterval of [''a'', ''b''] and ''f''(''x'') is non-negative for all ''x'', then
| |
| :: <math> \int_c^d f(x) \, dx \leq \int_a^b f(x) \, dx. </math>
| |
| | |
| * ''Products and absolute values of functions.'' If ''f'' and ''g'' are two functions then we may consider their [[pointwise product]]s and powers, and [[absolute value]]s:
| |
| :: <math>
| |
| (fg)(x)= f(x) g(x), \; f^2 (x) = (f(x))^2, \; |f| (x) = |f(x)|.\,</math>
| |
| :If ''f'' is Riemann-integrable on [''a'', ''b''] then the same is true for |''f''|, and
| |
| :: <math>\left| \int_a^b f(x) \, dx \right| \leq \int_a^b | f(x) | \, dx. </math>
| |
| :Moreover, if ''f'' and ''g'' are both Riemann-integrable then ''f'' <sup>2</sup>, ''g'' <sup>2</sup>, and ''fg'' are also Riemann-integrable, and
| |
| :: <math>\left( \int_a^b (fg)(x) \, dx \right)^2 \leq \left( \int_a^b f(x)^2 \, dx \right) \left( \int_a^b g(x)^2 \, dx \right). </math>
| |
| :This inequality, known as the [[Cauchy–Schwarz inequality]], plays a prominent role in [[Hilbert space]] theory, where the left hand side is interpreted as the [[Inner product space|inner product]] of two [[Square-integrable function|square-integrable]] functions ''f'' and ''g'' on the interval [''a'', ''b''].
| |
| | |
| * ''Hölder's inequality.'' Suppose that ''p'' and ''q'' are two real numbers, 1 ≤ ''p'', ''q'' ≤ ∞ with 1/''p'' + 1/''q'' = 1, and ''f'' and ''g'' are two Riemann-integrable functions. Then the functions |''f''|<sup>''p''</sup> and |''g''|<sup>''q''</sup> are also integrable and the following [[Hölder's inequality]] holds:
| |
| :<math>\left|\int f(x)g(x)\,dx\right| \leq
| |
| \left(\int \left|f(x)\right|^p\,dx \right)^{1/p} \left(\int\left|g(x)\right|^q\,dx\right)^{1/q}.</math>
| |
| :For ''p'' = ''q'' = 2, Hölder's inequality becomes the Cauchy–Schwarz inequality.
| |
| | |
| * ''Minkowski inequality''. Suppose that ''p'' ≥ 1 is a real number and ''f'' and ''g'' are Riemann-integrable functions. Then |''f''|<sup>''p''</sup>, |''g''|<sup>''p''</sup> and |''f'' + ''g''|<sup>''p''</sup> are also Riemann integrable and the following [[Minkowski inequality]] holds:
| |
| :<math>\left(\int \left|f(x)+g(x)\right|^p\,dx \right)^{1/p} \leq
| |
| \left(\int \left|f(x)\right|^p\,dx \right)^{1/p} +
| |
| \left(\int \left|g(x)\right|^p\,dx \right)^{1/p}.</math>
| |
: An analogue of this inequality for the Lebesgue integral is used in the construction of [[Lp space|L<sup>p</sup> spaces]].
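
For particular Riemann-integrable functions, the Cauchy–Schwarz and Hölder inequalities above can be checked numerically. The following sketch (plain Python; the quadrature helper and the chosen functions are merely illustrative) approximates each side with midpoint sums and confirms that both inequalities hold in this example.

<syntaxhighlight lang="python">
from math import sin, exp

def riemann(f, a, b, n=100000):
    """Midpoint-rule approximation of the integral of f over [a, b]."""
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) for i in range(n)) * h

a, b = 0.0, 1.0
f = lambda x: sin(x) + 1.0
g = lambda x: exp(-x)

# Cauchy-Schwarz: (integral of f*g)^2 <= (integral of f^2) * (integral of g^2).
lhs = riemann(lambda x: f(x) * g(x), a, b) ** 2
rhs = riemann(lambda x: f(x) ** 2, a, b) * riemann(lambda x: g(x) ** 2, a, b)
print(lhs <= rhs)        # True

# Hoelder with p = 3 and q = 3/2, so that 1/p + 1/q = 1.
p, q = 3.0, 1.5
lhs = abs(riemann(lambda x: f(x) * g(x), a, b))
rhs = (riemann(lambda x: abs(f(x)) ** p, a, b) ** (1 / p)
       * riemann(lambda x: abs(g(x)) ** q, a, b) ** (1 / q))
print(lhs <= rhs)        # True
</syntaxhighlight>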
| |
| | |
| ===Conventions===
| |
| In this section ''f'' is a [[real number|real-]]valued Riemann-integrable [[function (mathematics)|function]]. The integral
| |
| :<math> \int_a^b f(x) \, dx </math>
| |
| over an interval [''a'', ''b''] is defined if ''a'' < ''b''. This means that the upper and lower sums of the function ''f'' are evaluated on a partition {{nowrap|''a'' {{=}} ''x''<sub>0</sub> ≤ ''x''<sub>1</sub> ≤ . . . ≤ ''x''<sub>''n''</sub> {{=}} ''b''}} whose values ''x''<sub>''i''</sub> are increasing. Geometrically, this signifies that integration takes place "left to right", evaluating ''f'' within intervals [''x''<sub> ''i''</sub> , ''x''<sub> ''i'' +1</sub>] where an interval with a higher index lies to the right of one with a lower index. The values ''a'' and ''b'', the end-points of the [[interval (mathematics)|interval]], are called the [[limits of integration]] of ''f''. Integrals can also be defined if {{nowrap|''a'' > ''b''}}:
| |
| | |
| * ''Reversing limits of integration.'' If {{nowrap|''a'' > ''b''}} then define
| |
| :: <math>\int_a^b f(x) \, dx = - \int_b^a f(x) \, dx. </math>
| |
| This, with {{nowrap|''a'' {{=}} ''b''}}, implies:
| |
| * ''Integrals over intervals of length zero.'' If ''a'' is a [[real number]] then
| |
| :: <math>\int_a^a f(x) \, dx = 0. </math>
| |
| | |
| The first convention is necessary in consideration of taking integrals over subintervals of {{nowrap|[''a'', ''b'']}}; the second says that an integral taken over a degenerate interval, or a [[Point (geometry)|point]], should be [[0 (number)|zero]]. One reason for the first convention is that the integrability of ''f'' on an interval {{nowrap|[''a'', ''b'']}} implies that ''f'' is integrable on any subinterval {{nowrap|[''c'', ''d'']}}, but in particular integrals have the property that:
| |
| | |
| * ''Additivity of integration on intervals.'' If ''c'' is any [[element (mathematics)|element]] of [''a'', ''b''], then
| |
| :: <math> \int_a^b f(x) \, dx = \int_a^c f(x) \, dx + \int_c^b f(x) \, dx.</math>
| |
| With the first convention the resulting relation
| |
| : <math>\begin{align}
| |
| \int_a^c f(x) \, dx &{}= \int_a^b f(x) \, dx - \int_c^b f(x) \, dx \\
| |
| &{} = \int_a^b f(x) \, dx + \int_b^c f(x) \, dx
| |
| \end{align}</math>
| |
| is then well-defined for any cyclic permutation of ''a'', ''b'', and ''c''.
| |
| | |
Instead of viewing the above as conventions, one can also adopt the point of view that integration of differential forms is performed on [[Orientability|''oriented'' manifolds]] only. If ''M'' is such an oriented ''m''-dimensional manifold, and ''M''' is the same manifold with opposed orientation, and ''ω'' is an ''m''-form, then one has:
| |
| : <math>\int_M \omega = - \int_{M'} \omega \,.</math>
| |
| These conventions correspond to interpreting the integrand as a differential form, integrated over a [[Chain (algebraic topology)|chain]]. In [[measure theory]], by contrast, one interprets the integrand as a function ''f'' with respect to a measure <math>\mu,</math> and integrates over a subset ''A,'' without any notion of orientation; one writes <math>\textstyle{\int_A f\,d\mu = \int_{[a,b]} f\,d\mu}</math> to indicate integration over a subset ''A.'' This is a minor distinction in one dimension, but becomes subtler on higher dimensional manifolds; see [[Differential form#Relation with measures|Differential form: Relation with measures]] for details.
| |
| | |
| ==Fundamental theorem of calculus==
| |
| {{Main|Fundamental theorem of calculus}}
| |
| | |
| The ''fundamental theorem of calculus'' is the statement that [[derivative|differentiation]] and integration are inverse operations: if a [[continuous function]] is first integrated and then differentiated, the original function is retrieved. An important consequence, sometimes called the ''second fundamental theorem of calculus'', allows one to compute integrals by using an antiderivative of the function to be integrated.
| |
| | |
| ===Statements of theorems===
| |
| | |
| ====Fundamental theorem of calculus====
| |
| | |
| Let ''f'' be a continuous real-valued function defined on a [[Interval (mathematics)#Terminology|closed interval]] [''a'', ''b'']. Let ''F'' be the function defined, for all ''x'' in [''a'', ''b''], by
| |
| :<math>F(x) = \int_a^x f(t)\, dt.</math>
| |
| Then, ''F'' is continuous on [''a'', ''b''], differentiable on the open interval {{nowrap|(''a'', ''b'')}}, and
| |
| | |
| :<math>F'(x) = f(x)</math>
| |
| | |
| for all ''x'' in (''a'', ''b'').
| |
| | |
| ====Second fundamental theorem of calculus====
| |
| | |
| Let ''f'' be a real-valued function defined on a [[closed interval]] [''a'', ''b''] that admits an [[antiderivative]] ''F'' on {{nowrap|[''a'', ''b'']}}. That is, ''f'' and ''F'' are functions such that for all ''x'' in {{nowrap|[''a'', ''b'']}},
| |
| | |
| :<math>f(x) = F'(x).</math>
| |
| | |
| If ''f'' is integrable on {{nowrap|[''a'', ''b'']}} then
| |
| | |
| :<math>\int_a^b f(x)\,dx = F(b) - F(a).</math>
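
Both statements can be checked numerically. In the minimal sketch below (plain Python; the quadrature helper is illustrative), ''F'' is built from ''f'' = cos by numerical integration: a central difference quotient of ''F'' recovers ''f'', and the definite integral of ''f'' over [0, 2] matches sin(2) − sin(0), as the second theorem predicts.

<syntaxhighlight lang="python">
from math import cos, sin

def riemann(f, a, b, n=100000):
    """Midpoint-rule approximation of the integral of f over [a, b]."""
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) for i in range(n)) * h

f = cos
F = lambda x: riemann(f, 0.0, x)          # F(x) = integral of f from 0 to x

# First theorem: the derivative of F recovers f.
x, h = 1.0, 1e-4
print((F(x + h) - F(x - h)) / (2 * h), f(x))      # both close to cos(1)

# Second theorem: the definite integral equals F(b) - F(a) for the antiderivative sin.
a, b = 0.0, 2.0
print(riemann(f, a, b), sin(b) - sin(a))          # both close to sin(2)
</syntaxhighlight>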
| |
| | |
| ==Extensions==
| |
| | |
| ===Improper integrals===
| |
| {{Main|Improper integral}}
| |
| [[File:Improper integral.svg|right|thumb|The [[improper integral]]<br/><math>\int_{0}^{\infty} \frac{dx}{(x+1)\sqrt{x}} = \pi</math><br/> has unbounded intervals for both domain and range.]]
| |
| A "proper" Riemann integral assumes the integrand is defined and finite on a closed and bounded interval, bracketed by the limits of integration. An improper integral occurs when one or more of these conditions is not satisfied. In some cases such integrals may be defined by considering the [[limit (mathematics)|limit]] of a [[sequence]] of proper [[Riemann integral]]s on progressively larger intervals.
| |
| | |
| If the interval is unbounded, for instance at its upper end, then the improper integral is the limit as that endpoint goes to infinity.
| |
| :<math>\int_{a}^{\infty} f(x)\,dx = \lim_{b \to \infty} \int_{a}^{b} f(x)\,dx</math>
| |
| If the integrand is only defined or finite on a half-open interval, for instance {{nowrap|<nowiki>(</nowiki>''a'', ''b''<nowiki>]</nowiki>}}, then again a limit may provide a finite result.
| |
| :<math>\int_{a}^{b} f(x)\,dx = \lim_{\epsilon \to 0} \int_{a+\epsilon}^{b} f(x)\,dx</math>
| |
| | |
| That is, the improper integral is the [[limit (mathematics)|limit]] of proper integrals as one endpoint of the interval of integration approaches either a specified [[real number]], or ∞, or −∞. In more complicated cases, limits are required at both endpoints, or at interior points.
| |
| | |
| Consider, for example, the function <math>1/((x+1)\sqrt{x})</math> integrated from 0 to ∞ (shown right). At the lower bound, as ''x'' goes to 0 the function goes to ∞, and the upper bound is itself ∞, though the function goes to 0. Thus this is a doubly improper integral. Integrated, say, from 1 to 3, an ordinary Riemann sum suffices to produce a result of π/6. To integrate from 1 to ∞, a Riemann sum is not possible. However, any finite upper bound, say ''t'' (with {{nowrap|''t'' > 1}}), gives a well-defined result, <math>2\arctan (\sqrt{t}) - \pi/2</math>. This has a finite limit as ''t'' goes to infinity, namely π/2. Similarly, the integral from 1/3 to 1 allows a Riemann sum as well, coincidentally again producing π/6. Replacing 1/3 by an arbitrary positive value ''s'' (with {{nowrap|''s'' < 1}}) is equally safe, giving <math>\pi/2 - 2\arctan (\sqrt{s})</math>. This, too, has a finite limit as ''s'' goes to zero, namely π/2. Combining the limits of the two fragments, the result of this improper integral is
| |
| :<math>\begin{align}
| |
| \int_{0}^{\infty} \frac{dx}{(x+1)\sqrt{x}} &{} = \lim_{s \to 0} \int_{s}^{1} \frac{dx}{(x+1)\sqrt{x}}
| |
| + \lim_{t \to \infty} \int_{1}^{t} \frac{dx}{(x+1)\sqrt{x}} \\
| |
| &{} = \lim_{s \to 0} \left(\frac{\pi}{2} - 2 \arctan{\sqrt{s}} \right)
| |
| + \lim_{t \to \infty} \left(2 \arctan{\sqrt{t}} - \frac{\pi}{2} \right) \\
| |
| &{} = \frac{\pi}{2} + \left(\pi - \frac{\pi}{2} \right) \\
| |
| &{} = \frac{\pi}{2} + \frac{\pi}{2} \\
| |
| &{} = \pi .
| |
| \end{align}</math>
| |
| This process does not guarantee success; a limit might fail to exist, or might be unbounded. For example, over the bounded interval from 0 to 1 the integral of 1/''x'' does not converge; and over the unbounded interval from 1 to ∞ the integral of <math>1/\sqrt{x}</math> does not converge.
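
The limit process just described can also be carried out with numerical quadrature. The sketch below assumes the SciPy library is available and uses its adaptive routine <code>scipy.integrate.quad</code>, which evaluates the truncated proper integrals for shrinking ''s'' and growing ''t'' and also accepts infinite limits directly; the truncated sums approach π.

<syntaxhighlight lang="python">
from math import sqrt, pi
from scipy.integrate import quad

f = lambda x: 1.0 / ((x + 1.0) * sqrt(x))

# Proper integrals over [s, 1] and [1, t]; their sum approaches pi
# as s -> 0 and t -> infinity, as in the limits computed above.
for s, t in [(1e-2, 1e2), (1e-4, 1e4), (1e-6, 1e6)]:
    lower, _ = quad(f, s, 1.0)
    upper, _ = quad(f, 1.0, t, limit=200)
    print(s, t, lower + upper)

# quad also handles the doubly improper integral directly.
value, _ = quad(f, 0.0, float("inf"), limit=200)
print(value, pi)         # both approximately 3.14159...
</syntaxhighlight>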
| |
| | |
| [[File:Improper integral unbounded internally.svg|right|thumb|The [[improper integral]]<br/><math>\int_{-1}^{1} \frac{dx}{\sqrt[3]{x^2}} = 6</math><br/> is unbounded internally, but both left and right limits exist.]]
| |
| It might also happen that an integrand is unbounded at an interior point, in which case the integral must be split at that point. For the integral as a whole to converge, the limit integrals on both sides must exist and must be bounded. For example:
| |
| :<math>\begin{align}
| |
| \int_{-1}^{1} \frac{dx}{\sqrt[3]{x^2}} &{} = \lim_{s \to 0} \int_{-1}^{-s} \frac{dx}{\sqrt[3]{x^2}}
| |
| + \lim_{t \to 0} \int_{t}^{1} \frac{dx}{\sqrt[3]{x^2}} \\
| |
| &{} = \lim_{s \to 0} 3(1-\sqrt[3]{s}) + \lim_{t \to 0} 3(1-\sqrt[3]{t}) \\
| |
| &{} = 3 + 3 \\
| |
| &{} = 6.
| |
| \end{align}</math>
| |
| But the similar integral
| |
| :<math> \int_{-1}^{1} \frac{dx}{x} \,\!</math>
| |
| cannot be assigned a value in this way, as the integrals above and below zero do not independently converge. (However, see [[Cauchy principal value]].)
| |
| | |
| ===Multiple integration===
| |
| {{Main|Multiple integral}}
| |
| [[File:Volume under surface.png|right|thumb|Double integral as volume under a surface.]]
| |
| Integrals can be taken over regions other than intervals. In general, an integral over a [[Set (mathematics)|set]] ''E'' of a function ''f'' is written:
| |
| | |
| :<math>\int_E f(x) \, dx.</math>
| |
| | |
| Here ''x'' need not be a real number, but can be another suitable quantity, for instance, a [[Vector (geometric)|vector]] in '''R'''<sup>3</sup>. [[Fubini's theorem]] shows that such integrals can be rewritten as an ''[[Multiple integral|iterated integral]]''. In other words, the integral can be calculated by integrating one coordinate at a time.
| |
| | |
| Just as the definite integral of a positive function of one variable represents the [[area]] of the region between the graph of the function and the ''x''-axis, the ''double integral'' of a positive function of two variables represents the [[volume]] of the region between the surface defined by the function and the plane which contains its [[domain (mathematics)|domain]]. (The same volume can be obtained via the ''triple integral'' — the integral of a function in three variables — of the constant function ''f''(''x'', ''y'', ''z'') = 1 over the above mentioned region between the surface and the plane.) If the number of variables is higher, then the integral represents a [[Four-dimensional space|hypervolume]], a volume of a solid of more than three dimensions that cannot be graphed.
| |
| | |
| For example, the volume of the [[cuboid]] of sides 4 × 6 × 5 may be obtained in two ways:
| |
| * By the double integral
| |
| :: <math>\iint_D 5 \ dx\, dy</math>
| |
| : of the function ''f''(''x'', ''y'') = 5 calculated in the region ''D'' in the ''xy''-plane which is the base of the cuboid. For example, if a rectangular base of such a cuboid is given via the ''xy'' inequalities 3 ≤ ''x'' ≤ 7, 4 ≤ ''y'' ≤ 10, our above double integral now reads
| |
| | |
| ::<math>\int_4^{10}\left[ \int_3^7 \ 5 \ dx\right] dy.</math>
| |
| | |
:From here, integration is conducted with respect to either ''x'' or ''y'' first; in this example, integration is first done with respect to ''x'', since the inner integral is taken over the interval corresponding to ''x''. Once the first integration is completed via the <math>F(b) - F(a)</math> method or otherwise, the result is again integrated with respect to the other variable. The result equals the volume under the surface.
| |
| | |
| * By the triple integral
| |
| ::<math>\iiint_\text{cuboid} 1 \, dx\, dy\, dz</math>
| |
| :of the constant function 1 calculated on the cuboid itself.
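
The cuboid computation above can be reproduced with a short numerical sketch (plain Python; the midpoint helper is illustrative). It evaluates the double integral as an iterated integral, exactly as described: the inner integral over ''x'' is computed for each fixed ''y'', and the result is then integrated over ''y''.

<syntaxhighlight lang="python">
def midpoint(f, a, b, n=400):
    """Midpoint-rule approximation of the integral of f over [a, b]."""
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) for i in range(n)) * h

f = lambda x, y: 5.0                     # the integrand over the rectangular base

# Iterated integral: integrate over x first (inner), then over y (outer).
inner = lambda y: midpoint(lambda x: f(x, y), 3.0, 7.0)
volume = midpoint(inner, 4.0, 10.0)
print(volume)                            # 120.0, the volume of the 4 x 6 x 5 cuboid
</syntaxhighlight>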
| |
| | |
| ===Line integrals===
| |
| {{Main|Line integral}}
| |
| [[File:Line-Integral.gif|right|thumb|A line integral sums together elements along a curve.]]
| |
| The concept of an integral can be extended to more general domains of integration, such as curved lines and surfaces. Such integrals are known as line integrals and surface integrals respectively. These have important applications in physics, as when dealing with [[vector field]]s.
| |
| | |
| A ''line integral'' (sometimes called a ''path integral'') is an integral where the [[function (mathematics)|function]] to be integrated is evaluated along a [[curve]]. Various different line integrals are in use. In the case of a closed curve it is also called a ''contour integral''.
| |
| | |
| The function to be integrated may be a [[scalar field]] or a [[vector field]]. The value of the line integral is the sum of values of the field at all points on the curve, weighted by some scalar function on the curve (commonly [[arc length]] or, for a vector field, the [[Inner product space|scalar product]] of the vector field with a [[Differential (infinitesimal)|differential]] vector in the curve). This weighting distinguishes the line integral from simpler integrals defined on [[interval (mathematics)|interval]]s. Many simple formulas in physics have natural continuous analogs in terms of line integrals; for example, the fact that [[Mechanical work|work]] is equal to [[force]], ''F'', multiplied by displacement, ''s'', may be expressed (in terms of vector quantities) as:
| |
| :<math>W=\mathbf F\cdot\mathbf s.</math>
| |
| For an object moving along a path ''C'' in a [[vector field]] '''F''' such as an [[electric field]] or [[gravitational field]], the total work done by the field on the object is obtained by summing up the differential work done in moving from '''s''' to {{nowrap|'''s''' + ''d'''''s'''}}. This gives the line integral
| |
| :<math>W=\int_C \mathbf F\cdot d\mathbf s.</math>
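
A line integral of this kind can be approximated by sampling the curve and accumulating the contributions '''F'''('''r'''(''t'')) · '''r'''′(''t'') ''dt''. The sketch below (plain Python; the field, the path, and the helper names are all illustrative) integrates the field ''F''(''x'', ''y'') = (''y'', ''x'') along the parabola ''r''(''t'') = (''t'', ''t''<sup>2</sup>) from (0, 0) to (1, 1); since this field is the gradient of ''xy'', the work comes out to approximately 1.

<syntaxhighlight lang="python">
def line_integral(F, r, dr, t0, t1, n=100000):
    """Approximate the work integral of the vector field F along the curve
    r(t), t in [t0, t1], as the sum of F(r(t)) . r'(t) dt (midpoint rule)."""
    h = (t1 - t0) / n
    total = 0.0
    for i in range(n):
        t = t0 + (i + 0.5) * h
        Fx, Fy = F(*r(t))
        dx, dy = dr(t)
        total += (Fx * dx + Fy * dy) * h
    return total

# Illustrative field F(x, y) = (y, x) and path r(t) = (t, t^2) from (0, 0) to (1, 1).
F = lambda x, y: (y, x)
r = lambda t: (t, t * t)
dr = lambda t: (1.0, 2.0 * t)             # derivative of r with respect to t

print(line_integral(F, r, dr, 0.0, 1.0))  # approximately 1, since F = grad(x*y)
</syntaxhighlight>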
| |
| | |
| ===Surface integrals===
| |
| {{Main|Surface integral}}
| |
| [[File:Surface integral illustration.png|right|thumb|The definition of surface integral relies on splitting the surface into small surface elements.]]
| |
| A ''surface integral'' is a definite integral taken over a [[surface]] (which may be a curved set in [[space]]); it can be thought of as the [[Multiple integral|double integral]] analog of the [[line integral]]. The function to be integrated may be a [[scalar field]] or a [[vector field]]. The value of the surface integral is the sum of the field at all points on the surface. This can be achieved by splitting the surface into surface elements, which provide the partitioning for Riemann sums.
| |
| | |
| For an example of applications of surface integrals, consider a vector field ''v'' on a surface ''S''; that is, for each point '''x''' in ''S'', '''v'''('''x''') is a vector. Imagine that we have a fluid flowing through ''S'', such that '''v'''('''x''') determines the velocity of the fluid at ''x''. The [[flux]] is defined as the quantity of fluid flowing through ''S'' in unit amount of time. To find the flux, we need to take the [[dot product]] of '''v''' with the unit [[Normal (geometry)|surface normal]] to ''S'' at each point, which will give us a scalar field, which we integrate over the surface:
| |
| :<math>\int_S {\mathbf v}\cdot \,d{\mathbf {S}}.</math>
| |
| The fluid flux in this example may be from a physical fluid such as water or air, or from electrical or magnetic flux. Thus surface integrals have applications in [[physics]], particularly with the [[classical theory]] of [[electromagnetism]].
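
Such a flux integral can be approximated by carrying out exactly this splitting into surface elements. The sketch below (plain Python; the field and the helper names are illustrative) parameterises the unit sphere by spherical angles, uses the standard area element sin θ dθ dφ with the outward normal, and computes the flux of '''v'''(''x'', ''y'', ''z'') = (''x'', ''y'', ''z''), which should be 4π (about 12.57).

<syntaxhighlight lang="python">
from math import sin, cos, pi

def flux_through_unit_sphere(v, n_theta=400, n_phi=400):
    """Approximate the flux of the vector field v through the unit sphere by
    summing v . n dS over small parameter patches, where n dS is the outward
    normal times the area element sin(theta) dtheta dphi."""
    d_theta = pi / n_theta
    d_phi = 2.0 * pi / n_phi
    total = 0.0
    for i in range(n_theta):
        theta = (i + 0.5) * d_theta
        for j in range(n_phi):
            phi = (j + 0.5) * d_phi
            x = sin(theta) * cos(phi)
            y = sin(theta) * sin(phi)
            z = cos(theta)
            # On the unit sphere the outward normal is (x, y, z) itself,
            # and the area element is sin(theta) dtheta dphi.
            vx, vy, vz = v(x, y, z)
            total += (vx * x + vy * y + vz * z) * sin(theta) * d_theta * d_phi
    return total

v = lambda x, y, z: (x, y, z)               # illustrative velocity field
print(flux_through_unit_sphere(v), 4 * pi)  # both approximately 12.566
</syntaxhighlight>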
| |
| | |
| ===Integrals of differential forms===
| |
| {{Main|differential form}}
| |
| | |
| A [[differential form]] is a mathematical concept in the fields of [[multivariable calculus]], [[differential topology]] and [[tensor]]s. The modern notation for the differential form, as well as the idea of the differential forms as being the [[Exterior algebra|wedge products]] of [[exterior derivative]]s forming an [[exterior algebra]], was introduced by [[Élie Cartan]].
| |
| | |
| We initially work in an [[open set]] in '''R'''<sup>''n''</sup>.
| |
| A 0-form is defined to be a [[smooth function]] ''f''.
| |
| When we integrate a [[function (mathematics)|function]] ''f'' over an ''m''-[[dimension]]al subspace ''S'' of '''R'''<sup>''n''</sup>, we write it as
| |
| :<math>\int_S f\,dx^1 \ldots dx^m.</math>
| |
| | |
| (The superscripts are indices, not exponents.) We can consider ''dx''<sup>1</sup> through ''dx''<sup>''n''</sup> to be formal objects themselves, rather than tags appended to make integrals look like [[Riemann sum]]s. Alternatively, we can view them as [[One-form|covectors]], and thus a [[measure (mathematics)|measure]] of "density" (hence integrable in a general sense). We call the {{nowrap|''dx''<sup>1</sup>, …, ''dx<sup>n</sup>''}} ''basic'' [[one-form|1-''forms'']].
| |
| | |
| We define the [[Exterior algebra|wedge product]], "∧", a bilinear "multiplication" operator on these elements, with the ''alternating'' property that
| |
| | |
| :<math>dx^a \wedge dx^a = 0 \,\!</math>
| |
| | |
| for all indices ''a''. Alternation along with linearity and associativity implies {{nowrap|''dx''<sup>''b''</sup>∧''dx''<sup>''a''</sup> {{=}} −''dx''<sup>''a''</sup>∧''dx''<sup>''b''</sup>}}. This also ensures that the result of the wedge product has an [[Orientation (mathematics)|orientation]].
| |
| | |
| We define the set of all these products to be ''basic'' 2-''forms'', and similarly we define the set of products of the form ''dx''<sup>''a''</sup>∧''dx''<sup>''b''</sup>∧''dx''<sup>''c''</sup> to be ''basic'' 3-''forms''. A general ''k''-form is then a weighted sum of basic ''k-''forms, where the weights are the smooth functions ''f''. Together these form a [[vector space]] with basic ''k''-forms as the basis vectors, and 0-forms (smooth functions) as the field of scalars. The wedge product then extends to ''k''-forms in the natural way. Over '''R'''<sup>''n''</sup> at most ''n'' covectors can be linearly independent, thus a ''k-''form with {{nowrap|''k'' > ''n''}} will always be zero, by the alternating property.
| |
| | |
| In addition to the wedge product, there is also the [[exterior derivative]] operator ''d''. This operator maps ''k''-forms to (''k''+1)-forms. For a ''k''-form ω = ''f'' ''dx<sup>a</sup>'' over '''R'''<sup>''n''</sup>, we define the action of ''d'' by:
| |
| | |
:<math>d\omega = \sum_{i=1}^n \frac{\partial f}{\partial x^i} \, dx^i \wedge dx^a,</math>
| |
| | |
| with extension to general ''k''-forms occurring linearly.
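
For example, on '''R'''<sup>2</sup> the 1-form ω = ''x''<sup>1</sup>''x''<sup>2</sup> ''dx''<sup>1</sup> has exterior derivative

:<math>d\omega = x^2\,dx^1\wedge dx^1 + x^1\,dx^2\wedge dx^1 = -x^1\,dx^1\wedge dx^2.</math>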
| |
| | |
| This more general approach allows for a more natural coordinate-free approach to integration on [[manifold]]s. It also allows for a natural generalisation of the [[fundamental theorem of calculus]], called [[Stokes' theorem]], which we may state as
| |
| | |
| :<math>\int_{\Omega} d\omega = \int_{\partial\Omega} \omega \,\!</math>
| |
| | |
where ω is a general ''k''-form, and ∂Ω denotes the [[boundary (topology)|boundary]] of the region Ω. Thus, in the case that ω is a 0-form and Ω is a closed interval of the real line, this reduces to the [[fundamental theorem of calculus]]. In the case that ω is a 1-form and Ω is a two-dimensional region in the plane, the theorem reduces to [[Green's theorem]]. Similarly, using 2-forms and 3-forms together with [[Hodge dual]]ity, we can arrive at the classical [[Kelvin–Stokes theorem|Stokes' theorem]] of vector calculus and the [[divergence theorem]]. In this way, differential forms provide a powerful unifying view of integration.
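
As a simple check of the Green's theorem case, take ω = ''x'' ''dy'' and let Ω be the unit square {{nowrap|[0, 1] × [0, 1]}} with counterclockwise boundary. Then ''d''ω = ''dx''∧''dy'', and only the edge ''x'' = 1 contributes to the boundary integral, so both sides equal the area of the square:

:<math>\int_\Omega d\omega = \int_0^1\!\int_0^1 dx\,dy = 1, \qquad \int_{\partial\Omega} x\,dy = \int_0^1 1\,dy = 1.</math>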
| |
| | |
| ===Summations===
| |
| The discrete equivalent of integration is [[summation]]. Summations and integrals can be put on the same foundations using the theory of [[Lebesgue integral]]s or [[time scale calculus]].
| |
| | |
| ==Methods for computing integrals==
| |
| | |
| ===Analytical===
| |
The most basic technique for computing definite integrals of one real variable is based on the [[fundamental theorem of calculus]]. Let ''f''(''x'') be the function of ''x'' to be integrated over a given interval [''a'', ''b'']. Then, find an antiderivative of ''f''; that is, a function ''F'' such that ''F' '' = ''f'' on the interval. Provided the integrand and its antiderivative have no [[Mathematical singularity|singularities]] on the interval of integration, by the fundamental theorem of calculus,
| |
| | |
| :<math>\textstyle\int_a^b f(x)\,dx = F(b)-F(a).</math>
| |
| | |
| The integral is not actually the antiderivative, but the fundamental theorem provides a way to use antiderivatives to evaluate definite integrals.
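
For example, to evaluate <math>\textstyle\int_0^\pi \sin x\,dx</math>, one may take ''F''(''x'') = −cos ''x'', so that

:<math>\int_0^\pi \sin x\,dx = F(\pi) - F(0) = (-\cos\pi) - (-\cos 0) = 1 + 1 = 2.</math>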
| |
| | |
The most difficult step is usually to find the antiderivative of ''f''. It is rarely possible to glance at a function and write down its antiderivative. More often, it is necessary to use one of the many techniques that have been developed to evaluate integrals. Most of these techniques rewrite one integral as a different one that is hopefully more tractable. Techniques include the following (a worked example of substitution appears after the list):
| |
| * [[Integration by substitution]]
| |
| * [[Integration by parts]]
| |
| * [[Inverse function integration]]
| |
| * [[Order of integration (calculus)|Changing the order of integration]]
| |
| * [[trigonometric substitution|Integration by trigonometric substitution]]
| |
| * [[Partial fractions in integration|Integration by partial fractions]]
| |
| * [[Integration by reduction formulae]]
| |
| * [[Integration using parametric derivatives]]
| |
| * [[Integration using Euler's formula]]
| |
| * [[Differentiation under the integral sign]]
| |
| * [[Methods of contour integration|Contour integration]]
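
As a worked example of substitution, consider <math>\textstyle\int_0^1 2x\cos(x^2)\,dx</math>. Setting ''u'' = ''x''<sup>2</sup> gives ''du'' = 2''x'' ''dx'', and the limits ''x'' = 0, 1 become ''u'' = 0, 1, so

:<math>\int_0^1 2x\cos(x^2)\,dx = \int_0^1 \cos u\,du = \sin 1 \approx 0.841.</math>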
| |
| Alternative methods exist to compute more complex integrals. Many [[nonelementary integral]]s can be expanded in a [[Taylor series]] and integrated term by term. Occasionally, the resulting infinite series can be summed analytically. The method of convolution using [[Meijer G-function]]s can also be used, assuming that the integrand can be written as a product of Meijer G-functions. There are also many less common ways of calculating definite integrals; for instance, [[Parseval's identity]] can be used to transform an integral over a rectangular region into an infinite sum. Occasionally, an integral can be evaluated by a trick; for an example of this, see [[Gaussian integral]].
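
For instance, expanding the integrand of <math>\textstyle\int_0^x e^{-t^2}\,dt</math> in its Taylor series and integrating term by term gives

:<math>\int_0^x e^{-t^2}\,dt = \sum_{n=0}^\infty \frac{(-1)^n x^{2n+1}}{n!\,(2n+1)} = x - \frac{x^3}{3} + \frac{x^5}{10} - \frac{x^7}{42} + \cdots,</math>

a series that converges for every ''x'', even though the antiderivative itself is not an elementary function.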
| |
| | |
| Computations of volumes of [[solid of revolution|solids of revolution]] can usually be done with [[disk integration]] or [[shell integration]].
| |
| | |
| Specific results which have been worked out by various techniques are collected in the [[Lists of integrals|list of integrals]].
| |
| | |
| ===Symbolic===
| |
| {{Main|Symbolic integration}}
| |
| | |
| Many problems in mathematics, physics, and engineering involve integration where an explicit formula for the integral is desired. Extensive [[Lists of integrals|tables of integrals]] have been compiled and published over the years for this purpose. With the spread of [[computer]]s, many professionals, educators, and students have turned to [[computer algebra system]]s that are specifically designed to perform difficult or tedious tasks, including integration. Symbolic integration has been one of the motivations for the development of the first such systems, like [[Macsyma]].
| |
| | |
A major mathematical difficulty in symbolic integration is that in many cases, a closed formula for the antiderivative of a rather simple-looking function does not exist. For instance, it is known that the antiderivatives of the functions exp(''x''<sup>2</sup>), ''x''<sup>''x''</sup> and {{nowrap|(sin ''x'')/''x''}} cannot be expressed in closed form involving only [[rational function|rational]] and [[exponential function|exponential]] functions, [[logarithm]], [[trigonometric function|trigonometric]] and [[inverse trigonometric function]]s, and the operations of multiplication and composition; in other words, none of the three given functions is integrable in [[elementary function]]s, which are the functions that may be built from rational functions, [[Root of a function|roots of a polynomial]], logarithm, and exponential functions. The [[Risch algorithm]] provides a general criterion to determine whether the antiderivative of an elementary function is elementary, and, if it is, to compute it. Unfortunately, it turns out that functions with closed-form antiderivatives are the exception rather than the rule. Consequently, computer algebra systems have no hope of being able to find an antiderivative for a randomly constructed elementary function. On the positive side, if the 'building blocks' for antiderivatives are fixed in advance, it may still be possible to decide whether the antiderivative of a given function can be expressed using these blocks and the operations of multiplication and composition, and to find the symbolic answer whenever it exists. The Risch algorithm, implemented in [[Mathematica]] and other [[computer algebra system]]s, does just that for functions and antiderivatives built from rational functions, [[Nth root|radicals]], logarithm, and exponential functions.
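
For illustration, a computer algebra system such as [[SymPy]] (used here only as an example; other systems behave similarly) returns an elementary antiderivative when its algorithms find one, and otherwise falls back on special functions:

<syntaxhighlight lang="python">
from sympy import symbols, integrate, exp, sin

x = symbols('x')

print(integrate(x * exp(x), x))     # x*exp(x) - exp(x): an elementary antiderivative
print(integrate(exp(x**2), x))      # no elementary form: expressed via erfi(x)
print(integrate(sin(x) / x, x))     # no elementary form: expressed via Si(x)
</syntaxhighlight>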
| |
| | |
Some special integrands occur often enough to warrant special study. In particular, it may be useful to have, in the set of antiderivatives, the [[special functions]] of [[physics]] (like the [[Associated Legendre function|Legendre functions]], the [[hypergeometric function]], the [[Gamma function]], the [[Incomplete Gamma function]] and so on — see [[Symbolic integration]] for more details). Extending the Risch algorithm to include such functions is possible but challenging and has been an active research subject.
| |
| | |
More recently a new approach has emerged that uses [[D-finite function|''D''-finite functions]], which are the solutions of [[linear differential equation]]s with polynomial coefficients. Most of the elementary and special functions are ''D''-finite, and the integral of a ''D''-finite function is also a ''D''-finite function. This provides an algorithm for expressing the antiderivative of a ''D''-finite function as the solution of a differential equation.
| |
| | |
This theory also allows one to compute the definite integral of a ''D''-finite function as the sum of a series given by its first coefficients, and provides an algorithm for computing any coefficient.<ref>http://algo.inria.fr/chyzak/mgfun.html</ref>
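
For example, ''f''(''x'') = ''e''<sup>−''x''<sup>2</sup></sup> is ''D''-finite, since it satisfies ''f''′ + 2''x'' ''f'' = 0, and its antiderivative ''F''(''x'') = <math>\textstyle\int_0^x e^{-t^2}\,dt</math> (essentially the [[error function]]) is again ''D''-finite, because ''F''′ = ''f'' gives

:<math>F'' + 2xF' = 0.</math>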
| |
| | |
| ===Numerical===
| |
| {{Main|Numerical integration}}
| |
| | |
| The integrals encountered in a basic calculus course are deliberately chosen for simplicity; those found in real applications are not always so accommodating. Some integrals cannot be found exactly, some require special functions which themselves are a challenge to compute, and others are so complex that finding the exact answer is too slow. This motivates the study and application of numerical methods for approximating integrals, which today use [[Floating point|floating-point arithmetic]] on digital electronic computers. Many of the ideas arose much earlier, for hand calculations; but the speed of general-purpose computers like the [[ENIAC]] created a need for improvements.
| |
| | |
| The goals of numerical integration are accuracy, reliability, efficiency, and generality. Sophisticated methods can vastly outperform a naive method by all four measures ({{Harvnb|Dahlquist|Björck|2008}}; {{Harvnb|Kahaner|Moler|Nash|1989}}; {{Harvnb|Stoer|Bulirsch|2002}}). Consider, for example, the integral
| |
| :<math> \int_{-2}^{2} \tfrac{1}{5} \left( \tfrac{1}{100}(322 + 3 x (98 + x (37 + x))) - 24 \frac{x}{1+x^2} \right) dx </math>
| |
| which has the exact answer {{nowrap|94/25 {{=}} 3.76}}. (In ordinary practice the answer is not known in advance, so an important task — not explored here — is to decide when an approximation is good enough.) A “calculus book” approach divides the integration range into, say, 16 equal pieces, and computes function values.
| |
| :{| cellpadding="0" cellspacing="0" class="wikitable" style="text-align:center;background-color:white"
| |
| |+ Spaced function values
| |
| |-
| |
| ! ''x''
| |
| | colspan="2" | −2.00 || colspan="2" | −1.50 || colspan="2" | −1.00 || colspan="2" | −0.50 || colspan="2" | 0.00 || colspan="2" | 0.50 || colspan="2" | 1.00 || colspan="2" | 1.50 || colspan="2" | 2.00
| |
| |- style="font-size:80%"
| |
| ! style="font-size:125%" | ''f''(''x'')
| |
| | colspan="2" | 2.22800 || colspan="2" | 2.45663 || colspan="2" | 2.67200 || colspan="2" | 2.32475 || colspan="2" | 0.64400 || colspan="2" | −0.92575 || colspan="2" | −0.94000 || colspan="2" | −0.16963 || colspan="2" | 0.83600
| |
| |-
| |
| ! ''x''
| |
| |
| |
| | colspan="2" | −1.75 || colspan="2" | −1.25 || colspan="2" | −0.75 || colspan="2" | −0.25 || colspan="2" | 0.25 || colspan="2" | 0.75 || colspan="2" | 1.25 || colspan="2" | 1.75 ||
| |
| |- style="font-size:80%"
| |
| ! style="font-size:125%" | ''f''(''x'')
| |
| |
| |
| | colspan="2" | 2.33041 || colspan="2" | 2.58562 || colspan="2" | 2.62934 || colspan="2" | 1.64019 || colspan="2" | −0.32444 || colspan="2" | −1.09159 || colspan="2" | −0.60387 || colspan="2" | 0.31734 ||
| |
| |- style="background-color:#aaa"
| |
| | || || || || || || || || || || || || || || || || || || <!-- extra row improves column spacing -->
| |
| |}
| |
| [[File:Numerical quadrature 4up.png|thumb|right|Numerical quadrature methods: <span style="color:#bc1e47">■</span> Rectangle, <span style="color:#fec200">■</span> Trapezoid, <span style="color:#0081cd">■</span> Romberg, <span style="color:#009246">■</span> Gauss]]
| |
| Using the left end of each piece, the [[rectangle method]] sums 16 function values and multiplies by the step width, ''h'', here 0.25, to get an approximate value of 3.94325 for the integral. The accuracy is not impressive, but calculus formally uses pieces of infinitesimal width, so initially this may seem little cause for concern. Indeed, repeatedly doubling the number of steps eventually produces an approximation of 3.76001. However, 2<sup>18</sup> pieces are required, a great computational expense for such little accuracy; and a reach for greater accuracy can force steps so small that arithmetic precision becomes an obstacle.
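
The left-endpoint computation above can be reproduced in a few lines of Python (an illustrative sketch only; the helper <code>f</code> simply encodes the integrand of the example):

<syntaxhighlight lang="python">
def f(x):
    """Integrand of the example integral on [-2, 2]."""
    return ((322 + 3*x*(98 + x*(37 + x))) / 100 - 24*x / (1 + x*x)) / 5

a, b, n = -2.0, 2.0, 16
h = (b - a) / n

# Left-endpoint rectangle rule: sum f at the left edge of each of the n pieces.
approx = h * sum(f(a + i*h) for i in range(n))
print(approx)   # approximately 3.94325, versus the exact value 94/25 = 3.76
</syntaxhighlight>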
| |
| | |
| A better approach replaces the horizontal tops of the rectangles with slanted tops touching the function at the ends of each piece. This [[trapezium rule]] is almost as easy to calculate; it sums all 17 function values, but weights the first and last by one half, and again multiplies by the step width. This immediately improves the approximation to 3.76925, which is noticeably more accurate. Furthermore, only 2<sup>10</sup> pieces are needed to achieve 3.76000, substantially less computation than the rectangle method for comparable accuracy.
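
The corresponding trapezoid computation, again as an illustrative sketch:

<syntaxhighlight lang="python">
def f(x):
    return ((322 + 3*x*(98 + x*(37 + x))) / 100 - 24*x / (1 + x*x)) / 5

a, b, n = -2.0, 2.0, 16
h = (b - a) / n
xs = [a + i*h for i in range(n + 1)]          # 17 sample points

# Trapezoid rule: interior points get full weight, the two end points half weight.
approx = h * (f(xs[0])/2 + sum(f(x) for x in xs[1:-1]) + f(xs[-1])/2)
print(approx)   # approximately 3.76925, versus the exact value 3.76
</syntaxhighlight>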
| |
| | |
| [[Romberg's method]] builds on the trapezoid method to great effect. First, the step lengths are halved incrementally, giving trapezoid approximations denoted by ''T''(''h''<sub>0</sub>), ''T''(''h''<sub>1</sub>), and so on, where ''h''<sub>''k''+1</sub> is half of ''h''<sub>''k''</sub>. For each new step size, only half the new function values need to be computed; the others carry over from the previous size (as shown in the table above). But the really powerful idea is to [[Interpolation|interpolate]] a polynomial through the approximations, and extrapolate to ''T''(0). With this method a numerically ''exact'' answer here requires only four pieces (five function values)! The [[Lagrange polynomial]] interpolating {{nowrap|{''h''<sub>''k''</sub>,''T''(''h''<sub>''k''</sub>)}<sub>''k'' {{=}} 0…2</sub> {{=}} {(4.00,6.128), (2.00,4.352), (1.00,3.908)}}} is {{nowrap|3.76 + 0.148''h''<sup>2</sup>}}, producing the extrapolated value 3.76 at {{nowrap|''h'' {{=}} 0}}.
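
A minimal Romberg sketch (illustrative only; a practical implementation would reuse function values between step sizes rather than recomputing them, as noted above):

<syntaxhighlight lang="python">
def f(x):
    return ((322 + 3*x*(98 + x*(37 + x))) / 100 - 24*x / (1 + x*x)) / 5

def trapezoid(a, b, n):
    h = (b - a) / n
    return h * (f(a)/2 + sum(f(a + i*h) for i in range(1, n)) + f(b)/2)

a, b, levels = -2.0, 2.0, 3
# First column: trapezoid values T(h_k), with the step halved at each level.
R = [[trapezoid(a, b, 2**k)] for k in range(levels)]
# Richardson extrapolation fills in the rest of the triangular Romberg table.
for k in range(1, levels):
    for j in range(1, k + 1):
        R[k].append(R[k][j-1] + (R[k][j-1] - R[k-1][j-1]) / (4**j - 1))

print(R[0][0], R[1][0], R[2][0])   # 6.128, 4.352, 3.908: the trapezoid values above
print(R[-1][-1])                   # approximately 3.76, numerically exact here
</syntaxhighlight>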
| |
| | |
| [[Gaussian quadrature]] often requires noticeably less work for superior accuracy. In this example, it can compute the function values at just two ''x'' positions, ±2⁄√3, then double each value and sum to get the numerically exact answer. The explanation for this dramatic success lies in error analysis, and a little luck. An ''n-''point Gaussian method is exact for polynomials of degree up to 2''n''−1. The function in this example is a degree 3 polynomial, plus a term that cancels because the chosen endpoints are symmetric around zero. (Cancellation also benefits the Romberg method.)
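
A two-point Gauss–Legendre sketch for the same integral, using NumPy's standard nodes and weights on [−1, 1] rescaled to [−2, 2] (shown only for illustration):

<syntaxhighlight lang="python">
import numpy as np

def f(x):
    return ((322 + 3*x*(98 + x*(37 + x))) / 100 - 24*x / (1 + x*x)) / 5

nodes, weights = np.polynomial.legendre.leggauss(2)   # nodes ±1/sqrt(3), weights 1, 1
a, b = -2.0, 2.0
x = 0.5*(b - a)*nodes + 0.5*(b + a)                   # rescaled nodes ±2/sqrt(3)
approx = 0.5*(b - a) * np.dot(weights, f(x))
print(approx)   # approximately 3.76, from only two function evaluations
</syntaxhighlight>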
| |
| | |
Shifting the range left a little, so the integral is from −2.25 to 1.75, removes the symmetry. Nevertheless, the trapezoid method is rather slow, the polynomial interpolation method of Romberg is acceptable, and the Gaussian method requires the least work, provided the number of points is known in advance. In addition, rational interpolation can use the same trapezoid evaluations as the Romberg method to greater effect.
| |
| | |
| :{| class="wikitable" style="background-color:white;text-align:center"
| |
| |+ Quadrature method cost comparison
| |
| |-
| |
| ! style="text-align:right" | Method
| |
| | '''Trapezoid''' || '''Romberg''' || '''Rational''' || '''Gauss'''
| |
| |-
| |
| ! style="text-align:right" | Points
| |
| | 1048577 || 257 || 129 || 36
| |
| |-
| |
| ! style="text-align:right" | Rel. Err.
| |
| | −5.3×10<sup>−13</sup> || −6.3×10<sup>−15</sup> || 8.8×10<sup>−15</sup> || 3.1×10<sup>−15</sup>
| |
| |-
| |
| ! style="text-align:right" | Value
| |
| | colspan="4" | <math>\textstyle \int_{-2.25}^{1.75} f(x)\,dx = 4.1639019006585897075\ldots</math>
| |
| |}
| |
| | |
| In practice, each method must use extra evaluations to ensure an error bound on an unknown function; this tends to offset some of the advantage of the pure Gaussian method, and motivates the popular [[Gauss–Kronrod quadrature formula]]e. Symmetry can still be exploited by splitting this integral into two ranges, from −2.25 to −1.75 (no symmetry), and from −1.75 to 1.75 (symmetry). More broadly, [[adaptive quadrature]] partitions a range into pieces based on function properties, so that data points are concentrated where they are needed most.
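
Such adaptive, error-estimating routines are available off the shelf; for example, SciPy's <code>scipy.integrate.quad</code>, a wrapper around the Fortran QUADPACK library, returns both an estimate and an error bound. The following sketch, applied to the shifted integral above, is purely illustrative:

<syntaxhighlight lang="python">
from scipy.integrate import quad

def f(x):
    return ((322 + 3*x*(98 + x*(37 + x))) / 100 - 24*x / (1 + x*x)) / 5

value, error_estimate = quad(f, -2.25, 1.75)   # adaptive quadrature on [-2.25, 1.75]
print(value, error_estimate)                   # about 4.1639019..., with a small error estimate
</syntaxhighlight>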
| |
| | |
[[Simpson's rule]], named for [[Thomas Simpson]] (1710–1761), approximates the integrand on the interval of integration by the parabola through its values at the two endpoints and at the midpoint. In many cases it is more accurate than the [[trapezoidal rule]] and other simple rules. The rule states that
| |
| :<math> \int_a^b f(x) \, dx \approx \frac{b-a}{6}\left[f(a) + 4f\left(\frac{a+b}{2}\right)+f(b)\right],</math>
| |
| with an error of
| |
| :<math> \left|-\frac{(b-a)^5}{2880} f^{(4)}(\xi)\right|.</math>
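
As an illustrative check of the rule and its error bound, the following sketch applies it to <math>\textstyle\int_0^\pi \sin x\,dx = 2</math> (the helper name <code>simpson</code> is arbitrary):

<syntaxhighlight lang="python">
import math

def simpson(f, a, b):
    """Basic (single-interval) Simpson's rule."""
    return (b - a) / 6 * (f(a) + 4*f((a + b) / 2) + f(b))

a, b = 0.0, math.pi
approx = simpson(math.sin, a, b)            # (pi/6)*(0 + 4*1 + 0) = 2*pi/3
exact = 2.0
bound = (b - a)**5 / 2880                   # here |f''''| = |sin| <= 1 on [a, b]
print(approx, abs(approx - exact), bound)   # about 2.094, error about 0.094, bound about 0.106
</syntaxhighlight>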
| |
| | |
| The computation of higher-dimensional integrals (for example, volume calculations) makes important use of such alternatives as [[Monte Carlo integration]].
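
For instance, a crude Monte Carlo sketch (illustrative only) estimates the volume of the unit ball, whose exact value is 4π/3 ≈ 4.18879, by sampling random points in the enclosing cube and counting the fraction that fall inside:

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000
points = rng.uniform(-1.0, 1.0, size=(n, 3))   # sample the cube [-1, 1]^3
inside = (points**2).sum(axis=1) <= 1.0        # which samples lie in the unit ball?
volume = 8.0 * inside.mean()                   # cube volume times hit fraction
print(volume)                                  # roughly 4.19, versus 4*pi/3
</syntaxhighlight>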
| |
| | |
| A calculus text is no substitute for numerical analysis, but the reverse is also true. Even the best adaptive numerical code sometimes requires a user to help with the more demanding integrals. For example, improper integrals may require a change of variable or methods that can avoid infinite function values, and known properties like symmetry and periodicity may provide critical leverage.
| |
| | |
| ===Mechanical===
| |
The area of an arbitrary two-dimensional shape can be determined using a measuring instrument called a [[planimeter]]. The volume of an irregular object can be measured with precision by the fluid [[displacement (fluid)|displaced]] as the object is submerged; see [[Archimedes]]'s [[Eureka (word)|Eureka]].
| |
| | |
| ===Geometrical===
| |
| {{main|Quadrature (mathematics)}}
| |
Area can sometimes be found via [[geometrical]] [[compass-and-straightedge constructions]] of an equivalent [[square (geometry)|square]]; the most famous such problem, [[squaring the circle]], is however impossible by compass and straightedge alone.
| |
| | |
| ==Some important definite integrals==
| |
Mathematicians have used definite integrals as a tool to define important constants and functions. Among these is the [[Euler–Mascheroni constant]]:
| |
| :<math>\gamma = \int_1^ \infty\left({1\over\lfloor x\rfloor}-{1\over x}\right)\,dx \, ,</math>
| |
| | |
| the [[Gamma function]]:
| |
| :<math> \Gamma(z) = \int_0^\infty e^{-t} t^{z-1} dt,</math>
| |
| | |
the [[Fourier transform]], which is widely used in physics:
| |
| : <math>F(\xi) = \int_{-\infty}^{\infty} f(x) e^{-2\pi i x\xi}\,dx,</math>
| |
| | |
the [[Laplace transform]], which is widely used in engineering:
| |
| : <math>F(s) = \int_0^{\infty} f(t) e^{-st} \,dt,</math>
| |
| | |
and the [[Gaussian integral]], which is fundamental to the [[normal distribution]] used in [[probability]] and [[statistics]]:
| |
| | |
| :<math>\int_{-\infty}^{\infty} e^{-x^2}\,dx = \sqrt{\pi}.</math>
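
Several of these can be checked with a computer algebra system. The following SymPy sketch is purely illustrative, and the outputs indicated in the comments are those typically returned by recent SymPy versions:

<syntaxhighlight lang="python">
from sympy import symbols, integrate, exp, oo

x = symbols('x')
t, z = symbols('t z', positive=True)

print(integrate(exp(-x**2), (x, -oo, oo)))         # sqrt(pi): the Gaussian integral
print(integrate(exp(-t)*t**(z - 1), (t, 0, oo)))   # gamma(z), for positive z
</syntaxhighlight>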
| |
| | |
| ==See also==
| |
| {{Portal|Mathematics}}
| |
| <div style="-moz-column-count:2; column-count:2;">
| |
| * [[Antiderivative]]
| |
| * [[Darboux integral]]
| |
| * [[Henstock–Kurzweil integral]]
| |
| * [[Integral equation]]
| |
| * [[Integral symbol]]
| |
| * [[Integration by parts]]
| |
| * [[Lebesgue integration]]
| |
| * [[Lists of integrals]] – integrals of the most common functions
| |
| * [[Multiple integral]]
| |
| * [[Numerical integration]]
| |
| * [[Riemann integral]]
| |
| * [[Riemann sum]]
| |
| * [[Riemann–Stieltjes integral]]
| |
| * [[Symbolic integration]]
| |
| </div>
| |
| | |
| ==Notes==
| |
| {{Reflist}}
| |
| | |
| ==References==
| |
| {{columns-list|2|
| |
| * {{Citation | last=Apostol | first=Tom M. | author-link=Tom M. Apostol | title=Calculus, Vol. 1: One-Variable Calculus with an Introduction to Linear Algebra | year=1967 | edition=2nd | publisher=[[John Wiley & Sons|Wiley]] | isbn=978-0-471-00005-1}}
| |
| * {{Citation | last=Bourbaki| first=Nicolas | author-link=Nicolas Bourbaki | title=Integration I | year=2004 | publisher=[[Springer Science+Business Media|Springer Verlag]] | isbn=3-540-41129-1}}. In particular chapters III and IV.
| |
| * {{Citation | last=Burton | first=David M. | title=The History of Mathematics: An Introduction | edition=6th | year=2005<!--November 8--> | publisher=[[McGraw-Hill]] | isbn=978-0-07-305189-5 | page=359}}
| |
| * {{Citation | last=Cajori | first=Florian | author-link=Florian Cajori | title=A History Of Mathematical Notations Volume II | year=1929 | publisher=[[Open Court Publishing Company|Open Court Publishing]] | url=http://www.archive.org/details/historyofmathema027671mbp | isbn=978-0-486-67766-8 | pages=247–252}}
| |
| * {{Citation | last1=Dahlquist | first1=Germund | author1-link=Germund Dahlquist | last2=Björck | first2=Åke | title=Numerical Methods in Scientific Computing, Volume I | publisher=[[Society for Industrial and Applied Mathematics|SIAM]] | location=Philadelphia | year=2008 | url=http://www.mai.liu.se/~akbjo/NMbook.html | chapter=Chapter 5: Numerical Integration}}
| |
| * {{Citation | last = Folland | first = Gerald B.| title=Real Analysis: Modern Techniques and Their Applications | edition=1st | publisher=[[John Wiley & Sons]] | year = 1984 | isbn=978-0-471-80958-6 }}
| |
| * {{Citation | last=Fourier | first=Jean Baptiste Joseph | author-link=Joseph Fourier | title=Théorie analytique de la chaleur | year=1822 | publisher=Chez Firmin Didot, père et fils | url=http://books.google.com/books?id=TDQJAAAAIAAJ | page=§231}}<br />Available in translation as {{citation | last=Fourier | first=Joseph | title=The analytical theory of heat | year=1878<!--original 1822--> | publisher=[[Cambridge University Press]] | url=http://www.archive.org/details/analyticaltheory00fourrich | others=Freeman, Alexander (trans.) | pages=200–201}}
| |
| * {{Citation | editor-last=Heath | editor-first=T. L. | editor-link=T. L. Heath | title = The Works of Archimedes | year = 2002 | publisher = [[Dover Publications|Dover]] | isbn = 978-0-486-42084-4 | url = http://www.archive.org/details/worksofarchimede029517mbp }}<br/>(Originally published by [[Cambridge University Press]], 1897, based on J. L. Heiberg's Greek version.)
| |
| * {{Citation | last=Hildebrandt | first=T. H. | author-link= | title=Integration in abstract spaces | journal=[[Bulletin of the American Mathematical Society]] | volume=59 | year=1953 | pages=111–139 | url=http://projecteuclid.org/euclid.bams/1183517761 | issn=0273-0979 | issue=2 | doi=10.1090/S0002-9904-1953-09694-X}}
| |
| * {{Citation | last1=Kahaner | first1=David | last2=Moler | first2=Cleve | author2-link=Cleve Moler | last3=Nash | first3=Stephen | title=Numerical Methods and Software | year=1989 | publisher=[[Prentice Hall]] | chapter=Chapter 5: Numerical Quadrature | isbn=978-0-13-627258-8 }}
| |
| *{{Citation | last1=Katz | first1=Victor J. | title=A History of Mathematics, Brief Version | publisher=[[Addison-Wesley]] | isbn=978-0-321-16193-2 | year=2004}}
| |
| * {{Citation | last=Leibniz | first=Gottfried Wilhelm | author-link=Gottfried Wilhelm Leibniz | title=Der Briefwechsel von Gottfried Wilhelm Leibniz mit Mathematikern. Erster Band | editor-last=Gerhardt | editor-first=Karl Immanuel | place=Berlin | publisher=Mayer & Müller | year=1899 | url=http://name.umdl.umich.edu/AAX2762.0001.001}}
| |
| |
| * {{citation|title=Analysis|first1=Elliott|last1=Lieb|authorlink1=Elliott Lieb|first2=Michael|last2=Loss|authorlink2=Michael Loss|year=2001 | isbn = 978-0821827833 | edition = 2 | publisher=AMS Chelsea}}
| |
| * {{Citation | last=Miller | first=Jeff | title=Earliest Uses of Symbols of Calculus | url=http://jeff560.tripod.com/calculus.html | accessdate=2009-11-22}}
| |
| * {{Citation | last1=O’Connor | first1=J. J. | last2=Robertson | first2=E. F. | title=A history of the calculus | year=1996 | url=http://www-history.mcs.st-andrews.ac.uk/HistTopics/The_rise_of_calculus.html | accessdate=2007-07-09 }}
| |
| * {{Citation | last=Rudin | first=Walter | author-link=Walter Rudin | title=Real and Complex Analysis | year=1987 | edition=International | publisher=[[McGraw-Hill]] | chapter=Chapter 1: Abstract Integration | isbn=978-0-07-100276-9}}
| |
| * {{Citation | last=Saks | first=Stanisław | author-link=Stanisław Saks | title=Theory of the integral | url=http://matwbn.icm.edu.pl/kstresc.php?tom=7&wyd=10&jez= | edition= English translation by L. C. Young. With two additional notes by Stefan Banach. Second revised | publisher= Dover | place=New York | year=1964 }}
| |
| *{{citation | last1=Shea | first1=Marilyn | title=Biography of Zu Chongzhi |date = May 2007| url=http://hua.umf.maine.edu/China/astronomy/tianpage/0014ZuChongzhi9296bw.html | publisher=University of Maine | accessdate=9 January 2009}}
| |
| * {{citation|last=Siegmund-Schultze|first=Reinhard|chapter=Henri Lebesgue|title=Princeton Companion to Mathematics|editors=Timothy Gowers, June Barrow-Green, Imre Leader|year=2008|publisher=Princeton University Press}}.
| |
| * {{Citation | last1=Stoer | first1=Josef | last2=Bulirsch | first2=Roland | year=2002 | title=Introduction to Numerical Analysis | edition=3rd | publisher=[[Springer Science+Business Media|Springer]] | chapter=Chapter 3: Topics in Integration | isbn=978-0-387-95452-3 }}.
| |
| * {{Citation | author=W3C | year=2006<!--January--> | title=Arabic mathematical notation<!--W3C Interest Group Note 31--> | url=http://www.w3.org/TR/arabic-math/}}
| |
| }}
| |
| | |
| ==External links==
| |
| {{Wikibooks|Calculus}}
| |
| * {{springer|title=Integral|id=p/i051340}}
| |
| * [http://mathworld.wolfram.com/RiemannSum.html Riemann Sum] by [[Wolfram Research]]
| |
| * [http://www.khanacademy.org/video/introduction-to-definite-integrals?playlist=Calculus Introduction to definite integrals] by [[Khan Academy]]
| |
| | |
| ===Online books===
| |
| * Keisler, H. Jerome, [http://www.math.wisc.edu/~keisler/calc.html Elementary Calculus: An Approach Using Infinitesimals], University of Wisconsin
| |
| * Stroyan, K.D., [http://www.math.uiowa.edu/~stroyan/InfsmlCalculus/InfsmlCalc.htm A Brief Introduction to Infinitesimal Calculus], University of Iowa
| |
| * Mauch, Sean, [http://www.its.caltech.edu/~sean/book/unabridged.html ''Sean's Applied Math Book''], CIT, an online textbook that includes a complete introduction to calculus
| |
| * Crowell, Benjamin, [http://www.lightandmatter.com/calc/ ''Calculus''], Fullerton College, an online textbook
| |
| * Garrett, Paul, [http://www.math.umn.edu/~garrett/calculus/ Notes on First-Year Calculus]
| |
| * Hussain, Faraz, [http://www.understandingcalculus.com Understanding Calculus], an online textbook
| |
* Kowalk, W.P., [http://einstein.informatik.uni-oldenburg.de/20910.html ''Integration Theory''], University of Oldenburg; an online textbook presenting a new approach to an old problem
| |
| * Sloughter, Dan, [http://math.furman.edu/~dcs/book Difference Equations to Differential Equations], an introduction to calculus
| |
| * [http://numericalmethods.eng.usf.edu/topics/integration.html Numerical Methods of Integration] at ''Holistic Numerical Methods Institute''
| |
| * P.S. Wang, [http://www.lcs.mit.edu/publications/specpub.php?id=660 Evaluation of Definite Integrals by Symbolic Manipulation] (1972) — a cookbook of definite integral techniques
| |
| | |
| {{integral}}
| |
| | |
| [[Category:Integrals|*]]
| |
| [[Category:Functions and mappings]]
| |
| [[Category:Linear operators in calculus]]
| |
| | |
| {{Link FA|mk}}
| |
| {{Link FA|ca}}
| |
| {{Link FA|eu}}
| |
| [[de:Integralrechnung]]
| |