Uncertainty quantification

Uncertainty quantification (UQ) is the science of quantitative characterization and reduction of uncertainties in applications. It tries to determine how likely certain outcomes are if some aspects of the system are not exactly known. An example would be to predict the acceleration of a human body in a head-on crash with another car: even if the speed were known exactly, small differences in the manufacturing of individual cars, in how tightly every bolt has been tightened, and so on, will lead to different results that can only be predicted in a statistical sense.

Many problems in the natural sciences and engineering are also rife with sources of uncertainty. Computer experiments on computer simulations are the most common approach to study problems in uncertainty quantification.[1][2][3]

Sources of uncertainty

Uncertainty can enter mathematical models and experimental measurements in various contexts. One way to categorize the sources of uncertainty is to consider:[4]

  • Parameter uncertainty, which comes from the model parameters that are inputs to the computer model (mathematical model) but whose exact values are unknown to experimentalists and cannot be controlled in physical experiments. Examples are the local free-fall acceleration in a falling object experiment, and various material properties in a finite element analysis for engineering.
  • Structural uncertainty, aka model inadequacy, model bias, or model discrepancy, which comes from the lack of knowledge of the underlying true physics. It depends on how accurately a mathematical model describes the true system for a real-life situation, considering the fact that models are almost always only approximations to reality. One example is when modeling the process of a falling object using the free-fall model; the model itself is inaccurate since there always exists air friction. In this case, even if there is no unknown parameter in the model, a discrepancy is still expected between the model and true physics.
  • Algorithmic uncertainty, aka numerical uncertainty, which comes from numerical errors and numerical approximations in the implementation of the computer model. Most models are too complicated to solve exactly. For example, the finite element method or finite difference method may be used to approximate the solution of a partial differential equation, which, however, introduces numerical errors. Other examples are numerical integration and infinite sum truncation that are necessary approximations in numerical implementation.
  • Parametric variability, which comes from the variability of input variables of the model. For example, the dimensions of a work piece in a process of manufacture may not be exactly as designed and instructed, which would cause variability in its performance.
  • Experimental uncertainty, aka observation error, which comes from the variability of experimental measurements. The experimental uncertainty is inevitable and can be noticed by repeating a measurement many times using exactly the same settings for all inputs/variables.
  • Interpolation uncertainty, which comes from a lack of available data collected from computer model simulations and/or experimental measurements. For other input settings that don't have simulation data or experimental measurements, one must interpolate or extrapolate in order to predict the corresponding responses.

Another way of categorization is to classify uncertainty into two categories:[5][6]

  • Aleatoric uncertainty, aka statistical uncertainty, which is representative of unknowns that differ each time we run the same experiment. For example, when simulating the take-off of an airplane, even if we could exactly control the wind speeds along the runway, letting 10 planes of the same make start would still produce different trajectories due to fabrication differences. Similarly, if all we knew is that the average wind speed is the same, letting the same plane start 10 times would still yield different trajectories because we do not know the exact wind speed at every point of the runway, only its average. Aleatoric uncertainties are therefore something an experimenter cannot do anything about: they exist, and they cannot be suppressed by more accurate measurements.
  • Epistemic uncertainty, aka systematic uncertainty, which is due to things we could in principle know but don't in practice. This may be because we have not measured a quantity sufficiently accurately, or because our model neglects certain effects, or because particular data are deliberately hidden. An example of a source of this uncertainty would be the drag of a feather in an experiment designed to measure the acceleration of gravity near the Earth's surface. The commonly used gravitational acceleration of 9.8 m/s² ignores the effects of air resistance, but the air resistance for the feather could be measured and incorporated into the experiment to reduce the systematic uncertainty in the calculation of the gravitational acceleration. The latter activities would then leave only aleatoric (statistical) uncertainties (wind velocity, feather orientation, etc.) present in the experiment.

In real-life applications, both kinds of uncertainties are often present. Uncertainty quantification intends to work toward reducing epistemic uncertainties to aleatoric uncertainties. The quantification of the aleatoric uncertainties is relatively straightforward to perform. Techniques such as the Monte Carlo method are frequently used. A probability distribution can be represented by its moments (in the Gaussian case, the mean and covariance suffice, although, in general, even knowledge of all moments to arbitrarily high order still does not specify the distribution function uniquely), or more recently, by techniques such as Karhunen–Loève and polynomial chaos expansions. To evaluate epistemic uncertainties, efforts are made to gain better knowledge of the system, process or mechanism. Methods such as fuzzy logic or evidence theory (Dempster–Shafer theory – a generalization of the Bayesian theory of subjective probability) are used.
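
As a minimal illustration of the Monte Carlo treatment of aleatoric uncertainty, the following Python sketch draws samples of the uncertain inputs, pushes them through a model, and summarizes the output by its first two moments. The toy model, input distributions, and sample size are illustrative assumptions, not taken from any reference in this article.

```python
import numpy as np

rng = np.random.default_rng(0)

def braking_distance(speed, friction):
    """Toy model: distance needed to stop from a given speed (m)."""
    return speed**2 / (2.0 * 9.81 * friction)

n = 100_000
# Aleatoric variability in the inputs (hypothetical distributions).
speed = rng.normal(loc=25.0, scale=1.0, size=n)        # m/s
friction = rng.uniform(low=0.55, high=0.75, size=n)    # tyre/road friction coefficient

d = braking_distance(speed, friction)
print(f"mean = {d.mean():.2f} m, std = {d.std(ddof=1):.2f} m")
```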

Two types of uncertainty quantification problems

There are two major types of problems in uncertainty quantification: one is the forward propagation of uncertainty and the other is the inverse assessment of model uncertainty and parameter uncertainty. There has been a proliferation of research on the former problem, and a majority of uncertainty analysis techniques were developed for it. On the other hand, the latter problem is drawing increasing attention in the engineering design community, since uncertainty quantification of a model and the subsequent predictions of the true system response(s) are of great interest in both robust design and engineering decision making.

Forward uncertainty propagation

Uncertainty propagation is the quantification of uncertainties in system output(s) propagated from uncertain inputs. It focuses on the influence on the outputs of the parametric variability listed in the sources of uncertainty. The targets of uncertainty propagation analysis can be (a sample-based sketch follows the list):

  • To evaluate low-order moments of the outputs, i.e. mean and variance.
  • To evaluate the reliability of the outputs. This is especially useful in reliability engineering where outputs of a system are usually closely related to the performance of the system.
  • To assess the complete probability distribution of the outputs. This is useful in the scenario of utility optimization where the complete distribution is used to calculate the utility.
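
All three targets can be read off the same set of propagated samples. The following Python sketch is illustrative only: the limit-state function g, its input distributions, and the failure threshold are assumptions, not quantities from this article.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical limit-state function: the system "fails" when g < 0.
def g(x1, x2):
    return 3.0 - x1**2 - 0.5 * x2

n = 200_000
x1 = rng.normal(0.0, 1.0, size=n)
x2 = rng.normal(0.0, 1.0, size=n)
y = g(x1, x2)

# Target 1: low-order moments of the output.
mean, var = y.mean(), y.var(ddof=1)

# Target 2: reliability, here the probability of failure P(g < 0).
p_fail = np.mean(y < 0.0)

# Target 3: the complete output distribution via the empirical CDF.
ys = np.sort(y)
cdf = np.arange(1, n + 1) / n          # cdf[i] = P(Y <= ys[i])

print(f"mean = {mean:.3f}, var = {var:.3f}, P(failure) = {p_fail:.4f}")
```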

Inverse uncertainty quantification

Given some experimental measurements of a system and some computer simulation results from its mathematical model, inverse uncertainty quantification estimates the discrepancy between the experiment and the mathematical model (which is called bias correction), and estimates the values of any unknown parameters in the model (which is called parameter calibration or simply calibration). Generally this is a much more difficult problem than uncertainty propagation; however, it is of great importance since it is typically implemented in a model updating process. There are several scenarios in inverse uncertainty quantification:

Figure: the outcome of bias correction, including an updated model (prediction mean) and a prediction confidence interval (image courtesy of Dr. Paul D. Arendt, Northwestern University, IL, USA).

Bias correction only

It considers an inaccurate model without any unknown parameters. The target is to assess the model inadequacy. The general model updating formulation for bias correction is:

$$ y^e(x) = y^m(x) + \delta(x) + \varepsilon $$

where y^e(x) denotes the experimental measurements as a function of several input variables x, y^m(x) denotes the computer model (mathematical model) response, δ(x) denotes the additive discrepancy function (aka bias function), and ε denotes the experimental uncertainty. The objective is to estimate the discrepancy function δ(x), and as a by-product, the resulting updated model is y^m(x) + δ(x). A prediction confidence interval is provided with the updated model as the quantification of the uncertainty.
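
As a simplified, non-Bayesian sketch of bias correction (the "experiments" are synthetic, the computer model is the article's frictionless free-fall example, and the cubic polynomial form for δ(x) is an assumption): estimate δ(x) by regressing the residuals y^e(x) − y^m(x) on x, then use y^m(x) + δ̂(x) as the updated model. A full treatment would also attach a prediction confidence interval, e.g. via the Gaussian-process machinery described later in this article.

```python
import numpy as np

rng = np.random.default_rng(2)

def y_m(x):                      # computer model: frictionless free fall (x = time in s)
    return 0.5 * 9.81 * x**2

x_obs = np.linspace(0.1, 2.0, 15)
# Synthetic "experimental" data: the true process falls short of the model (drag) plus noise.
y_e = y_m(x_obs) - 0.8 * x_obs**3 + rng.normal(0.0, 0.05, x_obs.size)

# Fit a cubic discrepancy function delta(x) to the residuals y_e - y_m.
residuals = y_e - y_m(x_obs)
coeffs = np.polyfit(x_obs, residuals, deg=3)
delta_hat = np.poly1d(coeffs)

def y_updated(x):                # updated model y^m(x) + delta_hat(x)
    return y_m(x) + delta_hat(x)

print("residual std before correction:", residuals.std(ddof=1))
print("residual std after correction :", (y_e - y_updated(x_obs)).std(ddof=1))
```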

Parameter calibration only

It considers a model that fully describes the underlying physics but with one or more unknown parameters. The general model updating formulation for calibration is:

$$ y^e(x) = y^m(x, \theta^*) + \varepsilon $$

where y^m(x, θ) denotes the computer model response that depends on several unknown model parameters θ, and θ* denotes the true values of the unknown parameters in the course of the experiments. The objective is either to estimate θ*, or to come up with a probability distribution of θ* that encompasses the best knowledge of the true parameter values.
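
A minimal calibration sketch under simplifying assumptions: a least-squares point estimate of θ* from synthetic data, rather than the probability distribution over θ* that a Bayesian treatment would produce. The model form and all numbers are illustrative.

```python
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(3)

def y_m(x, theta):               # computer model with one unknown parameter theta
    return theta * np.sin(x) + 0.5 * x

x_obs = np.linspace(0.0, 3.0, 20)
theta_true = 2.3                 # unknown in practice; used here to generate synthetic data
y_e = y_m(x_obs, theta_true) + rng.normal(0.0, 0.1, x_obs.size)

# Point estimate of theta* by minimizing the sum of squared residuals.
fit = least_squares(lambda th: y_e - y_m(x_obs, th[0]), x0=[1.0])
print("estimated theta*:", fit.x[0])
```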

Bias correction and parameter calibration

It considers an inaccurate model with one or more unknown parameters, and its model updating formulation combines bias correction and parameter calibration:

$$ y^e(x) = y^m(x, \theta^*) + \delta(x) + \varepsilon $$

It is the most comprehensive model updating formulation that includes all possible sources of uncertainty, and it requires the most effort to solve.

Selective methodologies for uncertainty quantification

Much research has been done to solve uncertainty quantification problems, though a majority of them deal with uncertainty propagation. During the past one to two decades, a number of approaches for inverse uncertainty quantification problems have also been developed and have proved to be useful for most small- to medium-scale problems.

Methodologies for forward uncertainty propagation

Existing uncertainty propagation approaches include probabilistic approaches and non-probabilistic approaches. There are basically five categories of probabilistic approaches for uncertainty propagation:[7]

  • Simulation-based methods: Monte Carlo simulations, importance sampling, adaptive sampling, etc.
  • Local expansion-based methods: Taylor series, perturbation method, etc. These methods have advantages when dealing with relatively small input variability and outputs that do not exhibit strong nonlinearity. These linear or linearized methods are detailed in the article Uncertainty propagation; a minimal first-order sketch follows this list.
  • Functional expansion-based methods: Neumann expansion, orthogonal or Karhunen–Loève expansions (KLE), with polynomial chaos expansion (PCE) and wavelet expansions as special cases.
  • Most probable point (MPP)-based methods: first-order reliability method (FORM) and second-order reliability method (SORM).
  • Numerical integration-based methods: Full factorial numerical integration (FFNI) and dimension reduction (DR).
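
The following sketch illustrates the local expansion-based idea in its simplest form: a first-order Taylor approximation of the output variance, checked against a Monte Carlo estimate. The function and the input distribution are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)

def f(x):
    return np.exp(0.3 * x) + x**2

mu, sigma = 1.0, 0.2                         # uncertain input X ~ N(mu, sigma^2)

# First-order (linearized) propagation: Var[f(X)] ~= (f'(mu))^2 * sigma^2
h = 1e-6
dfdx = (f(mu + h) - f(mu - h)) / (2 * h)     # central-difference derivative at the mean
var_taylor = dfdx**2 * sigma**2

# Monte Carlo reference estimate.
x = rng.normal(mu, sigma, size=500_000)
var_mc = f(x).var(ddof=1)

print("first-order Taylor variance:", var_taylor)
print("Monte Carlo variance       :", var_mc)
```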

For non-probabilistic approaches, interval analysis,[8] fuzzy theory, possibility theory and evidence theory are among the most widely used.
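
For contrast, a small sketch of the interval-analysis idea (the function and input bounds are illustrative assumptions): instead of a probability distribution, each uncertain input is described only by guaranteed lower and upper bounds, which are propagated through the computation to give an enclosure of the output.

```python
# Minimal interval arithmetic: an interval is a (lo, hi) pair.
def i_add(a, b):
    return (a[0] + b[0], a[1] + b[1])

def i_mul(a, b):
    products = [a[0] * b[0], a[0] * b[1], a[1] * b[0], a[1] * b[1]]
    return (min(products), max(products))

# Uncertain inputs known only through bounds.
x = (1.9, 2.1)
y = (-0.5, 0.5)

# Guaranteed enclosure of f(x, y) = x*y + x.
result = i_add(i_mul(x, y), x)
print("f(x, y) is guaranteed to lie in", result)
```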

The probabilistic approach is considered as the most rigorous approach to uncertainty analysis in engineering design due to its consistency with the theory of decision analysis. Its cornerstone is the calculation of probability density functions for sampling statistics.[9] This can be performed rigorously for random variables that are obtainable as transformations of Gaussian variables, leading to exact confidence intervals.

Methodologies for inverse uncertainty quantification

Existing methodologies for inverse uncertainty quantification are mostly under the Bayesian framework. The most intriguing direction aims at solving problems with both bias correction and parameter calibration. The challenges of such problems include not only the influences from model inadequacy and parameter uncertainty, but also the lack of data from both computer simulations and experiments. A common situation is that the input settings are not the same over experiments and simulations.

Modular Bayesian approach

An approach to inverse uncertainty quantification is the modular Bayesian approach.[4][10] The modular Bayesian approach derives its name from its four-module procedure. In addition to the currently available data, a prior distribution of the unknown parameters should be assigned.

Module 1: Gaussian process modeling for the computer model

To address the lack of simulation results, the computer model is replaced with a Gaussian process (GP) model

$$ y^m(x,\theta) \sim \mathcal{GP}\left( \mathbf{h}^m(\cdot)^{T} \boldsymbol{\beta}^m,\; \sigma_m^2 R^m(\cdot,\cdot) \right) $$

where

$$ R^m\big((x,\theta),(x',\theta')\big) = \exp\left\{ -\sum_{k=1}^{d} \omega_k^m (x_k - x'_k)^2 \right\} \exp\left\{ -\sum_{k=1}^{r} \omega_{d+k}^m (\theta_k - \theta'_k)^2 \right\}. $$

Here d is the dimension of the input variables x, and r is the dimension of the unknown parameters θ. While h^m(·) is pre-defined, the quantities {β^m, σ_m, ω_k^m, k = 1, …, d + r}, known as the hyperparameters of the GP model, need to be estimated via maximum likelihood estimation (MLE). This module can be considered as a generalized kriging method.
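
A hedged sketch of this module using scikit-learn in place of a bespoke implementation: the toy simulator, the design points, and the kernel are illustrative assumptions. Fitting a GaussianProcessRegressor maximizes the marginal likelihood over the kernel hyperparameters, which plays the role of the MLE step described above, although the parameterization differs from the ω_k^m form given here. In practice the design points would typically come from a space-filling design (e.g. a Latin hypercube) over the joint (x, θ) space.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

rng = np.random.default_rng(5)

def simulator(x, theta):          # expensive computer model (toy stand-in)
    return np.sin(3.0 * x) + theta * x**2

# Design points over the joint (x, theta) space.
X = rng.uniform(low=[0.0, 0.5], high=[1.0, 2.0], size=(40, 2))
y = simulator(X[:, 0], X[:, 1])

# GP emulator; kernel hyperparameters are estimated by maximizing the marginal likelihood.
kernel = ConstantKernel(1.0) * RBF(length_scale=[0.2, 0.5])
gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(X, y)

mean, std = gp.predict(np.array([[0.3, 1.2]]), return_std=True)
print("emulator prediction:", mean[0], "+/-", std[0])
```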

Module 2: Gaussian process modeling for the discrepancy function

Similarly to the first module, the discrepancy function is replaced with a GP model

$$ \delta(x) \sim \mathcal{GP}\left( \mathbf{h}^{\delta}(\cdot)^{T} \boldsymbol{\beta}^{\delta},\; \sigma_{\delta}^2 R^{\delta}(\cdot,\cdot) \right) $$

where

$$ R^{\delta}(x, x') = \exp\left\{ -\sum_{k=1}^{d} \omega_k^{\delta} (x_k - x'_k)^2 \right\}. $$

Together with the prior distribution of the unknown parameters, and data from both computer models and experiments, one can derive the maximum likelihood estimates for {β^δ, σ_δ, ω_k^δ, k = 1, …, d}. At the same time, β^m from Module 1 gets updated as well.

Module 3: Posterior distribution of unknown parameters

Bayes' theorem is applied to calculate the posterior distribution of the unknown parameters:

$$ p(\theta \mid \text{data}, \phi) \propto p(\text{data} \mid \theta, \phi)\, p(\theta) $$

where ϕ includes all the fixed hyperparameters in previous modules.
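
A simplified sketch of the Bayes update in this module: a one-dimensional θ, a Gaussian likelihood with known noise, and a grid approximation stand in for the full GP-plus-discrepancy machinery, and all numbers are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(6)

def y_m(x, theta):                          # computer model
    return theta * np.sin(x)

x_obs = np.linspace(0.0, 3.0, 15)
y_e = y_m(x_obs, 1.8) + rng.normal(0.0, 0.2, x_obs.size)   # synthetic experiments
sigma_eps = 0.2                             # assumed known experimental noise

theta_grid = np.linspace(0.0, 4.0, 2001)
log_prior = -0.5 * ((theta_grid - 1.0) / 1.0) ** 2          # N(1, 1) prior on theta

# Gaussian log-likelihood p(data | theta) evaluated on the grid.
resid = y_e[None, :] - y_m(x_obs[None, :], theta_grid[:, None])
log_lik = -0.5 * np.sum((resid / sigma_eps) ** 2, axis=1)

log_post = log_prior + log_lik
post = np.exp(log_post - log_post.max())
post /= np.trapz(post, theta_grid)          # normalized p(theta | data)

print("posterior mean of theta:", np.trapz(theta_grid * post, theta_grid))
```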

Module 4: Prediction of the experimental response and discrepancy function

Fully Bayesian approach

The fully Bayesian approach requires that priors be assigned not only for the unknown parameters θ but also for the other hyperparameters ϕ. It proceeds in the following steps:[11]

  1. Derive the posterior distribution p(θ,ϕ|data);
  2. Integrate ϕ out and obtain p(θ|data). This single step accomplishes the calibration;
  3. Predict the experimental response and discrepancy function.

However, the approach has significant drawbacks:

  • For most cases, p(θ,ϕ|data) is a highly intractable function of ϕ. Hence the integration becomes very troublesome. Moreover, if priors for the other hyperparameters ϕ are not carefully chosen, the complexity in numerical integration increases even more.
  • In the prediction stage, the prediction (which should at least include the expected value of system responses) also requires numerical integration. Markov chain Monte Carlo (MCMC) is often used for integration; however it is computationally expensive.

The fully Bayesian approach requires a huge amount of calculations and may not yet be practical for dealing with the most complicated modelling situations.[11]

Known issues

The theories and methodologies for uncertainty propagation are much better established, compared with inverse uncertainty quantification. For the latter, several difficulties remain unsolved:

  1. Dimensionality issue: The computational cost increases dramatically with the dimensionality of the problem, i.e. the number of input variables and/or the number of unknown parameters.
  2. Identifiability issue:[12] Multiple combinations of unknown parameters and discrepancy function can yield the same experimental prediction. Hence different values of parameters cannot be distinguished/identified.

References

  1. Jerome Sacks, William J. Welch, Toby J. Mitchell and Henry P. Wynn, Design and Analysis of Computer Experiments, Statistical Science, Vol. 4, No. 4 (Nov., 1989), pp. 409-423
  2. Ronald L. Iman, Jon C. Helton, An Investigation of Uncertainty and Sensitivity Analysis Techniques for Computer Models, Risk Analysis, Volume 8, Issue 1, pages 71–90, March 1988, DOI: 10.1111/j.1539-6924.1988.tb01155.x
  3. W.E. Walker, P. Harremoës, J. Rotmans, J.P. van der Sluijs, M.B.A. van Asselt, P. Janssen and M.P. Krayer von Krauss, Defining Uncertainty: A Conceptual Basis for Uncertainty Management in Model-Based Decision Support, Integrated Assessment, Volume 4, Issue 1, 2003, DOI: 10.1076/iaij.4.1.5.16466
  4. Marc C. Kennedy, Anthony O'Hagan, Bayesian calibration of computer models, Journal of the Royal Statistical Society, Series B, Volume 63, Issue 3, pages 425–464, 2001
  5. Armen Der Kiureghian, Ove Ditlevsen, Aleatory or epistemic? Does it matter?, Structural Safety, Volume 31, Issue 2, March 2009, Pages 105–112
  6. Hermann G. Matthies, Quantifying uncertainty: modern computational representation of probability and applications, Extreme Man-Made and Natural Hazards in Dynamics of Structures, NATO Security through Science Series, 2007, 105-135, DOI: 10.1007/978-1-4020-5656-7_4
  7. S. H. Lee and W. Chen, A comparative study of uncertainty propagation methods for black-box-type problems, Structural and Multidisciplinary Optimization Volume 37, Number 3 (2009), 239-253, DOI: 10.1007/s00158-008-0234-7
  8.
  9. Arnaut, L. R. Measurement uncertainty in reverberation chambers - I. Sample statistics. Technical report TQE 2, 2nd. ed., sec. 3.1, National Physical Laboratory, 2008.
  10. Marc C. Kennedy, Anthony O'Hagan, Supplementary Details on Bayesian Calibration of Computer Models, Sheffield, University of Sheffield: 1-13, 2000
  11. F. Liu, M.J. Bayarri and J.O. Berger, Modularization in Bayesian Analysis, with Emphasis on Analysis of Computer Models, Bayesian Analysis (2009) 4, Number 1, pp. 119–150, DOI: 10.1214/09-BA404
  12. Arendt, P., W. Chen, and D. Apley, Improving Identifiability in Model Calibration Using Multiple Responses, DETC2011-48623, ASME International Design Engineering Technical Conferences, August 28–31, Washington, D.C., 2011