Methods for uncertainty analysis

Uncertainty analysis can be done in two general ways:

  • quantitatively, by trying to estimate in numerical terms the magnitude of uncertainties in the final results (and if appropriate at key stages in the analysis); and
  • qualitatively, by describing and/or categorising the main uncertainties inherent in the analysis.

While quantitative analysis may generally be considered superior, it suffers from a number of disadvantages.  In particular, not all sources of uncertainty can be quantified with any degree of reliability, especially those related to issue-framing or value-based judgements.  Quantitative measures may therefore bias the description of uncertainty towards the more computational components of the assessment.  Many users are also unfamiliar with the concepts and methods used to quantify uncertainties, making it difficult to communicate the results effectively.  In many cases, therefore, qualitative measures continue to be useful, either alone or in combination with quantitative analysis.

 

Quantitative methods

A range of methods for quantitative analysis of uncertainty have been developed.  These vary in their complexity, but all have the capability to represent uncertainty at each stage in the analysis and to show how uncertainties propagate through the analytical chain.  Techniques include: sensitivity analysis, Taylor Series Approximation (a mathematical approximation), Monte Carlo sampling (a simulation approach) and Bayesian statistical modelling.

  1. Sensitivity analysis.  This assesses how changes in the inputs to a model or analysis affect the results.  It is undertaken by repeatedly rerunning the analysis, incrementally changing the target variables on each occasion, either one at a time or in combination.  It thus allows the relative effects of different parameters to be identified and assessed.  It is especially useful for analysing the assumptions made in assessment scenarios (e.g. by changing the assumed emissions, or to explore best and worst case situations); a simple sketch is given after this list.  The main limitation of sensitivity analysis is that it becomes extremely complex if uncertainties arise through interactions between a large number of variables, because of the large number of permutations that may need to be considered.
  2. Taylor Series Approximation.  This is a mathematical technique to approximate the underlying distribution that characterises uncertainty in a process.  Once such an approximation is found, it is computationally inexpensive to apply, and so is useful when dealing with large and complex models for which more sophisticated methods may be infeasible (see the worked sketch after this list).
  3. Monte Carlo simulation.  This is an iterative process of analysis, which uses repeated samples drawn from probability distributions as the inputs for models.  It thus generates a distribution of outputs which reflects the uncertainties in the models (see the sketch after this list).  Monte Carlo simulation is a very useful technique when the assessment concerns the probability of exceeding a specified (e.g. safe) limit, or where models are highly non-linear, but it can be computationally expensive.
  4. Bayesian statistical modelling.  The preceding approaches all apply to deterministic models.  In practice, many of the parameters in these models have been estimated from data but are then, at least initially, treated as known and fixed entities in the models.  Stochastic models estimate the parameters from the data and fit the model in one step, thereby directly incorporating uncertainty in the parameter estimates due to imperfections in the data.  Bayesian modelling goes one step further and incorporates additional uncertainty in the parameters from other sources (expressed in terms of probability distributions); a minimal sketch follows this list.
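
As a simple illustration, the sketch below performs a one-at-a-time sensitivity analysis in Python.  The toy exposure model, its baseline input values and the +/-20% perturbation range are all assumptions made purely for illustration.

    # One-at-a-time sensitivity analysis: rerun the model repeatedly,
    # perturbing each input in turn, and record how the output responds.
    # The model and its baseline values are illustrative assumptions.

    def exposure(emission_rate, dispersion_factor, intake_rate):
        """Toy exposure model: output proportional to all three inputs."""
        return emission_rate * dispersion_factor * intake_rate

    baseline = {"emission_rate": 100.0, "dispersion_factor": 0.05, "intake_rate": 20.0}
    base_output = exposure(**baseline)

    for name, value in baseline.items():
        for factor in (0.8, 1.2):   # perturb each input by -20% and +20%
            runs = dict(baseline, **{name: value * factor})
            change = (exposure(**runs) - base_output) / base_output
            print(f"{name} x {factor:.1f}: output changes by {change:+.0%}")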
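
The following sketch illustrates the simplest form of Taylor Series Approximation, a first-order ('delta method') propagation in which Var[f(X)] is approximated by f'(mu)^2 * sigma^2.  The toy model and the input mean and standard deviation are invented for illustration.

    # First-order Taylor series ('delta method') propagation: approximate
    # Var[f(X)] by f'(mu)^2 * sigma^2, so only the mean and variance of the
    # input are needed rather than many model runs.  The model f and the
    # input statistics below are illustrative assumptions.

    def f(x):
        """Toy non-linear model, e.g. a saturating dose-response curve."""
        return 50.0 * x / (1.0 + x)

    mu, sigma = 2.0, 0.3                        # mean and sd of the uncertain input
    h = 1e-6
    dfdx = (f(mu + h) - f(mu - h)) / (2 * h)    # numerical derivative of f at mu

    out_sd = abs(dfdx) * sigma                  # first-order output sd
    print(f"output ~ {f(mu):.2f} +/- {out_sd:.2f} (1 sd, first-order approximation)")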
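
A minimal Monte Carlo sketch follows, estimating the probability that a toy model output exceeds a specified limit.  The input distributions, their parameters and the limit value are all illustrative assumptions.

    import math
    import random

    # Monte Carlo simulation: draw the model inputs repeatedly from assumed
    # probability distributions, run the model for each draw, and read the
    # exceedance probability off the resulting output distribution.
    # All distributions and the limit value are illustrative assumptions.

    random.seed(1)
    N = 100_000
    LIMIT = 150.0   # hypothetical "safe" limit on the modelled exposure

    exceedances = 0
    for _ in range(N):
        emission = random.lognormvariate(math.log(100.0), 0.3)   # emission rate
        dispersion = random.uniform(0.03, 0.07)                  # dispersion factor
        intake = random.gauss(20.0, 2.0)                         # intake rate
        if emission * dispersion * intake > LIMIT:
            exceedances += 1

    print(f"P(output > {LIMIT}) ~ {exceedances / N:.3f}")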
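
The sketch below illustrates the Bayesian idea in its simplest conjugate form, a Beta prior on an exceedance probability updated by binomial data.  The prior parameters and the observed counts are invented for illustration, and real assessments would typically use far richer stochastic models.

    # Bayesian updating in the simplest conjugate case: a Beta(a, b) prior on
    # an exceedance probability p is combined with binomial data to give a
    # Beta posterior, so p is carried through the analysis as a distribution
    # rather than a fixed estimate.  Prior and data are illustrative assumptions.

    a, b = 2.0, 8.0     # prior belief: exceedance probability of around 0.2
    k, n = 7, 20        # observed data: 7 exceedances in 20 monitoring periods

    a_post, b_post = a + k, b + (n - k)         # conjugate Beta-binomial update

    post_mean = a_post / (a_post + b_post)
    post_var = a_post * b_post / ((a_post + b_post) ** 2 * (a_post + b_post + 1))
    print(f"posterior: mean p = {post_mean:.2f}, sd = {post_var ** 0.5:.2f}")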

Further details on these quantitative approaches to assessing uncertainty can be found in the Toolkit section of this Toolbox.  A number of computational packages are also listed below, which provide the capability to apply these techniques.

 

Qualitative methods

By their nature, qualitative methods of uncertainty analysis tend to be less formalised than quantitative methods.  On the one hand this is a significant disadvantage, for it means that results are not always easy to compare between different studies or analysts; on the other hand, it makes these approaches far more flexible and adaptable to circumstance.  As a result, qualitative methods can be devised according to need, so that they may be used to evaluate almost any aspect of uncertainty, at any stage in the analysis, in almost any context. 

To be informative, however, qualitative methods must meet a number of criteria:

  1. they must be based on a clear conceptual framework of uncertainty, and on clear criteria;
  2. they must be (at least internally) reproducible - i.e. they would generate the same results when applied to the same information in the same context;
  3. they must be interpretable for the end-users - i.e. they should be expressed in terms that are both familiar and meaningful to the end-user;
  4. they should focus on those uncertainties in the assessment that influence the utility of the results and/or might affect the decisions made as a consequence.

A number of techniques can be used in this context.  A very useful approach is to employ simple, relative measures of uncertainty, expressed in terms of 'the degree of confidence'.  One example of this is the IPCC (2005) system, summarised in Table 1 below.  This can be applied to each step in the assessment (and each of the main models or outcomes), in order to show where within the analysis the main uncertainties lie.  Ideally, estimates of confidence are made not by a single individual, but by a panel of assessors, who are either involved in the assessment or provided with relevant details of the assumptions made and the data sources and procedures used.

Table 1.  The IPCC level of confidence scale

Level of confidence    Degree of confidence in being correct
Very high              At least 9 out of 10 chance
High                   About 8 out of 10 chance
Medium                 About 5 out of 10 chance
Low                    About 2 out of 10 chance
Very low               Less than 1 out of 10 chance

 

Results of using a scorecard such as this can be reported textually.  For the purpose of communication, however, scales of this type can usefully be represented diagrammatically in the form of an 'uncertainty profile' (see example below), showing the level of confidence in the outcomes from each stage in the analysis.  This helps to show how uncertainties may both propagate and dissipate through the analysis, and enables comparison between different pathways, scenarios or assessment methods. 
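
As a minimal illustration of such a profile, the sketch below (in Python) prints one IPCC-style confidence rating per assessment stage; the stage names and ratings are invented, and in practice would come from the panel of assessors described above.

    # Text-based 'uncertainty profile': one IPCC-style confidence rating per
    # assessment stage, drawn as a simple bar so the propagation of
    # uncertainty along the chain can be seen at a glance.
    # The stages and ratings below are invented for illustration.

    SCALE = {"very low": 1, "low": 2, "medium": 3, "high": 4, "very high": 5}

    profile = [                         # hypothetical assessment chain
        ("Emission estimates", "high"),
        ("Dispersion modelling", "medium"),
        ("Exposure assessment", "medium"),
        ("Dose-response function", "low"),
        ("Health impact estimate", "low"),
    ]

    for stage, level in profile:
        bar = "#" * SCALE[level]
        print(f"{stage:<24} {bar:<5} {level}")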


Figure 1.  Example of an uncertainty profile for assessment of health effects of waste management under two policy scenarios

References: 

Monte Carlo simulation

Metropolis, N. and Ulam, S. 1949  The Monte Carlo method.  Journal of the American Statistical Association 44, 335-341.

Sobol, I.M. 1994 A primer for the Monte Carlo method. Boca Raton, FL: CRC Press.

Rubinstein, R. Y. and Kroese, D. P. 2007 Simulation and the Monte Carlo method (2nd ed.). New York: John Wiley & Sons.

MS Excel Tutorial on Monte Carlo Method

 

Bayesian statistical analysis

Gelman, A., Carlin, J.B., Stern, H.S. and Rubin, D.B. 2003 Bayesian data analysis. New York: Chapman and Hall.

Gilks, W.R., Richardson, S. and Spiegelhalter, D.J. (eds) 1996 Markov chain Monte Carlo in practice. New York: Chapman & Hall.

  

Taylor Series Approximation

MacLeod, M., Fraser, A.J. and Mackay, D. 2002 Evaluating and expressing the propagation of uncertainty in chemical fate and bioaccumulation models. Environmental Toxicology and Chemistry 21(4), 700-709.

Morgan, M.G. and Henrion, M. 1990 Uncertainty: a guide to dealing with uncertainty in quantitative risk and policy analysis. Cambridge: Cambridge University Press.

 

Qualitative methods

IPCC 2005 Guidance notes for lead authors of the IPCC fourth assessment report on addressing uncertainties. Intergovernmental Panel on Climate Change.