Sensitivity analysis (SA), and in particular global sensitivity analysis (GSA), is now regarded as a discipline coming of age, used primarily to understand and quantify how model results and the inferences drawn from them depend on a model's parameters and assumptions. Indeed, GSA is seen as a key part of good modelling practice. However, inappropriate SA practice, such as accepting sensitivity metrics that have not sufficiently converged, can lead to untrustworthy results and inferences.
Good-practice SA should also consider the robustness of results and inferences to the choices of methods and assumptions made in the procedure. Moreover, computationally expensive models are common in various fields, including environmental domains, where long runtimes arise from the nature of the model itself and/or from software platform and legacy issues. Extracting accurate information from a computationally expensive model with GSA therefore often demands increased computational efficiency. Primary considerations here are sampling methods that provide efficient yet adequate coverage of the parameter space, and computationally efficient estimation algorithms for the sensitivity indices. An essential part of the procedure is adopting methods that monitor and assess the convergence of the sensitivity metrics.
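For illustration only (this is not drawn from the thesis itself), the minimal Python sketch below estimates first-order Sobol' indices with a standard Saltelli-style pick-freeze construction, uses a scrambled Sobol' sequence from SciPy for quasi-Monte Carlo coverage of the parameter space, and monitors convergence through bootstrap confidence intervals. The Ishigami test function, the sample sizes, and the bootstrap settings are all illustrative assumptions.

```python
# Illustrative sketch (not the thesis's code): first-order Sobol' index
# estimation with quasi-Monte Carlo sampling and bootstrap-based
# convergence monitoring, using the Ishigami test function.
import numpy as np
from scipy.stats import qmc

def ishigami(x, a=7.0, b=0.1):
    # Standard Ishigami test function on [-pi, pi]^3.
    return (np.sin(x[:, 0]) + a * np.sin(x[:, 1])**2
            + b * x[:, 2]**4 * np.sin(x[:, 0]))

d, n = 3, 2**12  # input dimension and base sample size (assumed values)

# Two independent quasi-random sample blocks A and B from one scrambled
# Sobol' sequence of dimension 2*d, rescaled to [-pi, pi].
u = qmc.Sobol(d=2 * d, scramble=True, seed=1).random(n)
A = -np.pi + 2.0 * np.pi * u[:, :d]
B = -np.pi + 2.0 * np.pi * u[:, d:]

fA, fB = ishigami(A), ishigami(B)
var = np.var(np.concatenate([fA, fB]), ddof=1)

for i in range(d):
    # Pick-freeze matrix: A with column i replaced by column i of B.
    ABi = A.copy()
    ABi[:, i] = B[:, i]
    fABi = ishigami(ABi)
    # Saltelli-style first-order estimator S_i = E[fB*(fABi - fA)] / Var(Y).
    Si = np.mean(fB * (fABi - fA)) / var
    # Bootstrap over sample rows to monitor convergence of the estimate.
    idx = np.random.default_rng(i).integers(0, n, size=(500, n))
    boot = np.mean(fB[idx] * (fABi[idx] - fA[idx]), axis=1) / var
    lo, hi = np.percentile(boot, [2.5, 97.5])
    print(f"S_{i+1} = {Si:.3f}  (95% bootstrap CI: [{lo:.3f}, {hi:.3f}])")
```

In such a scheme, the width of the bootstrap interval relative to the index value provides one simple convergence indicator: sampling continues until the interval is acceptably narrow.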
The thesis reviews the different categories of GSA methods and then lays out the various factors, and the choices within them, that can impact the robustness of a GSA exercise. It argues that the overall level of assurance, or practical trustworthiness, of the results obtained stems from considering robustness with respect to the individual choices made for each impact factor. At a minimum, this involves transparent justification of the individual choices made in the GSA exercise; wherever feasible, it should also include assessing how plausible alternative choices would affect the results. Satisfactory convergence contributes substantially to this level of assurance, and hence the ultimate effectiveness of the GSA is enhanced when choices are made to achieve that convergence. The thesis examines several of these impact factors, the primary ones being the GSA method and its estimator, the sampling method, and the convergence-monitoring method, the last of which is essential for ensuring robustness.
The motivation of the thesis is to gain a deeper understanding and quantitative appreciation of the elements that shape the results and computational efficiency of a GSA exercise. This is undertaken through comparative analysis of estimators of GSA sensitivity measures, sampling methods, and error estimation of sensitivity metrics in various settings using well-established test functions. Although quasi-Monte Carlo Sobol' sampling can be a good choice computationally, it suffers from error spikes, which are addressed here through a new Column Shift resampling method. An Active Subspace based GSA method is also explored and demonstrated to be more informative and computationally efficient than approaches based on the variance-based Sobol' method. Given that GSA can be computationally demanding, the thesis explores ways to make GSA more computationally efficient: addressing how convergence can be monitored and assessed, analysing and improving sampling methods that provide a high convergence rate with low error in sensitivity measures, and analysing and comparing GSA methods, including their algorithm settings.
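To indicate how an Active Subspace based sensitivity measure can be formed, the generic sketch below follows the standard formulation of Constantine: estimate the matrix C = E[∇f ∇fᵀ] from gradient samples, eigendecompose it, and rank inputs via activity scores. This is not the thesis's specific implementation; the Ishigami function, its analytical gradient, the sample size, and the chosen subspace dimension are all illustrative assumptions.

```python
# Generic Active Subspace sketch (illustrative assumptions throughout):
# estimate C = E[grad f grad f^T] by Monte Carlo, eigendecompose it, and
# rank inputs by activity scores, here for the Ishigami test function.
import numpy as np

def ishigami_grad(x, a=7.0, b=0.1):
    # Analytical gradient of the Ishigami function.
    g = np.empty_like(x)
    g[:, 0] = np.cos(x[:, 0]) * (1.0 + b * x[:, 2]**4)
    g[:, 1] = 2.0 * a * np.sin(x[:, 1]) * np.cos(x[:, 1])
    g[:, 2] = 4.0 * b * x[:, 2]**3 * np.sin(x[:, 0])
    return g

rng = np.random.default_rng(0)
X = rng.uniform(-np.pi, np.pi, size=(5000, 3))  # assumed sample size
G = ishigami_grad(X)

# C approximated as (1/M) * sum_j grad f(x_j) grad f(x_j)^T.
C = G.T @ G / G.shape[0]
eigvals, eigvecs = np.linalg.eigh(C)
eigvals, eigvecs = eigvals[::-1], eigvecs[:, ::-1]  # sort descending

# Activity scores: alpha_i = sum over the leading k eigenpairs of
# lambda_j * w_{ij}^2, with k normally chosen from the eigenvalue gap
# (k = 2 is an assumed value here).
k = 2
activity = (eigvecs[:, :k]**2) @ eigvals[:k]
print("eigenvalues:", np.round(eigvals, 3))
print("normalised activity scores:", np.round(activity / activity.sum(), 3))
```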