Browsing by Subject "Uncertainty Quantification"
Now showing 1 - 7 of 7
Item A Hierarchical History Matching Method and its Applications (2012-02-14) Yin, Jichao

Modern reservoir management typically involves simulations of geological models to predict future recovery estimates, providing the economic assessment of different field development strategies. Integrating reservoir data is a vital step in developing reliable reservoir performance models. Currently, the most effective strategies for traditional manual history matching follow a structured approach with a sequence of adjustments from global to regional parameters, followed by local changes in model properties. In contrast, many recent automatic history matching methods use parameter sensitivities or gradients to directly update the fine-scale reservoir properties, often ignoring geological consistency. There is therefore a need to combine elements of all of these scales in a seamless manner. We present a hierarchical streamline-assisted history matching framework of global-local updates. A probabilistic approach, consisting of design of experiments, response surface methodology, and a genetic algorithm, is used to understand the uncertainty in the large-scale static and dynamic parameters. This global update step is followed by a streamline-based model calibration for high-resolution reservoir heterogeneity. This local update step assimilates dynamic production data. We apply the genetic global calibration to an unconventional shale gas reservoir; specifically, we include the stimulated reservoir volume (SRV) as a constraint term in the data integration to improve history matching and reduce prediction uncertainty. We introduce a novel approach for efficiently computing well drainage volumes for shale gas wells with multistage fractures and fracture clusters, and we filter stochastic shale gas reservoir models by comparing the computed drainage volume with the measured SRV within specified confidence limits.
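The model-filtering step described above can be sketched as a simple acceptance test: keep a stochastic model only if its computed drainage volume falls within a confidence band around the measured SRV. Everything in this sketch (the measured value, the tolerance, and the stand-in drainage-volume calculation) is an illustrative assumption, not a value or method from the thesis.

```python
import random

MEASURED_SRV = 5.0e6      # m^3, assumed measurement
REL_TOLERANCE = 0.15      # assumed +/-15% confidence band

def compute_drainage_volume(model):
    # Stand-in for the streamline-based drainage-volume calculation.
    return model["drainage_volume"]

def passes_srv_filter(model):
    dv = compute_drainage_volume(model)
    return abs(dv - MEASURED_SRV) <= REL_TOLERANCE * MEASURED_SRV

# Hypothetical ensemble of stochastic reservoir models.
random.seed(0)
ensemble = [{"drainage_volume": random.uniform(3e6, 7e6)} for _ in range(100)]
accepted = [m for m in ensemble if passes_srv_filter(m)]
print(f"{len(accepted)} of {len(ensemble)} models pass the SRV filter")
```

In practice the drainage volume would come from the streamline computation over multistage fractures, and the band width would reflect the stated confidence limits on the SRV measurement.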
Finally, we demonstrate the value of integrating downhole temperature measurements as a coarse-scale constraint during streamline-based history matching of dynamic production data. We first derive coarse-scale permeability trends in the reservoir from temperature data. The coarse information is then downscaled into fine-scale permeability by sequential Gaussian simulation with block kriging and updated by local-scale streamline-based history matching. The power and utility of our approaches have been demonstrated using both synthetic and field examples.

Item Adjoint-Based Uncertainty Quantification and Sensitivity Analysis for Reactor Depletion Calculations (2013-08-02) Stripling, Hayes Franklin

Depletion calculations for nuclear reactors model the dynamic coupling between material composition and neutron flux and help predict reactor performance and safety characteristics. To be trusted as reliable predictive tools and inputs to licensing and operational decisions, the simulations must include an accurate and holistic quantification of the errors and uncertainties in their outputs. Uncertainty quantification is a formidable challenge in large, realistic reactor models because of the large number of unknowns and the myriad sources of uncertainty and error. We present a framework for performing efficient uncertainty quantification in depletion problems using an adjoint approach, with emphasis on high-fidelity calculations using advanced massively parallel computing architectures. This approach calls for a solution to two systems of equations: (a) the forward, engineering system that models the reactor, and (b) the adjoint system, which is mathematically related to but different from the forward system. We use the solutions of these systems to produce sensitivity and error estimates at a cost that does not grow rapidly with the number of uncertain inputs.
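The cost argument behind the adjoint approach can be seen in a minimal linear example (this is an invented 2x2 system, not the PDT depletion solver): for a forward model A u = b with scalar quantity of interest J = c.u, a single adjoint solve A^T lam = c yields the sensitivity dJ/db_i = lam_i for every source entry at once, instead of one forward solve per input.

```python
# Toy demonstration of adjoint sensitivities; the matrix, source, and QOI
# weights below are arbitrary assumptions chosen for a 2x2 illustration.

def solve2(A, b):
    # Cramer's rule for a 2x2 linear system.
    det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    return [(b[0] * A[1][1] - b[1] * A[0][1]) / det,
            (A[0][0] * b[1] - A[1][0] * b[0]) / det]

def transpose(A):
    return [[A[0][0], A[1][0]], [A[0][1], A[1][1]]]

A = [[4.0, 1.0], [2.0, 3.0]]
b = [1.0, 2.0]
c = [1.0, 1.0]                      # QOI: J = u0 + u1

u = solve2(A, b)                    # one forward solve
lam = solve2(transpose(A), c)       # one adjoint solve gives ALL dJ/db_i

# Cross-check each adjoint sensitivity against a finite difference.
eps = 1e-6
J0 = sum(ci * ui for ci, ui in zip(c, u))
for i in range(2):
    bp = list(b)
    bp[i] += eps
    fd = (sum(ci * ui for ci, ui in zip(c, solve2(A, bp))) - J0) / eps
    print(f"dJ/db[{i}]: adjoint={lam[i]:.6f}  finite-diff={fd:.6f}")
```

For N uncertain source entries, the finite-difference route needs N extra forward solves; the adjoint route needs one extra solve regardless of N, which is the scaling property the abstract refers to.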
We present the framework in a general fashion and apply it to both the source-driven and k-eigenvalue forms of the depletion equations. We describe the implementation and verification of solvers for the forward and adjoint equations in the PDT code, and we test the algorithms on realistic reactor analysis problems. We demonstrate a new approach for reducing the memory and I/O demands on the host machine, which can be overwhelming for typical adjoint algorithms. Our conclusion is that adjoint depletion calculations using full transport solutions are not only computationally tractable, they are the most attractive option for performing uncertainty quantification on high-fidelity reactor analysis problems.

Item Comparative Deterministic and Probabilistic Modeling in Geotechnics: Applications to Stabilization of Organic Soils, Determination of Unknown Foundations for Bridge Scour, and One-Dimensional Diffusion Processes (2013-08-08) Yousefpour, Negin

This study presents different aspects of the use of deterministic methods, including Artificial Neural Networks (ANNs) and linear and nonlinear regression, as well as probabilistic methods, including Bayesian inference and Monte Carlo methods, to develop reliable solutions for challenging problems in geotechnics. It addresses the theoretical and computational advantages and limitations of these methods in application to: 1) prediction of the stiffness and strength of stabilized organic soils, 2) determination of unknown foundations for bridges vulnerable to scour, and 3) uncertainty quantification for one-dimensional diffusion processes. ANNs were successfully implemented in this study to develop nonlinear models for the mechanical properties of stabilized organic soils. The ANN models were able to learn from the training examples and then generalize the trend to make predictions of the stiffness and strength of stabilized organic soils.
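The learn-then-generalize behavior described for the ANN models can be illustrated with a tiny from-scratch network: one input, one tanh hidden layer, one output, trained by gradient descent. The synthetic curve, network size, and hyperparameters are all assumptions for demonstration; the thesis's actual networks and soil data are of course different.

```python
import math
import random

random.seed(1)
H, LR, EPOCHS = 8, 0.05, 2000
w1 = [random.uniform(-1, 1) for _ in range(H)]   # input -> hidden weights
b1 = [0.0] * H
w2 = [random.uniform(-1, 1) for _ in range(H)]   # hidden -> output weights
b2 = 0.0

# Synthetic stand-in for laboratory stiffness/strength measurements.
xs = [i / 10 for i in range(-10, 11)]
ys = [math.sin(x) for x in xs]

def forward(x):
    h = [math.tanh(w1[j] * x + b1[j]) for j in range(H)]
    return h, sum(w2[j] * h[j] for j in range(H)) + b2

# Plain stochastic gradient descent on squared error.
for _ in range(EPOCHS):
    for x, y in zip(xs, ys):
        h, pred = forward(x)
        err = pred - y
        for j in range(H):
            grad_h = err * w2[j] * (1 - h[j] ** 2)   # backprop through tanh
            w2[j] -= LR * err * h[j]
            w1[j] -= LR * grad_h * x
            b1[j] -= LR * grad_h
        b2 -= LR * err

mse = sum((forward(x)[1] - y) ** 2 for x, y in zip(xs, ys)) / len(xs)
print(f"training MSE after {EPOCHS} epochs: {mse:.5f}")
```

In the thesis setting, the inputs would be mix-design and soil factors and the outputs stiffness or strength, with generalization checked on held-out specimens rather than on the training set as in this toy.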
A stepwise parameter selection and a sensitivity analysis method were implemented to identify the most relevant factors for the prediction of the stiffness and strength. Also, the variations of the stiffness and strength with respect to each factor were investigated. A deterministic and a probabilistic approach were proposed to evaluate the characteristics of unknown foundations of bridges subjected to scour. The proposed methods were implemented and validated using data collected for bridges in the Bryan District. ANN models were developed and trained on the database of bridges to predict foundation type and embedment depth. The probabilistic Bayesian approach generated probability distributions for the foundation and soil characteristics and was able to capture the uncertainty in the predictions. The parametric and numerical uncertainties in the one-dimensional diffusion process were evaluated under varying observation conditions. The inverse problem was solved using Bayesian inference formulated with both the analytical and numerical solutions of the ordinary differential equation of diffusion. The numerical uncertainty was evaluated by comparing the mean and standard deviation of the posterior realizations of the process corresponding to the analytical and numerical solutions of the forward problem. It was shown that higher correlation in the structure of the observations increased both parametric and numerical uncertainties, whereas increasing the number of data points dramatically decreased the uncertainties in the diffusion process.

Item History matching and uncertainty quantification using sampling method (2009-05-15) Ma, Xianlin

Uncertainty quantification involves sampling the reservoir parameters correctly from a posterior probability function that is conditioned to both static and dynamic data.
Rigorous sampling methods such as Markov chain Monte Carlo (MCMC) are known to sample from the correct distribution but can be computationally prohibitive for high-resolution reservoir models, while approximate sampling methods are more efficient but less rigorous for nonlinear inverse problems. There is a need for an approach to uncertainty quantification that is both efficient and rigorous for nonlinear inverse problems. First, we propose a two-stage MCMC approach using sensitivities for quantifying uncertainty in history matching geological models. In the first stage, we compute the acceptance probability for a proposed change in reservoir parameters based on a linearized approximation to flow simulation in a small neighborhood of the previously computed dynamic data. In the second stage, proposals that pass the first-stage criterion are assessed by running full flow simulations to ensure rigor. Second, we propose a two-stage MCMC approach using response surface models for quantifying uncertainty. The formulation allows us to history match three-phase flow simultaneously. The response surface, once built, exists independently of the expensive flow simulation and provides efficient samples for the reservoir simulation and MCMC in the second stage. Third, we propose a two-stage MCMC approach using upscaling and non-parametric regression for quantifying uncertainty. A coarse-grid model acts as a surrogate for the fine-grid model via flow-based upscaling. The response of the coarse-scale model is corrected by error modeling via non-parametric regression to approximate the response of the computationally expensive fine-scale model. Our proposed two-stage sampling approaches are computationally efficient and rigorous, with a significantly higher acceptance rate than traditional MCMC algorithms.
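The common pattern in all three proposals is delayed-acceptance MCMC: screen each proposal with a cheap approximate posterior, and pay for the expensive exact evaluation only when the proposal survives the first stage. The sketch below uses an invented 1-D density in place of a flow simulation; the densities and tuning values are assumptions, but the two-stage acceptance ratio is the standard one.

```python
import math
import random

random.seed(2)

def log_post_exact(m):
    # Stand-in for the expensive full flow simulation.
    return -0.5 * m * m

def log_post_cheap(m):
    # Stand-in for the linearized / surrogate model; deliberately imperfect.
    return -0.5 * m * m + 0.05 * math.sin(5 * m)

m, chain, expensive_calls = 0.0, [], 0
for _ in range(5000):
    prop = m + random.gauss(0, 1.0)
    # Stage 1: accept/reject against the cheap approximation only.
    if math.log(random.random()) < log_post_cheap(prop) - log_post_cheap(m):
        # Stage 2: correct with the exact ratio so the chain still
        # targets the true posterior.
        expensive_calls += 1
        a = (log_post_exact(prop) - log_post_exact(m)
             + log_post_cheap(m) - log_post_cheap(prop))
        if math.log(random.random()) < a:
            m = prop
    chain.append(m)

mean = sum(chain) / len(chain)
print(f"posterior mean ~ {mean:.3f} using {expensive_calls} expensive evaluations")
```

Because rejected stage-1 proposals never trigger an exact evaluation, the number of expensive calls is strictly below the chain length, which is the source of the efficiency gain the abstract claims.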
Finally, we developed a coarsening algorithm to determine an optimal reservoir simulation grid by grouping fine-scale layers in such a way that the heterogeneity measure of a defined static property is minimized within the layers. The optimal number of layers is then selected based on a statistical analysis. The power and utility of our approaches have been demonstrated using both synthetic and field examples.

Item Phoenix: A Reactor Burnup Code With Uncertainty Quantification (2014-12-15) Spence, Grant R

Codes for accurately simulating core composition changes in nuclear reactors have evolved alongside computing technology. The desire to understand neutronics, material compositions, and reactor parameters as a function of time has been, and will continue to be, an area of great interest in nuclear research. Several methods have been developed to simulate reactor burnup; however, quantifying the uncertainty in reactor burnup simulations is in its relative infancy. This research developed a fundamentally different approach to calculating burnup simulation uncertainty using perturbations and regression methods. In this work, a computer software package called PHOENIX was developed that simulates reactor burnup and provides a quantitative prediction of the systematic uncertainty associated with simulation modeling parameters. PHOENIX is a "linkage" code that connects the Monte Carlo N-Particle transport code MCNP6 to the buildup and depletion code ORIGEN-S. A verification and validation analysis was performed on four different reactor configurations using PHOENIX. The validation analysis consisted of two separate components: a code-to-code validation with MONTEBURNS 2.0 and a perturbation validation analysis using two different perturbation methods. Each analysis observed differences in reactor parameters and gram compositions for a selected isotopic suite and compared them to pre-determined validation criteria.
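The perturbation-and-regression idea can be sketched generically (this is not the PHOENIX implementation; the toy "burnup model," the 2% input uncertainty, and the sample count are assumptions): sample perturbed values of an uncertain parameter, fit a linear response of the output to the parameter, and propagate the parameter's standard deviation through the fitted slope.

```python
import random

random.seed(3)
SIGMA_P = 0.02                      # assumed 2% relative parameter uncertainty

def burnup_model(p):
    # Stand-in for an expensive MCNP6/ORIGEN-S burnup calculation.
    return 1.0 + 3.0 * p + 0.1 * p * p

p0 = 1.0                            # nominal parameter value
perturbs = [p0 * (1 + SIGMA_P * random.gauss(0, 1)) for _ in range(50)]
outputs = [burnup_model(p) for p in perturbs]

# Least-squares slope of output vs. parameter (the regression step).
n = len(perturbs)
mp, mo = sum(perturbs) / n, sum(outputs) / n
slope = (sum((p - mp) * (o - mo) for p, o in zip(perturbs, outputs))
         / sum((p - mp) ** 2 for p in perturbs))

# First-order propagation: output sigma = |slope| * input sigma.
sigma_out = abs(slope) * SIGMA_P * p0
print(f"slope ~ {slope:.3f}, propagated output sigma ~ {sigma_out:.4f}")
```

With many uncertain parameters, the same regression is done per parameter (or jointly) and the per-parameter contributions are combined into a total systematic uncertainty.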
For the code-to-code validation component, every reactor configuration simulated in PHOENIX produced reactor parameter values within five percent of the values provided by MONTEBURNS 2.0. A majority of the isotopes simulated in each code also produced gram quantities with differences of less than five percent. Similarly, the perturbation validation analysis confirmed that the simulation parameters produced by PHOENIX using each perturbation method contained differences of less than five percent for a majority of the cases. The outlying instances where a reactor parameter or isotopic composition did not pass the validation criteria are explained in detail. The results from the validation analysis showed that PHOENIX produces valid estimates of reactor core compositions throughout burnup.

Item Quantification of Uncertainties Due to Opacities in a Laser-Driven Radiative-Shock Problem (2013-03-28) Hetzler, Adam C

This research presents new physics-based methods to estimate predictive uncertainty stemming from uncertainty in the material opacities in radiative transfer computations of key quantities of interest (QOIs). New methods are needed because it is infeasible to apply standard uncertainty-propagation techniques to the O(10^5) uncertain opacities in a realistic simulation. The new approach applies the uncertainty analysis to the physical parameters in the underlying model used to calculate the opacities; this set of uncertain parameters is much smaller (O(10^2)) than the number of opacities. To further reduce the dimension of the parameter set to be rigorously explored, we apply additional screening at two levels of the calculational hierarchy: first, physics-based screening eliminates a priori the physical parameters that the underlying physics models show to be unimportant; then, sensitivity analysis in simplified versions of the complex problem of interest screens out parameters that are not important to the QOIs.
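The second screening level can be pictured as a one-at-a-time sensitivity test on a simplified model: perturb each physical parameter, measure the change in the QOI, and keep only parameters whose effect clears a threshold. The toy QOI model, nominal values, perturbation size, and threshold below are all assumptions for illustration.

```python
def qoi(params):
    # Simplified stand-in for the QOI of the radiative-shock problem.
    return 2.0 * params["a"] + 0.01 * params["b"] + 0.5 * params["c"] ** 2

nominal = {"a": 1.0, "b": 1.0, "c": 1.0}
effects = {}
for name in nominal:
    bumped = dict(nominal)
    bumped[name] *= 1.10            # 10% one-at-a-time perturbation
    effects[name] = abs(qoi(bumped) - qoi(nominal))

THRESHOLD = 0.05                    # assumed importance cutoff
kept = sorted((n for n, e in effects.items() if e > THRESHOLD),
              key=lambda n: -effects[n])
print("screened-in parameters:", kept)
```

Only the surviving parameters ("a" and "c" in this toy) would then be carried into the rigorous uncertainty study on the full problem, which is how the dimension reduction described above is realized.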
We employ a Bayesian Multivariate Adaptive Regression Spline (BMARS) emulator for this sensitivity analysis. The high dimension of the input space and the large number of samples test the efficacy of these methods on larger problems. Ultimately, we perform uncertainty quantification on the large, complex problem with the reduced set of parameters. Results of this research demonstrate that the QOIs for the target problems agree for different parameter screening criteria and varying sample sizes. Since the QOIs agree, we have gained confidence in our results across the multiple screening criteria and sample sizes.

Item The Method of Manufactured Universes for Testing Uncertainty Quantification Methods (2011-02-22) Stripling, Hayes Franklin

The Method of Manufactured Universes (MMU) is presented as a validation framework for uncertainty quantification (UQ) methodologies and as a tool for exploring the effects of statistical and modeling assumptions embedded in these methods. The framework calls for a manufactured reality from which "experimental" data are created (possibly with experimental error), an imperfect model (with uncertain inputs) from which simulation results are created (possibly with numerical error), the application of a system for quantifying uncertainties in model predictions, and an assessment of how accurately those uncertainties are quantified. The application presented in this research manufactures a particle-transport "universe," models it using diffusion theory with uncertain material parameters, and applies both Gaussian process and Bayesian MARS algorithms to make quantitative predictions about new "experiments" within the manufactured reality. To test the responses of these UQ methods further, we conduct exercises with "experimental" replicates, "measurement" error, and choices of physical inputs that reduce the accuracy of the diffusion model's approximation of our manufactured laws.
Our first application of MMU was rich in areas for exploration and highly informative. In the case of the Gaussian process code, we found that the fundamental statistical formulation was not appropriate for our functional data, but that the code allows a knowledgeable user to vary parameters within this formulation to tailor its behavior to a specific problem. The Bayesian MARS formulation was a more natural emulator given our manufactured laws, and we used the MMU framework to further develop a calibration method and to characterize the diffusion model discrepancy. Overall, we conclude that an MMU exercise with a properly designed universe (that is, one that adequately represents some real-world problem) will give the modeler an added understanding of the interaction between a given UQ method and his or her more complex problem of interest. The modeler can then apply this added understanding to make more informed predictive statements.
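The four MMU ingredients (manufactured reality, noisy "experiments," imperfect model, assessment of the UQ claim) fit in a few lines. Every law, noise level, and model below is invented for illustration; the point is only the structure of the exercise, not the thesis's particle-transport universe.

```python
import math
import random

random.seed(4)
NOISE = 0.05

def truth(x):
    # Manufactured law: the "universe" we get to know exactly.
    return math.sin(x) + 0.1 * x

def experiment(x):
    # "Measurement" of the universe, with experimental error.
    return truth(x) + random.gauss(0, NOISE)

def model(x, a):
    # Deliberately imperfect model: linear in x.
    return a * x

# Calibrate the model on training "experiments" (least squares through 0).
xs = [0.1 * i for i in range(1, 11)]
ys = [experiment(x) for x in xs]
a_hat = sum(x * y for x, y in zip(xs, ys)) / sum(x * x for x in xs)

# Assessment: does a +/-2*sigma band (sigma from training residuals)
# actually cover fresh experiments at new points?
resid = [y - model(x, a_hat) for x, y in zip(xs, ys)]
sigma = (sum(r * r for r in resid) / len(resid)) ** 0.5
new_xs = [0.05 + 0.1 * i for i in range(10)]
covered = sum(abs(experiment(x) - model(x, a_hat)) <= 2 * sigma
              for x in new_xs)
print(f"a_hat ~ {a_hat:.3f}; {covered}/10 new experiments inside the band")
```

Because the manufactured law is known exactly, any gap between the claimed and observed coverage can be attributed to the UQ method and the model discrepancy rather than to unknowns in the physics, which is the diagnostic leverage MMU provides.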