# Browsing by Subject "Uncertainty quantification"

Now showing 1 - 16 of 16



## Bayesian learning methods for potential energy parameter inference in coarse-grained models of atomistic systems (2015-05)

Wright, Eric Thomas; Moser, Robert deLancey; Rossky, Peter J.; Demkowicz, Leszek; Oden, J. T.; Elber, Ron; Prudhomme, Serge

The present work addresses issues related to the derivation of reduced models of atomistic systems, their statistical calibration, and their relation to atomistic models of materials. The reduced model, known in the chemical physics community as a coarse-grained (CG) model, is calibrated within a Bayesian framework. Particular attention is given to developing likelihood functions, assigning priors on coarse-grained model parameters, and using data from molecular dynamics representations of atomistic systems to calibrate coarse-grained models such that certain physically relevant atomistic observables are accurately reproduced. The developed Bayesian framework is then applied in three case studies of increasing complexity and practical application. A freely jointed chain model is considered first for illustrative purposes. The next example entails the construction of a coarse-grained model for a liquid heptane system, with the explicit design goal of accurately predicting a vapor-liquid transfer free energy. Finally, a coarse-grained model is developed for an alkylthiophene polymer that has been shown to have practical use in certain types of photovoltaic cells. The development therein employs Bayesian decision theory to select an optimal CG potential energy function. Subsequently, this model is subjected to validation tests in a prediction scenario that is relevant to the performance of a polyalkylthiophene-based solar cell.

## A computational framework for the solution of infinite-dimensional Bayesian statistical inverse problems with application to global seismic inversion (2015-08)

Martin, James Robert, Ph.D.; Ghattas, Omar N.; Biros, George; Demkowicz, Leszek; Fomel, Sergey; Marzouk, Youssef; Moser, Robert

Quantifying uncertainties in large-scale forward and inverse PDE simulations has emerged as a central challenge facing the field of computational science and engineering. The promise of modeling and simulation for prediction, design, and control cannot be fully realized unless uncertainties in models are rigorously quantified, since this uncertainty can potentially overwhelm the computed result. While statistical inverse problems can be solved today for smaller models with a handful of uncertain parameters, this task is computationally intractable using contemporary algorithms for complex systems characterized by large-scale simulations and high-dimensional parameter spaces. In this dissertation, I address issues regarding the theoretical formulation, numerical approximation, and algorithms for the solution of infinite-dimensional Bayesian statistical inverse problems, and apply the entire framework to a problem in global seismic wave propagation. Classical (deterministic) approaches to solving inverse problems attempt to recover the "best-fit" parameters that match given observation data, as measured in a particular metric. The statistical inverse problem goes one step further, returning not only a point estimate of the best medium properties but also a complete statistical description of the uncertain parameters. The result is a posterior probability distribution that describes our state of knowledge after learning from the available data, and provides a complete description of parameter uncertainty. In this dissertation, a computational framework for such problems is described that wraps around existing forward solvers for a given physical problem, provided they are appropriately equipped.
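The Bayesian inverse-problem machinery described in this abstract can be illustrated with a minimal linear-Gaussian sketch (the forward operator, dimensions, and noise level below are hypothetical stand-ins, not anything from the dissertation): when the forward map is linear and the prior and noise are Gaussian, the posterior is Gaussian with closed-form mean and covariance.

```python
import numpy as np

# Toy linear forward model G, Gaussian prior and noise (illustrative only).
rng = np.random.default_rng(0)
n, d = 5, 3                      # observations, parameters
G = rng.standard_normal((n, d))  # forward operator
m0 = np.zeros(d)                 # prior mean
C0 = np.eye(d)                   # prior covariance
noise = 0.1
Gamma = noise**2 * np.eye(n)     # noise covariance

m_true = np.array([1.0, -0.5, 2.0])
y = G @ m_true + noise * rng.standard_normal(n)

# Posterior of a linear-Gaussian Bayesian inverse problem (closed form):
#   C_post = (G^T Gamma^{-1} G + C0^{-1})^{-1}
#   m_post = C_post (G^T Gamma^{-1} y + C0^{-1} m0)
Gw = G.T @ np.linalg.inv(Gamma)
C_post = np.linalg.inv(Gw @ G + np.linalg.inv(C0))
m_post = C_post @ (Gw @ y + np.linalg.inv(C0) @ m0)

print(m_post)           # point estimate (posterior mean)
print(np.diag(C_post))  # marginal posterior variances
```

The posterior mean plays the role of the deterministic "best-fit" estimate, while the posterior covariance is the extra information that the statistical formulation provides.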
Then a collection of tools, insights, and numerical methods may be applied to solve the problem and to interrogate the resulting posterior distribution, which describes our final state of knowledge. We demonstrate the framework with numerical examples, including inference of a heterogeneous compressional wavespeed field for a problem in global seismic wave propagation with 10⁶ parameters.

## Coupled flow systems, adjoint techniques and uncertainty quantification (2012-08)

Garg, Vikram Vinod, 1985-; Carey, Graham F.; Prudhomme, Serge M.; Dawson, Clint N.; Gamba, Irene; Ghattas, Omar; Oden, J. Tinsley; Carey, Varis

Coupled systems are ubiquitous in modern engineering and science. Such systems can encompass fluid dynamics, structural mechanics, chemical species transport, and electrostatic effects, among other components, all of which can be coupled in many different ways. In addition, such models are usually multiscale, making their numerical simulation challenging and necessitating the use of adaptive modeling techniques. The multiscale, multiphysics models of electroosmotic flow (EOF) constitute a particularly challenging coupled flow system. A special feature of such models is that the coupling between the electric physics and the hydrodynamics occurs via the boundary. Numerical simulations of coupled systems are typically targeted towards specific Quantities of Interest (QoIs). Adjoint-based approaches offer the possibility of QoI-targeted adaptive mesh refinement and efficient parameter sensitivity analysis. The formulation of appropriate adjoint problems for EOF models is particularly challenging because the physics are coupled via the boundary rather than the interior of the domain. The well-posedness of the adjoint problem for such models is also non-trivial.
One contribution of this dissertation is the derivation of an appropriate adjoint problem for slip EOF models, and the development of penalty-based, adjoint-consistent variational formulations of these models. We demonstrate the use of these formulations in the simulation of EOF in straight and T-shaped microchannels, in conjunction with goal-oriented mesh refinement and adjoint sensitivity analysis. Complex computational models may exhibit uncertain behavior for various reasons, ranging from uncertainty in experimentally measured model parameters to imperfections in device geometry. The last decade has seen growing interest in the field of Uncertainty Quantification (UQ), which seeks to determine the effect of input uncertainties on the system QoIs. Monte Carlo methods remain a popular computational approach for UQ due to their ease of use and "embarrassingly parallel" nature. However, a major drawback of such methods is their slow convergence rate. The second contribution of this work is the introduction of a new Monte Carlo method that uses local sensitivity information to build accurate surrogate models. This new method, called the Local Sensitivity Derivative Enhanced Monte Carlo (LSDEMC) method, can converge at a faster rate than plain Monte Carlo, especially for problems with a low to moderate number of uncertain parameters. Adjoint-based sensitivity analysis methods enable the computation of sensitivity derivatives at virtually no extra cost after the forward solve. Thus, the LSDEMC method, in conjunction with adjoint sensitivity derivative techniques, can offer a robust and efficient alternative for UQ of complex systems. The efficiency of Monte Carlo methods can be further enhanced by using stratified sampling schemes such as Latin Hypercube Sampling (LHS). However, the non-incremental nature of LHS has been identified as one of the main obstacles to its application to certain classes of complex physical systems.
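The idea behind sensitivity-derivative-enhanced Monte Carlo can be sketched in a toy setting (the QoI, its gradient, and the sample sizes below are illustrative stand-ins, not the LSDEMC algorithm itself): each expensive sample, together with its adjoint-computed gradient, defines a local first-order surrogate that many cheap samples can then query.

```python
import numpy as np

rng = np.random.default_rng(1)

def qoi(x):
    """Expensive model QoI (toy stand-in)."""
    return np.sin(x[0]) + x[1] ** 2

def qoi_grad(x):
    """Gradient of the QoI, as an adjoint solve would provide it."""
    return np.array([np.cos(x[0]), 2.0 * x[1]])

# A modest number of "expensive" solves, each with a gradient.
X = rng.normal(size=(200, 2))
F = np.array([qoi(x) for x in X])
dF = np.array([qoi_grad(x) for x in X])

# Many cheap samples, each evaluated on the first-order Taylor surrogate
# anchored at its nearest expensive sample.
Z = rng.normal(size=(5000, 2))
nearest = np.argmin(((Z[:, None, :] - X[None, :, :]) ** 2).sum(-1), axis=1)
surrogate = F[nearest] + np.einsum("ij,ij->i", dF[nearest], Z - X[nearest])

est = surrogate.mean()  # estimate of E[QoI]; the exact value here is 0 + 1 = 1
print(est)
```

Here 200 model evaluations with gradients support 5,000 surrogate evaluations of the mean; the surrogate bias shrinks as the set of expensive anchor points grows.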
Current incremental LHS strategies require the user to at least double the size of an existing LHS set in order to retain the convergence properties of LHS. The third contribution of this research is the development of a new Hierarchical LHS algorithm that creates designs which can be used to perform LHS studies in a more flexible, incremental setting, taking a step towards adaptive LHS methods.

## Error analysis for radiation transport (2013-12)

Tencer, John Thomas; Howell, John R.

All relevant sources of error in the numerical solution of the radiative transport equation are considered. Common spatial discretization methods are discussed for completeness; the application of these methods to the radiative transport equation is not substantially different than for any other partial differential equation. Several of the most prevalent angular approximations within the heat transfer community are implemented and compared. Three model problems are proposed, and the relative accuracy of each angular approximation is assessed over a range of optical thicknesses and scattering albedos. The model problems represent a range of application spaces. The quantified comparison of these approximations on the basis of accuracy over such a wide parameter space is one of the contributions of this work. The major original contribution of this work involves the treatment of errors associated with the energy dependence of intensity. The full spectrum correlated-k distribution (FSK) method has received recent attention as a good compromise between computational expense and accuracy. Two approaches are taken towards quantifying the error associated with the FSK method. The Multi-Source Full Spectrum k-Distribution (MSFSK) method makes use of the convenient property that the FSK method is exact for homogeneous media. It involves a line-by-line solution on a coarse grid and a number of k-distribution solutions on subdomains to effectively increase the grid resolution.
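The Latin hypercube construction, and the doubling-style refinement that current incremental strategies require, can be sketched as follows (hypothetical helper functions; this is not the Hierarchical LHS algorithm described above):

```python
import numpy as np

rng = np.random.default_rng(2)

def lhs(n, d):
    """Basic Latin hypercube sample: one point per stratum in each dimension."""
    perm = np.column_stack([rng.permutation(n) for _ in range(d)])
    return (perm + rng.uniform(size=(n, d))) / n

def double_lhs(X):
    """Refine an n-point LHS to 2n points. Each stratum [i/n, (i+1)/n) splits
    in two; a new point goes in whichever half-stratum is still empty."""
    n, d = X.shape
    cells = np.floor(X * 2 * n).astype(int)  # occupied strata at level 2n
    new = np.empty_like(X)
    for j in range(d):
        empty = np.setdiff1d(np.arange(2 * n), cells[:, j])
        rng.shuffle(empty)                   # random pairing across dimensions
        new[:, j] = (empty + rng.uniform(size=n)) / (2 * n)
    return np.vstack([X, new])

X = lhs(8, 2)
X2 = double_lhs(X)   # 16 points, still one per 1/16 stratum in each dimension
print(X2.shape)
```

The refinement must double the design so that every dimension keeps exactly one point per stratum; that rigidity is the inflexibility a hierarchical approach aims to relax.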
This yields highly accurate solutions on fine grids and a known rate of convergence as the number of subdomains increases. The stochastic full spectrum k-distribution (SFSK) method is a more general approach to estimating the error in k-distribution solutions. The FSK method relies on a spectral reordering and scaling that greatly simplify the spectral dependence of the absorption coefficient. This reordering is not necessarily consistent across the entire domain, which results in errors. The SFSK method treats the absorption-line blackbody distribution function not as deterministic but as a stochastic process. The mean, covariance, and correlation structure are all fit empirically to data from a high-resolution spectral database. The standard deviation of the heat flux prediction is found to be a good error estimator for the k-distribution method.

## Hessian-based response surface approximations for uncertainty quantification in large-scale statistical inverse problems, with applications to groundwater flow (2013-08)

Flath, Hannah Pearl; Ghattas, Omar N.

Subsurface flow phenomena characterize many important societal issues in energy and the environment. A key feature of these problems is that subsurface properties are uncertain, due to the sparsity of direct observations of the subsurface. The Bayesian formulation of this inverse problem provides a systematic framework for inferring uncertainty in the properties given uncertainties in the data, the forward model, and prior knowledge of the properties. We address the following problem: given noisy measurements of the head, the pdf describing the noise, prior information in the form of a pdf of the hydraulic conductivity, and a groundwater flow model relating the head to the hydraulic conductivity, find the posterior probability density function (pdf) of the parameters describing the hydraulic conductivity field.
Unfortunately, conventional sampling of this pdf to compute statistical moments is intractable for problems governed by large-scale forward models and high-dimensional parameter spaces. We construct a Gaussian process surrogate of the posterior pdf based on Bayesian interpolation between a set of "training" points. We employ a greedy algorithm to find the training points by solving a sequence of optimization problems in which each new training point is placed at the maximizer of the error in the approximation; scalable Newton optimization methods solve this "optimal" training point problem. We tailor the Gaussian process surrogate to the curvature of the underlying posterior pdf according to the Hessian of the log posterior at a subset of training points, made computationally tractable by a low-rank approximation of the data misfit Hessian. A Gaussian mixture approximation of the posterior is extracted from the Gaussian process surrogate and used as a proposal in a Markov chain Monte Carlo method for sampling both the surrogate and the true posterior. The Gaussian process surrogate is also used as a first-stage approximation in a two-stage delayed-acceptance MCMC method. We provide evidence for the viability of the low-rank approximation of the Hessian through numerical experiments on a large-scale atmospheric contaminant transport problem and analysis of an infinite-dimensional model problem. We provide similar results for our groundwater problem, and then present results from the proposed MCMC algorithms.

## Modeling and uncertainty quantification of non-contact scanning thermal microscopy (2016-05)

Huang, Yu, M.S. in Engineering; Shi, Li, Ph.D.; Murthy, Jayathi

Since its introduction, Scanning Thermal Microscopy (SThM) has been widely used to measure surface temperature and thermal properties of nano-scale materials and structures with high spatial resolution.
However, a discrepancy exists between the temperature read by the SThM probe and the actual temperature of the measured sample. In addition, the temperature of the measured sample can be affected by the presence of the SThM probe. In this thesis work, we used ANSYS Fluent to develop an SThM model that establishes a calibration between the temperature read by the probe and the actual temperature at the measurement location. The effects of the probe on the sample temperature are also quantified. We use Bayesian inference to calibrate the unknown thermal conductivities of the polymer substrate. The model is validated by comparing its predictions with experimental observations. We also quantify the uncertainties in the Quantity of Interest (QoI), the probe tip temperature, due to uncertainty in the simulation input parameters. This is accomplished using a generalized polynomial chaos (gPC) formalism. A response surface relating the QoI to the model inputs is constructed through stochastic collocation, with a Smolyak sparse grid used to reduce the computational expense. The response surface is sampled according to the PDFs of the input parameters to obtain the PDF of the QoI. We find that the uncertainties in the cross-plane thermal conductivity of the liquid polymer and in the diameter of the probe tip contribute most to the overall uncertainty in the QoI.

## Multiscale Simulation and Uncertainty Quantification Techniques for Richards' Equation in Heterogeneous Media (2012-10-19)

Kang, Seul Ki

In this dissertation, we develop multiscale finite element methods and an uncertainty quantification technique for Richards' equation, a mathematical model describing fluid flow in unsaturated porous media. Both coarse-level and fine-level numerical computation techniques are presented.
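The propagation step in the SThM study above, building a response surface and sampling it under the input PDFs, can be sketched as follows (the model, parameter ranges, and distributions are hypothetical; a real gPC study would use orthogonal polynomial bases on a Smolyak sparse grid rather than this plain least-squares fit):

```python
import numpy as np

rng = np.random.default_rng(4)

def model(k, d):
    """Toy stand-in for the expensive simulation: QoI as a smooth function of
    a conductivity k and a tip diameter d (hypothetical form and units)."""
    return 300.0 + 5.0 * k * d + 2.0 * k**2

# Build a polynomial response surface from a few model runs.
K, D = np.meshgrid(np.linspace(0.5, 1.5, 5), np.linspace(0.8, 1.2, 5))
pts = np.column_stack([K.ravel(), D.ravel()])
vals = np.array([model(k, d) for k, d in pts])
basis = lambda x: np.column_stack(
    [np.ones(len(x)), x[:, 0], x[:, 1], x[:, 0] * x[:, 1], x[:, 0]**2, x[:, 1]**2])
coef, *_ = np.linalg.lstsq(basis(pts), vals, rcond=None)

# Propagate the input PDFs through the cheap surrogate to get the QoI PDF.
samples = np.column_stack([rng.normal(1.0, 0.1, 50000),    # conductivity k
                           rng.normal(1.0, 0.05, 50000)])  # tip diameter d
qoi = basis(samples) @ coef
print(qoi.mean(), qoi.std())
```

Because every surrogate evaluation is a dot product, the QoI PDF and sensitivity rankings come essentially for free once the few collocation-style model runs are done.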
To develop an accurate coarse-scale numerical method, we need to construct an effective multiscale map that captures the multiscale features of the large-scale solution without resolving the small-scale details. With a careful choice of the coarse spaces for multiscale finite element methods, we can significantly reduce errors. We introduce several methods to construct coarse spaces for multiscale finite element methods, including a coarse space based on local spectral problems. The construction of coarse spaces begins with an initial choice of multiscale basis functions supported in coarse regions. These basis functions are complemented using weighted local spectral eigenfunctions. The newly constructed basis functions can capture the small-scale features of the solution within a coarse-grid block and give us an accurate coarse-scale solution. However, it is expensive to compute the local basis functions for each parameter value of a nonlinear equation. To overcome this difficulty, a local reduced basis method is discussed, which provides smaller-dimensional spaces in which to compute the basis functions. Robust solution techniques for Richards' equation at the fine scale are also discussed. We construct iterative solvers for Richards' equation whose number of iterations is independent of the contrast. We employ two-level domain decomposition preconditioners to solve the linear systems arising in the approximation of problems with high contrast. We show that, by using the local spectral coarse space for the preconditioners, the number of iterations for these solvers is independent of the physical properties of the media. Several numerical experiments are given to support the theoretical results. Last, we present numerical methods for uncertainty quantification applications of Richards' equation. Numerical methods combined with stochastic solution techniques are proposed to sample conductivities of porous media given integrated data.
Our proposed algorithm is based on upscaling techniques and the Markov chain Monte Carlo method. Sampling results are presented to demonstrate the efficiency and accuracy of our algorithm.

## On goal-oriented error estimation and adaptivity for nonlinear systems with uncertain data and application to flow problems (2014-12)

Bryant, Corey Michael; Prudhomme, Serge M.; Dawson, Clinton N.

The objective of this work is to develop a posteriori error estimates and adaptive strategies for the numerical solution of nonlinear systems of partial differential equations with uncertain data. Areas of application cover problems in fluid mechanics, including a Bayesian model selection study of turbulence comparing different uncertainty models. Accounting for uncertainties in model parameters may significantly increase the computational time when simulating complex problems. The premise is that using error estimates and adaptively refining the solution process can reduce the cost of such simulations while preserving their accuracy within some tolerance. New insights for goal-oriented error estimation for deterministic nonlinear problems are first presented. Linearization of the adjoint problems and quantities of interest introduces higher-order terms in the error representation that are generally neglected; their effects on goal-oriented adaptive strategies are investigated in detail here. Contributions on that subject include extensions of well-known theoretical results for linear problems to the nonlinear setting, computational studies in support of these results, and an extensive comparative study of goal-oriented adaptive schemes that do, and do not, include the higher-order terms. Approaches for goal-oriented error estimation for PDEs with uncertain coefficients have already been presented, but they lack the capability of distinguishing between the different sources of error.
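The adjoint-weighted-residual principle at the core of goal-oriented error estimation can be sketched on a linear model problem (toy matrices and sizes; for nonlinear problems, linearizing the adjoint introduces exactly the higher-order terms discussed above):

```python
import numpy as np

rng = np.random.default_rng(5)

# Linear model problem A u = f with QoI q(u) = g.u (toy stand-in for a PDE).
n = 50
A = np.eye(n) * 2 - np.eye(n, k=1) - np.eye(n, k=-1)  # 1D Laplacian stencil
f = rng.uniform(size=n)
g = rng.uniform(size=n)

u = np.linalg.solve(A, f)                  # "exact" solution
u_h = u + 1e-3 * rng.standard_normal(n)    # stand-in for a coarse approximation

# Goal-oriented estimate: solve the adjoint A^T z = g, weight the residual.
z = np.linalg.solve(A.T, g)
eta = z @ (f - A @ u_h)                    # estimated QoI error
exact = g @ u - g @ u_h                    # true QoI error

print(eta, exact)                          # equal up to round-off
```

For a linear problem and linear QoI the estimate is exact, which is the baseline from which the nonlinear extensions and higher-order corrections start.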
A novel approach is proposed here that decomposes the error estimate into contributions from the physical discretization and from the uncertainty approximation. Theoretical bounds are proven, and numerical examples are presented to verify that the approach identifies the predominant source of error in a surrogate model. Adaptive strategies that use this error decomposition to refine the approximation space accordingly are designed and tested. All methodologies are demonstrated on benchmark flow problems: the Stokes lid-driven cavity, the 1D Burgers equation, and 2D incompressible flows at low Reynolds numbers. The procedure is also applied to an uncertainty quantification study of RANS turbulence models in channel flows. Adaptive surrogate models are constructed to make parameter uncertainty propagation more efficient. Using surrogate models and adaptivity in a Bayesian model selection procedure, it is shown that significant computational savings can be gained over the full RANS model while maintaining similar accuracy in the predictions.

## On the representation of model inadequacy: a stochastic operator approach (2016-05)

Morrison, Rebecca Elizabeth; Moser, Robert deLancey; Oden, John Tinsley; Ghattas, Omar; Henkelman, Graeme; Oliver, Todd A.; Simmons, Christopher S.

Mathematical models of physical systems are subject to many sources of uncertainty, such as measurement errors and uncertain initial and boundary conditions. After accounting for these uncertainties, it is often revealed that some discrepancy remains between the model output and the observations; if so, the model is said to be inadequate. In practice, the inadequate model may be the best that is available or tractable, and so despite its inadequacy the model may be used to make predictions of unobserved quantities. In this case, a representation of the inadequacy is necessary, so that the impact of the observed discrepancy can be determined.
We investigate this problem in the context of chemical kinetics and propose a new technique to account for model inadequacy that is both probabilistic and physically meaningful. Chemical reactions are generally modeled by a set of nonlinear ordinary differential equations (ODEs) for the concentrations of the species and the temperature. In this work, a stochastic inadequacy operator S is introduced which comprises three parts. The first is a random matrix embedded within the ODEs for the concentrations; the matrix is required to satisfy several physical constraints, and its most general form exhibits useful properties, such as having only non-positive eigenvalues. The second is a smaller but specific set of nonlinear terms that also modifies the species' concentrations, and the third is an operator that properly accounts for the changes to the energy equation induced by the first two. The entries of S are governed by probability distributions, which in turn are characterized by a set of hyperparameters. The model parameters and hyperparameters are calibrated using high-dimensional hierarchical Bayesian inference, with data from a range of initial conditions. This allows the inadequacy operator to be used across a wide range of scenarios, rather than correcting a particular realization of the model with a corresponding data set. We apply the method to typical problems in chemical kinetics, including the reaction mechanisms of hydrogen and methane combustion. We also study how the inadequacy representation affects an unobserved quantity of interest: the flame speed of a one-dimensional laminar hydrogen flame.

## Parametric uncertainty and sensitivity methods for reacting flows (2014-05)

Braman, Kalen Elvin; Raman, Venkat

A Bayesian framework for quantification of uncertainties has been used to quantify the uncertainty introduced by chemistry models.
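One way to realize the non-positive-eigenvalue property required of the random matrix part of the inadequacy operator is sketched below (an illustrative construction only; the dissertation's operator satisfies additional physical constraints, such as conservation properties, that are not enforced here):

```python
import numpy as np

rng = np.random.default_rng(6)

# Draw a random matrix guaranteed to have only non-positive eigenvalues:
# S = -L L^T is symmetric negative semidefinite for any real L.
n = 4
L = rng.standard_normal((n, n))
S = -L @ L.T

eigvals = np.linalg.eigvalsh(S)
print(eigvals)   # all <= 0 (up to round-off)

# Embedded in linearized kinetics dc/dt = (K + S) c, such a perturbation can
# only add decay to the perturbed modes, never spurious exponential growth.
```

This is why the eigenvalue constraint matters physically: a stochastic correction to reaction kinetics should not be able to create energy or mass out of nothing.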
This framework adopts a probabilistic view to describe the state of knowledge of the chemistry model parameters and simulation results. Given experimental data, the method updates the model parameters' values and uncertainties and propagates that parametric uncertainty into simulations. This study focuses on syngas, a mixture of H2 and CO in various ratios that is the product of coal gasification. Coal gasification promises to reduce emissions by replacing the burning of coal with the less polluting burning of syngas. Despite the simplicity of syngas chemistry models, they nonetheless fail to accurately predict burning rates at high pressure. Three syngas models have been calibrated using laminar flame speed measurements. After calibration, the resulting parameter uncertainty is propagated forward into simulations of laminar flame speeds, and the model evidence is used to compare the candidate models. Sensitivity studies, in addition to Bayesian methods, can be used to assess chemistry models; they provide a measure of how responsive target quantities of interest (QoIs) are to changes in the parameters. The adjoint equations have been derived for laminar, incompressible, variable-density reacting flow and applied to hydrogen flame simulations. From the adjoint solution, the sensitivity of the QoI to the chemistry model parameters is calculated. The results identify the most sensitive parameters for flame tip temperature and NOx emission; such information can guide the design of new experiments by pointing out the critical chemistry model parameters. Finally, a broader goal for chemistry model development is set through the adjoint methodology. A new quantity, termed field sensitivity, is introduced; it describes how information about perturbations in flowfields propagates to specified QoIs.
The field sensitivity, shown mathematically to be equivalent to the adjoint of the primal governing equations, is obtained for laminar hydrogen flame simulations using three different chemistry models. The results show that even when the primal solutions are sufficiently close for the three mechanisms, the field sensitivity can vary.

## Predicting multibody assembly of proteins (2014-08)

Rasheed, Md. Muhibur; Bajaj, Chandrajit

This thesis addresses the multi-body assembly (MBA) problem in the context of protein assemblies. [...] In this thesis, we chose the protein assembly domain because accurate and reliable computational modeling, simulation, and prediction of such assemblies would clearly accelerate discoveries in understanding the complexities of metabolic pathways, identifying the molecular basis of normal health and disease, and designing new drugs and other therapeutics. [...] [We developed] F²Dock (Fast Fourier Docking), which uses a multi-term scoring function that includes both a statistical-thermodynamic approximation of molecular free energy and several knowledge-based terms. The parameters of the scoring model were learned from a large set of positive/negative examples, and when tested on 176 protein complexes of various types, the model showed excellent accuracy in ranking correct configurations higher (F²Dock ranks the correct solution as the top-ranked one in 22 of the 176 cases, which is better than other unsupervised prediction software on the same benchmark). Most protein-protein interaction scoring terms can be expressed as integrals, over the occupied volume, the boundary, or a set of discrete points (atom locations), of distance-dependent decaying kernels.
We developed a dynamic adaptive grid (DAG) data structure which computes smooth surface and volumetric representations of a protein complex in O(m log m) time, where m is the number of atoms, assuming that the smallest feature size h is Θ(r_max), where r_max is the radius of the largest atom; supports updates in O(log m) time; and uses O(m) memory. We also developed the dynamic packing grids (DPG) data structure, which supports quasi-constant-time updates (O(log w)) and spherical neighborhood queries (O(log log w)), where w is the word size of the RAM. Together, DPG and DAG yield O(k)-time approximation of the scoring terms, where k << m is the size of the contact region between proteins. [...] [W]e consider the symmetric spherical shell assembly case, where multiple copies of identical proteins tile the surface of a sphere. Though this is a restricted subclass of MBA, it is an important one, since it would accelerate the development of drugs and antibodies that prevent viruses from forming capsids, which have such spherical symmetry in nature. We proved that it is possible to characterize the space of possible symmetric spherical layouts using a small number of representative local arrangements (called tiles) and their global configurations (tilings). We further show that the tilings, and the mapping of proteins to tilings on arbitrarily sized shells, are parameterized by 3 discrete parameters and 6 continuous degrees of freedom, and that the 3 discrete DOF can be restricted to a constant number of cases if the size of the shell is known (in terms of the number of proteins n). We also consider the case where a coarse model of the whole protein complex is available. We show that even when such coarse models do not resolve atomic positions, they can be sufficient to identify a general location for each protein and its neighbors, and thereby restrict the configurational space.
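The kind of grid-based spherical neighborhood query that DPG supports can be sketched with a uniform cell hash (a simplified stand-in; the actual DPG structure attains its stated O(log log w) query bound with more sophisticated machinery):

```python
import numpy as np
from collections import defaultdict

rng = np.random.default_rng(8)

r = 1.0                                   # query radius ~ interaction range
atoms = rng.uniform(0, 10, size=(2000, 3))

# Hash each atom into a cubic cell of side r. Any neighbor within distance r
# of a query point must lie in the query's cell or one of its 26 adjacent cells.
grid = defaultdict(list)
for i, pos in enumerate(atoms):
    grid[tuple((pos // r).astype(int))].append(i)

def neighbors(q):
    cx, cy, cz = (q // r).astype(int)
    out = []
    for dx in (-1, 0, 1):
        for dy in (-1, 0, 1):
            for dz in (-1, 0, 1):
                for i in grid.get((cx + dx, cy + dy, cz + dz), []):
                    if np.linalg.norm(atoms[i] - q) <= r:
                        out.append(i)
    return out

q = np.array([5.0, 5.0, 5.0])
found = neighbors(q)
brute = [i for i in range(len(atoms)) if np.linalg.norm(atoms[i] - q) <= r]
print(sorted(found) == brute)   # True: grid query matches brute force
```

Because only a constant number of cells is inspected per query, the cost scales with the number of atoms actually in the neighborhood, which is what makes contact-region scoring proportional to k rather than m.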
We developed an iterative refinement search protocol that leverages such multi-resolution structural data to predict accurate high-resolution models of protein complexes, and successfully applied the protocol to model gp120, a protein on the spike of HIV and currently the most feasible target for anti-HIV drug design.

## Quantitative PAT with unknown ultrasound speed: uncertainty characterization and reconstruction methods (2015-05)

Vallélian, Sarah Catherine; Ren, Kui; Ghattas, Omar; Müller, Peter; Tsai, Yen-Hsi; Ward, Rachel

Quantitative photoacoustic tomography (QPAT) is a hybrid medical imaging modality that combines high-resolution ultrasound tomography with high-contrast optical tomography. The objective of QPAT is to recover certain optical properties of heterogeneous media from measured ultrasound signals, generated by the photoacoustic effect, on the surfaces of the media. Mathematically, QPAT is an inverse problem in which we intend to reconstruct physical parameters in a set of partial differential equations from partial knowledge of the solution of the equations. A rather complete mathematical theory for the QPAT inverse problem has been developed in the literature for the case where the speed of ultrasound inside the underlying medium is known. In practice, however, the ultrasound speed is usually not exactly known for the medium to be imaged, and using an approximate ultrasound speed in the reconstructions often yields images containing severe artifacts. There has as yet been little systematic study of this issue of unknown ultrasound speed in QPAT reconstructions, and the objective of this dissertation is precisely to investigate it. The first part of this dissertation addresses the question of how an incorrect ultrasound speed affects the quality of the reconstructed images in QPAT.
We prove stability estimates in certain settings that bound the error in the reconstructions by the uncertainty in the ultrasound speed. We also study the problem numerically, adopting a statistical framework and applying tools from uncertainty quantification to systematically characterize the artifacts arising from the parameter mismatch. In the second part of this dissertation, we propose an alternative reconstruction algorithm for QPAT which does not assume knowledge of the ultrasound speed map a priori, but rather reconstructs it alongside the original optical parameters of interest using data from multiple illumination sources. We explain the advantage of this simultaneous reconstruction approach over the usual two-step approach to QPAT and demonstrate numerically the feasibility of our algorithm.

## Reservoir description with well-log-based and core-calibrated petrophysical rock classification (2013-08)

Xu, Chicheng; Torres-Verdín, Carlos

Rock type is a key concept in modern reservoir characterization that straddles multiple scales and bridges multiple disciplines. Reservoir rock classification (or simply rock typing) has been recognized as one of the most effective description tools for facilitating large-scale reservoir modeling and simulation. This dissertation aims to integrate core data and well logs to enhance reservoir description by classifying reservoir rocks in a geologically and petrophysically consistent manner. The main objective is to develop scientific approaches for utilizing multi-physics rock data at different time and length scales to describe reservoir rock-fluid systems. Emphasis is placed on transferring physical understanding of rock types from limited ground-truthing core data to abundant well logs using fast log simulations in a multi-layered earth model.
Bimodal log-normal pore-size distribution functions derived from mercury injection capillary pressure (MICP) data are first introduced to characterize complex pore systems in carbonate and tight-gas sandstone reservoirs. Six pore-system attributes are interpreted and integrated to define the petrophysical orthogonality, or dissimilarity, between two pore systems with bimodal log-normal distributions. A simple three-dimensional (3D) cubic pore network model constrained by nuclear magnetic resonance (NMR) and MICP data is developed to quantify fluid distributions and phase connectivity for predicting saturation-dependent relative permeability during two-phase drainage. There is rich petrophysical information in the spatial fluid distributions resulting from vertical fluid flow on a geologic time scale and radial mud-filtrate invasion on a drilling time scale. Log attributes elicited by such fluid distributions are captured to quantify dynamic reservoir petrophysical properties and define reservoir flow capacity. A new rock classification workflow that reconciles reservoir saturation-height behavior and mud-filtrate invasion effects for more accurate dynamic reservoir modeling is developed and verified in both clastic and carbonate fields. Rock types vary and mix at the sub-foot scale in heterogeneous reservoirs due to depositional controls or diagenetic overprints. Conventional well logs are limited in their ability to probe the details of each individual bed or rock type as seen in outcrops or cores. A bottom-up Bayesian rock typing method is developed to efficiently test multiple working hypotheses against well logs and thereby quantify the uncertainty of rock types and their associated petrophysical properties in thinly bedded reservoirs. Concomitantly, a top-down reservoir description workflow is implemented to characterize intermixed or hybrid rock classes from the flow-unit (or seismic) scale down to the pore scale based on a multi-scale orthogonal rock class decomposition approach.
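The bimodal log-normal pore-size model mentioned above can be sketched in a few lines (illustrative parameters only, not values fitted to any MICP dataset): a weighted mixture of two log-normal components, one for micro-porosity and one for macro-porosity.

```python
import numpy as np

# Hypothetical bimodal log-normal pore-throat-size model: a weighted mixture of
# two log-normal components (micro- and macro-porosity modes), as might be
# fitted to mercury injection capillary pressure (MICP) data.
w = 0.4                        # weight of the micro-pore mode
mu = np.log([0.05, 2.0])       # log-mean pore-throat radii (microns)
sigma = np.array([0.5, 0.4])   # log-standard deviations of each mode

def pore_size_pdf(r):
    """Mixture density over pore-throat radius r (microns), vectorized."""
    comp = np.exp(-0.5 * ((np.log(r)[:, None] - mu) / sigma) ** 2) \
           / (r[:, None] * sigma * np.sqrt(2 * np.pi))
    return comp @ np.array([w, 1 - w])

r = np.logspace(-3, 2, 2000)   # radius grid spanning both modes
pdf = pore_size_pdf(r)

# Trapezoidal check: total probability mass should integrate to ~1.
mass = np.sum(0.5 * (pdf[1:] + pdf[:-1]) * np.diff(r))
print(f"integrated mass: {mass:.4f}")
```

Attributes such as mode locations, widths, and weights of the two components are the kind of pore-system descriptors between which a petrophysical orthogonality measure can be defined.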
Correlations between petrophysical rock types and geological facies in reservoirs originating from deltaic and turbidite depositional systems are investigated in detail. Emphasis is placed on the cause-and-effect relationship between pore geometry and geological rock attributes such as grain size and bed thickness. Well-log responses to these geological attributes and their associated pore geometries are studied through numerical log simulations. The sensitivity of various physical logs to the petrophysical orthogonality between rock classes is investigated to identify the most diagnostic log attributes for log-based rock typing. Field cases of different reservoir types from various geological settings are used to verify the application of petrophysical rock classification to reservoir characterization, including facies interpretation, permeability prediction, saturation-height analysis, dynamic petrophysical modeling, uncertainty quantification, petrophysical upscaling, and production forecasting.

Item Toward a predictive model of tumor growth (2011-05) Hawkins-Daarud, Andrea Jeanine; Oden, J. Tinsley (John Tinsley), 1936-; Babuska, Ivo; Ghattas, Omar; Zaman, Muhammad; Cristini, Vittorio; Prudhomme, Serge

In this work, an attempt is made to lay out a framework in which models of tumor growth can be built, calibrated, validated, and ranked by their predictive quality in such a manner that all the uncertainties associated with each step of the modeling process are accounted for in the final model prediction. The study can be divided into four basic parts. The first involves the development of a general family of mathematical models of interacting species representing the various constituents of living tissue, which generalizes those previously available in the literature.
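A spatially homogeneous caricature of such an interacting-species model (a toy ODE sketch with invented rates, not the fourth-order diffuse-interface system developed in the dissertation) tracks volume fractions of tumor and healthy tissue coupled to a nutrient level:

```python
# Toy interacting-species model: volume fractions of tumor cells (u) and
# healthy cells (h) with growth driven by a nutrient level (n).  All rate
# constants are illustrative.  Forward-Euler time stepping.
dt, steps = 0.01, 1000
u, h, n = 0.01, 0.99, 1.0          # initial volume fractions and nutrient

lam, mu_d, delta = 1.0, 0.1, 0.5   # growth, death, and nutrient-uptake rates

for _ in range(steps):
    growth = lam * n * u * (1.0 - u)   # logistic, nutrient-limited growth
    du = growth - mu_d * u
    dn = -delta * n * u                # nutrient consumed by the tumor
    u += dt * du
    n += dt * dn
    h = 1.0 - u                        # saturation: fractions sum to one

print(f"final tumor fraction: {u:.3f}, nutrient: {n:.3f}")
```

The tumor fraction grows while nutrient is plentiful and stalls as it is depleted; the full diffuse-interface models add spatial gradients of these fractions and hence fourth-order evolution equations.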
In this theory, surface effects are introduced by incorporating gradients of the volume fractions of the interacting species in the Helmholtz free energy, thus providing a generalization of the Cahn-Hilliard theory of phase change in binary media and leading to fourth-order, coupled systems of nonlinear evolution equations. A subset of these governing equations is selected as the primary class of models of tumor growth considered in this work. The second component of this study focuses on the emerging and fundamentally important issue of predictive modeling: the study of model calibration, validation, and quantification of uncertainty in predictions of target outputs of models. The Bayesian framework suggested by Babuska, Nobile, and Tempone is employed to embed the calibration and validation processes within the framework of statistical inverse theory. Extensions of the theory, regarded as necessary for applying these methods to models of tumor growth in certain scenarios, are developed. The third part of the study focuses on the numerical approximation of the diffuse-interface models of tumor growth and on the numerical implementation of the statistical inverse methods at the core of the validation process. A class of mixed finite element models is developed for the considered mass-conservation models of tumor growth, and a family of time-marching schemes is developed and applied to representative problems of tumor evolution. Finally, in the fourth component of this investigation, a collection of synthetic examples, mostly in two dimensions, is considered to provide a proof of concept of the theory and methods developed in this work.

Item Uncertainty propagation and conjunction assessment for resident space objects (2015-12) Vittaldev, Vivek; Russell, Ryan Paul, 1976-; Erwin, Richard S.; Akella, Maruthi R.; Bettadpur, Srinivas V.; Humphreys, Todd E.

Presently, the catalog of Resident Space Objects (RSOs) in Earth orbit tracked by the U.S.
Space Surveillance Network (SSN) contains more than 21,000 objects. The size of the catalog continues to grow due to an increasing number of launches, improved tracking capabilities, and, in some cases, collisions. Simply propagating the states of these RSOs is a computational burden, while additionally propagating the uncertainty distributions of the RSOs and computing collision probabilities increases the computational burden by at least an order of magnitude. Tools are developed that propagate the uncertainty of RSOs with Gaussian initial uncertainty from epoch until a close approach. The number of possible elements in a Gaussian Mixture Model (GMM) has been increased, in the form of a precomputed library, and the strategy for multivariate problems has been formalized. The accuracy of a GMM is further increased by propagating each element with a Polynomial Chaos Expansion (PCE). Both techniques reduce the number of function evaluations required for uncertainty propagation and result in a sliding scale where accuracy can be improved at the cost of increased computation time. A parallel implementation of the accurate benchmark Monte Carlo (MC) technique has been developed on the Graphics Processing Unit (GPU) that is capable of using samples from any uncertainty propagation technique to compute the collision probability. The GPU MC tool delivers speedups of up to two orders of magnitude compared to a serial CPU implementation. Finally, a CPU implementation of the collision probability computation using Cartesian coordinates requires orders of magnitude fewer function evaluations than an MC run. Fast computation of the inherently nonlinear growth of the uncertainty distribution in orbital mechanics, together with accurate computation of the collision probability, is essential for maintaining a future space catalog and for preventing uncontrolled growth of the debris population.
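The benchmark Monte Carlo collision-probability computation can be sketched in its simplest form (a serial toy with invented miss distance, covariance, and hard-body radius, not the GPU implementation): sample the relative position at closest approach from a Gaussian and count samples inside the combined hard-body sphere.

```python
import numpy as np

rng = np.random.default_rng(2)

# Illustrative conjunction geometry: Gaussian relative position of two objects
# at the time of closest approach, and a combined hard-body radius.
mean_miss = np.array([50.0, 0.0, 0.0])        # nominal miss vector, meters
cov = np.diag([40.0**2, 20.0**2, 20.0**2])    # relative position covariance
hard_body_radius = 10.0                       # combined object radius, meters

n = 1_000_000
samples = rng.multivariate_normal(mean_miss, cov, size=n)
hits = np.linalg.norm(samples, axis=1) < hard_body_radius
p_collision = hits.mean()

print(f"collision probability ~ {p_collision:.2e}")
```

Millions of such samples per conjunction, repeated across a 21,000-object catalog, is exactly the workload that motivates the GPU parallelization and the cheaper GMM/PCE surrogates described above.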
The uncertainty propagation and collision probability computation methods and algorithms developed here are capable of running on personal workstations and stand to benefit users ranging from national space surveillance agencies to private satellite operators. The developed techniques are also applicable to many general uncertainty quantification and nonlinear estimation problems.

Item Uncertainty Quantification and Calibration in Well Construction Cost Estimates (2013-08-05) Valdes Machado, Alejandro

The feasibility and success of petroleum development projects depend to a large degree on well construction costs. Well construction cost estimates often carry high levels of uncertainty. In many cases, these costs have been estimated using deterministic methods that do not reliably account for uncertainty, leading to biased estimates. The primary objective of this work was to improve the reliability of deterministic well construction cost estimates by incorporating probabilistic methods into the estimation process. The method uses historical well cost estimates and actual well costs to develop probabilistic correction factors that can be applied to future well cost estimates. These factors can be applied to the entire well cost or to individual cost components. Application of the methodology to the estimation of well construction costs for horizontal wells in a shale gas play resulted in well cost estimates that were well calibrated probabilistically. Overall, the average estimated well cost using this methodology was significantly more accurate than the average estimated well cost using deterministic methods. Systematic use of this methodology can provide for more accurate and efficient allocation of capital for drilling campaigns, which should have significant impacts on reservoir development and profitability.
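The correction-factor idea can be sketched with made-up numbers (all costs below are invented, not data from the thesis): ratios of actual to estimated costs from past wells form an empirical correction distribution, and its percentiles turn a new deterministic estimate into a probabilistic range.

```python
import numpy as np

# Invented historical data: deterministic estimates vs. actual costs ($MM).
estimated = np.array([4.2, 3.8, 5.1, 4.6, 3.9, 4.4, 5.0, 4.1])
actual    = np.array([4.9, 4.1, 5.6, 5.3, 4.2, 4.6, 5.9, 4.4])

# Empirical correction factors: actual cost per dollar of estimated cost.
factors = actual / estimated

# Apply the factor distribution to a new deterministic estimate to obtain
# a calibrated probabilistic range (P10/P50/P90).
new_estimate = 4.5
p10, p50, p90 = new_estimate * np.percentile(factors, [10, 50, 90])

print(f"P10/P50/P90 cost: {p10:.2f} / {p50:.2f} / {p90:.2f} $MM")
```

In this synthetic example every historical factor exceeds one, so the calibrated P50 sits above the raw deterministic estimate, illustrating how the method corrects a systematic underestimation bias. In practice the factors could be computed per cost component rather than for the total well cost.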