Browsing by Subject "Monte Carlo method"
Now showing 1 - 20 of 28
Item Acceleration of quasi-Monte Carlo approximations (Texas Tech University, 2001-05) Severino, Joseph S.
In this paper, a new method is presented for numerically approximating integrals using quasi-Monte Carlo sequences. By applying a least-squares smoothing procedure to the sequences, a faster rate of convergence is achieved, thus reducing the number of nodes required for the same degree of accuracy. This acceleration method is applied to four particular integrals in a variety of dimensions. The first integral is a smooth exponential function and the second has a discontinuous integrand. The last two integrals deal with specific problems in mathematical finance: one computes the price of a European call option, while the other finds the present value of a mortgage-backed security over thirty years.
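The smoothing procedure is specific to this thesis, but the underlying idea, replacing pseudorandom nodes with a low-discrepancy sequence, is easy to sketch. Below is a minimal comparison of plain Monte Carlo against a scrambled-Halton quasi-Monte Carlo estimate; the integrand and dimension are illustrative assumptions, not the author's test cases:

```python
import numpy as np
from scipy.stats import qmc  # low-discrepancy sequence generators

def f(x):
    # Smooth exponential integrand over the unit hypercube (illustrative choice)
    return np.exp(-np.sum(x**2, axis=1))

dim, n = 4, 2**12
rng = np.random.default_rng(0)

mc_estimate = f(rng.random((n, dim))).mean()    # plain Monte Carlo, ~O(n^-1/2) error
halton = qmc.Halton(d=dim, scramble=True, seed=0)
qmc_estimate = f(halton.random(n)).mean()       # quasi-Monte Carlo, near O(n^-1) error for smooth f

print(f"MC:  {mc_estimate:.6f}")
print(f"QMC: {qmc_estimate:.6f}")
```

For smooth integrands like the first test case above, the quasi-Monte Carlo estimate typically converges markedly faster at the same node count.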
Item An analysis of model parameter uncertainty on online model-based applications (2012-05) Chen, Yingying; Hoo, Karlene A.; Mann, Uzi; Vaughn, Mark W.; Wang, Xiaochang; Stefanov, Zdravko I.
It is important to predict the behavior of an engineering process accurately and in a timely manner. The predictions are usually achieved using a first-principles-based model that describes the complex phenomena embodied in the process. However, no model is an exact representation of the complex process, for multiple reasons. The primary goal of this research is to investigate one of the possible reasons, the uncertainty of the model parameters, from the viewpoint of its effect on the accuracy of the model's predictions. Other, secondary goals of this research are updating the uncertain parameter values and determining robust estimates of the uncertain parameters to improve the accuracy of a model. The methodology applied to understand propagation of the uncertain parameters through a model is Latin hypercube sampling coupled with Hammersley sequencing (LHHS), selected for its efficiency and effectiveness when there are multiple uncertain parameters in a model. Real processes experience unmeasured and unplanned disturbances. Even though a model may come arbitrarily close to estimating the output of the process, because of these types of disturbances there will always be process/model mismatch. This study addresses this issue by investigating updating of the model's uncertain parameters to minimize this mismatch. The updating methods designed in this research come from the class of particle filters (also referred to as sequential Monte Carlo filters); they include a Markov chain Monte Carlo filter and an ensemble Kalman filter. As the number of uncertain parameters increases, so does the computational burden. While updating is one solution to improve model accuracy, another potential solution is to determine a robust estimate of the uncertain parameters using the theory of robust statistics. This research provides a theoretical proof that the maximum likelihood estimate is the best statistic to provide a robust estimate. The operational side of this research focuses on online model-based applications, such as model-based control and monitoring, with processing of uncertain model parameters. To demonstrate these research concepts, we employ simulations of a continuous reactor system and an oil-producing reservoir system. The results are analyzed and discussed.
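The LHHS coupling is specific to the dissertation, but plain Latin hypercube sampling, its core ingredient, can be sketched briefly. The two-parameter model below is a hypothetical stand-in for the first-principles models discussed; the priors and the Arrhenius-type response are illustrative assumptions:

```python
import numpy as np
from scipy.stats import qmc, norm

# Latin hypercube sample of two uncertain parameters (illustrative priors)
sampler = qmc.LatinHypercube(d=2, seed=0)
u = sampler.random(n=1000)                  # stratified points in [0, 1)^2

k = norm.ppf(u[:, 0], loc=0.5, scale=0.05)  # e.g., an uncertain rate constant
Ea = norm.ppf(u[:, 1], loc=50e3, scale=2e3) # e.g., an uncertain activation energy (J/mol)

def model_output(k, Ea, T=350.0):
    # Hypothetical Arrhenius-type response; a stand-in for a real process model
    return k * np.exp(-Ea / (8.314 * T))

y = model_output(k, Ea)
print(f"output mean {y.mean():.3e}, std {y.std():.3e}")  # propagated parameter uncertainty
```

The stratification guarantees each parameter's range is covered evenly, which is why such designs need far fewer model evaluations than crude random sampling when several parameters are uncertain.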
Item An investigation of non-equilibrium electron kinetics in nitrogen (Texas Tech University, 1983-12) Tzeng, Yonhua
The kinetic behavior of electrons immersed in a background gas or gas mixture under the action of an externally applied electric field, with or without the influence of space-charge-induced electric fields, has been investigated. Novel Monte Carlo techniques have been developed and applied to study the evolution of the electrons self-consistently, that is, including the effects of the electric field generated by the space charge and the effects of boundaries. Statistical fluctuations of the macroscopic variables describing the evolution of the electron assembly have been studied; this is important when the total number of electrons is less than about 100. Ensemble-averaged descriptions, kinetic in nature, have been used to study the development of an avalanche and the formation and propagation of a streamer. The results from this research program have provided key fundamental knowledge necessary for explaining the approach to equilibrium of an assembly of electrons, the effects of scattering processes on the electron velocity distribution, prebreakdown phenomena, and the manipulation of the dielectric properties of gas mixtures used in insulating and switching applications.

Item Automated variance reduction for Monte Carlo shielding analyses with MCNP (2003) Radulescu, Georgeta; Landsberger, Sheldon; Tang, Jabo S.

Item A discrete velocity method for the Boltzmann equation with internal energy and stochastic variance reduction (2015-12) Clarke, Peter Barry; Varghese, Philip L.; Goldstein, David Benjamin; Raja, Laxminarayan; Gamba, Irene; Magin, Thierry
The goal of this work is to develop an accurate and efficient flow solver based upon a discrete velocity description of the Boltzmann equation. Standard particle-based methods such as Direct Simulation Monte Carlo (DSMC) have a number of difficulties with complex and transient flows, stochastic noise, trace species, and high-level internal energy states. To address these issues, a discrete velocity method (DVM) was developed which models the evolution of a flow through the collisions and motion of variable-mass quasi-particles defined as delta functions on a truncated, discrete velocity domain. The work is an extension of a previous method developed for a single, monatomic species solved on a uniformly spaced velocity grid. The collision integral was computed using a variance-reduced stochastic model in which the deviation from equilibrium was calculated and operated upon. This method produces fast, smooth solutions of near-equilibrium flows. Improvements to the method include additional cross-section models, diffuse boundary conditions, simple realignment of velocity grid lines into non-uniform grids, the capability to handle multiple species (specifically trace species or species with large molecular mass ratios), and both a single-valued rotational energy model and a quantized rotational and vibrational model. A variance-reduced form is presented for multi-species gases and gases with internal energy in order to maintain the computational benefits of the method. Every advance in the method allows for more complex flow simulations, either by extending the available physics or by increasing computational efficiency. Each addition is tested and verified for an accurate implementation through homogeneous simulations where analytic solutions exist, and the efficiency and stochastic noise are inspected for many of the cases. Further simulations are run using a variety of classical one-dimensional flow problems such as normal shock waves and channel flows.
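Clarke's variance-reduced collision integral is far more involved than anything reproducible here, but the basic discrete-velocity representation itself can be sketched with a one-dimensional BGK-type relaxation on a truncated velocity grid. The grid bounds, relaxation time, and bimodal initial condition below are illustrative assumptions, and BGK is a standard surrogate for the full collision operator, not the thesis's model:

```python
import numpy as np

# Discrete velocity grid: the distribution f(v) lives at fixed nodes (1-D for clarity)
v = np.linspace(-6.0, 6.0, 81)           # truncated velocity domain
dv = v[1] - v[0]

def maxwellian(v, n, u, T):
    return n / np.sqrt(2 * np.pi * T) * np.exp(-(v - u)**2 / (2 * T))

# Bimodal (non-equilibrium) initial condition -- illustrative, not from the thesis
f = 0.5 * maxwellian(v, 1.0, -1.5, 0.5) + 0.5 * maxwellian(v, 1.0, 1.5, 0.5)

tau, dt = 1.0, 0.05                       # BGK relaxation time and time step
for _ in range(200):
    # Conserved moments computed directly from the discrete distribution
    n = np.sum(f) * dv
    u = np.sum(f * v) * dv / n
    T = np.sum(f * (v - u)**2) * dv / n
    # BGK surrogate for the collision integral: relax toward the local Maxwellian
    f += dt / tau * (maxwellian(v, n, u, T) - f)

print(f"n = {np.sum(f)*dv:.4f} (density conserved through the relaxation)")
```

Because the solution is a deterministic function on the grid rather than a cloud of particles, such homogeneous relaxation tests are noise-free, which is the property the variance-reduced DVM is designed to preserve.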
Item Estimation of the distribution functions of the standardized statistics using Monte Carlo approximation to Edgeworth expansions (Texas Tech University, 2002-05) Gao, Guozhi
Not available

Item First-principles and kinetic Monte Carlo simulation of dopant diffusion in strained Si and other materials (2006) Lin, Li, 1973-; Banerjee, Sanjay

Item Fundamental structural changes over time and predictability of exchange rates: a Monte Carlo study of time varying regression and applications (Texas Tech University, 1996-08) Wan, Bin
The difficulties of modeling and forecasting foreign exchange rates have been well known since the early 1970s. One possible explanation for our inability to provide an accurate model is structural change over time, especially in emerging markets. Traditional regression techniques that assume constant parameters are incapable of capturing the changing dynamics over time; consequently, most foreign exchange regression models are ineffective. To better capture fundamental structural changes in a market, a moving block regression technique is recommended by the author. The moving block regression procedure utilizes sub-sample information, rather than the prevailing whole-sample approach that seeks to increase regression efficiency with more observations. To determine the loss or gain of forecast efficiency, a Monte Carlo study is carried out under several different scenarios: data in compliance with the classic OLS assumptions, data with heteroscedasticity, data with autocorrelation, a model with a missing variable, a model with changing regression coefficients, and data with nonlinear relationships. Simulation results show a trivial loss of out-of-sample forecast efficiency with the moving block regressions and a small trade-off in the presence of minor violations of the assumptions. However, there is a clear dominance of the moving block regressions over the traditional whole-sample regressions in terms of forecasting efficiency when the violations of assumptions are serious, such as a missing variable, changing coefficients, or nonlinear relations. The moving block regressions are then applied to exchange rates of six currencies against the U.S. dollar. Comparisons of forecasting residuals, both in-sample and out-of-sample, show strong support for the moving block techniques, indicating the inevitable violations of regression assumptions in foreign exchange markets.
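The abstract does not give the block length or the regression model used; the sketch below only illustrates the general idea of a moving (rolling) block regression beating a whole-sample fit when the coefficient changes. The break point, window length, and noise level are all illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)
T = 400
x = rng.normal(size=T)
beta = np.where(np.arange(T) < 200, 1.0, -0.5)   # structural break in the coefficient
y = beta * x + 0.3 * rng.normal(size=T)

block = 60                                        # moving-block window (assumed length)
errs_block, errs_full = [], []
for t in range(block, T - 1):
    # Moving block: fit only the most recent `block` observations
    b_blk = np.polyfit(x[t - block:t], y[t - block:t], 1)[0]
    # Whole sample: fit everything observed so far
    b_all = np.polyfit(x[:t], y[:t], 1)[0]
    errs_block.append((y[t + 1] - b_blk * x[t + 1])**2)
    errs_full.append((y[t + 1] - b_all * x[t + 1])**2)

print(f"moving-block MSE {np.mean(errs_block):.3f} vs whole-sample MSE {np.mean(errs_full):.3f}")
```

After the break, the whole-sample estimator averages over two regimes and its one-step forecasts deteriorate, which is the dominance pattern the Monte Carlo study reports for serious violations.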
Item The impact of the inappropriate modeling of cross-classified data structures (2004) Meyers, Jason Leon; Beretvas, Susan Natasha

Item Investigation of random sampling in flowshop sequencing (Texas Tech University, 1978-08) Charles, Oliver Ekepre
Not available

Item Investigation of stochastic radiation transport methods in random heterogeneous mixtures (2008-05) Reinert, Dustin Ray, 1982-; Biegalski, Steven R.; Schneider, Erich A.
Among the most formidable challenges facing our world is the need for safe, clean, affordable energy sources. Growing concerns over global-warming-induced climate change and the rising costs of fossil fuels threaten conventional means of electricity production and are driving the current nuclear renaissance. One concept at the forefront of international development efforts is the High Temperature Gas-Cooled Reactor (HTGR). With numerous passive safety features and a meltdown-proof design capable of attaining high thermodynamic efficiencies for electricity generation, as well as high temperatures useful for the burgeoning hydrogen economy, the HTGR is an extremely promising technology. Unfortunately, the fundamental understanding of neutron behavior within HTGR fuels lags far behind that of more conventional water-cooled reactors. HTGRs utilize a unique heterogeneous fuel element design consisting of thousands of tiny fissile fuel kernels randomly mixed with a non-fissile graphite matrix. Monte Carlo neutron transport simulations of the HTGR fuel element geometry in its full complexity are infeasible, and this has motivated the development of more approximate computational techniques. A series of MATLAB codes was written to perform Monte Carlo simulations within HTGR fuel pebbles to establish a comprehensive understanding of the parameters under which the accuracy of the approximate techniques diminishes. This research identified the accuracy of the chord length sampling method to be a function of the matrix scattering optical thickness, the kernel optical thickness, and the kernel packing density. Two new Monte Carlo methods, designed to focus the computational effort upon the parameter conditions shown to contribute most strongly to the overall computational error, were implemented and evaluated. An extended-memory chord length sampling routine that recalls a neutron's prior material traversals was demonstrated to be effective in fixed-source calculations containing densely packed, optically thick kernels. A hybrid continuous-energy Monte Carlo algorithm that combines homogeneous and explicit geometry models according to the energy-dependent optical thickness was also developed. This resonance-switch approach exhibited a remarkably high degree of accuracy in performing criticality calculations. The versatility of this hybrid modeling approach makes it an attractive acceleration strategy for a vast array of Monte Carlo radiation transport applications.
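Chord length sampling replaces the explicit kernel geometry with sampled distances to the next material interface. A heavily simplified one-group, purely absorbing, forward-streaming sketch follows; the cross sections, mean chord lengths, and slab width are illustrative assumptions, not values from the thesis, and the extended-memory refinement is not shown:

```python
import numpy as np

rng = np.random.default_rng(2)

# Illustrative one-group data: total cross sections (1/cm) and mean chord lengths (cm)
sigma = {"matrix": 0.3, "kernel": 5.0}
mean_chord = {"matrix": 1.0, "kernel": 0.05}     # exponential chord-length model
slab_width = 10.0

def transmit(n_hist=20_000):
    transmitted = 0
    for _ in range(n_hist):
        x, mat = 0.0, "matrix"
        while True:
            d_collide = rng.exponential(1.0 / sigma[mat])    # distance to collision
            d_interface = rng.exponential(mean_chord[mat])   # sampled chord to next material
            x += min(d_collide, d_interface)
            if x >= slab_width:
                transmitted += 1                             # escaped the slab
                break
            if d_collide <= d_interface:
                break                                        # absorbed (purely absorbing sketch)
            mat = "kernel" if mat == "matrix" else "matrix"  # cross the sampled interface
    return transmitted / n_hist

print(f"transmission probability ~ {transmit():.4f}")
```

Because interfaces are sampled on the fly rather than stored, the method never constructs the random kernel arrangement, which is the source of both its speed and, for optically thick kernels, its error.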
Item Light transport simulation in reflective displays (2012-05) Feng, Zhanpeng; Nutter, Brian; Mitra, Sunanda; Karp, Tanja; Gale, Richard O.; Westfall, Peter H.
In the last several years, reflective displays have gained substantial popularity in mobile devices such as e-readers because of their significant advantages in power consumption and sunlight readability. A typical reflective display consists of a stack of optical layers. Accurate and efficient simulation of light transport in these layers provides valuable information for optical design and analysis. Physically based ray tracing algorithms are able to produce simulation results that mirror real-world display performance across a wide range of illumination conditions, viewing angles, and distances. These simulation outcomes help system architects make far-reaching decisions as early as possible in the design process. In this dissertation, a reflective display is modeled as a layered material, with a FOS (front of screen) layer on the top, a diffusive layer (diffuser) underneath the FOS, a transparent layer (glass) in the middle, and a wavelength-dependent reflective layer (pixel array) at the bottom. A set of simple and efficient spectral functions is developed to model the reflectance and absorption of the FOS. A novel hybrid approach combining spectro-radiometer-based and imaging-based measurement methods is developed to acquire high-resolution reflectance data in both the angular and spectral domains. A BTDF (bidirectional transmittance distribution function) is generated from the measured data to model the diffuser, and a wavelength-dependent BRDF (bidirectional reflectance distribution function) is used to model the pixels. Realistic light transport simulation requires the interplay of three factors: surface geometry, lighting, and material reflectance. Monte Carlo ray tracing methods are used to link these factors together. Path tracing is employed to provide unbiased results. Stratified sampling and importance sampling are used for effective variance reduction: stratified sampling produces well-distributed random samples, and importance sampling helps the Monte Carlo simulation converge more quickly. Different importance sampling methods are compared and analyzed. Simulation results of display performance, including reflectance, color gamut, contrast ratio, and daylight readability, are presented. The impact of different lighting conditions, diffusers, and FOS designs is studied. Measurement data and physically based analyses are used to confirm the validity of the simulation tool. The simulation tool provides the desired accuracy and predictability for display design in a wide range of lighting conditions, which makes it a valuable mechanism for display designers to find the optimal solution for real-world applications.
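The renderer itself is not reproduced here; the sketch below only illustrates the variance-reduction idea the abstract names, estimating the same hemispherical integral with uniform and cosine-weighted (importance) sampling of directions. The Lambertian integrand is an illustrative assumption:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 50_000

# Integrand: Lambertian term cos(theta) over the hemisphere; its exact integral
# over solid angle is pi, so both estimators should converge to ~3.1416.

# Uniform hemisphere sampling: pdf over solid angle = 1 / (2*pi)
cos_t = rng.random(n)                         # cos(theta) is uniform for uniform directions
est_uniform = cos_t * (2 * np.pi)             # integrand / pdf

# Cosine-weighted (importance) sampling: pdf = cos(theta) / pi
cos_t_imp = np.sqrt(rng.random(n))            # inverse-CDF sample of cosine-weighted theta
est_importance = cos_t_imp / (cos_t_imp / np.pi)  # integrand / pdf == pi for every sample

print(f"uniform:    {est_uniform.mean():.4f} +/- {est_uniform.std(ddof=1)/np.sqrt(n):.4f}")
print(f"importance: {est_importance.mean():.4f} +/- {est_importance.std(ddof=1)/np.sqrt(n):.4f}")
```

Matching the sampling density to the integrand drives the per-sample variance to zero in this idealized case; real BRDFs are only proportional to the pdf approximately, but the same mechanism explains the faster convergence the dissertation exploits.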
Item Magnetic field control of low pressure diffuse discharges (Texas Tech University, 1986-05) Cooper, James Randall
The application of a magnetic field in a direction transverse to the electric field in a diffuse discharge can have a strong effect on the transport parameters in the discharge medium and on the external characteristics of the discharge as a whole. The deviations in these transport parameters have been investigated in this work by means of Monte Carlo calculations, and the electrical characteristics of the total discharge have been observed experimentally. The results of the theoretical investigation show that in attaching gas mixtures both the ionization and attachment rate coefficients in the positive column of the discharge are changed such that the combined effect results in an increase in resistivity. Monte Carlo calculations performed by other researchers indicate that a transverse magnetic field could have an even stronger effect on the electron energy distribution in the cathode fall region and, consequently, on the cathode fall voltage. Experimentally, it is seen that application of a crossed magnetic field to an abnormal glow discharge in attaching gases in a certain parameter range causes the discharge voltage to increase significantly. The effect seems to be most strongly influenced by processes in the cathode fall region.

Item MCMC algorithm, integrated 4D seismic reservoir characterization and uncertainty analysis in a Bayesian framework (2008-08) Hong, Tiancong, 1973-; Sen, Mrinal K.
One of the important goals in petroleum exploration and production is to make quantitative estimates of a reservoir's properties from all available but indirectly related surface data, which constitutes an inverse problem. Due to the inherent non-uniqueness of most inverse procedures, a deterministic solution may be impossible, and it makes more sense to formulate the inverse problem in a statistical Bayesian framework and to solve it fully by constructing the Posterior Probability Density (PPD) function using Markov Chain Monte Carlo (MCMC) algorithms. The derived PPD is the complete solution of an inverse problem and describes all the models consistent with the given data. Therefore, the estimated PPD not only leads to the most likely model or solution but also provides a theoretically correct way to quantify the corresponding uncertainty. However, for many realistic applications, MCMC can be computationally expensive due to the strong nonlinearity and high dimensionality of the problem. In this research, to address the fundamental issues of efficiency and accuracy in parameter estimation and uncertainty quantification, I have incorporated some new developments and designed a new multiscale MCMC algorithm. The new algorithm is justified using an analytical example, and its performance is evaluated using a nonlinear pre-stack seismic waveform inversion application. I also find that the new multi-scaling technique is particularly attractive in addressing model parameterization issues, especially for seismic waveform inversion. To derive an accurate reservoir model, and therefore to obtain a reliable reservoir performance prediction with as little uncertainty as possible, I propose a workflow to integrate 4D seismic and well production data in a Bayesian framework. This challenging 4D seismic history matching problem is solved using the new multi-scale MCMC algorithm for reasonably accurate reservoir characterization and uncertainty analysis within an acceptable time period. To take advantage of the benefits of both the fine scale and the coarse scale, a 3D reservoir model is parameterized at two different scales. It is demonstrated that the coarse-scale model works like a regularization operator, making the derived fine-scale reservoir model smooth and more realistic. The derived best-fitting static petrophysical model is further used to image the evolution of a reservoir's dynamic features, such as pore pressure and fluid saturation, which provide a direct indication of the internal dynamic fluid flow.
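Hong's multiscale sampler is specific to the dissertation, but its MCMC building block is the standard Metropolis-Hastings random walk. A minimal sketch that samples the PPD of a single model parameter from noisy synthetic data follows; the forward model, noise level, and prior are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(4)

def forward(m):
    return m**2 + m            # hypothetical nonlinear forward model (non-unique in m)

m_true = 1.2
d_obs = forward(m_true) + rng.normal(0, 0.1, size=20)   # synthetic noisy data

def log_post(m):
    # Gaussian likelihood (sigma = 0.1) with a broad Gaussian prior on m
    return -0.5 * np.sum((d_obs - forward(m))**2) / 0.1**2 - 0.5 * m**2 / 10.0

samples, m = [], 0.0
lp = log_post(m)
for _ in range(20_000):
    m_prop = m + 0.1 * rng.normal()          # symmetric random-walk proposal
    lp_prop = log_post(m_prop)
    if np.log(rng.random()) < lp_prop - lp:  # Metropolis accept/reject step
        m, lp = m_prop, lp_prop
    samples.append(m)

post = np.array(samples[5000:])              # discard burn-in
print(f"posterior mean {post.mean():.3f}, std {post.std():.3f} (true {m_true})")
```

The chain's stationary distribution is the PPD itself, so the retained samples characterize the full set of data-consistent models rather than a single best fit, which is the point the abstract emphasizes.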
Item Monte Carlo analysis of life cycle reliability compatible with Bayesian analysis (Texas Tech University, 1987-08) Fant, Earnest William
This research investigated life cycle reliability modeling in the early design stages. It was assumed that life cycle reliability modeling can be useful for the evaluation of reliability goals. In situations where new technology or a major change in a product's application and/or environment is involved, the usual analysis techniques cannot compensate for a gross lack of data. The ability to model qualitative information about a product in a simple, straightforward manner becomes a necessity. As quantitative data become available, the ability to update the qualitative model becomes an important requirement, thus allowing for the possible use of Bayesian analysis. To demonstrate the need for reliability assurance efforts as early as possible in the design process of a new-technology product, a life cycle reliability sensitivity analysis technique was developed. This technique produces results, using extremely limited data, to aid in establishing reliability goals. A system/subsystem-level Monte Carlo analysis model and modeling procedure were developed, based on a stratified Monte Carlo sampling procedure called Latin hypercube sampling, which is capable of generating rank-correlated random variates to induce dependences in the model. The Monte Carlo analysis was developed around an exponential failure model, and the life cycle of a system was represented by a series arrangement of phases. Input into the model took the form of mean-time-to-failure distribution estimates and corresponding distribution estimates for the time in each life cycle phase. Algorithms were developed to fit various distributions to three estimates for mean time to failure and time in phase, similar to those used in project management and activity scheduling. Sensitivity analyses were performed to obtain an idea of how well a system must perform over its life cycle phases in order to fulfill a given life cycle reliability requirement. The results obtained from the modeling were used to assess possible reliability characteristics of a generic space-based pulsed power system.

Item Monte Carlo approaches to the protein folding problem (2002) Stone, Matthew Thad; Sanchez, Isaac C.
The excluded volume of a polymer is defined and calculated by Monte Carlo integration. The excluded volume for a polymer with another polymer of the same length scales as N^1.74. These results agree with theoretical predictions about the behavior of polymers in the dilute solution regime. The conformation of a Lennard-Jones chain in water is investigated. The chains remain collapsed from the triple point until 590 K. The presence of water increases the Θ temperature for a Lennard-Jones chain, and the transition is sharper in water than in vacuum. These results are explained by the breaking of hydrogen bonds as the chain expands. The solvation properties of model hydrophobic and hydrophilic solutes in SPC/E water are calculated by Monte Carlo simulation. Poor solubility correlates with poor solute/water interaction. At room temperature, energy dominates the aqueous solubility rather than entropy. The large cavities in water are unexpected and explain why a hard-sphere solute is more soluble in water than in other solvents. Hydrogen bonding causes water to aggregate into clusters that produce large cavities. Hydrophobic solutes are found to maintain the orientational order in water, whereas hydrophilic solutes alter it. The gas solubility of n-alkanes in water is unexpected: the solubility shows a minimum at C11 as the carbon number is increased. Using Monte Carlo simulations, the solubility of model alkanes is measured. These simulations capture the experimental anomaly qualitatively and attribute it to a growing importance of favorable energetic interactions. Microscopic contributions to the chemical potential for this system are defined and calculated through simulation. Partial expansion of a Lennard-Jones chain in water is seen by Monte Carlo simulation. This behavior is explained by entropically favorable large cavities in water at low temperatures. Cavity size distributions of water and the Lennard-Jones fluid are calculated by simulation and contrasted. Water differs from other fluids in its propensity for large cavities at low temperatures.

Item Monte Carlo localization for mobile robots in dynamic environments (Texas Tech University, 2002-05) Bansail, Ajay
Mobile robot localization is the problem of determining a robot's pose from sensor data. This thesis presents a family of probabilistic localization algorithms known as Monte Carlo Localization (MCL). MCL algorithms represent a robot's belief by a set of weighted hypotheses (samples) that approximate the posterior under a common Bayesian formulation of the localization problem. The basic MCL algorithm does not work well in dynamic environments; building on it, this thesis therefore develops a dynamic version that applies filtering techniques to screen out unexpected sensor data and thus performs well in dynamic environments. Systematic empirical results illustrate the robustness and computational efficiency of the approach.
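MCL is a particle filter, and its predict/weight/resample cycle can be shown in one dimension. The corridor map, motion noise, and range-sensor model below are illustrative assumptions, not those of the thesis, and the dynamic-environment filtering extension is not shown:

```python
import numpy as np

rng = np.random.default_rng(5)

landmarks = np.array([2.0, 5.0, 9.0])        # known 1-D map (illustrative)
n_particles = 1000
particles = rng.uniform(0, 10, n_particles)  # uniform prior over the robot's position

def nearest_landmark_dist(x):
    return np.min(np.abs(landmarks[None, :] - np.asarray(x)[:, None]), axis=1)

true_pose = 4.3
for step in range(10):
    # Motion update: robot commands +0.5, executed with Gaussian noise
    true_pose += 0.5
    particles += 0.5 + rng.normal(0, 0.1, n_particles)
    # Measurement update: weight each particle by the likelihood of the range reading
    z = nearest_landmark_dist([true_pose])[0] + rng.normal(0, 0.2)
    w = np.exp(-0.5 * (nearest_landmark_dist(particles) - z)**2 / 0.2**2)
    w /= w.sum()
    # Resample in proportion to weight (systematic resampling is the usual refinement)
    particles = particles[rng.choice(n_particles, n_particles, p=w)]

print(f"estimate {particles.mean():.2f} vs true pose {true_pose:.2f}")
```

The dynamic variant described in the thesis would additionally down-weight or discard readings that the map cannot explain, such as those caused by people walking through the sensor's field of view.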
Item Monte Carlo methods for multi-stage stochastic programs (2003) Chiralaksanakul, Anukal; Morton, David P.
Stochastic programming is a natural and powerful extension of deterministic mathematical programming, and it is effectively utilized for analyzing optimization problems when the problem's parameters are not known with certainty. These uncertain parameters are treated as a random vector with a known distribution in the stochastic programming framework. Typically, the size of stochastic programming models is large due to the number of dimensions and realizations of the random vector. With recent advances in optimization algorithms and computing technology, an increasing number of realistically sized two- and multi-stage stochastic programming models are being successfully formulated and solved. Despite these successes, multi-stage stochastic programs in which the random vector has a large number of dimensions and/or realizations (or is even continuous) still remain a computational challenge, primarily because of the exponential growth of the model's size with respect to the number of stages. In this dissertation, we exploit special structures in order to attack these computationally difficult problems. Our research can be broadly divided into three parts. First, we propose two Monte Carlo sampling-based solution methods for multi-stage stochastic programs. Both methods exploit special structures for a particular class of multi-stage problems and result in feasible solution policies. These policies have desirable asymptotic properties but, of course, in practice are generated using finite scenario trees. As a result, in the second part of the dissertation, we develop Monte Carlo techniques to determine the quality of an arbitrary feasible policy. In particular, we build a statistically based point estimate of a lower bound on the optimal objective function value for a minimization problem, and use it to construct a confidence interval on the solution's quality. In the third part, we aim to develop procedures to reduce the bias associated with the lower-bound estimator, thereby improving our ability to construct a reasonably tight confidence interval on the solution's quality. Towards this goal, we vary the number of descendants in the sample tree to reduce the bias in the context of American-style option pricing and stochastic lot sizing. All proposed methodologies are numerically tested on problems from the literature.

Item Monte Carlo sampling-based methods in stochastic programming (2005) Bayraksan, Güzin; Morton, David P.
Many problems in business, engineering, and science involve uncertainties, but optimization of such complex systems is often done in practice with deterministic model parameters. Stochastic programming extends deterministic optimization by incorporating random variables and probabilistic statements. A major challenge in the analysis of large-scale stochastic systems is having to consider a large number, sometimes an infinite number, of scenarios. This usually leads to intractable models, even when specially designed algorithms are used. A natural question that arises, then, is how to use a limited number of these scenarios and still obtain reasonable solutions to our problems. In this dissertation, we focus on Monte Carlo sampling-based methods for solving large-scale stochastic programs. Given a candidate solution, suggested as an approximate solution to the original problem, the first question we address is how to assess its quality. Determining whether a solution is of high quality (optimal or near optimal) is a fundamental question in optimization theory and algorithms. We define quality via the optimality gap and develop sampling-based procedures to form confidence intervals on this gap. Compared to an earlier procedure that requires the solution of many optimization problems, our procedures require solving only one or two optimization problems. We discuss a number of enhancements to our basic procedure and present computational results. Next, we develop sequential sampling procedures for assessing solution quality, which control the sampling error of the confidence interval on the optimality gap. We present two methods: a fully sequential method, where we increase the sample size one by one, and an accelerated method, where we increase the sample size in jumps. We prove asymptotic validity of these confidence intervals and present computational results. Finally, using our results on assessing solution quality, we propose a sequential sampling procedure to solve stochastic programs. In this procedure, the sample size is sequentially increased until a stopping criterion is satisfied. The stopping rule depends on the optimality gap estimate of the current candidate solution and its sampling variance. We show asymptotically that this procedure finds a solution within a desired quality tolerance with high probability. We present preliminary computational results and discuss implementation issues.
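The gap estimators in these two dissertations are considerably more refined, but the basic sampling idea can be sketched on a toy newsvendor problem: compare a candidate solution's sampled cost against the sampled optimal cost over independent batches to obtain a one-sided confidence interval on the optimality gap. The demand distribution, prices, candidate solution, and batch sizes below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(6)
c, p = 1.0, 3.0                          # unit cost and selling price (illustrative)

def cost(q, d):
    return c * q - p * np.minimum(q, d)  # negative profit for order q under demand d

x_hat = 8.0                              # candidate order quantity being assessed

# One-sided CI on the optimality gap via independent batch means
gaps = []
for _ in range(30):                      # 30 independent batches
    d = rng.exponential(10.0, size=200)  # sampled demand scenarios
    # Sample-average optimum: the critical quantile (p - c)/p of the sampled demands
    q_star = np.quantile(d, (p - c) / p)
    gaps.append(cost(x_hat, d).mean() - cost(q_star, d).mean())

gaps = np.array(gaps)
ub = gaps.mean() + 1.699 * gaps.std(ddof=1) / np.sqrt(len(gaps))  # t(0.95, df=29) ~ 1.699
print(f"estimated gap {gaps.mean():.3f}, 95% one-sided upper bound {ub:.3f}")
```

Because the sampled optimum is optimized to the batch, it underestimates the true optimal cost on average, so the gap estimate is biased upward; shrinking that bias, and deciding sequentially how large the samples must be, is precisely what the two dissertations address.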