# Browsing by Subject "sensitivity analysis"

Now showing 1 - 6 of 6


## A one-group parametric sensitivity analysis for the graphite isotope ratio method and other related techniques using ORIGEN 2.2

*(2009-06-02) Chesson, Kristin Elaine*

Several methods have been developed previously for estimating cumulative energy production and plutonium production from graphite-moderated reactors. The Graphite Isotope Ratio Method (GIRM) is one well-known technique. This method is based on the measurement of trace isotopes in the reactor's graphite matrix to determine the change in their isotopic ratios due to burnup. These measurements are then coupled with reactor calculations to determine the total plutonium and energy production of the reactor. To facilitate sensitivity analysis of these methods, a one-group cross section and fission product yield library for the fuel and graphite activation products has been developed for MAGNOX-style reactors. This library is intended for use in the ORIGEN computer code, which calculates the buildup, decay, and processing of radioactive materials. The library was developed using a fuel cell model in Monteburns, consisting of a single fuel rod with natural uranium metal fuel, magnesium cladding, carbon dioxide coolant, and Grade A United Kingdom (UK) graphite. Using this library, a complete sensitivity analysis can be performed for GIRM and other techniques. The sensitivity analysis conducted in this study assessed various input parameters, including 235U and 238U cross section values, aluminum alloy concentration in the fuel, and initial concentrations of trace elements in the graphite moderator.
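The ratio-to-fluence inversion at the heart of a graphite isotope ratio technique can be illustrated with a deliberately simplified one-group depletion relation. This is only a sketch, not the ORIGEN/Monteburns workflow described above, and the numerical values (a boron-like trace-isotope ratio and one-group cross section) are illustrative assumptions:

```python
import math

def depleted_ratio(r0, sigma_barns, fluence):
    """Isotopic ratio after irradiation, assuming only the numerator
    isotope is depleted: R = R0 * exp(-sigma * fluence)."""
    sigma_cm2 = sigma_barns * 1e-24          # barns -> cm^2
    return r0 * math.exp(-sigma_cm2 * fluence)

def fluence_from_ratio(r0, r_measured, sigma_barns):
    """Invert the one-group depletion relation for the fluence (n/cm^2)."""
    sigma_cm2 = sigma_barns * 1e-24
    return -math.log(r_measured / r0) / sigma_cm2

# Illustrative (not evaluated-data) numbers:
r0 = 0.2484        # hypothetical pre-irradiation trace-isotope ratio
sigma = 3840.0     # hypothetical one-group absorption cross section, barns
phi = 1.0e21       # neutron fluence, n/cm^2
r = depleted_ratio(r0, sigma, phi)
```

In a real analysis the one-group cross section itself comes from the library described above, which is why the study's perturbation of cross-section values propagates directly into the inferred fluence.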
The results of the analysis yield insight into the GIRM method and the isotopic ratios the method uses, as well as the level of uncertainty that may be found in the system results.

## A Study of Predicted Energy Savings and Sensitivity Analysis

*(2013-07-22) Yang, Ying*

The sensitivity of the important inputs and the reliability of the savings predictions of the WinAM 4.3 software are studied in this research. WinAM was developed by the Continuous Commissioning (CC) group in the Energy Systems Laboratory at Texas A&M University. For the sensitivity analysis task, fourteen inputs are studied by adjusting one input at a time within ±30% of its baseline value. The Single Duct Variable Air Volume (SDVAV) system, with and without an economizer, has been applied to the square zone model. Mean Bias Error (MBE) and Influence Coefficient (IC) have been selected as the statistical measures for analyzing the outputs obtained from WinAM 4.3. For the savings-prediction reliability task, eleven Continuous Commissioning projects were initially selected; after reviewing each project, seven of the eleven were retained. The measured energy consumption data for the seven projects is compared with the simulated energy consumption data obtained from WinAM 4.3, using the Normalized Mean Bias Error (NMBE) and the Coefficient of Variation of the Root Mean Squared Error (CV(RMSE)) as the comparison statistics. Highly sensitive parameters for each energy resource, for the system both with and without the economizer, were identified in the sensitivity analysis task. The main result of the savings-prediction reliability analysis is that calibration improves the model's quality.
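The one-input-at-a-time procedure described in this abstract can be sketched as follows. The `toy_energy_model` is a hypothetical stand-in for a WinAM 4.3 simulation, and the Influence Coefficient is taken in its usual normalized form, IC = (ΔO/O_baseline)/(ΔI/I_baseline):

```python
def influence_coefficient(model, baseline, name, delta=0.30):
    """One-at-a-time sensitivity: perturb one input by +/-delta (as a
    fraction of its baseline) and return the normalized Influence
    Coefficient IC = (dO/O_baseline) / (dI/I_baseline) for each sign."""
    o_base = model(baseline)
    ics = {}
    for sign in (+1, -1):
        perturbed = dict(baseline)
        perturbed[name] = baseline[name] * (1 + sign * delta)
        d_out = (model(perturbed) - o_base) / o_base
        d_in = sign * delta
        ics[sign] = d_out / d_in
    return ics

# Hypothetical toy "energy model" standing in for a WinAM run:
def toy_energy_model(x):
    return 100.0 + 2.0 * x["oa_flow"] + 0.5 * x["supply_temp"] ** 2

baseline = {"oa_flow": 10.0, "supply_temp": 14.0}
ic = influence_coefficient(toy_energy_model, baseline, "supply_temp")
```

Because the toy model is nonlinear in `supply_temp`, the +30% and -30% perturbations give different ICs, which is exactly why one-at-a-time studies typically report both directions.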
Calibration also improves the predicted energy savings results compared with those generated from the uncalibrated model.

## Back-calculating emission rates for ammonia and particulate matter from area sources using dispersion modeling

*(Texas A&M University, 2004-11-15) Price, Jacqueline Elaine*

Engineering directly impacts current and future regulatory policy decisions. The foundation of air pollution control and air pollution dispersion modeling lies in the math, chemistry, and physics of the environment. Therefore, regulatory decision making must rely upon sound science and engineering as the core of appropriate policy making (objective analysis in lieu of subjective opinion). This research evaluated particulate matter and ammonia concentration data as well as two modeling methods: a backward Lagrangian stochastic model and a Gaussian plume dispersion model. The analysis assessed the uncertainty surrounding each sampling procedure, in order to better understand the uncertainty in the final emission rate calculation (a basis for federal regulation), and it assessed the differences between emission rates generated using the two dispersion models. First, this research evaluated the uncertainty encompassing the gravimetric sampling of particulate matter and the passive ammonia sampling technique at an animal feeding operation. Future research will further characterize the wind velocity profile and the vertical temperature gradient during the modeling time period. This information will help quantify the uncertainty of the meteorological inputs to the dispersion model, which will aid in understanding the propagated uncertainty in the dispersion modeling outputs.
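Because a Gaussian plume concentration is linear in the emission rate, back-calculation reduces to dividing a measured concentration by the concentration predicted for a unit emission rate. A minimal sketch of this idea follows, with hypothetical receptor geometry and fixed dispersion parameters (in practice σy and σz depend on downwind distance and atmospheric stability class, and the thesis's area-source and backward-Lagrangian formulations differ in detail):

```python
import math

def gaussian_plume_unit(y, z, u, h, sigma_y, sigma_z):
    """Ground-reflected Gaussian plume concentration for a UNIT emission
    rate (1 g/s): concentration is linear in the actual emission rate."""
    lateral = math.exp(-y**2 / (2 * sigma_y**2))
    vertical = (math.exp(-(z - h)**2 / (2 * sigma_z**2))
                + math.exp(-(z + h)**2 / (2 * sigma_z**2)))
    return lateral * vertical / (2 * math.pi * u * sigma_y * sigma_z)

def back_calculate_q(c_measured, **geometry):
    """Invert the linear plume relation: Q = C_measured / C(Q=1)."""
    return c_measured / gaussian_plume_unit(**geometry)

# Hypothetical receptor geometry and dispersion parameters:
geom = dict(y=0.0, z=1.5, u=3.0, h=2.0, sigma_y=8.0, sigma_z=5.0)
q = 0.75                                   # "true" emission rate, g/s
c = q * gaussian_plume_unit(**geom)        # synthetic "measurement"
```

The linearity that makes this inversion trivial also explains the abstract's concern: two models can reproduce the same measured concentrations while implying very different emission rates, because each model's unit-emission concentration differs.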
Next, an evaluation of the emission rates generated by both the Industrial Source Complex (Gaussian) model and the WindTrax (backward Lagrangian stochastic) model revealed that the concentrations each model calculates from its own average emission rate are extremely close in value. However, the average emission rates calculated by the two models differ by a factor of 10, which is extremely troubling. In conclusion, current and future sources are regulated based on emission rate data from previous time periods. Emission factors are published for the regulation of various sources, and these emission factors are derived from back-calculated model emission rates and site management practices. Thus, this factor-of-10 difference in emission rates could prove problematic for regulation if the model from which an emission rate is back-calculated is not the model later used to predict downwind pollutant concentrations.

## Function-based Design Tools for Analyzing the Behavior and Sensitivity of Complex Systems During Conceptual Design

*(2010-01-16) Hutcheson, Ryan S.*

Complex engineering systems involve large numbers of functional elements, and each functional element can itself exhibit complex behavior. Ensuring that such systems meet the customer's needs and requirements requires modeling their behavior; behavioral modeling allows a quantitative assessment of a system's ability to meet specific requirements. However, modeling the behavior of complex systems is difficult due to the complexity of the elements involved and, more importantly, the complexity of their interactions. In prior work, formal functional modeling techniques have been applied as a means of performing a qualitative decomposition of systems to ensure that needs and requirements are addressed by the functional elements of the system.
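The general idea of attaching quantitative behavior to functional elements and computing sensitivities over the composed model can be sketched generically. The element models, their chaining, and the finite-difference sensitivity below are illustrative assumptions, not the dissertation's actual framework:

```python
def finite_diff_sensitivity(system, x0, i, rel_step=1e-6):
    """Local sensitivity of the composed system output to input i,
    estimated with a central finite difference."""
    h = abs(x0[i]) * rel_step or rel_step
    up = list(x0); up[i] += h
    dn = list(x0); dn[i] -= h
    return (system(up) - system(dn)) / (2 * h)

# Hypothetical functional elements, chained into one system model:
def store_energy(inputs):                 # "store electrical energy"
    charge, efficiency = inputs[0], inputs[1]
    return charge * efficiency

def convert_energy(stored, gear_ratio):   # "convert energy to torque"
    return 0.8 * stored * gear_ratio

def system(inputs):
    return convert_energy(store_energy(inputs), gear_ratio=inputs[2])

x0 = [10.0, 0.9, 3.0]                     # charge, efficiency, gear ratio
s = finite_diff_sensitivity(system, x0, i=0)   # d(output)/d(charge)
```

Keeping each functional element as its own model is what lets individual elements be swapped for higher- or lower-fidelity versions without restructuring the composed system.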
Extending this functional decomposition to a quantitative representation of a system's behavior represents a significant opportunity to improve the design process of complex systems. To this end, a functionality-based behavioral modeling framework is proposed, along with a sensitivity analysis method, to support the design process of complex systems. These design tools have been implemented in a computational framework and used to model the behavior of various engineering systems to demonstrate their maturity, applicability, and effectiveness. The most significant result is a multi-fidelity model of a hybrid internal combustion-electric racecar powertrain that enabled a comprehensive quantitative study of longitudinal vehicle performance during various stages of the design process. This model was developed using the functionality-based framework and allowed a thorough exploration of the design space at various levels of fidelity. The functionality-based sensitivity analysis implemented alongside the behavioral modeling approach provides measures similar to those of a variance-based approach at the computational cost of a local approach. The use of a functional decomposition in both the behavioral modeling and the sensitivity analysis contributes significantly to the flexibility of the models and their application in current and future design efforts. This contribution was demonstrated by applying the model to the 2009 Texas A&M Formula Hybrid powertrain design.

## Novel Evaluation Methods for Complex Systems via Adaptive Sequential Exploration of Variables Interactions

*(2014-12-01) Al Rashdan, Ahmad Y. M.*

The complex and coupled behavior of variables in the currently developing Generation IV reactors and Small Modular Reactors is becoming a major incentive to seek efficient design methods. This research develops and validates new methods to evaluate systems with various degrees of variable interactions, using basic knowledge of the variables' directions of effect and an adaptive number of experiments. The methods replace the commonly used assumption of negligible interactions with a broader assumption of monotonic variable effects. This assumption was evaluated against studies of regularities in other physical systems and is expected to hold widely in physical systems. Four methods were developed and analyzed in this dissertation. Three of the introduced methods use an adaptive sequential spanning-tree concept, with a method-specific criterion, to construct piecewise multidimensional surfaces, or subtrees; each method then uses a specific approach to project the results within the subtrees. The fourth method expands an existing method to explore any order of interactions through the introduction of a new domain of parameters. Three of the four methods significantly outperformed the common orthogonal array methods, which rely on a uniform distribution of experiments in the design domain. Two of these three methods significantly outperformed the third and were used in the dissertation's application. The strength of the applicable methods was demonstrated through their application to two examples from the literature, each with a different degree of monotonic variable behavior. The more effective of the two methods was used to decouple the effects of fourteen variables on six performance characteristics in the design of a Small Modular Reactor version of the Advanced Pressurized Water Reactor AP1000. The application succeeded in finding the most important main effects and interactions for each performance characteristic. For three performance characteristics, the methods' performance was compared with that of fractional factorial designs; the methods were found to significantly reduce the projection error when the assumption of monotonic variable behavior is valid.

## Parameter Estimation of Complex Systems from Sparse and Noisy Data

*(2011-02-22) Chu, Yunfei*

Mathematical modeling is a key component of various disciplines in science and engineering. A mathematical model that represents the important behavior of a real system can be used as a substitute for the real process in many analysis and synthesis tasks. The performance of model-based techniques, e.g., system analysis, computer simulation, controller design, sensor development, state filtering, product monitoring, and process optimization, is highly dependent on the quality of the model used. Therefore, it is very important to be able to develop an accurate model from the available experimental data. Parameter estimation is usually formulated as an optimization problem in which the parameter estimate is computed by minimizing the discrepancy between the model prediction and the experimental data. If a simple model and a large amount of data are available, the estimation problem is frequently well-posed, and a small data-fitting error automatically results in an accurate model. However, this is not always the case: if the model is complex and only sparse and noisy data are available, the estimation problem is often ill-conditioned, and good data fitting does not ensure accurate model predictions. Many challenges that can safely be neglected for estimation involving simple models must be carefully considered for estimation problems involving complex models.
To obtain a reliable and accurate estimate from sparse and noisy data, a set of techniques is developed to address the challenges encountered in the estimation of complex models: (1) model analysis and simplification, which identifies the important sources of uncertainty and reduces model complexity; (2) experimental design, which collects information-rich data by setting optimal experimental conditions; (3) regularization of the estimation problem, which solves the ill-conditioned large-scale optimization problem by reducing the number of parameters; (4) nonlinear estimation and filtering, which fit the data using various estimation and filtering algorithms; and (5) model verification, which applies statistical hypothesis tests to the prediction error. The developed methods are applied to different types of models, ranging from models found in the process industries to biochemical networks, some of which are described by ordinary differential equations with dozens of state variables and more than a hundred parameters.
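The role of regularization (technique 3 above) can be illustrated with a minimal ridge-regularized least-squares fit. The two-parameter linear model, the data, and the penalty weight here are hypothetical, chosen only to show how a penalty stabilizes an ill-conditioned fit:

```python
def ridge_fit_2param(X, y, lam):
    """Solve min ||y - X theta||^2 + lam * ||theta||^2 for two parameters
    via the closed-form normal equations (X^T X + lam I) theta = X^T y."""
    a = sum(r[0] * r[0] for r in X) + lam
    b = sum(r[0] * r[1] for r in X)
    d = sum(r[1] * r[1] for r in X) + lam
    g0 = sum(r[0] * yi for r, yi in zip(X, y))
    g1 = sum(r[1] * yi for r, yi in zip(X, y))
    det = a * d - b * b
    return ((d * g0 - b * g1) / det, (a * g1 - b * g0) / det)

# Nearly collinear regressors make the unregularized fit ill-conditioned:
X = [(1.0, 1.0), (2.0, 2.001), (3.0, 2.999)]
y = [2.0, 4.1, 5.9]
theta_ridge = ridge_fit_2param(X, y, lam=0.1)
theta_ols = ridge_fit_2param(X, y, lam=0.0)  # near-singular normal equations
```

With these data the unregularized solution blows up to coefficients on the order of 100 with opposite signs, while the ridge solution stays near (1, 1); both fit the three noisy points comparably well, which is the ill-conditioning the abstract describes.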