Browsing by Subject "Sensitivity"
Now showing 1 - 10 of 10
Item: Effect of Synthesis Condition and Annealing on the Sensitivity and Stability of Gas Sensors Made of Zn-Doped γ-Fe2O3 Particles (2010-10-12)
Kim, Taeyang
In this study, the effect of synthesis conditions and the annealing process on the sensitivity and stability of gas sensors made of flame-synthesized Zn-doped γ-Fe2O3 particles was investigated. Zn-doped γ-Fe2O3 particles were synthesized by flame spray pyrolysis using either H2/Air or H2/O2 coflow diffusion flames. The particles were then annealed at 325–350 °C in a tube furnace under an air atmosphere. Both as-synthesized and annealed particles were used as gas sensing materials to construct gas sensors. Transmission electron microscopy (TEM), X-ray diffraction (XRD), Brunauer-Emmett-Teller (BET) surface area measurement, and the Williamson-Hall (WH) method were employed to characterize the particles. Gas sensors were fabricated by applying the as-synthesized and annealed particles on interdigitated electrodes. The response of the gas sensors to acetone vapor and H2 in dry synthetic air was measured before and after three days of aging.

The high-temperature flame (H2/O2) generated nanometer-sized particles; the lower-temperature flame (H2/Air) generated micrometer-sized particles. Fe2O3 particles doped with 15% Zn showed the highest sensitivity. The sensors made from as-synthesized particles showed a gas sensing sensitivity 20 times higher than the literature value. The sensors made of microparticles lost their sensing ability after three days of aging, but sensors made of nanoparticles did not show significant change after aging. Sensors made of annealed particles (either micro or nano) did not have significant gas sensing ability, but the annealing process improved the stability of the sensors. Analysis using the WH method showed that the microstrains decreased significantly in both H2/O2 and H2/Air flame particles after annealing. The results showed that sensors made of nanoparticles have higher gas sensing signals and are more resistant to aging than sensors made of microparticles. In addition, the annealing process favorably affected stability due to the reduction of structural defects.
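The Williamson-Hall analysis mentioned above separates size and strain contributions to XRD peak broadening via the relation β cos θ = Kλ/D + 4ε sin θ: the slope of a linear fit gives the microstrain ε, and the intercept gives the crystallite size D. A minimal sketch of such a fit, with hypothetical peak data standing in for the thesis's measurements:

```python
import numpy as np

# Williamson-Hall: beta * cos(theta) = K * lam / D + 4 * eps * sin(theta)
K = 0.9        # Scherrer shape factor (assumed)
lam = 0.15406  # X-ray wavelength in nm (Cu K-alpha, assumed)

# Hypothetical XRD peak positions (2-theta, degrees) and integral breadths (radians)
two_theta = np.array([30.2, 35.6, 43.3, 57.3, 62.9])
beta = np.array([0.0042, 0.0045, 0.0051, 0.0060, 0.0065])

theta = np.radians(two_theta) / 2.0
x = 4.0 * np.sin(theta)   # strain axis
y = beta * np.cos(theta)  # broadening axis

# Linear fit: slope = microstrain, intercept = K * lam / D
slope, intercept = np.polyfit(x, y, 1)
print(f"microstrain = {slope:.2e}")
print(f"crystallite size = {K * lam / intercept:.1f} nm")
```

A decrease in the fitted slope after annealing would correspond to the reduction in microstrain reported above.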
Item: Ensemble sensitivity analysis applied to Southern Plains convection (2013-08)
Bednarczyk, Christopher N.; Ancell, Brian A.; Weiss, Christopher C.; Kang, Song-Lak
The recent increase in the use of ensembles in numerical weather prediction has made new information available to forecasters, including uncertainty statistics and probabilistic guidance. Ensemble Sensitivity Analysis (ESA) offers additional information that describes the relationship between a forecast metric, known as the response function, and initial or early forecast errors, and it is capable of revealing features of the flow that are dynamically relevant to the chosen forecast. The applicability of ESA to a high-resolution convection forecast of April 2012 is investigated with an Ensemble Kalman Filter based on the Weather Research and Forecasting model. It is shown that forecasts of convection are primarily sensitive to positional differences in the synoptic-scale flow. The selection of the response function is also explored to determine how to choose a convective forecast metric. Sensitivity does vary with the choice of response, but the same features tend to be highlighted in all cases. Sensitivity is also compared with a standardized form, in which the raw value is weighted by the ensemble spread, in order to determine the merit of each type. The standardized sensitivity provides information on expected forecast error, and it reveals features that are not highlighted in the raw sensitivity. In addition, a cross-grid approach to sensitivity is studied to determine whether it shows results similar to the same-grid method. Results show differences between the two, but the cross-grid method still reveals realistic features in the context of the event.
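Ensemble sensitivity of the kind described above is typically estimated by linearly regressing the response function onto an earlier model state across ensemble members; the standardized form scales the result by the ensemble spread of the state variable. A minimal sketch under those assumptions, with synthetic arrays standing in for real WRF ensemble output:

```python
import numpy as np

rng = np.random.default_rng(0)
n_members, n_grid = 50, 1000  # hypothetical ensemble size and state dimension

# Early-forecast state (e.g., 500-hPa heights) and response function
# (e.g., area-averaged simulated reflectivity), one value per member
x = rng.standard_normal((n_members, n_grid))
J = 2.0 * x[:, 3] + rng.standard_normal(n_members)

x_anom = x - x.mean(axis=0)
J_anom = J - J.mean()

# Raw ensemble sensitivity: dJ/dx_i ~= cov(J, x_i) / var(x_i)
cov = J_anom @ x_anom / (n_members - 1)
var = x_anom.var(axis=0, ddof=1)
raw_sens = cov / var

# Standardized form: expected change in J per one-sigma change in x_i
std_sens = raw_sens * np.sqrt(var)

print(raw_sens[3], std_sens[3])
```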
Item: High-Fidelity Nuclear Energy System Optimization towards an Environmentally Benign, Sustainable, and Secure Energy Source (2011-10-21)
Ames, David E.
A new high-fidelity integrated system method and analysis approach was developed and implemented for consistent and comprehensive evaluations of advanced fuel cycles leading to minimized Transuranic (TRU) inventories. The method has been implemented in a code system integrating the capabilities of MCNPX for high-fidelity fuel cycle component simulations. The impact associated with energy generation and utilization is immense, with widespread and myriad effects on the world and its inhabitants. The polar extremes are demonstrated, on the one hand, by the high quality of life enjoyed by individuals with access to abundant reliable energy sources and, on the other hand, by the global-scale environmental degradation attributed to the effects of energy production and use. Thus, nations strive to increase their energy generation, but are faced with the challenge of doing so with minimal impact on the environment and in a manner that is self-reliant. Consequently, a revival of interest in nuclear energy has followed, with much focus placed on technologies for transmuting nuclear spent fuel.

In this dissertation, a Nuclear Energy System (NES) configuration was developed to take advantage of used fuel recycling and transmutation capabilities in waste management scenarios, leading to minimized TRU waste inventories, long-term activities, and radiotoxicities. The reactor systems and fuel cycle components that make up the NES were selected for their ability to perform in tandem to produce clean, safe, and dependable energy in an environmentally conscious manner. The reactor systems include the AP1000, VHTR, and HEST. The diversity in performance and spectral characteristics of each was used to enhance TRU waste elimination while efficiently utilizing uranium resources and providing an abundant energy source. The High Level Waste (HLW) stream produced by typical nuclear systems was characterized according to the radionuclides that are key contributors to long-term waste management issues. The TRU component of the waste stream becomes the main radiological concern for time periods greater than 300 years. A TRU isotopic assessment was developed and implemented to produce a priority ranking system for the TRU nuclides as related to long-term waste management and their expected characteristics under irradiation in the different reactor systems of the NES.

Detailed 3D whole-core models were developed for analysis of the individual reactor systems of the NES. As an inherent part of the process, the models were validated and verified by performing experiment-to-code and/or code-to-code benchmarking procedures, which provided substantiation for the obtained data and results. Reactor core physics and material depletion calculations were performed and analyzed. A computational modeling approach was developed for integrating the individual models of the NES. A general approach was utilized, allowing the Integrated System Model (ISM) to be modified in order to simulate other systems with similar attributes. By utilizing this approach, the ISM is capable of performing system evaluations under many different design parameter options. Additionally, the predictive capabilities of the ISM and its computational time efficiency allow for system sensitivity/uncertainty analysis and the implementation of optimization techniques. The NES has demonstrated great potential for providing safe, clean, and secure energy, with foreseen advantages over the LEU once-through fuel cycle option. The main advantages stem from better utilization of natural resources by recycling the used nuclear fuel, and from reducing the final amount of HLW and the time span for which it must be isolated from the public and the environment due to radiological hazard. If deployed, the NES can substantially reduce the long-term radiological hazard posed by current HLW, extend uranium resources, and approach the characteristics of an environmentally benign energy system.

Item: Maternal depressive symptoms and children's behavior problems: peer relations and parenting as mediators (2012-08)
Baeva, Sofia; Dix, Theodore H.; Hazen-Swann, Nancy; Anderson, Edward
Mothers suffering from depression are likely to engage in poor parenting practices and to have children with poorer peer relations and more behavior problems. Maternal depression likely follows different trajectories in different mothers, and these trajectories may lead to differing child outcomes over time. The current study examined a large sample of mothers and children. Latent class growth analysis (LCGA) was used to demonstrate a four-class depressive symptom model, which included high stable, high decreasing, moderate increasing, and low stable trajectories of depressive symptoms measured using the CES-D instrument. Demographic risk was found to differ across classes, with high stable and high decreasing mothers classified as more at-risk. Mothers in the high stable depression class were found to be less sensitive and had children with worse outcomes with respect to negative behaviors with peers, social support from peers, and behavior problems. High decreasing mothers were also less sensitive and had children with equally poor outcomes, even though the mothers recovered from their depressive symptoms by the time their children were 54 months of age. In conclusion, early clinical depressive symptoms were likely to predict poorer child outcomes, and greater demographic risk was linked to high early depression scores.
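Latent class growth analysis, as used in the study above, jointly estimates trajectory classes and growth curves, typically in dedicated software. As a rough illustrative stand-in, one can fit a growth curve per mother and cluster the fitted coefficients; a sketch with synthetic CES-D-like scores, not the study's data:

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
months = np.array([1, 6, 15, 24, 36])  # hypothetical assessment waves

# Synthetic symptom scores: a low-stable group and a high-decreasing group
scores = np.vstack([
    rng.normal(8.0, 2.0, (150, months.size)),
    rng.normal(24.0, 2.0, (150, months.size)) - 0.3 * months,
])

# Fit a linear growth curve (slope, intercept) per mother, then cluster
coeffs = np.array([np.polyfit(months, s, 1) for s in scores])
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(coeffs)

for k in range(2):
    slope, intercept = coeffs[labels == k].mean(axis=0)
    print(f"class {k}: mean intercept {intercept:.1f}, mean slope {slope:+.2f}/month")
```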
Item: Mothers accommodating to resolve conflict with their children (2010-05)
Day, William Harold, 1978-; Dix, Theodore H.; Jacobvitz, Deborah; Hazen-Swann, Nancy
Maternal sensitivity is known to have important implications for children's development. This study examined the sensitivity with which mothers elicited compliance from their children. In particular, it explored the goal-regulation strategy of accommodation. One hundred twenty-nine mother-toddler dyads from a non-clinical sample were observed during a 5-minute 'clean-up' activity. Results showed that mothers utilized numerous accommodation strategies. Moreover, the use of individual accommodation strategies was associated with maternal depression, mothers' level of child-orientation, and children's age.

Item: Multi-Dimensional Error Analysis of Nearshore Wave Modeling Tools, with Application Toward Data-Driven Boundary Correction (2012-02-14)
Jiang, Boyang
As forecasting models become more sophisticated in their physics and possible depictions of nearshore hydrodynamics, they also become increasingly sensitive to errors in their inputs. These input errors include mis-specification of input parameters (bottom friction, eddy viscosity, etc.), errors in input fields, and errors in the specification of boundary information (lateral boundary conditions, etc.). Errors in input parameters can be addressed with fairly straightforward parameter estimation techniques, while errors in input fields can be somewhat ameliorated by physical linkage between the scales of the bathymetric information and the associated model response. Evaluation of the errors on the boundary is less straightforward, and is the subject of this thesis. The model under investigation is the Delft3D modeling suite, developed at Deltares (formerly Delft Hydraulics) in Delft, the Netherlands. Coupling of the wave (SWAN) and hydrodynamic (FLOW) models requires care at the lateral boundaries in order to balance run time and error growth. To this end, we use a perturbation method and spatio-temporal analysis methods such as Empirical Orthogonal Function (EOF) analysis to determine the various scales of motion in the flow field and the extent of their response to imposed boundary errors. From the swirl strength examinations, we find that the higher EOF modes are affected more by the lateral boundary errors than the lower ones.
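EOF analysis of the kind applied above decomposes a space-time field into orthogonal spatial patterns ranked by explained variance, which reduces to a singular value decomposition of the demeaned data matrix. A minimal sketch with a synthetic field standing in for Delft3D output:

```python
import numpy as np

rng = np.random.default_rng(2)
n_times, n_points = 200, 500  # hypothetical snapshots x grid points

field = rng.standard_normal((n_times, n_points))  # e.g., velocity snapshots
anom = field - field.mean(axis=0)                 # remove the temporal mean

# Right singular vectors are the EOFs (spatial modes);
# squared singular values give the variance captured by each mode
U, s, Vt = np.linalg.svd(anom, full_matrices=False)
eofs = Vt               # modes x grid points
pcs = U * s             # principal-component time series
explained = s**2 / np.sum(s**2)

print("variance fraction of leading EOF:", explained[0])
```

Comparing how imposed boundary perturbations project onto the leading versus trailing modes is one way to quantify the mode-dependent response noted above.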
Item: Naturally-occurring declines in antisocial behavior from ages 4 to 12: relations with parental sensitivity and psychological processes in children (2013-05)
Buck, Katharine Ann; Dix, Theodore H.
Although antisocial behavior is common in toddlerhood, for most children it declines with age. The current study examined whether changes in maternal sensitivity, children's social skills, emotion regulation, and hostile attributions account for these declines. Data from 1022 participants (52% female; 87% Caucasian) in the NICHD SECCYD were examined from 54 months through 6th grade. Analyses revealed that increases in sensitivity, social skills, and emotion regulation predicted decreases in antisocial behavior. Increases in sensitivity predicted declines because they promoted social skills and emotion regulation. Decreases in antisocial behavior predicted subsequent increases in sensitivity, children's social skills, and emotion regulation, and decreases in hostile attributions. Increasing sensitivity, children's social skills, and emotion regulation appear to be critical factors in naturally-occurring declines in antisocial behavior.

Item: Parametric uncertainty and sensitivity methods for reacting flows (2014-05)
Braman, Kalen Elvin; Raman, Venkat
A Bayesian framework for the quantification of uncertainties has been used to quantify the uncertainty introduced by chemistry models. This framework adopts a probabilistic view to describe the state of knowledge of the chemistry model parameters and simulation results. Given experimental data, the method updates the model parameters' values and uncertainties and propagates that parametric uncertainty into simulations. This study focuses on syngas, a mixture of H2 and CO in various ratios, which is the product of coal gasification. Coal gasification promises to reduce emissions by replacing the burning of coal with the less polluting burning of syngas. Despite their simplicity, syngas chemistry models fail to accurately predict burning rates at high pressure. Three syngas models have been calibrated using laminar flame speed measurements. After calibration, the resulting uncertainty in the parameters is propagated forward into the simulation of laminar flame speeds. The model evidence is then used to compare candidate models.

Sensitivity studies, in addition to Bayesian methods, can be used to assess chemistry models. Sensitivity studies provide a measure of how responsive target quantities of interest (QoIs) are to changes in the parameters. The adjoint equations have been derived for laminar, incompressible, variable-density reacting flow and applied to hydrogen flame simulations. From the adjoint solution, the sensitivity of the QoI to the chemistry model parameters has been calculated. The results indicate the most sensitive parameters for flame tip temperature and NOx emission. Such information can be used in the development of new experiments by identifying the critical chemistry model parameters. Finally, a broader goal for chemistry model development is set through the adjoint methodology. A new quantity, termed field sensitivity, is introduced to guide chemistry model development. Field sensitivity describes how information about perturbations in flowfields propagates to specified QoIs. The field sensitivity, shown mathematically to be equivalent to finding the adjoint of the primal governing equations, is obtained for laminar hydrogen flame simulations using three different chemistry models. Results show that even when the primal solution is sufficiently close for the three mechanisms, the field sensitivity can vary.
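The Bayesian calibration step described above updates parameter distributions from data via Bayes' rule; a random-walk Metropolis sampler is one standard way to draw from the resulting posterior. A minimal sketch with a hypothetical one-parameter flame-speed model and synthetic measurements, not the dissertation's chemistry models:

```python
import numpy as np

rng = np.random.default_rng(3)

def model(k, phi):
    """Hypothetical one-parameter laminar flame speed model s_L(phi; k)."""
    return k * np.sqrt(phi)

phi = np.linspace(0.6, 1.4, 9)  # equivalence ratios
sigma = 0.05                    # assumed measurement noise
data = model(2.0, phi) + rng.normal(0.0, sigma, phi.size)  # synthetic data

def log_post(k):
    if k <= 0.0:
        return -np.inf  # flat prior on k > 0
    r = data - model(k, phi)
    return -0.5 * np.sum((r / sigma) ** 2)

# Random-walk Metropolis sampling of the posterior over k
k, lp, samples = 1.0, log_post(1.0), []
for _ in range(20000):
    k_prop = k + rng.normal(0.0, 0.05)
    lp_prop = log_post(k_prop)
    if np.log(rng.uniform()) < lp_prop - lp:  # accept/reject step
        k, lp = k_prop, lp_prop
    samples.append(k)

post = np.array(samples[5000:])  # discard burn-in
print(f"posterior: k = {post.mean():.3f} +/- {post.std():.3f}")
```

Evaluating the model over posterior samples then gives the forward propagation of parametric uncertainty that the abstract describes.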
Item: Selection, calibration, and validation of coarse-grained models of atomistic systems (2015-05)
Farrell, Kathryn Anne; Oden, J. Tinsley (John Tinsley), 1936-; Prudhomme, Serge M.; Babuska, Ivo; Bui-Thanh, Tan; Demkowicz, Leszek; Elber, Ron
This dissertation examines the development of coarse-grained models of atomistic systems for the purpose of predicting target quantities of interest in the presence of uncertainties. It addresses fundamental questions in computational science and engineering concerning the model selection, calibration, and validation processes used to construct predictive reduced-order models through a unified Bayesian framework. This framework, enhanced with concepts from information theory, sensitivity analysis, and Occam's Razor, provides a systematic means of constructing coarse-grained models suitable for use in a prediction scenario.

The novel application of a general framework of statistical calibration and validation to molecular systems is presented. Atomistic models, which themselves contain uncertainties, are treated as the ground truth and provide data for the Bayesian updating of model parameters. The open problem of selecting appropriate coarse-grained models is addressed through the powerful notion of Bayesian model plausibility. A new, adaptive algorithm for model validation is presented. The Occam-Plausibility ALgorithm (OPAL), so named for its adherence to Occam's Razor and its use of Bayesian model plausibilities, identifies, among a large set of models, the simplest model that passes the Bayesian validation tests and may therefore be used to predict the chosen quantities of interest. By discarding or ignoring unnecessarily complex models, the algorithm has the potential to reduce computational expense, both through the systematic process of considering subsets of models and through the implementation of the prediction scenario with the simplest valid model. An application to the construction of a coarse-grained system of polyethylene is given to demonstrate the implementation of molecular modeling techniques; the process of Bayesian selection, calibration, and validation of reduced-order models; and OPAL. The potential of the Bayesian framework for the coarse-graining process, and of OPAL as a means of determining a computationally conservative valid model, is illustrated with the polyethylene example.
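The OPAL procedure described above can be read as a loop: rank the remaining candidate models by Bayesian plausibility, calibrate the leading candidate, accept it if it passes the validation test, and otherwise discard it and continue. A schematic sketch; the model interface and tolerance are assumptions for illustration, not the dissertation's actual API:

```python
def opal(models, calibration_data, validation_data, tol):
    """Schematic Occam-Plausibility-style loop over candidate models.

    `models` is assumed ordered from simplest to most complex, each exposing
    hypothetical plausibility / calibrate / validation_error methods.
    """
    candidates = list(models)
    while candidates:
        # Bayesian model plausibility ranks the remaining candidates
        plaus = [m.plausibility(calibration_data) for m in candidates]
        best = candidates[plaus.index(max(plaus))]

        best.calibrate(calibration_data)  # Bayesian update of parameters
        if best.validation_error(validation_data) < tol:
            return best                   # simplest acceptable model found
        candidates.remove(best)           # failed validation: discard it
    raise RuntimeError("no candidate model passed the validation test")
```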
Item: Sensitivity analysis of impedance-based fault location methods (2011-12)
Karnik, Neeraj Anil; Santoso, Surya; Grady, Mack
Impedance-based methods are used to locate faults on distribution systems because of their simplicity and ease of implementation. These methods require fault voltage and current data, along with the positive- and zero-sequence line impedance values (in ohms per unit length), to estimate the reactance, or distance, to the fault location. Inaccuracies in line impedance values, which arise from circuit model errors, have an adverse impact on the fault location estimates of impedance-based methods. Measurement errors in current and voltage transformers can also lead to inaccuracy in estimation. Further, all methods use simplistic models to represent the system load; the load in a practical distribution system does not conform to these oversimplified models, leading to errors in the estimation of fault location. This thesis presents a sensitivity analysis of four impedance-based methods: the Takagi, positive-sequence reactance, loop reactance, and balanced-load methods. Amongst these four, the first three are commonly used for fault location; the fourth was developed as part of this work. The objective of the sensitivity analysis is to study and quantify the effect of circuit model, measurement, and load model errors on the fault location estimates of the four methods. The results of this analysis are used to establish upper and lower bounds on the estimation errors for each method.

The analysis begins with the creation of a baseline case using a modified version of the IEEE 34 Node Test Feeder. All the methods estimate the reactance to the fault location as part of this analysis. The baseline case uses accurate line impedance and measurement values in the four methods; the fault location estimates for this case serve as the point of comparison for all subsequent analyses.

Secondly, various circuit model errors are introduced while computing the line impedance values. These errors include inaccurate modeling of four parameters, viz. phase conductor distances, conductor sizes, phase-to-neutral conductor distances, and earth resistivity. The erroneous line impedance values that arise from these circuit model errors are used in the four methods, and the resultant location estimates are compared with those for the baseline case. It is observed that modeling errors in earth resistivity can cause estimation errors of 2% to 5% in the Takagi and positive-sequence reactance methods. These errors can be positive or negative depending on whether the modeled earth resistivity value is more than or less than the accurate value. The effect of inaccurate modeling of the other three parameters is marginal. Additionally, the Takagi and positive-sequence reactance methods assume line impedances to be uniform while estimating fault location; although this assumption is a type of circuit model error, it does not lead to significant errors in estimation. The loop reactance and balanced-load methods are insensitive to circuit model errors, as they do not use line impedance values while estimating the reactance to the fault location.

The next part is an analysis of the effect of measurement errors on fault location estimates. Ratio and phase angle errors are deliberately introduced in the current and voltage transformers, and the erroneous measurements are used to conduct fault location. This causes 5% to 6% errors in estimation for the Takagi and positive-sequence reactance methods; these estimation errors can be positive or negative depending on the magnitude of the CT and VT ratio errors and the sign of the phase angle errors. For the loop reactance method, erroneous measurements introduce 8% to 30% errors in fault location, indicating that the loop reactance method is highly sensitive to measurement errors. The balanced-load method is moderately sensitive, experiencing 6% to 7% errors in fault location estimates.

Lastly, the effect of load current on fault location estimates is analyzed. When the Takagi and positive-sequence reactance methods are used on a heavily loaded system, they estimate fault location with an error of 5% to 8%. The loop reactance method is severely affected by the level of load current in the system: it can estimate fault location with nearly 100% accuracy on a lightly loaded system, but its estimation errors increase significantly, to the range of 15% to 30%, as load current increases. In the case of the balanced-load method, unbalanced, heavy loads can cause estimation errors of 7% to 25%.

The combined effect of all the error sources is taken into account by creating a confidence interval for each method. For the Takagi and positive-sequence reactance methods, the actual fault location can be expected to lie within ±10% of the estimated value. The fault location estimation error for the loop reactance and balanced-load methods is always positive; the actual reactance to fault is within -30% of the value estimated by these methods.
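For reference, the Takagi method discussed above estimates the distance to the fault from the relay voltage and current phasors together with the "pure fault" current (fault current minus pre-fault load current), which reduces the influence of load flow and fault resistance. A minimal single-ended sketch with hypothetical phasors, not values from the IEEE 34 Node Test Feeder study:

```python
import numpy as np

def phasor(mag, deg):
    return mag * np.exp(1j * np.deg2rad(deg))

# Hypothetical relay measurements for a single-phase-to-ground fault
Z1 = 0.2 + 0.8j                 # positive-sequence line impedance, ohm/mile (assumed)
V = phasor(7.2e3, -5.0)         # fault voltage at the substation, volts
I_fault = phasor(900.0, -60.0)  # fault current, amps
I_load = phasor(150.0, -25.0)   # pre-fault load current, amps

dI = I_fault - I_load           # superposition ("pure fault") current

# Takagi estimate of distance to fault
m_takagi = np.imag(V * np.conj(dI)) / np.imag(Z1 * I_fault * np.conj(dI))

# Simple positive-sequence reactance estimate for comparison
m_reactance = np.imag(V / I_fault) / np.imag(Z1)

print(f"Takagi: {m_takagi:.1f} mi, reactance method: {m_reactance:.1f} mi")
```

Because the plain reactance estimate ignores load current, the gap between the two estimates grows with loading, which is the load-current effect the thesis quantifies.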