Browsing by Subject "Sensitivity analysis"
Now showing 1 - 5 of 5
Item: A Systems Biology Approach to Develop Models of Signal Transduction Pathways (2011-10-21)
Huang, Zuyi
Mathematical models of signal transduction pathways are characterized by a large number of proteins and uncertain parameters, yet only a limited amount of quantitative data is available. The dissertation addresses this problem with two approaches: the first is a model simplification procedure for signaling pathways that reduces the model size while retaining the physical interpretation of the remaining states; the second creates rich data sets by computing transcription factor profiles from fluorescent images of green-fluorescent-protein (GFP) reporter cells. For the first approach, a model simplification procedure for signaling pathway models is presented. The technique uses sensitivity and observability analysis to select the proteins retained in the simplified model. It is applied to an IL-6 signaling pathway model, where the model size can be significantly reduced while the simplified model still adequately predicts the dynamics of key proteins in the pathway. The second major contribution is an approach for quantitatively determining transcription factor profiles from GFP reporter data. The procedure analyzes fluorescent images to determine fluorescence intensity profiles using principal component analysis and K-means clustering, and then computes the transcription factor concentration from the fluorescence intensity profiles by solving an inverse problem involving a model of transcription, translation, and activation of green fluorescent proteins. Activation profiles of the transcription factors NF-κB, nuclear STAT3, and C/EBPβ are obtained with this approach. The NF-κB data are used to develop a model of TNF-α signal transduction, while the nuclear STAT3 and C/EBPβ data are used to verify the simplified IL-6 model. Finally, an approach is developed to compute the distribution of transcription factor profiles across a population of cells. It consists of an algorithm for identifying individual fluorescent cells in fluorescent images and an algorithm that computes the distribution of transcription factor profiles from the fluorescence intensity distribution by solving an inverse problem. The technique is applied to experimental data to derive the distribution of NF-κB concentrations from fluorescent images of an NF-κB GFP reporter system.

Item: Development of reliable pavement models (2011-08)
Aguiar Moya, José Pablo, 1981-; Prozzi, Jorge Alberto; Manuel, Lance; Walton, Michael; Machemehl, Randy B.; Yilmaz, Hilal
As the cost of designing and building new highway pavements increases and the number of new construction and major rehabilitation projects decreases, ensuring that a given pavement design performs as expected in the field becomes vital. In other fields of civil engineering this issue has been addressed extensively with reliability analysis; in pavement structural design, however, the reliability component is usually neglected or overly simplified. This dissertation therefore proposes a framework for estimating the reliability of a given pavement structure regardless of the pavement design or analysis procedure being used. The framework is applied with the Mechanistic-Empirical Pavement Design Guide (MEPDG), with failure considered as a function of rutting of the hot-mix asphalt (HMA) layer.
The proposed methodology fits a response surface in place of the time-demanding implicit limit state functions used within the MEPDG, and combines it with analytical second-moment techniques (the First-Order and Second-Order Reliability Methods, FORM and SORM) and simulation techniques (Monte Carlo and Latin hypercube simulation) to estimate reliability. To demonstrate the methodology, a three-layered pavement structure is selected, consisting of a hot-mix asphalt (HMA) surface, a base layer, and subgrade. Several pavement design variables are treated as random: HMA and base layer thicknesses, base and subgrade moduli, and HMA binder and air-void content. Information on the variability of and correlation between these variables is obtained from the Long-Term Pavement Performance (LTPP) program, from which likely distributions, coefficients of variation, and correlations are estimated. Several scenarios are also defined to account for climatic differences (cool, warm, and hot climatic regions), truck traffic distributions (mostly single-unit trucks versus mostly single-trailer trucks), and the thickness of the HMA layer (thick versus thin). First- and second-order polynomial HMA rutting failure response surfaces with interaction terms are fit by running the MEPDG over a full factorial experimental design with three levels of each design variable. These response surfaces are then used to analyze the reliability of the given pavement structures under the different scenarios. To check the accuracy of the proposed framework, direct simulation with the MEPDG was also performed for each scenario; very small differences were found between the response-surface estimates and direct simulation, confirming the accuracy of the proposed procedure.
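The response-surface workflow described above can be sketched in a few lines: fit a low-order polynomial surrogate to a small factorial design of expensive model runs, then estimate the failure probability by Monte Carlo sampling on the cheap surrogate. This is only a minimal illustration of the idea; the rut-depth function, its coefficients, the input distributions, and the failure threshold below are invented stand-ins, not MEPDG quantities.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for an expensive mechanistic model (e.g. one MEPDG run):
# rut depth as a function of HMA thickness h (in) and air-void content v (%).
# Purely illustrative coefficients.
def expensive_model(h, v):
    return 1.2 - 0.08 * h + 0.05 * v + 0.004 * v**2

# 1) Fit a second-order response surface from a small 3x3 factorial design.
h_lev = np.array([4.0, 6.0, 8.0])
v_lev = np.array([4.0, 7.0, 10.0])
H, V = np.meshgrid(h_lev, v_lev)
h, v = H.ravel(), V.ravel()
y = expensive_model(h, v)
# Quadratic basis with an interaction term: 1, h, v, h^2, v^2, h*v
X = np.column_stack([np.ones_like(h), h, v, h**2, v**2, h * v])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)

def surrogate(h, v):
    return (beta[0] + beta[1] * h + beta[2] * v
            + beta[3] * h**2 + beta[4] * v**2 + beta[5] * h * v)

# 2) Monte Carlo on the cheap surrogate: P(rut depth > threshold).
n = 100_000
h_s = rng.normal(6.0, 0.5, n)   # random layer thickness (illustrative)
v_s = rng.normal(7.0, 1.0, n)   # random air-void content (illustrative)
threshold = 1.1                 # illustrative failure criterion (in)
p_fail = np.mean(surrogate(h_s, v_s) > threshold)
print(f"estimated failure probability: {p_fail:.3f}")
```

Because the surrogate is essentially free to evaluate, the 100,000 Monte Carlo samples here cost as much as a handful of direct model runs would, which is the point of the response-surface substitution.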
Finally, a sensitivity analysis on the number of MEPDG runs required to fit the response surfaces showed that reducing the experimental design by one level still yields response surfaces that properly fit the MEPDG, which supports the applicability of the method in practice.

Item: Sensitivity calculations on a soot model using a partially stirred reactor (2010-05)
Wu, Nathan Gabriel; Raman, Venkat; Clemens, Noel T.
Sensitivity analysis was performed on a soot model using a partially stirred reactor (PaSR) to determine the effects of mixing model parameters on soot scalar values. The sensitivities of the mixture fraction zeta and the progress variable C to the mixing model constant C_phi were calculated; these values were used to compute the sensitivity of the water mass fraction Y_H2O to C_phi and the sensitivities of several soot quantities to the soot moments. Results were validated by evaluating the mean mixture fraction sensitivity and a long-simulation-time case. In the baseline case, soot moment sensitivities tended to peak on the rich side of the stoichiometric mixture fraction zeta_st. The timestep, number of notional particles, mixing timescale tau_mix, and residence time tau_res were varied independently. The chosen timestep and notional particle count were shown to be sufficient to capture the relevant scalar profiles and did not greatly affect the sensitivity calculations. Altering tau_mix or tau_res did affect the sensitivity to mixing, and it was concluded that the soot model is more heavily influenced by the chemistry than by mixing.

Item: Sensitivity of Building Energy Simulation with Building Occupancy for a University Building (2014-08-01)
Chhajed, Shreyans
Occupancy plays a major role in determining the energy use of any building, and an even more crucial role in the case of a university classroom building.
These buildings typically have highly variable occupancies, from very low during breaks to very high during peak daytime hours in the middle of the semester. This paper presents how an energy simulation model was built, validated, and then used to explore the effect of occupancy for a classroom/studio building on the campus of Texas A&M University. The energy model was created with the DOE-2 engine and validated against actual energy consumption data, using as-constructed building characteristics and occupancy loading data in the DOE-2 model. Parametric runs were then completed with the validated energy model for variations in occupant number, occupancy schedules, and related inputs. With the exception of extremely high occupancy, all variations in occupancy or schedule resulted in less than a 10% deviation from the actual building performance model. These results demonstrate that, although occupancy plays a role in the energy performance of this type of classroom building, occupancy and occupant schedules do not have a major effect on annual energy performance, and that during the design stage of the building life-cycle, designers do not need very accurate occupancy estimates for the proposed building.

Item: Statistical methods for the analysis of DSMC simulations of hypersonic shocks (2012-05)
Strand, James Stephen; Goldstein, David Benjamin, doctor of aeronautics; Moser, Robert; Varghese, Philip; Ezekoye, Ofodike; Prudencio, Ernesto
In this work, statistical techniques were employed to study the modeling of a hypersonic shock with the Direct Simulation Monte Carlo (DSMC) method and to gain insight into how the model interacts with a set of physical parameters. DSMC is a particle-based method useful for simulating gas dynamics in rarefied and/or highly non-equilibrium flowfields. A DSMC code was written and optimized for use in this research.
The code was developed with shock tube simulations in mind and includes a number of improvements that allow efficient simulation of 1D hypersonic shocks. Most importantly, a moving sampling region is used to obtain an accurate steady shock profile from an unsteady, moving shock wave. The code is MPI-parallel, and an adaptive load balancing scheme ensures that the workload is distributed properly among processors over the course of a simulation. Global, Monte Carlo-based sensitivity analyses were performed to determine which of the parameters examined in this work most strongly affect the simulation results for two scenarios: a 0D relaxation from an initial high-temperature state, and a hypersonic shock. The 0D relaxation scenario was included to examine whether, with appropriate initial conditions, it can serve in some regards as a substitute for the 1D shock in a statistical sensitivity analysis. In both analyses, sensitivities were calculated from both the square of the Pearson correlation coefficient and the mutual information. The quantity of interest (QoI) chosen for these analyses was the NO density profile. This vector QoI was broken into a set of scalar QoIs, each representing the NO density at a specific point in time (for the relaxation) or at a specific streamwise location (for the shock), and sensitivities were calculated for each scalar QoI with both measures. The sensitivities were then integrated over the set of scalar QoIs to obtain an overall sensitivity for each parameter, using a weighting function that emphasizes sensitivities in the region of greatest thermal and chemical non-equilibrium. The six parameters that most strongly affect the NO density profile were found to be the same for both scenarios, which supports the claim that a 0D relaxation can in some situations serve as a substitute model for a hypersonic shock.
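The Pearson-correlation half of the sensitivity procedure just described can be sketched as follows: sample the uncertain parameters globally, evaluate the vector QoI for each sample, compute the squared correlation between each parameter and each scalar QoI, and integrate over the QoIs with a weighting function. The two-parameter relaxation model and the exponential weight below are invented stand-ins for the DSMC model and the non-equilibrium weighting, chosen only to make the mechanics concrete.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy stand-in for the simulation: a relaxation profile q(t; a, b)
# depending on two uncertain "rate" parameters. Illustrative only.
t = np.linspace(0.0, 5.0, 50)

def model(a, b):
    return np.exp(-a * t) + 0.3 * np.exp(-b * t)

# Global Monte Carlo sampling of the parameters.
n = 2000
a = rng.uniform(0.5, 1.5, n)
b = rng.uniform(0.5, 1.5, n)
Q = np.array([model(ai, bi) for ai, bi in zip(a, b)])  # shape (n, n_t)

def pearson_sq(p, Q):
    """Squared Pearson correlation between parameter p and each scalar QoI."""
    pc = p - p.mean()
    Qc = Q - Q.mean(axis=0)
    r = (pc @ Qc) / (np.sqrt((pc**2).sum()) * np.sqrt((Qc**2).sum(axis=0)))
    return r**2

# Weighting function emphasising the early (non-equilibrium) transient.
w = np.exp(-t)
w /= w.sum()

# Integrate the per-QoI sensitivities into one number per parameter.
S_a = (pearson_sq(a, Q) * w).sum()
S_b = (pearson_sq(b, Q) * w).sum()
print(f"integrated sensitivity: a={S_a:.2f}, b={S_b:.2f}")
```

With the 0.3 prefactor on the second exponential, parameter a dominates the profile and receives the larger integrated sensitivity, mirroring how the analysis ranks parameters. A mutual-information estimate would replace `pearson_sq` with a (histogram- or k-nearest-neighbour-based) MI estimator per scalar QoI.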
These six parameters are the pre-exponential constants in the Arrhenius rate equations for the N2 dissociation reaction N2 + N ⇄ 3N, the O2 dissociation reaction O2 + O ⇄ 3O, the NO dissociation reactions NO + N ⇄ 2N + O and NO + O ⇄ N + 2O, and the exchange reactions N2 + O ⇄ NO + N and NO + O ⇄ O2 + N. After identification of the most sensitive parameters, a synthetic data calibration was performed to demonstrate that the statistical inverse problem could be solved for the 0D relaxation scenario. The calibration was performed using the QUESO code, developed at the PECOS center at UT Austin, which employs the Delayed Rejection Adaptive Metropolis (DRAM) algorithm. The six parameters identified by the sensitivity analysis were calibrated successfully with respect to a group of synthetic datasets.
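The synthetic-data calibration step can be illustrated with a plain random-walk Metropolis sampler; QUESO's DRAM algorithm adds delayed rejection and adaptive proposal covariance on top of this basic scheme. The one-parameter relaxation model, noise level, and prior bounds below are illustrative assumptions, not the dissertation's actual setup.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic-data calibration of a single rate constant k in a toy
# relaxation model y(t) = exp(-k t): generate data at a known k_true,
# then recover its posterior with random-walk Metropolis.
t = np.linspace(0.0, 2.0, 20)
k_true = 1.3
sigma = 0.02                           # synthetic measurement noise
data = np.exp(-k_true * t) + rng.normal(0.0, sigma, t.size)

def log_post(k):
    if not (0.1 < k < 5.0):            # uniform prior bounds (assumed)
        return -np.inf
    resid = data - np.exp(-k * t)
    return -0.5 * np.sum(resid**2) / sigma**2

k = 0.5                                # deliberately poor starting point
lp = log_post(k)
samples = []
for _ in range(20_000):
    k_prop = k + rng.normal(0.0, 0.1)  # random-walk proposal
    lp_prop = log_post(k_prop)
    if np.log(rng.uniform()) < lp_prop - lp:   # Metropolis accept/reject
        k, lp = k_prop, lp_prop
    samples.append(k)

post = np.array(samples[5000:])        # discard burn-in
print(f"posterior mean k = {post.mean():.3f} (true {k_true})")
```

The same structure extends to the six-parameter case: the state becomes a vector of pre-exponential constants, and the likelihood compares the model's NO density profile against the synthetic dataset.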